Directory of Open Access Journals (Sweden)
Francesco Bonavolontà
2014-10-01
Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving on the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
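A minimal sketch of the idea above, assuming a toy multitone signal that is sparse in the DFT basis: samples are taken at randomly chosen instants of a fine time grid (standing in for the high-resolution time-basis), and the full-rate signal is recovered with orthogonal matching pursuit. All sizes, the sparsity level K, and the OMP recovery step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N, M, K = 1024, 128, 4          # fine grid, ADC samples, sparsity (toy values)
rng = np.random.default_rng(0)

t = np.arange(N) / N
x = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t)   # sparse in frequency

picks = np.sort(rng.choice(N, M, replace=False))        # random sampling instants
y = x[picks].astype(complex)                            # low-rate measurements

D = np.conj(np.fft.fft(np.eye(N)) / np.sqrt(N)).T       # inverse-DFT dictionary
A = D[picks, :]                                         # sensing matrix

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery."""
    r, idx, coef = y.copy(), [], None
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    c = np.zeros(A.shape[1], dtype=complex)
    c[idx] = coef
    return c

x_hat = np.real(D @ omp(A, y, K))                       # full-rate reconstruction
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```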
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; however, the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
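A minimal sketch of the encoder idea described above, under assumed toy sizes: the anti-alias low-pass is replaced by a small random binary kernel, and the filtered image is polyphase down-sampled. The kernel size, the 2x factor, and the random image are placeholders, not the paper's exact configuration.

```python
# Sketch: random binary pre-filter + polyphase down-sampling (toy encoder).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
img = rng.random((256, 256))                           # stand-in input image

kernel = rng.choice([-1.0, 1.0], size=(4, 4)) / 4.0    # local random binary kernel
filtered = convolve2d(img, kernel, mode="same", boundary="symm")

measurements = filtered[::2, ::2]    # one polyphase down-sampled description
# 'measurements' is still an ordinary image, so any standardized codec can
# compress it; a different kernel would yield another description of 'img'.
```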
Compressed sensing of roller bearing fault based on multiple down-sampling strategy
International Nuclear Information System (INIS)
Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang
2016-01-01
Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failure. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to balance detection efficiency against big-data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The big data of a faulty roller bearing's vibration signals is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. A problem arises, however, in that the fault features may be weakened: when the noise is stronger than the vibration signal, noise may be mistaken for the peaks, so the fault features cannot be extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which enhances the signal and further reduces the sample size. Moreover, fault features can be detected from a small number of samples using the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)
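A minimal sketch of the peak-preserving down-sampling step, under assumed simplifications (fixed segment length, a toy impulsive signal): each segment is represented by its largest-magnitude sample, after which a CS recovery such as OMP would be applied per the paper.

```python
# Sketch: multiple down-sampling by keeping the peak of each segment.
import numpy as np

def peak_downsample(x, seg_len):
    """Represent each length-seg_len segment by its extreme value."""
    n = len(x) // seg_len
    segs = x[: n * seg_len].reshape(n, seg_len)
    idx = np.argmax(np.abs(segs), axis=1)        # peak position per segment
    return segs[np.arange(n), idx]

fs = 12_000
t = np.arange(fs) / fs
sig = np.sin(2*np.pi*30*t) * (np.sin(2*np.pi*3000*t) > 0.99)  # toy impulses
reduced = peak_downsample(sig, seg_len=8)        # 8x fewer samples
```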
Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine
DEFF Research Database (Denmark)
El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick
2015-01-01
We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full...
International Nuclear Information System (INIS)
Zhang, Leihong; Liang, Dong
2016-01-01
To address the low efficiency and precision of spectral reflectance reconstruction, this paper selects different training samples and proposes a new spectral reflectance reconstruction method based on the compressive sensing algorithm. Four matte color charts with different numbers of colors, namely the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone copperplate-paper spot color card, and the Munsell colors card, are chosen as training samples; the spectral image is reconstructed with the compressive sensing, pseudo-inverse, and Wiener algorithms, and the results are compared. The spectral reconstruction methods are evaluated by root mean square error and color difference accuracy. The experiments show that, under the same reconstruction conditions, the cumulative contribution rate and color difference of the Munsell colors card are better than those of the other three color charts, and that reconstruction accuracy is affected by the number of colors in the training sample. A key point is that the uniformity and representativeness of the training sample selection have an important effect on reconstruction. In this paper, the influence of sample selection on spectral image reconstruction is studied. The precision of spectral reconstruction based on the compressive sensing algorithm is higher than that of the traditional spectral reconstruction algorithms. MATLAB simulation results show that reconstruction precision and efficiency are affected by the number of colors in the training sample. (paper)
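A minimal sketch of the pseudo-inverse baseline named above, with all sizes and the random "camera sensitivity" matrix as placeholder assumptions: a linear map from camera responses to reflectance spectra is learned from training patches and applied to an unseen patch.

```python
# Sketch: pseudo-inverse spectral reflectance reconstruction (toy baseline).
import numpy as np

rng = np.random.default_rng(2)
n_train, n_channels, n_wavelengths = 140, 3, 31   # e.g., 400-700 nm in 10 nm steps

R_train = rng.random((n_wavelengths, n_train))    # training reflectances (columns)
S = rng.random((n_channels, n_wavelengths))       # assumed camera sensitivity
C_train = S @ R_train                             # camera responses

Q = R_train @ np.linalg.pinv(C_train)             # pseudo-inverse operator

c_test = S @ rng.random(n_wavelengths)            # response of an unseen patch
r_hat = Q @ c_test                                # reconstructed spectrum
```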
International Nuclear Information System (INIS)
Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang
2016-01-01
In this paper, an under-sampling method for wideband capacitance measurement is proposed that uses the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, a compressed sampling method using a random demodulator is adopted, which greatly decreases the sampling rate. In addition, four switches are used to replace the multiplier in the random demodulator. As a result, not only can the sampling rate be much smaller than the signal excitation frequency, but the circuit structure is also simpler and its power consumption lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through the four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, three times lower than the highest excitation frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
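A minimal software model of the random-demodulator signal chain described above, not the hardware itself: the switch-based mixer is modeled as multiplication by a +/-1 chip sequence, followed by an anti-alias low-pass and decimation to the 50 kHz ADC rate. The fine simulation rate, filter order, and chip rate are assumptions.

```python
# Sketch: random demodulator (mix with PRBS, low-pass, sample slowly).
import numpy as np
from scipy.signal import butter, lfilter

fs_hi = 1_600_000                  # fine simulation rate (assumption)
f_exc, fs_adc = 200_000, 50_000    # excitation and ADC rates from the abstract
t = np.arange(4096) / fs_hi

x = 0.8 * np.sin(2*np.pi*f_exc*t)                 # converter output (toy)
rng = np.random.default_rng(3)
chips = rng.choice([-1.0, 1.0], size=t.size)      # pseudo-random +/-1 sequence
mixed = x * chips                                 # the four switches implement this

b, a = butter(4, (fs_adc/2) / (fs_hi/2))          # anti-alias low-pass
y = lfilter(b, a, mixed)[:: fs_hi // fs_adc]      # ADC samples at 50 kHz
# Frequency/amplitude are then recovered by a sparse solver (e.g., OMP)
# using the known chip sequence and filter in the measurement model.
```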
Energy Preserved Sampling for Compressed Sensing MRI
Directory of Open Access Journals (Sweden)
Yudong Zhang
2014-01-01
Full Text Available The sampling pattern, cost function, and reconstruction algorithm play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images; therefore, a variety of variable-density (VD) based sampling patterns have been developed. To further improve on these, we propose a novel energy-preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function achieves better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and competitive with FISTA in terms of computation time.
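For context, a minimal sketch of the variable-density sampling idea that ePRESS builds on: the mask samples the k-space center (where MR energy concentrates) densely and the periphery sparsely. The polynomial density profile and all parameters are illustrative assumptions, not the VD, POWER, BKO, or ePRESS patterns themselves.

```python
# Sketch: a variable-density random k-space mask for CS-MRI.
import numpy as np

def vd_mask(n, accel=4.0, decay=2.0, seed=0):
    """Random mask whose density falls off polynomially from the k-space center."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(*[np.linspace(-1, 1, n)] * 2, indexing="ij")
    r = np.sqrt(kx**2 + ky**2)
    pdf = (1 - np.minimum(r, 1)) ** decay             # high near the center
    pdf *= (n * n / accel) / pdf.sum()                # hit the target rate
    return rng.random((n, n)) < np.minimum(pdf, 1)

mask = vd_mask(256, accel=4.0)
print("sampling rate:", mask.mean())                  # ~0.25 for 4x acceleration
```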
Statistical conditional sampling for variable-resolution video compression.
Directory of Open Access Journals (Sweden)
Alexander Wong
Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Field (CRF) modeling and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model given the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.
Compressive Sampling of EEG Signals with Finite Rate of Innovation
Directory of Open Access Journals (Sweden)
Poh Kok-Kiong
2010-01-01
Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI), which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using FRI theory, the original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus enabling a less costly low-rate sampling device that does not waste precious computational resources.
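A minimal sketch of the textbook FRI recovery recipe underlying this approach, for a toy stream of K Diracs rather than EEG: 2K+1 Fourier coefficients determine the innovation locations via the annihilating filter. All parameters are illustrative; amplitudes could subsequently be recovered by a Vandermonde solve.

```python
# Sketch: annihilating-filter recovery of Dirac locations from 2K+1
# Fourier coefficients (finite rate of innovation, noiseless toy case).
import numpy as np

rng = np.random.default_rng(4)
K, tau = 3, 1.0
t_true = np.sort(rng.random(K)) * tau              # Dirac locations
a_true = rng.random(K) + 0.5                       # amplitudes

m = np.arange(-K, K + 1)                           # 2K+1 coefficients suffice
X = (a_true * np.exp(-2j*np.pi*np.outer(m, t_true)/tau)).sum(axis=1)

# Annihilating filter h (length K+1) with h * X = 0 -> Toeplitz nullspace.
T = np.array([[X[K + i - j] for j in range(K + 1)] for i in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()

u = np.roots(h)                                    # roots encode locations
t_hat = np.sort(np.mod(-np.angle(u) * tau / (2*np.pi), tau))
print(np.round(np.sort(t_true), 6), np.round(t_hat, 6))
```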
Sampling theory, a renaissance compressive sensing and other developments
2015-01-01
Reconstructing or approximating objects from seemingly incomplete information is a frequent challenge in mathematics, science, and engineering. A multitude of tools designed to recover hidden information are based on Shannon’s classical sampling theorem, a central pillar of Sampling Theory. The growing need to efficiently obtain precise and tailored digital representations of complex objects and phenomena requires the maturation of available tools in Sampling Theory as well as the development of complementary, novel mathematical theories. Today, research themes such as Compressed Sensing and Frame Theory re-energize the broad area of Sampling Theory. This volume illustrates the renaissance that the area of Sampling Theory is currently experiencing. It touches upon trendsetting areas such as Compressed Sensing, Finite Frames, Parametric Partial Differential Equations, Quantization, Finite Rate of Innovation, System Theory, as well as sampling in Geometry and Algebraic Topology.
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling or compressed sensing (CS) works on the assumption that the underlying signal is sparse or compressible. It relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and it operates with optimization-based algorithms for signal reconstruction. CS is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.
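As a back-of-envelope illustration of "necessary number of measurements versus scene sparsity" (not the paper's information-theoretic bounds), the standard CS rule of thumb m >= C k log(n/k) already shows how sub-Nyquist the sampling can be; the constant C is an assumption.

```python
# Sketch: measurement counts from the standard CS rule of thumb.
import numpy as np

def measurements_needed(n, k, C=2.0):
    return int(np.ceil(C * k * np.log(n / k)))

for k in (5, 20, 80):                       # differing scene sparsity
    m = measurements_needed(n=4096, k=k)
    print(f"k={k:3d}: m={m:4d}  ({m/4096:.1%} of Nyquist samples)")
```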
Analysis on soil compressibility changes of samples stabilized with lime
Directory of Open Access Journals (Sweden)
Elena-Andreea CALARASU
2016-12-01
Full Text Available In order to manage and control the stability of buildings located on difficult foundation soils, several techniques of soil stabilization have been developed and applied worldwide. Taking into account the major significance of soil compressibility for construction durability and safety, soil stabilization with a binder such as lime is considered one of the most widely used and traditional methods. The present paper aims to assess the effect of lime content on soil geotechnical parameters, especially compressibility, based on laboratory experimental tests for several soil categories in admixture with different lime dosages. The results of this study indicate a significant improvement of stabilized soil parameters, such as compressibility and plasticity, in comparison with natural samples. The effect of lime stabilization is related to an increase of soil structure stability through increased bearing capacity.
Directory of Open Access Journals (Sweden)
Szi-Wen Chen
Full Text Available In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals that are sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessment. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data accordingly in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from
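A minimal sketch of generic reweighted ℓ1 minimization (Candes-Wakin-Boyd style) with a plain ISTA inner solver, shown on a random Gaussian problem as a stand-in for the paper's IPFM/HRV-specific setup; the weights, lam, eps, and iteration counts are all assumptions.

```python
# Sketch: reweighted l1 minimization via iteratively reweighted ISTA.
import numpy as np

def ista(A, y, lam, w, iters=300):
    """Weighted-l1 least squares via iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0)
    return x

def reweighted_l1(A, y, lam=0.05, rounds=4, eps=1e-3):
    w = np.ones(A.shape[1])
    for _ in range(rounds):
        x = ista(A, y, lam, w)
        w = 1.0 / (np.abs(x) + eps)               # small coefficients penalized more
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 256)) / np.sqrt(60)
x0 = np.zeros(256)
x0[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = reweighted_l1(A, A @ x0)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```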
On music genre classification via compressive sampling
DEFF Research Database (Denmark)
Sturm, Bob L.
2013-01-01
Recent work \\cite{Chang2010} combines low-level acoustic features and random projection (referred to as ``compressed sensing'' in \\cite{Chang2010}) to create a music genre classification system showing an accuracy among the highest reported for a benchmark dataset. This not only contradicts previ...
The possibilities of compressed sensing based migration
Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali
2013-01-01
Linearized waveform inversion or least-squares migration helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration, based on L2-norm minimization of the misfit function, tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied the compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.
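A minimal sketch of Basis Pursuit itself (min ||x||_1 subject to Ax = y), posed as a linear program on a toy random operator standing in for the migration operator; sizes and the noiseless setting are assumptions.

```python
# Sketch: Basis Pursuit as a linear program via the split x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n, m, k = 120, 40, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)      # toy modeling operator
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                                        # noiseless "migrated data"

c = np.ones(2 * n)                                # minimize sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("max abs error:", np.max(np.abs(x_hat - x0)))
```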
Compressive sensing based ptychography image encryption
Rawat, Nitin
2015-09-01
A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded with the ptychography technique and further compressed by non-uniform sampling in the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the small number of measurements of the encrypted samples serve as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with existing techniques. The retrieved images reveal no information about the original images. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.
Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian
2017-06-01
A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end that performs feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed by nonuniformly subsampling the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of the information relevant for accurate HR estimation.
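A minimal software sketch of the back end's estimation principle: the Lomb-Scargle periodogram handles the nonuniform (compressively subsampled) time grid directly, so the HR peak can be found without reconstructing the full signal. The toy PPG model, window length, and ~10x subsampling are assumptions.

```python
# Sketch: average HR from nonuniformly subsampled PPG via Lomb-Scargle.
import numpy as np
from scipy.signal import lombscargle

fs, hr_hz = 128, 1.3                    # 1.3 Hz ~ 78 bpm (toy values)
t_full = np.arange(4 * fs) / fs         # a 4 s estimation window
ppg = np.sin(2*np.pi*hr_hz*t_full)      # toy PPG pulse component

rng = np.random.default_rng(7)
keep = np.sort(rng.choice(t_full.size, t_full.size // 10, replace=False))
t, y = t_full[keep], ppg[keep]          # ~10x compressive subsampling

freqs = np.linspace(0.5, 3.5, 600)                 # 30-210 bpm search band
pgram = lombscargle(t, y - y.mean(), 2*np.pi*freqs)
print("HR estimate: %.0f bpm" % (60 * freqs[np.argmax(pgram)]))
```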
Rate-distortion optimization for compressive video sampling
Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee
2014-05-01
The recently introduced compressed sensing (CS) framework enables low-complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed per-frame bit-budget to each video block based on block sparsity, and block-level RDO is performed by modelling the block reconstruction peak signal-to-noise ratio (PSNR) as a quadratic function of the quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
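A minimal worked version of the block-level step: with PSNR modeled as PSNR(b) = alpha*b^2 + beta*b + gamma (alpha < 0), setting d(PSNR)/db = 2*alpha*b + beta = 0 gives b* = -beta/(2*alpha). The coefficients below are placeholders for values that the paper fits from training blocks.

```python
# Sketch: optimal quantization bit-depth from a quadratic PSNR model.
import numpy as np

def optimal_bit_depth(alpha, beta, b_min=4, b_max=12):
    b_star = -beta / (2 * alpha)          # zero of the first derivative
    return int(np.clip(round(b_star), b_min, b_max))

print(optimal_bit_depth(alpha=-0.35, beta=5.6))   # -> 8 bits for this toy model
```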
Atomic effect algebras with compression bases
International Nuclear Information System (INIS)
Caragheorgheopol, Dan; Tkadlec, Josef
2011-01-01
Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.
Permeability and compression characteristics of municipal solid waste samples
Durmusoglu, Ertan; Sanchez, Itza M.; Corapcioglu, M. Yavuz
2006-08-01
Four series of laboratory tests were conducted to evaluate the permeability and compression characteristics of municipal solid waste (MSW) samples. Two series of tests were conducted using a conventional small-scale consolidometer, and the other two in a large-scale consolidometer specially constructed for this study. In each consolidometer, the MSW samples were tested at two different moisture contents, i.e., the original moisture content and field capacity, and the scale effect between the two consolidometers of different sizes was investigated. The tests were carried out on samples reconsolidated to pressures of 123, 246, and 369 kPa. Time-settlement data gathered from each load increment were used to plot strain versus log-time graphs, and the compression-test data were used to back-calculate primary and secondary compression indices. The consolidometers were later adapted for permeability experiments. The values of the indices and the coefficient of compressibility for the MSW samples tested fell within a relatively narrow range despite the different consolidometer sizes and moisture contents of the specimens. The values of the coefficient of permeability were within a band of two orders of magnitude (10⁻⁶-10⁻⁴ m/s). The data presented in this paper agree very well with data reported by previous researchers. It was concluded that the scale effect in the compression behavior was significant; however, there was usually no linear relationship between the results obtained in the tests.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that accurately reconstructs MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI; this algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance compared to other iterative algorithms based on log-thresholding, soft-thresholding and hard-thresholding techniques at different reduction factors.
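A minimal sketch of one common generalized-thresholding rule (Chartrand-style p-shrinkage, p < 1 non-convex) that can replace the soft-threshold step inside an ISTA-style loop. It is an illustration of the family of operators involved, not necessarily the exact rule used by the authors.

```python
# Sketch: p-shrinkage operator (reduces to soft thresholding at p = 1).
import numpy as np

def p_threshold(x, lam, p=0.5):
    """T_p(x) = sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0)."""
    mag = np.maximum(np.abs(x), 1e-12)            # avoid 0**(p-1)
    return np.sign(x) * np.maximum(np.abs(x) - lam**(2 - p) * mag**(p - 1), 0)

x = np.linspace(-2, 2, 9)
print(p_threshold(x, lam=0.5, p=0.5))
print(p_threshold(x, lam=0.5, p=1.0))             # ordinary soft threshold
```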
Blind compressed sensing image reconstruction based on alternating direction method
Liu, Qinan; Guo, Shuxu
2018-04-01
To reconstruct an original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of choosing a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. It ensures that the blind compressed sensing model has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed blind compressed sensing image reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
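A schematic sketch of the alternating structure (model Y ~= A D Z with measurement matrix A known, dictionary D and sparse codes Z unknown): a soft-threshold step updates Z, then a least-squares step updates D. All sizes, the thresholding step, and the pseudo-inverse dictionary update are simplifying assumptions, not the paper's exact solver.

```python
# Sketch: blind CS by alternating minimization over dictionary and codes.
import numpy as np

rng = np.random.default_rng(8)
n, m, d, T = 64, 32, 80, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)      # known measurement matrix
Y = A @ rng.standard_normal((n, T))               # compressed image patches

D = rng.standard_normal((n, d))
Z = np.zeros((d, T))
for _ in range(30):
    B = A @ D
    L = np.linalg.norm(B, 2) ** 2 + 1e-12
    G = Z - B.T @ (B @ Z - Y) / L                 # gradient step on Z
    Z = np.sign(G) * np.maximum(np.abs(G) - 0.05 / L, 0)   # sparsity step
    D = np.linalg.pinv(A) @ Y @ np.linalg.pinv(Z) # least-squares D update
D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12) # column normalization
```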
Compressed sensing along physically plausible sampling trajectories in MRI
International Nuclear Information System (INIS)
Chauffert, Nicolas
2015-01-01
Magnetic Resonance Imaging (MRI) is a non-invasive and non-ionizing imaging technique that provides images of body tissues, using the contrast sensitivity coming from the magnetic parameters (T_1, T_2 and proton density). Data are acquired in k-space, corresponding to spatial Fourier frequencies. Because of physical constraints, the displacement in k-space is subject to kinematic constraints: magnetic field gradients and their temporal derivatives are upper bounded. Hence, the scanning time increases with the image resolution. Decreasing scanning time is crucial to improve patient comfort, decrease exam costs, limit image distortions (e.g., created by patient movement), and improve temporal resolution in functional MRI. Reducing scanning time can be addressed by Compressed Sensing (CS) theory, a technique that guarantees the perfect recovery of an image from undersampled data in k-space by assuming that the image is sparse in a wavelet basis. Unfortunately, CS theory cannot be directly cast to the MRI setting. The reasons are: i) the acquisition (Fourier) and representation (wavelet) bases are coherent and ii) sampling schemes obtained using CS theorems are composed of isolated measurements and cannot realistically be implemented by magnetic field gradients: the sampling is usually performed along continuous or more regular curves. However, heuristic applications of CS in MRI have provided promising results. In this thesis, we aim to develop theoretical tools to apply CS to MRI and other modalities. On the one hand, we propose a variable-density sampling theory to address the first impediment: the more information a sample contains, the more likely it is to be drawn. On the other hand, we propose sampling schemes and design sampling trajectories that fulfill acquisition constraints while traversing k-space with the sampling density advocated by the theory. The second point is complex and is thus addressed step by step.
Physics Based Modeling of Compressible Turbulence
2016-11-07
AFRL-AFOSR-VA-TR-2016-0345. Final report (09/13/2016) by Parviz Moin, Leland Stanford Junior Univ., CA, on the AFOSR project FA9550-11-1-0111, "Physics based modeling of compressible turbulence"; the period of performance began June 15, 2011.
Composite Techniques Based Color Image Compression
Directory of Open Access Journals (Sweden)
Zainab Ibrahim Abood
2017-03-01
Full Text Available Compression of color images is now necessary for transmission and storage in databases. Since color gives a pleasing and natural appearance to any object, three composite-technique based color image compression schemes are implemented to achieve images with high compression, no loss in the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.
Methods for Sampling and Measurement of Compressed Air Contaminants
International Nuclear Information System (INIS)
Stroem, L.
1976-10-01
In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and with direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used successfully to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.
Development of a compressive sampling hyperspectral imager prototype
Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan
2013-10-01
Compressive sensing (CS) is a new technology that investigates the possibility of sampling signals at a lower rate than traditional sampling theory requires. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power consumption. The CS technology could be of primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the other hand, the main disadvantage of CS is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which introduces some SNR disadvantage. Moreover, this approach requires optical light modulators and 2-D detector arrays with high frame rates. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".
WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Zhouzhou Liu
2015-01-01
Full Text Available To address the low compression ratio and high communication energy consumption of wireless-network microseismic monitoring, this paper proposes a segmented compression algorithm that exploits the characteristics of microseismic signals and compressive sensing (CS) theory in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment, it improves the accuracy of signal reconstruction, while taking advantage of compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, for signals with sparsity above 40 compressed at a compression ratio above 0.4, the mean square error is less than 0.01, prolonging the network life by a factor of two.
Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases
DEFF Research Database (Denmark)
Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.
2017-01-01
The assumption that the signal is sparse is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead, there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal using sparsifying bases in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple estimates of a 2-D signal (image) from compressive measurements utilizing multiple sparsifying bases as well as the fact that images usually have a sparse gradient.
Compressive sampling by artificial neural networks for video
Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt
2011-06-01
We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing, but skipping over redundancy without altering the original image. We thus illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival, and we have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed it up further, our mixed-signal circuit design builds frame differencing into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from 2 photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weight by Hebbian outer products; "read" by inner product & pt. NL threshold), to localize and track the threat targets.
Compression-based inference on graph data
Bloem, P.; van den Bosch, A.; Heskes, T.; van Leeuwen, D.
2013-01-01
We investigate the use of compression-based learning on graph data. General purpose compressors operate on bitstrings or other sequential representations. A single graph can be represented sequentially in many ways, which may influence the performance of sequential compressors. Using Normalized
Artificial neural network does better spatiotemporal compressive sampling
Lee, Soo-Young; Hsu, Charles; Szu, Harold
2012-06-01
Spatiotemporal sparseness is generated naturally by the human visual system, modeled here as an artificial neural network with associative memory. Sparseness means nothing more and nothing less than information concentration, which is what compressive sensing achieves. To concentrate information, one uses spatial correlation, the spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). However, for higher-dimensional spatiotemporal information concentration, mathematics cannot be as flexible as a living human sensory system, obviously for survival reasons. The rest of the story is given in the paper.
HVS-based medical image compression
Energy Technology Data Exchange (ETDEWEB)
Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)
2005-07-01
Introduction: With the promotion and application of digital imaging technology in the medical domain, the number of medical images has grown rapidly, yet commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on established experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is considered and the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the inner correlation-removing transform and quantization of the image and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.
International Nuclear Information System (INIS)
Je, U.K.; Lee, M.S.; Cho, H.S.; Hong, D.K.; Park, Y.O.; Park, C.K.; Cho, H.M.; Choi, S.I.; Woo, T.H.
2015-01-01
In practical applications of three-dimensional (3D) tomographic imaging, image reconstruction from insufficient sampling data is often a challenge. In computed tomography (CT), for example, image reconstruction from sparse views and/or limited-angle (<360°) views would enable fast scanning with reduced imaging dose to the patient. In this study, we investigated and implemented a reconstruction algorithm based on compressed-sensing (CS) theory, which exploits the sparseness of the gradient image with substantially high accuracy, for potential application to low-dose, highly accurate dental cone-beam CT (CBCT). We performed systematic simulation work to investigate the image characteristics and also experimental work applying the algorithm to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in insufficient-sampling problems. We successfully reconstructed CBCT images of superior accuracy from insufficient sampling data and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of CS-based reconstruction from insufficient data indicate that the CS-based algorithm can be applied directly to current dental CBCT systems to reduce imaging dose and further improve image quality.
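A schematic sketch of gradient-image sparsity in action: reconstruction of a piecewise-constant phantom by gradient descent on a least-squares data term plus a smoothed total-variation penalty. A random matrix stands in for the CT projection operator, and the smoothing, step size, and weights are all assumptions, not the authors' algorithm.

```python
# Sketch: CS reconstruction exploiting a sparse gradient (smoothed TV).
import numpy as np

rng = np.random.default_rng(9)
n = 32
x_true = np.zeros((n, n))
x_true[8:24, 8:24] = 1.0                              # piecewise-constant phantom
A = rng.standard_normal((400, n*n)) / np.sqrt(400)    # toy "projection" operator
y = A @ x_true.ravel()

def tv_grad(img, eps=1e-2):
    """Gradient of the smoothed anisotropic TV penalty."""
    g = np.zeros_like(img)
    dh = np.diff(img, axis=1); wh = dh / np.sqrt(dh**2 + eps)
    dv = np.diff(img, axis=0); wv = dv / np.sqrt(dv**2 + eps)
    g[:, 1:] += wh; g[:, :-1] -= wh
    g[1:, :] += wv; g[:-1, :] -= wv
    return g

x, step, lam = np.zeros((n, n)), 0.05, 0.1
for _ in range(500):
    g = (A.T @ (A @ x.ravel() - y)).reshape(n, n) + lam * tv_grad(x)
    x -= step * g
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```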
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
International Nuclear Information System (INIS)
Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations
Duong, Hieu N.; Snasel, Vaclav
2016-01-01
We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, obtaining dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID: 27965708
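A minimal sketch of the sliding-window encoding idea, with toy dictionaries and codes standing in for the 12 GB dictionaries built from the corpus; the longest-match-first policy is an assumption about how the "best encoding stream" is chosen.

```python
# Sketch: sliding-window n-gram encoding (bigram..5-gram, longest match wins).
def encode(words, dicts):
    """dicts[k] maps a k-gram tuple to a 2-4 byte code."""
    out, i = [], 0
    while i < len(words):
        for k in range(5, 1, -1):                 # prefer the longest n-gram
            gram = tuple(words[i:i+k])
            if len(gram) == k and gram in dicts.get(k, {}):
                out.append(dicts[k][gram]); i += k
                break
        else:
            out.append(words[i].encode("utf-8")); i += 1   # literal fallback
    return out

dicts = {2: {("xin", "chao"): b"\x01\x02"},
         3: {("cam", "on", "ban"): b"\x01\x03\x04"}}
print(encode("xin chao cam on ban".split(), dicts))
```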
Compressed Sensing-Based Direct Conversion Receiver
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas; Larsen, Torben
2012-01-01
Due to the continuously increasing computational power of modern data receivers, it is possible to move more and more processing from the analog to the digital domain. This paper presents a compressed sensing approach to relaxing the analog filtering requirements prior to the ADCs in a direct...-converted radio signals. As shown in an experiment presented in the article, when the proposed method is used, it is possible to relax the requirements for the quadrature down-converter filters. A random sampling device and an additional digital signal processing module are the price to pay for these relaxed
Harmonic analysis in integrated energy system based on compressed sensing
International Nuclear Information System (INIS)
Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia
2016-01-01
Highlights: • We propose a harmonic/inter-harmonic analysis scheme based on compressed sensing theory. • The sparsity of harmonic signals in electrical power systems is proved. • A ratio formula for the sparsity of fundamental and harmonic components is presented. • A Spectral Projected Gradient with Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of integrated energy systems has enabled various distributed energy sources to access the system through different power electronic devices, making the harmonic environment more complex. Harmonic detection and analysis methods of low complexity and high precision are needed to improve power quality. To overcome the large data storage capacities and high compression complexity of sampling under the Nyquist framework, this research paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the Discrete Fourier Transform (DFT) basis is numerically calculated first, followed by a proof that the necessary conditions for compressed sensing are satisfied. Binary sparse measurement is then leveraged to reduce the storage space in the sampling unit. In the recovery process, a novel reconstruction algorithm, the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm, is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computational complexity, and has good
Optically compressed sensing by under sampling the polar Fourier plane
International Nuclear Information System (INIS)
Stern, A; Levi, O; Rivenson, Y
2010-01-01
In a previous work we presented a compressed imaging approach that uses a row of rotating sensors to indirectly capture polar strips of the Fourier transform of the image. Here we present further developments of this technique and new results. The advantages of our technique, compared to other optically compressed imaging techniques, are that its optical implementation is relatively easy, it does not require complicated calibrations, and it can be implemented in near-real time.
Subsampling-based compression and flow visualization
Energy Technology Data Exchange (ETDEWEB)
Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank
2016-01-19
As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective and, among selection techniques, our informed selection provides the most accurate results, followed by the multi-resolution selection, with the worst accuracy coming from random subsamples.
Application of content-based image compression to telepathology
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Predicting the compressibility behaviour of tire shred samples for landfill applications.
Warith, M A; Rao, Sudhakar M
2006-01-01
Tire shreds have been used as an alternative to crushed stones (gravel) as drainage media in landfill leachate collection systems. The highly compressible nature of tire shreds (25-47% axial strain on vertical stress applications of 20-700 kPa) may reduce the thickness of the tire shred drainage layer to less than 300 mm (minimum design requirement) during the life of the municipal solid waste landfill. There hence exists a need to predict axial strains of tire shred samples in response to vertical stress applications so that the initial thickness of the tire shred drainage layer can be corrected for compression. The present study performs one-dimensional compressibility tests on four tire shred samples and compares the results with stress/strain curves from other studies. The stress/strain curves are developed into charts for choosing the correct initial thickness of tire shred layers that maintain the minimum thickness of 300 mm throughout the life of the landfill. The charts are developed for a range of vertical stresses based on the design height of the municipal waste cell and the bulk unit weight of municipal waste. Experimental results also showed that despite experiencing large axial strains, the average permeability of the tire shred sample consistently remained two to three orders of magnitude higher than the design performance criterion of 0.01 cm/s for landfill drainage layers. Laboratory experiments, however, need to verify whether long-term chemical and bio-chemical reactions between landfill leachate and the tire shred layer will deteriorate their mechanical functions (hydraulic conductivity, compressibility, strength) beyond permissible limits for geotechnical applications.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
Wavelet-based audio embedding and audio/video compression
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
Accelerated Compressed Sensing Based CT Image Reconstruction.
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction
Directory of Open Access Journals (Sweden)
Chun-mei Li
2016-01-01
Full Text Available The spatial resolution of digital images is the critical factor that affects photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image and uses the K-SVD algorithm, which is based on the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving the l0-norm minimization problem, and the sparse coefficients and high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution and information content, achieving superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of both visual effect and quantitative indicators.
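The dictionary-pair mechanics can be sketched as follows, with scikit-learn's OMP standing in for the paper's l0 solver and random matrices standing in for a K-SVD-trained pseudodictionary; patch sizes and sparsity level are assumptions.

```python
# A minimal sketch of pseudodictionary-based patch superresolution.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_lo, n_hi, n_atoms = 36, 144, 256        # 6x6 LR patch -> 12x12 HR patch
D_lo = rng.standard_normal((n_lo, n_atoms))
D_lo /= np.linalg.norm(D_lo, axis=0)      # unit-norm atoms
D_hi = rng.standard_normal((n_hi, n_atoms))

def super_resolve_patch(p_lo, sparsity=5):
    # Sparse-code the LR patch over the LR pseudodictionary ...
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(D_lo, p_lo)
    # ... then reuse the same sparse coefficients with the HR dictionary.
    return D_hi @ omp.coef_

p_lo = rng.standard_normal(n_lo)          # stand-in for a real LR patch
print(super_resolve_patch(p_lo).shape)    # (144,): one reconstructed HR patch
```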
Online sparse representation for remote sensing compressed-sensed video sampling
Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li
2014-11-01
Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough in data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a frequency well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear and non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into key frames (K frames) and non-key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique according to the reconstructed key frames. An over-complete dictionary is trained by dictionary learning methods based on the SI; these learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames are reconstructed according to a sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), is compared with other online sparse representation algorithms. The simulation results show its advantages in reduced reconstruction time and robustness in reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.
The effects of aging on compressive strength of low-level radioactive waste form samples
International Nuclear Information System (INIS)
McConnell, J.W. Jr.; Neilson, R.M. Jr.
1996-06-01
The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program, funded by the US Nuclear Regulatory Commission (NRC), is (a) studying the degradation effects in organic ion-exchange resins caused by radiation, (b) examining the adequacy of test procedures recommended in the Branch Technical Position on Waste Form to meet the requirements of 10 CFR 61 using solidified ion-exchange resins, (c) obtaining performance information on solidified ion-exchange resins in a disposal environment, and (d) determining the condition of liners used to dispose of ion-exchange resins. Compressive tests were performed periodically over a 12-year period as part of the Technical Position testing. Results of that compressive testing are presented and discussed. During the study, both portland type I-II cement and Dow vinyl ester-styrene waste form samples were tested. This testing was designed to examine the effects of aging caused by self-irradiation on the compressive strength of the waste forms. Also presented is a brief summary of the results of waste form characterization, which was conducted in 1986 using tests recommended in the Technical Position on Waste Form. The aging test results are compared to the results of those earlier tests. 14 refs., 52 figs., 5 tabs
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases and must be supplemented by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
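The sample sizes quoted above follow from the standard zero-failure success-run relation n = ln(1 - C) / ln(R) at 95% confidence C, which can be checked directly; the formula and rounding convention are the textbook ones, not taken verbatim from the paper.

```python
# A worked check of the success-run sample sizes (299 / 59 / 29).
import math

def success_run_n(reliability, confidence=0.95):
    """Zero-failure sample size: n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for risk, r in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
    print(f"{risk}-risk (R = {r:.2f}): n = {success_run_n(r)}")
# high-risk: n = 299, medium-risk: n = 59, low-risk: n = 29
```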
Compressive sensing based algorithms for electronic defence
Mishra, Amit Kumar
2017-01-01
This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.
Compressive Sampling for Non-Imaging Remote Classification
2013-10-22
[Only figure-caption fragments survive of this report abstract: a spectro-polarization imager; a compressive coherence imager to resolve objects through turbulence; the relay lens for UV-CASSI, which focuses the aperture code onto the monochrome detector (Fig. 3), with a silicon UV-sensitive detector on the left.]
Image acquisition system using on sensor compressed sampling technique
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
Identification of Coupled Map Lattice Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Dong Xie
2016-01-01
Full Text Available A novel approach for the parameter identification of coupled map lattice (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated study topics and identify the weighted parameters using the relevant recovery algorithms in compressed sensing. Specifically, we first transform the parameter identification problem of CML into a sparse recovery problem for an underdetermined linear system. In fact, compressed sensing provides a feasible method to solve an underdetermined linear system if the sensing matrix satisfies suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of CML and also prove that it satisfies the RIP from a theoretical point of view. If the weighted vector of each element is sparse in the CML system, our proposed approach can recover all the weighted parameters using only about M samples, far fewer than the number of lattice elements N. Another significant advantage is that our approach remains effective even if the observed data are contaminated with certain types of noise. In the simulations, we mainly show the effects of the coupling parameter and of noise on the recovery rate.
ECG biometric identification: A compression based approach.
Bras, Susana; Pinho, Armando J
2015-08-01
Using the electrocardiogram signal (ECG) to identify and/or authenticate persons is a problem still lacking a satisfactory solution. Yet, the ECG possesses characteristics that are unique or difficult to obtain from other signals used in biometrics: (1) it requires contact and liveliness for acquisition; (2) it changes under stress, rendering it potentially useless if acquired under threat. Our main objective is to present an innovative and robust solution to the above-mentioned problem. To achieve this goal, we rely on information-theoretic data models for data compression and on similarity metrics related to the approximation of the Kolmogorov complexity. The proposed measure allows the comparison of two (or more) ECG segments without having to follow traditional approaches that require heartbeat segmentation (described as highly influenced by external or internal interferences). As a first approach, the method was able to cluster the data into three groups, identical record, same participant, and different participant, by stratification of the proposed measure, with values near 0 for the same participant and closer to 1 for different participants. A leave-one-out strategy was implemented in order to identify each participant in the database based on his/her ECG. A 1NN classifier was implemented, using the measure proposed in this work as the distance. The classifier was able to identify almost all participants correctly, with an accuracy of 99% on the database used.
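A minimal version of the compression-based similarity idea uses the normalized compression distance (NCD) with zlib as the compressor; the paper's specific information-theoretic data models may differ from this off-the-shelf proxy.

```python
# A minimal NCD sketch: near 0 for very similar segments, near 1 otherwise.
import zlib

def c(b: bytes) -> int:
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

seg_a = bytes(100 * [1, 2, 3, 2, 1])   # stand-ins for serialized ECG segments
seg_b = bytes(100 * [1, 2, 3, 2, 1])
seg_c = bytes(range(256)) * 2
print(ncd(seg_a, seg_b), ncd(seg_a, seg_c))  # small vs. larger distance
# 1-NN identification then assigns a query segment the label of the
# database record with the smallest distance.
```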
Wavelet-based compression of pathological images for telemedicine applications
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
EPC: A Provably Secure Permutation Based Compression Function
DEFF Research Database (Denmark)
Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid
2010-01-01
The security of permutation-based hash functions in the ideal permutation model has been studied for the case in which the input length of the compression function is larger than the input length of the permutation function. In this paper, we consider permutation-based compression functions that have input lengths sh...
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encrypted-data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cyclic shift operation controlled by the hyper-chaotic system. The cyclic shift operation changes the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
International Nuclear Information System (INIS)
Hampton, Jerrad; Doostan, Alireza
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Compressed sampling for boundary measurements in three-dimensional electrical impedance tomography
International Nuclear Information System (INIS)
Javaherian, Ashkan; Soleimani, Manuchehr
2013-01-01
Electrical impedance tomography (EIT) utilizes electrodes on a medium's surface to produce measured data from which the conductivity distribution inside the medium is estimated. For cases in which relocation of electrodes is impractical or no a priori assumptions can be made to optimize electrode placement, a large number of electrodes may be needed to cover the whole imaging volume; this may occur with dynamically varying conductivity distributions in 3D EIT. Three-dimensional EIT then requires inverting very large linear systems to calculate the conductivity field, which causes significant problems regarding storage space and reconstruction time; in addition, data acquisition for a large number of electrodes reduces the achievable frame rate, which is considered a major advantage of EIT imaging. This study proposes an approach to reduce the reconstruction complexity based on the well-known compressed sampling theory. By applying the so-called model-based CoSaMP algorithm to large-scale data collected by a 256-channel system, the size of the forward operator and the data acquisition time are reduced to those of a 32-channel system, while the accuracy of reconstruction is significantly improved. The results demonstrate the great capability of compressed sampling for overcoming the challenges arising in 3D EIT. (paper)
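The flavor of the recovery step can be sketched with plain CoSaMP (the study uses a model-based variant); problem sizes below are illustrative, not the 256-electrode EIT geometry.

```python
# A minimal plain-CoSaMP sketch for k-sparse recovery from y = A x.
import numpy as np

def cosamp(A, y, k, iters=30):
    m, n = A.shape
    x, r = np.zeros(n), y.copy()
    for _ in range(iters):
        omega = np.argsort(np.abs(A.T @ r))[-2 * k:]         # 2k best proxies
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-k:]                    # prune to k terms
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:
            break
    return x

rng = np.random.default_rng(1)
n, m, k = 400, 120, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(cosamp(A, A @ x_true, k) - x_true))     # ~0, noiseless
```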
Huffman-based code compression techniques for embedded processors
Bonny, Mohamed Talal
2010-09-01
The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format of a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and have applied each technique to two major embedded processor architectures.
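The Huffman stage at the heart of these techniques can be sketched as follows; the 16-bit instruction patterns and their frequencies are invented for illustration and are unrelated to the actual ARM/MIPS pattern statistics.

```python
# A minimal Huffman coder over (hypothetical) instruction-pattern counts.
import heapq
from collections import Counter

def huffman_code(freqs):
    """Map each symbol to a prefix-free bit string."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

patterns = Counter({"0x00A3": 40, "0x00B1": 25, "0x0FF2": 20,
                    "0x1C00": 10, "0x1C01": 5})
code = huffman_code(patterns)
bits = sum(patterns[s] * len(code[s]) for s in patterns)
print(code)
print(bits, "bits vs", sum(patterns.values()) * 16, "for fixed 16-bit patterns")
```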
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions
Energy Technology Data Exchange (ETDEWEB)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Vane, Zachary Phillips; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2017-07-01
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistently lower errors and faster computational times.
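The cross-validated choice of regularization constant can be sketched as below, with scikit-learn's LASSO standing in for the solvers named above; the Gaussian system and the candidate grid are assumptions.

```python
# A minimal sketch of picking the LASSO regularization constant by CV.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))      # underdetermined linear system
x = np.zeros(200)
x[:6] = rng.standard_normal(6)          # sparse ground truth
y = A @ x + 0.01 * rng.standard_normal(60)

best = (np.inf, None)
for lam in np.logspace(-4, 0, 20):      # candidate regularization constants
    errs = []
    for tr, te in KFold(n_splits=5).split(A):
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=20000)
        model.fit(A[tr], y[tr])
        errs.append(np.mean((A[te] @ model.coef_ - y[te]) ** 2))
    best = min(best, (np.mean(errs), lam))
print("selected regularization constant:", best[1])
```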
Compressive sensing based wireless sensor for structural health monitoring
Bao, Yuequan; Zou, Zilong; Li, Hui
2014-03-01
Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackle the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal that is generated by projecting the raw signal onto a random matrix is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to the theory of CS, the raw signal can be effectively reconstructed from the received incomplete transformed signal, given that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as the efficacy of CS-based data loss recovery for real wireless SHM systems.
Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei
2015-10-09
The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals, and challenges are often encountered as a result of the cumbersome data monitoring. A novel method focused on compressed vibration signals for detecting roller bearing faults is therefore developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued as attempts are made to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which are typically detected directly from the compressed data well before reconstruction is complete. Sampling and detection may then be performed simultaneously, without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments.
A new hyperspectral image compression paradigm based on fusion
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware, which has area, power, and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image; the results obtained corroborate the benefits of the proposed methodology.
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundancy information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to map them quickly against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression and contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
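The three-stream split can be sketched as follows, with plain LZMA applied to each stream; LW-FQZip's incremental, run-length-limited, and read-mapping models are replaced by this generic stage for illustration.

```python
# A minimal sketch of the metadata / reads / qualities split of FASTQ.
import lzma

def split_streams(fastq_text):
    """Separate metadata, reads, and quality strings of 4-line records."""
    meta, reads, quals = [], [], []
    lines = fastq_text.strip().split("\n")
    for i in range(0, len(lines), 4):
        meta.append(lines[i])        # @-header
        reads.append(lines[i + 1])   # bases
        quals.append(lines[i + 3])   # quality scores
    return ["\n".join(s).encode() for s in (meta, reads, quals)]

fastq = "@r1\nACGTACGTAC\n+\nIIIIIHHHGG\n@r2\nACGTACGTAA\n+\nIIIIIHHHGF\n"
packed = [lzma.compress(s) for s in split_streams(fastq)]
print([len(p) for p in packed], "compressed bytes per stream")
```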
Directory of Open Access Journals (Sweden)
Muhammad Bilal
2018-01-01
Full Text Available Transformed domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce the acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during MR scan results in strong blurring and ghosting artifacts in recovered MR images. To improve the quality of the recovered images, motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem using a gradient descent algorithm. The L1-norm based regularizer, used in the optimization problem, is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm, known as Adaptive Rood Pattern Search (ARPS), is exploited to estimate and correct respiratory motion among the recovered images. The framework is tested for free breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) with different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
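The hardware benefit of sparse binary measurement comes from each measurement touching only a few input samples. The sketch below builds a random d-sparse binary matrix as a stand-in; the deterministic QCAC construction is not reproduced here, and all sizes are illustrative.

```python
# A minimal sketch of measuring raw samples with a sparse binary matrix.
import numpy as np

def sparse_binary_matrix(m, n, d=3, seed=0):
    """Each column carries exactly d ones at random rows."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        Phi[rng.choice(m, size=d, replace=False), j] = 1
    return Phi

Phi = sparse_binary_matrix(m=64, n=256)
x = np.random.default_rng(1).integers(-128, 128, size=256)  # raw samples
y = Phi @ x        # each of the 64 measurements sums ~12 inputs on average
print(y[:5])
```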
Schwarz-based algorithms for compressible flows
Energy Technology Data Exchange (ETDEWEB)
Tidriri, M.D. [ICASE, Hampton, VA (United States)
1996-12-31
To compute steady compressible flows one often uses an implicit discretization approach which leads to a large sparse linear system that must be solved at each time step. In the derivation of this system one often uses a defect-correction procedure, in which the left-hand side of the system is discretized with a lower-order approximation than that used for the right-hand side. This is due to storage considerations and computational complexity, and also to the fact that the resulting lower-order matrix is better conditioned than the higher-order matrix. The resulting schemes are only moderately implicit. In the case of structured, body-fitted grids, the linear system can easily be solved using approximate factorization (AF), which is among the most widely used methods for such grids. However, for unstructured grids, such techniques are no longer valid, and the system is solved using direct or iterative techniques. Because of the prohibitive computational costs and large memory requirements for the solution of compressible flows, iterative methods are preferred. In these defect-correction methods, which are implemented in most CFD computer codes, the mismatch in the right- and left-hand-side operators, together with the explicit treatment of the boundary conditions, leads to a severely limited CFL number, which results in slow convergence to steady-state aerodynamic solutions. Many authors have tried to replace explicit boundary conditions with implicit ones. Although they clearly demonstrate that high CFL numbers are possible, the reduction in CPU time is not clear cut.
Wireless Sensor Networks Data Processing Summary Based on Compressive Sensing
Directory of Open Access Journals (Sweden)
Caiyun Huang
2014-07-01
Full Text Available As a newly proposed theory, compressive sensing (CS) is commonly used in the signal processing area. This paper investigates the applications of compressed sensing in wireless sensor networks (WSNs). First, the development and research status of compressed sensing technology and wireless sensor networks are described; then a detailed investigation of CS-based WSN research is conducted from the aspects of data fusion, signal acquisition, signal routing transmission, and signal reconstruction. At the end of the paper, we conclude our survey and point out possible future research directions.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
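The "lossy plus residual" principle that guarantees a specifiable maximum absolute error can be sketched generically: any lossy layer works as long as the uniformly quantized residual is kept. Truncated SVD below stands in for the paper's matrix/tensor decompositions, and the synthetic data and error bound are assumptions.

```python
# A minimal "lossy plus residual" sketch with a guaranteed error bound eps.
import numpy as np

def encode(X, rank=4, eps=0.5):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # lossy layer
    resid_q = np.round((X - lossy) / (2 * eps)).astype(np.int32)
    return (U[:, :rank], s[:rank], Vt[:rank]), resid_q, eps

def decode(factors, resid_q, eps):
    U, s, Vt = factors
    return (U * s) @ Vt + resid_q * 2 * eps               # add residual back

# 16 channels, 1000 samples of correlated random-walk stand-in "EEG":
X = np.cumsum(np.random.default_rng(0).standard_normal((16, 1000)), axis=1)
Xhat = decode(*encode(X, eps=0.5))
print(np.max(np.abs(X - Xhat)) <= 0.5)                    # True: bounded error
```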
Near-field acoustic holography using sparse regularization and compressive sampling principles.
Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi
2012-09-01
Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly less measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.
Compressive and Flexural Tests on Adobe Samples Reinforced with Wire Mesh
Jokhio, G. A.; Al-Tawil, Y. M. Y.; Syed Mohsin, S. M.; Gul, Y.; Ramli, N. I.
2018-03-01
Adobe is an economical, naturally available, and environmentally friendly construction material that offers excellent thermal and sound insulation as well as indoor air quality. It is important to understand and enhance the mechanical properties of this material, for which a high degree of variation is reported in the literature owing to a lack of research and standardization in this field. The present paper focuses first on understanding the mechanical behaviour of adobe subjected to compressive stresses as well as flexure, and then on enhancing both with the help of steel wire mesh as reinforcement. A total of 22 samples were tested, of which 12 cube samples were tested for compressive strength, whereas 10 beam samples were tested for modulus of rupture. Half of the samples in each category were control samples, i.e. without wire mesh reinforcement, whereas the remaining half were reinforced with a single layer of wire mesh per sample. It has been found that the compressive strength of adobe increases by about 43% after adding a single layer of wire mesh reinforcement. The flexural response of adobe also shows improvement with the addition of wire mesh reinforcement.
Harnessing Disorder in Compression Based Nanofabrication
Engel, Clifford John
The future of nanotechnologies depends on the successful development of versatile, low-cost techniques for patterning micro- and nanoarchitectures. While most approaches to nanofabrication have focused primarily on making periodic structures at ever smaller length scales, with an ultimate goal of massively scaling their production, I have focused on introducing control into relatively disordered nanofabrication systems. Well-ordered patterns are increasingly unnecessary for a growing range of applications, from anti-biofouling coatings to light trapping to omniphobic surfaces. The ability to manipulate disorder, at will and over multiple length scales, starting with the nanoscale, can open new prospects for textured substrates and unconventional applications. Taking advantage of features previously considered defects, I have developed nanofabrication techniques with potential for massive scalability and incorporation into a wide range of applications. This thesis first describes the manipulation of the non-Newtonian properties of liquid Ga and Ga alloys to confine the metal and metal alloys in gratings with sub-wavelength periodicities. Through a solid-to-liquid phase change, I was able to access the superior plasmonic properties of liquid Ga for the generation of surface plasmon polaritons (SPPs). The switching contrast between solid and liquid Ga confined in the nanogratings allowed reversible manipulation of SPP properties through heating and cooling around the relatively low melting temperature of Ga (29.8 °C). The remaining chapters focus on the development and characterization of an all-polymer wrinkle material system. Wrinkles, spontaneous disordered features that are produced in response to compressive force, are ideal for a growing number of applications where fine feature control is no longer the main motivation. However, the mechanical limitations of many wrinkle systems have restricted the potential applications of wrinkled surfaces.
Research on compressive sensing reconstruction algorithm based on total variation model
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing breaks through the restriction of the Nyquist sampling theorem and provides a strong theoretical basis for carrying out compression and sampling of image signals simultaneously. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution is also greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited for the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm, and typical reconstruction algorithms are compared and analyzed under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
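A minimal sketch of TV-regularized reconstruction follows, using a smoothed isotropic TV term minimized by plain gradient descent rather than the augmented-Lagrangian/alternating-direction solver described above; image size, measurement rate, and step sizes are assumed.

```python
# A minimal smoothed-TV CS reconstruction sketch via gradient descent.
import numpy as np

def tv_grad(X, eps=1e-3):
    """Gradient of the smoothed isotropic TV via divergence of unit normals."""
    dx = np.diff(X, axis=1, append=X[:, -1:])
    dy = np.diff(X, axis=0, append=X[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
    px, py = dx / mag, dy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

n = 32
rng = np.random.default_rng(0)
X_true = np.zeros((n, n))
X_true[8:24, 8:24] = 1.0                        # piecewise-constant target
A = rng.standard_normal((400, n * n)) / 20.0    # ~40% measurement rate
y = A @ X_true.ravel()

X, lam, step = np.zeros((n, n)), 0.05, 0.05
for _ in range(500):
    grad = (A.T @ (A @ X.ravel() - y)).reshape(n, n) + lam * tv_grad(X)
    X -= step * grad
print(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))  # small rel. error
```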
Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB
Directory of Open Access Journals (Sweden)
Wei Jin
2015-01-01
Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping more of the details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct neutron radiation images from random measurements. The experimental results demonstrate that the scheme not only improves the quality of the reconstructed image obviously but also retains more details of the original image.
Curvelet-based compressive sensing for InSAR raw data
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications collected by the airborne BRADAR (Brazilian SAR system operating in the X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework, a real-time capability is desirable, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; therefore, the original signal can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was made available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband, and an iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was adjusted to recover the curvelet coefficients and then the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed and, because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were computed in terms of sparsity analysis to provide efficient compression and recovery quality appropriate for InSAR applications.
Vibration-based monitoring and diagnostics using compressive sensing
Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.
2017-04-01
Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high volume data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with a large array of sensors. This paper shows that both volume of data and number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirement of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.
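A minimal sketch of the mechanism, assuming a synthetic three-tone signal: random time samples are taken far below the Nyquist count and the sparse spectrum is recovered with orthogonal matching pursuit (OMP), one of the standard CS solvers; the paper's exact solver is not specified here.

import numpy as np

N, M, K = 512, 64, 3                                  # length, kept samples, tones
rng = np.random.default_rng(3)
t = np.arange(N)
x = (np.sin(2 * np.pi * 13 * t / N) + 0.7 * np.sin(2 * np.pi * 40 * t / N)
     + 0.4 * np.sin(2 * np.pi * 77 * t / N))

keep = np.sort(rng.choice(N, M, replace=False))       # random sampling instants
A = np.exp(2j * np.pi * np.outer(keep, np.arange(N)) / N) / np.sqrt(N)  # partial IDFT
y = x[keep].astype(complex)

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c = np.zeros(A.shape[1], complex)
    c[support] = coef
    return c

c = omp(A, y, 2 * K)                                  # each real tone uses 2 DFT bins
print("recovered bins:", sorted(int(b) for b in np.flatnonzero(np.abs(c) > 1e-6)))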
Fractal Image Compression Based on High Entropy Values Technique
Directory of Open Access Journals (Sweden)
Douaa Younis Abbaas
2018-04-01
Full Text Available Many attempts have been made to improve the encoding stage of fractal image compression (FIC) because it is time-consuming. These attempts reduce the size of the search pool for range-domain matching, but most of them degrade the quality or lower the compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (a lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy values of the range and domain blocks; the results of the full search algorithm and of the proposed entropy-based algorithm are then compared to see which gives the best results, namely reduced encoding time with acceptable values of both compression quality parameters, CR (compression ratio) and PSNR (image quality). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality acceptably high.
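A hedged sketch of the entropy screening step (block size, bin count and tolerance are our illustrative choices): each range block is compared only against domain blocks whose Shannon entropy is close to its own, shrinking the full-search pool.

import numpy as np

def block_entropy(block, bins=32):
    hist, _ = np.histogram(block, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def candidate_domains(range_block, domain_blocks, tol=0.25):
    h = block_entropy(range_block)
    return [d for d in domain_blocks if abs(block_entropy(d) - h) <= tol]

rng = np.random.default_rng(4)
domains = [rng.integers(0, 256, (8, 8)) for _ in range(100)]
r = rng.integers(0, 256, (8, 8))
print(len(candidate_domains(r, domains)), "of", len(domains), "domains kept")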
Learning-based compressed sensing for infrared image super resolution
Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi
2016-05-01
This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches can be obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and the multiple sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.
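Constructing such a sensing matrix is cheap because a single random sequence determines every entry. The sketch below builds a partial Toeplitz sensing matrix with SciPy; the row subsampling used to set the measurement count is our illustrative detail, not necessarily the paper's construction.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)
n, m = 256, 64
seq = rng.standard_normal(2 * n - 1)            # one random sequence defines it all
T = toeplitz(seq[n - 1:], seq[n - 1::-1])       # n x n Toeplitz matrix
Phi = T[rng.choice(n, m, replace=False), :] / np.sqrt(m)   # keep m random rows

x = np.zeros(n); x[[10, 50, 200]] = [1.0, -0.5, 0.8]       # toy sparse signal
y = Phi @ x                                     # structured measurements
print(Phi.shape, y.shape)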
The possibilities of compressed-sensing-based Kirchhoff prestack migration
Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali
2014-01-01
An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results of various subsurface reflectivity models revealed that solutions computed using the CS based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.
File compression and encryption based on LLS and arithmetic coding
Yu, Changzhi; Li, Hengjian; Wang, Xiyu
2018-03-01
We propose a file compression model based on arithmetic coding. Firstly, the original symbols to be encoded are input to the encoder one by one; a set of chaotic sequences is produced using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. To achieve encryption, the upper and lower limits of all character probabilities are modified when encoding each symbol. Experimental results show that the proposed model achieves data encryption while attaining almost the same compression efficiency as plain arithmetic coding.
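A sketch of the encryption idea: a key-seeded chaotic sequence perturbs the cumulative probability bounds used by an arithmetic coder. The exact logistic-sine ("LLS") map of the paper is not specified in the abstract, so the combined logistic/sine iteration and the perturbation size below are our assumptions.

import numpy as np

def lls_stream(x0=0.37, r=3.99, n=1000):
    """Yield n chaotic values in (0,1) from an assumed combined logistic/sine map."""
    x, out = x0, []
    for _ in range(n):
        x = (r * x * (1.0 - x) + (4.0 - r) * np.sin(np.pi * x) / 4.0) % 1.0
        out.append(x)
    return out

def perturbed_intervals(probs, chaos, eps=1e-3):
    """Shift each symbol's (low, high) interval by a keyed chaotic offset."""
    cum = np.concatenate(([0.0], np.cumsum(probs)))
    jitter = eps * (np.asarray(chaos[:len(probs) - 1]) - 0.5)
    cum[1:-1] += jitter            # keep 0 and 1 fixed; eps small keeps monotonicity
    return list(zip(cum[:-1], cum[1:]))

probs = [0.5, 0.3, 0.2]            # toy symbol model
print(perturbed_intervals(probs, lls_stream(n=len(probs))))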
A design approach for systems based on magnetic pulse compression
International Nuclear Information System (INIS)
Praveen Kumar, D. Durga; Mitra, S.; Senthil, K.; Sharma, D. K.; Rajan, Rehim N.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.
2008-01-01
A design approach giving the optimum number of stages in a magnetic pulse compression circuit and the gain per stage is given. The limitation on the maximum gain per stage is discussed. The total system volume is minimized by considering the energy storage capacitor volume and the magnetic core volume at each stage. At the end of this paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed together with its experimental results.
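The stage-count trade-off can be illustrated with a one-line calculation: for a required total compression gain and a per-stage gain limit, the number of stages follows from a logarithm ratio. All numbers below are illustrative, not the paper's design values.

import math

t_in, t_out = 10e-6, 100e-9       # e.g., 10 us charging pulse to 100 ns output
g_max = 5.0                       # assumed per-stage compression gain limit
total_gain = t_in / t_out
n_stages = math.ceil(math.log(total_gain) / math.log(g_max))
gain_per_stage = total_gain ** (1.0 / n_stages)     # equal split across stages
print(f"total gain {total_gain:.0f}: {n_stages} stages of gain {gain_per_stage:.2f}")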
Image Compression Based On Wavelet, Polynomial and Quadtree
Directory of Open Access Journals (Sweden)
Bushra A. SULTAN
2011-01-01
Full Text Available In this paper a simple and fast image compression scheme is proposed; it is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is highly synthetic, in that the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. As the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcomes of the previous stage. The test results indicate that the proposed system can produce a promising compression performance while preserving the image quality level.
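A minimal sketch of the decompose-then-prune idea, assuming a Haar wavelet and a second-order polynomial surface (both our choices): the smooth approximation band is replaced by a least-squares polynomial fit, so only the small residual and six coefficients need coding.

import numpy as np
import pywt

img = np.add.outer(np.linspace(0, 200, 64), np.linspace(0, 55, 64))  # toy image
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")

h, w = LL.shape
yy, xx = np.mgrid[0:h, 0:w]
# design matrix for p(x, y) = a + bx + cy + dx^2 + exy + fy^2
G = np.stack([np.ones_like(xx), xx, yy, xx**2, xx * yy, yy**2], -1).reshape(-1, 6)
coef, *_ = np.linalg.lstsq(G, LL.ravel(), rcond=None)
residual = LL - (G @ coef).reshape(h, w)       # small values, cheap to code

print("LL range:", np.ptp(LL), "residual range:", np.ptp(residual))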
Disk-based compression of data from genome sequencing.
Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz
2015-05-01
High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions for this problem are disk based, where the better of these two, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
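The minimizer idea is compact enough to sketch directly: the minimizer of a read is its lexicographically smallest k-mer, so overlapping reads tend to share one and can be binned together before compression. The value of k and the binning layout below are illustrative.

from collections import defaultdict

def minimizer(read, k=8):
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def bin_reads(reads, k=8):
    bins = defaultdict(list)
    for r in reads:
        bins[minimizer(r, k)].append(r)   # similar reads collapse together
    return bins

reads = ["ACGTACGTGGTACC", "CGTACGTGGTACCA", "TTTTGGGGCCCCAA"]
for mzr, grp in bin_reads(reads).items():
    print(mzr, grp)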
Hyperspectral image compressing using wavelet-based method
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which demands transmission, processing, and storage resources for both airborne and spaceborne imaging, and the exploration of compression strategies has therefore received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands; then a wavelet-based algorithm is applied to each subspace; and finally the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.
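A hedged sketch of the band-selection step (the correlation threshold is our illustrative parameter): the band-to-band correlation matrix is computed and the cube is cut into contiguous subspaces wherever adjacent bands decorrelate.

import numpy as np

def band_subspaces(cube, thresh=0.95):
    """cube: (bands, rows, cols). Returns a list of [start, end) band groups."""
    bands = cube.reshape(cube.shape[0], -1)
    corr = np.corrcoef(bands)                     # bands x bands correlation
    groups, start = [], 0
    for b in range(1, cube.shape[0]):
        if corr[b - 1, b] < thresh:               # adjacent bands decorrelate
            groups.append((start, b)); start = b
    groups.append((start, cube.shape[0]))
    return groups

rng = np.random.default_rng(6)
base = rng.standard_normal((4, 16, 16))
cube = np.repeat(base, 5, axis=0) + 0.05 * rng.standard_normal((20, 16, 16))
print(band_subspaces(cube))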
Cellular characterization of compression induced-damage in live biological samples
Bo, Chiara; Balzer, Jens; Hahnel, Mark; Rankin, Sara M.; Brown, Katherine A.; Proud, William G.
2011-06-01
Understanding the dysfunctions that high-intensity compression waves induce in human tissues is critical for acute-phase treatments and requires the development of experimental models of traumatic damage in biological samples. In this study we have developed an experimental system to directly assess the impact of dynamic loading conditions on cellular function at the molecular level. Here we present a confinement chamber designed to subject live cell cultures in a liquid environment to compression waves in the range of tens of MPa using a split Hopkinson pressure bar system. Recording the loading history and collecting the samples post-impact without external contamination allow the definition of parameters such as pressure and duration of the stimulus that can be related to the cellular damage. The compression experiments are conducted on mesenchymal stem cells from BALB/c mice and the damage analyses are compared to two control groups. Changes in stem cell viability, phenotype and function are assessed by flow cytometry and in vitro bioassays at two different time points. Identifying the cellular and molecular mechanisms underlying the damage caused by dynamic loading in live biological samples could enable the development of new treatments for traumatic injuries.
Guo, Wei; Tse, Peter W.
2013-01-01
Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmission to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio while retaining the bearing defect features.
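Assuming the EEMD stage has already produced the IMFs (e.g., via an EMD library), the fault-related component can be picked by an impulsiveness measure. The kurtosis-based selection below is a common heuristic for bearing faults and stands in for the paper's optimization-guided procedure; the toy "IMFs" are synthetic stand-ins.

import numpy as np
from scipy.stats import kurtosis

def select_fault_imf(imfs):
    scores = [kurtosis(imf) for imf in imfs]       # excess kurtosis per IMF
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 2048)
smooth = np.sin(2 * np.pi * 30 * t)                              # low-kurtosis oscillation
impulsive = rng.standard_normal(t.size) * (rng.random(t.size) < 0.01)  # sparse bursts
idx, scores = select_fault_imf([smooth, impulsive])
print("fault-related IMF index:", idx, "kurtosis:", [round(s, 1) for s in scores])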
Mechanical properties of tannin-based rigid foams undergoing compression
Energy Technology Data Exchange (ETDEWEB)
Celzard, A., E-mail: Alain.Celzard@enstib.uhp-nancy.fr [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Zhao, W. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Pizzi, A. [ENSTIB-LERMAB, Nancy-University, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Fierro, V. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France)
2010-06-25
The mechanical properties of a new class of extremely lightweight tannin-based materials, namely organic foams and their carbonaceous counterparts are detailed. Scaling laws are shown to describe correctly the observed behaviour. Information about the mechanical characteristics of the elementary forces acting within these solids is derived. It is suggested that organic materials present a rather bending-dominated behaviour and are partly plastic. On the contrary, carbon foams obtained by pyrolysis of the former present a fracture-dominated behaviour and are purely brittle. These conclusions are supported by the differences in the exponent describing the change of Young's modulus as a function of relative density, while that describing compressive strength is unchanged. Features of the densification strain also support such conclusions. Carbon foams of very low density may absorb high energy when compressed, making them valuable materials for crash protection.
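The scaling-law analysis reduces to a straight-line fit in log-log coordinates. The sketch below extracts the exponent n in E proportional to (relative density)^n with a simple polynomial fit; the data points are made up for illustration.

import numpy as np

rho_rel = np.array([0.02, 0.04, 0.06, 0.10, 0.15])   # relative density
E = np.array([0.8, 3.4, 7.5, 21.0, 48.0])            # Young's modulus, MPa (toy data)

n, log_c = np.polyfit(np.log(rho_rel), np.log(E), 1)
print(f"E ~ {np.exp(log_c):.1f} * rho_rel^{n:.2f}  (MPa)")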
Facial Image Compression Based on Structured Codebooks in Overcomplete Domain
Directory of Open Access Journals (Sweden)
Vila-Forcén JE
2006-01-01
Full Text Available We advocate a facial image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.
Binaural model-based dynamic-range compression.
Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D
2018-01-26
Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. 30 and 12 hearing impaired (HI) listeners were aided individually with the algorithms for both experimental parts, respectively. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural-synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures, however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
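The effect of binaural linking on ILDs can be seen with a toy static compression curve (threshold and ratio below are illustrative, not the paper's model-based basilar-membrane rule): independent compressors shrink the ILD, while sharing one level estimate preserves it.

def gain_db(level_db, threshold=50.0, ratio=3.0):
    over = max(0.0, level_db - threshold)
    return -over * (1.0 - 1.0 / ratio)        # static compression curve

left_in, right_in = 70.0, 60.0                # 10 dB interaural level difference
independent = (left_in + gain_db(left_in), right_in + gain_db(right_in))
link = max(left_in, right_in)                 # shared (linked) level estimate
linked = (left_in + gain_db(link), right_in + gain_db(link))

print("ILD independent: %.1f dB" % (independent[0] - independent[1]))
print("ILD linked:      %.1f dB" % (linked[0] - linked[1]))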
Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.
Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua
2018-03-01
To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018.
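A hedged sketch of coil compression by SVD, the generic principle behind the virtual-coil step (sizes are illustrative, and the paper's shot-coil variant additionally folds shots into the coil dimension): the multi-coil data matrix is projected onto its dominant singular vectors.

import numpy as np

rng = np.random.default_rng(8)
ncoils, nsamples, nvirtual = 32, 4096, 8
mix = rng.standard_normal((ncoils, 4))                   # coils see 4 latent sources
src = rng.standard_normal((4, nsamples)) + 1j * rng.standard_normal((4, nsamples))
kspace = mix @ src + 0.01 * rng.standard_normal((ncoils, nsamples))

U, s, Vh = np.linalg.svd(kspace, full_matrices=False)
compressed = U[:, :nvirtual].conj().T @ kspace           # virtual-coil data
energy = (s[:nvirtual] ** 2).sum() / (s ** 2).sum()
print(compressed.shape, f"energy retained: {energy:.4f}")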
Grenier, E; Gehin, C; McAdams, E; Lun, B; Gobin, J-P; Uhl, J-F
2016-03-01
To study the microcirculatory effects of elastic compression stockings. In phlebology, laser Doppler techniques (flux or imaging) are widely used to investigate cutaneous microcirculation; they explore the microcirculation by detecting blood flow in skin capillaries, and flux and imaging instruments evaluate, non-invasively and in real time, the perfusion of cutaneous microvessels. Such tools, well known to the vascular community, are not really suitable for our protocol, which requires evaluation through the fabric of the elastic compression stockings. Therefore, we used another instrument, called the Hematron (developed by INSA-Lyon, Biomedical Sensor Group, Nanotechnologies Institute of Lyon), to investigate the relationship between skin microcirculatory activity and the external compression provided by elastic compression stockings. The Hematron measurement principle is based on monitoring the skin's thermal conductivity. This clinical study examined a group of 30 female subjects, aged 42 ± 2 years, who suffer from minor symptoms of chronic venous disease, classified as C0s and C1s (CEAP). The resulting figures show, subsequent to the pressure exerted by the elastic compression stockings, an improvement of microcirculatory activity in 83% of the subjects and a decreased effect in the remaining 17%. Among the total population, the global average increase of the skin's microcirculatory activity is evaluated at 7.63% ± 1.80%. The pressure exerted by elastic compression stockings thus has a direct influence on the skin's microcirculation within this female sample group with minor chronic venous insufficiency signs. Further investigations are required for a deeper understanding of the effects of elastic compression stockings on microcirculatory activity in venous disease at other stages of pathology.
Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.
Heikal, A A; Wachowicz, K; Fallone, B G
2016-10-01
To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI and of four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm where the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.
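A minimal sketch of PDF-driven undersampling design, assuming a Gaussian radial density (our choice): k-space cells are drawn without replacement according to the PDF, with a fixed sample budget so that repeated designs adhere to the prescribed sampling ratios, in the spirit of the paper's constrained variant.

import numpy as np

def vd_mask(n=32, accel=4, sigma=0.35, seed=0):
    rng = np.random.default_rng(seed)
    ky, kx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pdf = np.exp(-(kx**2 + ky**2) / (2 * sigma**2))
    pdf /= pdf.sum()
    m = n * n // accel                                # fixed sample budget
    idx = rng.choice(n * n, size=m, replace=False, p=pdf.ravel())
    mask = np.zeros(n * n, bool); mask[idx] = True
    return mask.reshape(n, n)

mask = vd_mask()
print("sampling ratio:", mask.mean())                 # ~0.25 for accel=4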
Compressive strength and hydrolytic stability of fly ash based geopolymers
Directory of Open Access Journals (Sweden)
Nikolić Irena
2013-01-01
Full Text Available The process of geopolymerization involves the reaction of solid aluminosilicate materials with a highly alkaline silicate solution, yielding an aluminosilicate inorganic polymer named geopolymer, which may be successfully applied in civil engineering as a replacement for cement. In this paper we have investigated the influence of the synthesis parameters, namely the solid-to-liquid ratio, the NaOH concentration and the Na2SiO3/NaOH ratio, on the mechanical properties and hydrolytic stability of fly ash based geopolymers in distilled water, sea water and simulated acid rain. The highest value of compressive strength was obtained using 10 mol dm-3 NaOH and a Na2SiO3/NaOH ratio of 1.5. Moreover, the results have shown that the mechanical properties of fly ash based geopolymers are correlated with their hydrolytic stability: factors that increase the compressive strength also increase the hydrolytic stability. The best hydrolytic stability of fly ash based geopolymers was observed in sea water, while the lowest stability was recorded in simulated acid rain. [Project of the Ministry of Science of the Republic of Serbia, no. 172054, and the Nanotechnology and Functional Materials Center, funded by the European FP7 project No. 245916]
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (above Nyquist) uniform sampling and storage of the entire target signal, followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.
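The compressibility premise is easy to check numerically. The sketch below (a synthetic two-mode record standing in for a bridge acceleration signal) asks what fraction of DCT coefficients carries 99% of the signal energy.

import numpy as np
from scipy.fft import dct

t = np.linspace(0, 10, 5000)
accel = np.sin(2 * np.pi * 2.3 * t) + 0.5 * np.sin(2 * np.pi * 7.1 * t)
accel += 0.05 * np.random.default_rng(9).standard_normal(t.size)

c = dct(accel, norm="ortho")
order = np.sort(c ** 2)[::-1]                        # energies, largest first
k = int(np.searchsorted(np.cumsum(order) / order.sum(), 0.99)) + 1
print(f"{k} of {c.size} DCT coefficients hold 99% of the energy")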
Astronomical Image Compression Techniques Based on ACC and KLT Coder
Directory of Open Access Journals (Sweden)
J. Schindler
2011-01-01
Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images, and their processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of the image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the used prediction coefficients. Finally, a comparison of three redundancy reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.
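A hedged sketch of the KLT stage (toy image and 8x8 blocks, our choices): a Karhunen-Loeve basis is estimated from the block covariance and each block is decorrelated in that basis, where uniform quantization would then operate.

import numpy as np

rng = np.random.default_rng(10)
base = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = 1000 * base + rng.normal(0, 5, (64, 64))         # smooth toy frame
blocks = img.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(-1, 64)
blocks = blocks - blocks.mean(axis=0)

cov = np.cov(blocks, rowvar=False)                     # 64x64 block covariance
eigval, eigvec = np.linalg.eigh(cov)                   # ascending eigenvalues
klt = eigvec[:, ::-1]                                  # principal axes first
coeffs = blocks @ klt                                  # decorrelated coefficients
var = coeffs.var(axis=0)
print("energy in first 8 KLT coordinates:", var[:8].sum() / var.sum())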
Study and analysis of wavelet based image compression techniques
African Journals Online (AJOL)
user
Discrete Wavelet Transform (DWT) is a recently developed compression ... serve emerging areas of mobile multimedia and internet communication, ..... In global thresholding the best trade-off between PSNR and compression is provided by.
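A minimal sketch of the global thresholding named in the snippet above, assuming PyWavelets, a db2 wavelet and an ad hoc threshold rule (all our choices): one threshold is applied to all coefficients, and the kept fraction serves as a crude compression figure against the PSNR.

import numpy as np
import pywt

rng = np.random.default_rng(11)
img = np.kron(rng.integers(0, 255, (8, 8)).astype(float), np.ones((8, 8)))

coeffs = pywt.wavedec2(img, "db2", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
thr = 0.05 * np.abs(arr).max()                       # one global threshold
arr_t = pywt.threshold(arr, thr, mode="hard")
kept = np.count_nonzero(arr_t) / arr.size
rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"), "db2")

mse = np.mean((rec[:64, :64] - img) ** 2)
print(f"kept {kept:.1%} of coefficients, PSNR = {10 * np.log10(255**2 / mse):.1f} dB")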
International Nuclear Information System (INIS)
Chouakri, S A; Djaafri, O; Taleb-Ahmed, A
2013-01-01
We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via a telecommunication channel. Basically, the proposed ECG compression algorithm is articulated on the use of the wavelet transform, leading to low/high frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and next a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end point, under the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
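The final entropy stage can be sketched with a minimal Huffman coder (our toy implementation over illustrative quantized residuals), reporting the average bits per sample that the algorithm optimizes.

import heapq
from collections import Counter

def huffman_code(symbols):
    # heap entries: [weight, tiebreaker, {symbol: code}]
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for table, bit in ((lo[2], "0"), (hi[2], "1")):
            for s in table:
                table[s] = bit + table[s]          # prepend branch bit
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
    return heap[0][2]

residuals = [0, 0, 1, 0, -1, 0, 2, 0, 0, 1, 0, 0, -1, 0, 0, 0]
code = huffman_code(residuals)
bits = sum(len(code[s]) for s in residuals)
print(code, f"-> {bits / len(residuals):.2f} bits/sample")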
ROI-based DICOM image compression for telemedicine
Indian Academy of Sciences (India)
ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... The Integer Wavelet Transform (IWT) is used to have lossless processing.
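A hedged sketch of an integer wavelet transform of the kind used for lossless processing: the reversible CDF 5/3 lifting steps on a 1-D even-length signal with simple boundary mirroring. The specific IWT used in the paper is not detailed in the snippet.

import numpy as np

def cdf53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])            # mirror last even sample
    d = odd - (even + right) // 2                    # predict step (details)
    d_left = np.insert(d[:-1], 0, d[0])              # mirror first detail
    s = even + (d_left + d + 2) // 4                 # update step (smooth)
    return s, d

def cdf53_inverse(s, d):
    d_left = np.insert(d[:-1], 0, d[0])
    even = s - (d_left + d + 2) // 4
    right = np.append(even[1:], even[-1])
    odd = d + (even + right) // 2
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.array([10, 12, 11, 40, 42, 41, 39, 38])
s, d = cdf53_forward(sig)
assert np.array_equal(cdf53_inverse(s, d), sig)      # perfectly lossless
print("smooth:", s, "detail:", d)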
Geothermally Coupled Well-Based Compressed Air Energy Storage
Energy Technology Data Exchange (ETDEWEB)
Davidson, C L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bearden, Mark D [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horner, Jacob A [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Appriou, Delphine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGrail, B Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-12-01
This project assessed the technical and economic feasibility of implementing geothermally coupled well-based CAES for grid-scale energy storage. Based on an evaluation of design specifications for a range of casing grades common in U.S. oil and gas fields, a 5-MW CAES project could be supported by twenty to twenty-five 5,000-foot, 7-inch wells using lower-grade casing, and as few as eight such wells for higher-end casing grades. Using this information, along with data on geothermal resources, well density, and potential future markets for energy storage systems, The Geysers geothermal field was selected to parameterize a case study to evaluate the potential match between the proven geothermal resource present at The Geysers and the field’s existing well infrastructure. Based on calculated wellbore compressed air mass, the study shows that a single average geothermal production well could provide enough geothermal energy to support a 15.4-MW (gross) power generation facility using 34 to 35 geothermal wells repurposed for compressed air storage, resulting in a simplified levelized cost of electricity (sLCOE) estimated at 11.2 ¢/kWh (Table S.1). Accounting for the power loss to the geothermal power project associated with diverting geothermal resources for air heating results in a net 2-MW decrease in generation capacity, increasing the CAES project’s sLCOE by 1.8 ¢/kWh.
Lossless Image Compression Based on Multiple-Tables Arithmetic Coding
Directory of Open Access Journals (Sweden)
Rung-Ching Chen
2009-01-01
Full Text Available This paper is intended to present a lossless image compression method based on the multiple-tables arithmetic coding (MTAC) method to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, since the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, as the gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
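The MED predictor at the heart of the first stage is small enough to sketch directly: as in LOCO-I/JPEG-LS, each pixel is predicted from its left (a), top (b) and top-left (c) neighbours, and only the residual goes on to entropy coding. The toy image below is illustrative.

import numpy as np

def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)          # edge above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c              # smooth region: planar prediction

def residual_image(img):
    img = img.astype(np.int32)
    res = img.copy()
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            res[i, j] = img[i, j] - med_predict(img[i, j - 1], img[i - 1, j], img[i - 1, j - 1])
    return res

rng = np.random.default_rng(12)
img = np.cumsum(rng.integers(0, 3, (16, 16)), axis=1).astype(np.uint8)
print("residual std vs raw std:", residual_image(img)[1:, 1:].std(), img.std())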
Compression-based aggregation model for medical web services.
Al-Shammary, Dhiah; Khalil, Ibrahim
2010-01-01
Many organizations such as hospitals have adopted Cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Generally, Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead; at the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, results in the same problem. In this paper, two XML-aware aggregation techniques that exploit compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.
Real time network traffic monitoring for wireless local area networks based on compressed sensing
Balouchestani, Mohammadreza
2017-05-01
A wireless local area network (WLAN) is an important type of wireless network that connects different wireless nodes in a local area. WLANs suffer from significant problems such as network load balancing, high energy consumption, and heavy sampling load. This paper presents a new network traffic approach based on compressed sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good record for WLANs, and increases the Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good basis for establishing high-quality local area networks. This architecture enables continuous data acquisition and compression of WLAN signals that is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate the compressed version of the input signal. At the receiver side of the wireless node, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), allowing the Bit Error Rate (BER) at each wireless node to be reduced by 15%.
A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients
Directory of Open Access Journals (Sweden)
Lei Yu
2016-02-01
Full Text Available Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. This also indicates that the proposed system can not only reduce the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information.
Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding
Directory of Open Access Journals (Sweden)
Yongjian Nian
2013-01-01
Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
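The "correct decoding" restriction can be made concrete with a coset sketch (our toy construction, not the paper's multilinear-regression pipeline): the encoder transmits each quantized value modulo a coset size that exceeds twice the worst side-information error, and the decoder snaps its prediction to the nearest coset member.

import numpy as np

rng = np.random.default_rng(14)
band = rng.integers(0, 256, 1000)               # current band (quantized values)
side = band + rng.integers(-3, 4, 1000)         # decoder-side prediction

M = 8                                           # coset size > 2 * max |error|
syndrome = band % M                             # all the encoder transmits
k = np.round((side - syndrome) / M)             # nearest coset member to prediction
decoded = (syndrome + M * k).astype(int)
print("decoding errors:", int(np.count_nonzero(decoded != band)),
      "bits/sample sent:", np.log2(M))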
Adaptive learning compressive tracking based on Markov location prediction
Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan
2017-03-01
Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision which has theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, in the presence of object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov-based object location prediction to obtain the initial position of the object; CT is then used to locate the object accurately, and a classifier-parameter adaptive updating strategy is given based on the confidence map. At the same time, scale features are extracted according to the object location, which makes it possible to deal with object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
Control volume based modelling of compressible flow in reciprocating machines
DEFF Research Database (Denmark)
Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik
2004-01-01
An approach to modelling unsteady compressible flow that is primarily one-dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons, but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady-state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures.
Compression-Based Tools for Navigation with an Image Database
Directory of Open Access Journals (Sweden)
Giovanni Motta
2012-01-01
Full Text Available We present tools that can be used within a larger system referred to as a passive assistant. The system receives information from a mobile device, as well as information from an image database such as Google Street View, and employs image processing to provide useful information about a local urban environment to a user who is visually impaired. The first stage acquires and computes accurate location information, the second stage performs texture and color analysis of a scene, and the third stage provides specific object recognition and navigation information. These second and third stages rely on compression-based tools (dimensionality reduction, vector quantization, and coding) that are enhanced by knowledge of the (approximate) location of objects.
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design.
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node
Directory of Open Access Journals (Sweden)
Kan Luo
2018-01-01
Full Text Available Energy efficiency is still the obstacle for long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing (CS) based single-spot Bluetooth ECG node is proposed to deal with the challenge in wireless ECG application. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using the ECG signals sampled by the proposed node under the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression can reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
Directory of Open Access Journals (Sweden)
Sheng Shen
2018-04-01
Full Text Available The accuracy of underwater acoustic target recognition from limited ship radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by the deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship radiated noise. The core ideas of the algorithm are: (1) competitive learning: by integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units share the weights in each predefined group; (2) network pruning: pruning based on mutual information is deployed to remove redundant parameters and further compress the network. Experiments based on real ship radiated noise show that the network can increase recognition accuracy with fewer informative features. The compressed deep competitive network achieves a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than state-of-the-art signal processing feature extraction methods.
Directory of Open Access Journals (Sweden)
Christian Schou Oxvig
2014-10-01
Full Text Available Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate the researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
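For contrast with the paper's variable-length unique bit codes, the sketch below shows the plain 2-bits-per-base baseline that any DNA bit-packing starts from; DNABIT Compress improves on this by assigning shorter codes to repeated fragments. The codec handles only A/C/G/T.

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]     # append 2 bits per base
    return bits, len(seq)

def unpack(bits, n):
    out = []
    for i in range(n):
        out.append(BASE[(bits >> (2 * (n - 1 - i))) & 0b11])
    return "".join(out)

packed, n = pack("ACGTACGGT")
assert unpack(packed, n) == "ACGTACGGT"
print(f"{n} bases -> {2 * n} bits (2.00 bits/base)")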
Huffman-based code compression techniques for embedded processors
Bonny, Mohamed Talal; Henkel, Jö rg
2010-01-01
% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures, namely ARM and MIPS. © 2010 ACM.
Statistics-Based Compression of Global Wind Fields
Jeong, Jaehong; Castruccio, Stefano; Crippa, Paola; Genton, Marc G.
2017-01-01
Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.
Energy Technology Data Exchange (ETDEWEB)
Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)
2010-07-21
This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.
Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen
2015-10-21
Based on polyaniline/single-walled carbon nanotube/sponge electrodes, highly compressible all-solid-state supercapacitors are prepared in an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Review On Segmentation Based Image Compression Techniques
Directory of Open Access Journals (Sweden)
S.Thayammal
2013-11-01
Full Text Available The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, as it reduces the requirements on storage media and transmission bandwidth. Besides improving performance, compression techniques must also converge quickly in order to be applied to real-time applications. Various algorithms have been proposed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and their uses are highlighted, with a view to developing novel techniques that address the challenging task of image storage and transmission in multimedia applications.
Compression-based geometric pattern discovery in music
DEFF Research Database (Denmark)
Meredith, David
2014-01-01
The purpose of musical analysis is to find the best possible explanations for musical objects, where such objects may range from single chords or phrases to entire musical corpora. Kolmogorov complexity theory suggests that the best possible explanation for an object is represented by the shortest possible description of it. Two compression algorithms, COSIATEC and SIATECCompress, are described that take point-set representations of musical objects as input and generate compressed encodings of these point sets as output. The algorithms were evaluated on a task in which 360 folk songs were classified...
Multiple Description Coding with Feedback Based Network Compression
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Østergaard, Jan; Popovski, Petar
2010-01-01
and an intermediate node, respectively. A trade-off exists between reducing the delay of the feedback by adapting in the vicinity of the receiver and increasing the gain from compression by adapting close to the source. The analysis shows that adaptation in the network provides a better trade-off than adaptation...
Time-lens based optical packet pulse compression and retiming
DEFF Research Database (Denmark)
Laguardia Areal, Janaina; Hu, Hao; Palushani, Evarist
2010-01-01
recovery, resulting in a potentially very efficient solution. The scheme uses a time-lens, implemented through a sinusoidally driven optical phase modulation, combined with a linear dispersion element. As time-lenses are also used for pulse compression, we design the circuit also to perform pulse...
Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing
2013-04-01
Still Image Compression Algorithm Based on Directional Filter Banks
Chunling Yang; Duanwu Cao; Li Ma
2010-01-01
Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. However, the ringing artifact, caused by coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by investigating the relationship between directional decomposition and the ringing artifact, an improved decomposition ...
Directory of Open Access Journals (Sweden)
Aihua Liu
2017-01-01
Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are obviously not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.
A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map
International Nuclear Information System (INIS)
Xiao Di; Cai Hong-Kun; Zheng Hong-Ying
2015-01-01
In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
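A minimal sketch of the Arnold map scrambling step mentioned above, assuming square coefficient arrays; the CS measurement and watermark embedding stages of the paper are omitted.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square array with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square input"
    out = img.copy()
    for _ in range(iterations):
        scr = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out

print(arnold_scramble(np.arange(16).reshape(4, 4), iterations=2))
```

Because the map's matrix has determinant 1, the scrambling is invertible, so decryption can undo it exactly.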
Research of Block-Based Motion Estimation Methods for Video Compression
Directory of Open Access Journals (Sweden)
Tropchenko Andrey
2016-08-01
Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, named Full Search, to fast adaptive algorithms like Hierarchical Search; a sketch of the former follows below. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
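As a concrete reference point, Full Search can be written as exhaustive SAD minimization over a search window; the block size and search radius below are illustrative, not taken from the review.

```python
import numpy as np

def full_search(ref, cur, block=8, radius=4):
    """Exhaustive block-matching motion estimation with SAD cost."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = ref[y:y + block, x:x + block].astype(int)
                        sad = np.abs(target - cand).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32))
cur = np.roll(prev, (0, 2), axis=(0, 1))   # scene shifted 2 px to the right
print(full_search(prev, cur)[(8, 8)])      # -> (0, -2): best match found in `prev`
```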
Lossless real-time data compression based on LZO for steady-state Tokamak DAS
International Nuclear Information System (INIS)
Pujara, H.D.; Sharma, Manika
2008-01-01
The evolution of data acquisition systems (DAS) for steady-state operation of Tokamaks has been technology driven. A steady-state Tokamak demands a data acquisition system capable of acquiring data losslessly from diagnostics. The need for lossless continuous acquisition has a significant effect on data storage, which takes up a greater portion of any data acquisition system. The steady-state nature of operation also demands online viewing of data, which loads the LAN significantly. There is therefore a strong demand to control the growth of both these portions by employing a real-time compression technique. This paper presents a data acquisition system employing a real-time data compression technique based on LZO, a data compression library suitable for compression and decompression in real time. The algorithm used favours speed over compression ratio. The system is rigged up on the PXI bus, and a dual-buffer-mode architecture is implemented for lossless acquisition. The acquired buffer is compressed in real time and streamed to the network and to hard disk for storage. The observed performance on various data types (binary, integer, float, and different types of waveforms), as well as the compression timing overheads, is presented in the paper. Various software modules for real-time acquisition and online viewing of data on network nodes have been developed in LabWindows/CVI based on a client-server architecture.
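A minimal LZO round-trip, assuming the python-lzo bindings are installed; this stands in for, and is not, the paper's LabWindows/CVI implementation.

```python
import lzo  # python-lzo bindings; an assumption, not part of the paper's code

raw = bytes(bytearray(range(256)) * 64)   # stand-in for an acquired buffer
packed = lzo.compress(raw)                # LZO favours speed over ratio
assert lzo.decompress(packed) == raw      # lossless round-trip
print(len(raw), "->", len(packed), "bytes")
```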
A hybrid video compression based on zerotree wavelet structure
International Nuclear Information System (INIS)
Kilic, Ilker; Yilmaz, Reyat
2009-01-01
A video compression algorithm comparable to the standard techniques at low bit rates is presented in this paper. Overlapping block motion compensation (OBMC) is combined with the discrete wavelet transform, followed by Lloyd-Max quantization and a zerotree wavelet (ZTW) structure. The novel feature of this coding scheme is the combination of hierarchical finite state vector quantization (HFSVQ) with the ZTW to encode the quantized wavelet coefficients. It is seen that the proposed video encoder (ZTW-HFSVQ) performs better than MPEG-4 and Zerotree Entropy Coding (ZTE). (author)
Effect of compression on reactivity of plutonium based materials
International Nuclear Information System (INIS)
Marshall, A.C.; Marotta, C.R.
1977-01-01
An analysis was made to determine whether criticality could occur due to compression of bare spheres of Pu and PuO2 (solid or powdered) during a high-speed impact accident involving an air-transportable plutonium package. It was calculated that an initial k_eff of less than 0.70 would not result in a critical condition (k_eff less than 0.97); thus, a conservative maximum permissible design value of k_eff for an air-transportable Pu package is 0.70
Optical scanning holography based on compressive sensing using a digital micro-mirror device
A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; Xin, Zhou
2017-02-01
Optical scanning holography (OSH) is a distinct digital holography technique, which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, these 2D scanning processes take the form of mechanical scanning, and the quality of the recorded hologram may be affected due to the limitations of mechanical scanning accuracy and the unavoidable vibration of the stepper motor's start-stop. In this paper, we propose a new framework, which replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH can reconstruct the hologram of an object through the use of compressive sensing theory, and then restore the image of the object itself. Numerical simulation results confirm that this new type of OSH can obtain a reconstructed image with favorable visual quality even at a low sample rate.
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
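The ROI-splitting idea can be sketched with Pillow's PNG (lossless) and JPEG (lossy) writers, as below; the file layout and the crude intensity threshold are illustrative assumptions, not the paper's segmentation method.

```python
import numpy as np
from PIL import Image

def hybrid_compress(arr, mask, out_prefix="slide"):
    """Store tissue pixels losslessly (PNG) and the background lossily (JPEG).

    arr  -- uint8 RGB image array; mask -- boolean tissue mask.
    File names and masking rule are illustrative only.
    """
    tissue = arr.copy()
    tissue[~mask] = 0                                 # keep only the region of interest
    Image.fromarray(tissue).save(out_prefix + "_roi.png")              # lossless
    background = arr.copy()
    background[mask] = 0
    Image.fromarray(background).save(out_prefix + "_bg.jpg", quality=30)  # lossy

arr = (np.random.default_rng(0).random((64, 64, 3)) * 255).astype(np.uint8)
mask = arr.mean(axis=2) < 200   # crude "tissue is darker" threshold (assumption)
hybrid_compress(arr, mask)
```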
Biometric and Emotion Identification: An ECG Compression Based Method.
Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J
2018-01-01
We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart, and it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.
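A toy version of the compression-based matching in step (3), using zlib and the normalized compression distance as a stand-in for the paper's conditional-compression data models.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when two records share structure."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def classify_1nn(query: bytes, database: dict) -> str:
    """1-NN over compression distance; database maps label -> symbolic ECG record."""
    return min(database, key=lambda label: ncd(query, database[label]))

db = {"person_A": b"aabbaabbaabb" * 50, "person_B": b"abcabcabcabc" * 50}
print(classify_1nn(b"aabbaabb" * 40, db))   # expected: person_A
```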
Optical identity authentication technique based on compressive ghost imaging with QR code
Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang
2018-04-01
With the rapid development of computer technology, information security has attracted more and more attention. It is not only related to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. The scheme can be authenticated with a small number of samples. Therefore, the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.
Micro-Doppler Ambiguity Resolution Based on Short-Time Compressed Sensing
Directory of Open Access Journals (Sweden)
Jing-bo Zhuang
2015-01-01
Full Text Available When using a long-range radar (LRR) to track a target with micromotion, the micro-Doppler embodied in the radar echoes may suffer from an ambiguity problem. In this paper, we propose a novel method based on compressed sensing (CS) to solve micro-Doppler ambiguity. According to the RIP requirement, a sparse probing pulse train with random transmitting times is designed. After matched filtering, the slow-time echo signals of the micromotion target can be viewed as randomly sparse sampling of the Doppler spectrum. Several successive pulses are selected to form a short-time window, and the CS sensing matrix can be built according to the time stamps of these pulses. Then, by performing Orthogonal Matching Pursuit (OMP), the unambiguous micro-Doppler spectrum can be obtained. The proposed algorithm is verified using echo signals generated according to the theoretical model and signals with micro-Doppler signatures produced using the commercial electromagnetic simulation software FEKO.
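The OMP recovery step can be sketched in a few lines; the sensing matrix below is random Gaussian for illustration, not the pulse-timestamp matrix the paper builds.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of A to explain y."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))    # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef             # re-fit on current support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(128); x_true[[5, 60, 99]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat))   # should recover indices 5, 60, 99
```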
Multispectral image compression based on DSC combined with CCSDS-IDC.
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in a space mission. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than the traditional compression approaches.
Directory of Open Access Journals (Sweden)
Chen Chun
2008-03-01
Full Text Available Abstract Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the simultaneous compression of RNA sequences and their secondary structures. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool
Energy Technology Data Exchange (ETDEWEB)
Zhu, Ruihua; Liu, Qing [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Li, Jinfeng, E-mail: lijinfeng@csu.edu.cn [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Xiang, Sheng [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Chen, Yonglai; Zhang, Xuhu [Aerospace Research Institute of Materials and Processing Technology, Beijing 100076 (China)
2015-11-25
The dynamic restoration mechanism of 2050 Al–Li alloy and its constitutive model were investigated by means of hot compression simulation at deformation temperatures ranging from 340 to 500 °C and strain rates of 0.001–10 s⁻¹. The microstructures of the compressed samples were observed using optical microscopy and transmission electron microscopy. On the basis of dislocation density theory and Avrami kinetics, a physically based constitutive model was established. The results show that dynamic recovery (DRV) and dynamic recrystallization (DRX) are co-responsible for the dynamic restoration during the hot compression process under all compression conditions. The dynamic precipitation (DPN) of T1 and σ phases was observed after deformation at 340 °C. This is the first experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys. Particle-stimulated nucleation of DRX (PSN-DRX) due to large Al–Cu–Mn particles was also observed. The error analysis suggests that the established constitutive model can adequately describe the flow stress dependence on strain rate, temperature and strain during the hot deformation process. - Highlights: • The experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys was found. • PSN-DRX due to large Al–Cu–Mn particles was observed. • A novel method was proposed to calculate the stress multiplier α.
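For reference, Avrami-type DRX kinetics are commonly written in the following standard form (a textbook parameterization assumed here; the paper's fitted constants are not reproduced):

```latex
X_{\mathrm{DRX}} = 1 - \exp\!\left[-k\left(\frac{\varepsilon - \varepsilon_c}{\varepsilon_{0.5}}\right)^{n}\right],
\qquad \varepsilon > \varepsilon_c
```

where X_DRX is the dynamically recrystallized volume fraction, ε_c the critical strain for the onset of DRX, ε_0.5 the strain at 50% recrystallization, and k and n fitting constants.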
Underwater Acoustic Matched Field Imaging Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Huichen Yan
2015-10-01
Full Text Available Matched field processing (MFP) is an effective method for underwater target imaging and localization, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence-optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the highly coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.
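The coherence parameter referenced above is the largest normalized inner product between distinct dictionary atoms; a quick way to compute it, as a sketch:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)   # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
print(mutual_coherence(rng.standard_normal((64, 256))))  # lower is better for CS
```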
REMOTELY SENSED IMAGE COMPRESSION BASED ON WAVELET TRANSFORM
Directory of Open Access Journals (Sweden)
Heung K. Lee
1996-06-01
Full Text Available In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on a KITSAT-1 image as well as on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by the peak signal-to-noise ratio (PSNR) and classification capability.
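The PSNR figure of merit is standard; for completeness, a short implementation assuming an 8-bit peak value:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit imagery."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

print(psnr(np.full((8, 8), 100), np.full((8, 8), 102)))  # ~42 dB for a 2-level error
```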
Edge-Based Image Compression with Homogeneous Diffusion
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
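The decoding step, inpainting with the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the stored edge values held fixed. A minimal Jacobi-iteration sketch follows (periodic borders via np.roll for brevity; this is not the authors' solver):

```python
import numpy as np

def diffusion_inpaint(known, mask, iters=2000):
    """Recover missing pixels as the steady state of homogeneous diffusion:
    Jacobi iterations toward the Laplace equation, with the pixels selected
    by `mask` (the stored edge data) kept fixed."""
    u = np.where(mask, known, known[mask].mean()).astype(float)
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, known, avg)   # re-impose the known edge values
    return u

# Toy example: keep two "edge" rows and let diffusion fill the rest.
img = np.zeros((32, 32)); img[8] = 50.0; img[24] = 200.0
mask = np.zeros_like(img, dtype=bool); mask[8] = mask[24] = True
print(diffusion_inpaint(img, mask)[16, 0])   # settles between the two stored values
```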
StirMark Benchmark: audio watermarking attacks based on lossy compression
Steinebach, Martin; Lang, Andreas; Dittmann, Jana
2002-04-01
StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data like spectrum or average power and on removal of embedded watermarks. Furthermore we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psycho acoustic model has to be applied. Our model can be used for audio watermarking evaluation of numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
Assessment of compressive failure process of cortical bone materials using damage-based model.
Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R
2017-02-01
The main failure factors of cortical bone are aging or osteoporosis, accidents, and high-energy trauma or physiological activities. However, the mechanism of damage evolution coupled with a yield criterion is considered one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests are performed on cortical bone specimens made of bovine femur, in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test is prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model is validated in a comparative method using the predicted and measured structural response, as load versus compressive displacement, through simulation and experiment. FE results indicated that the compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed, which coincided with the degradation of structural compressive stiffness followed by a vast amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, is examined for different rates. Results show that using a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Steinert, Marian; Kratz, Marita; Jones, David B. [Department of Experimental Orthopaedics and Biomechanics, Philipps University Marburg, Baldingerstr., 35043 Marburg (Germany); Jaedicke, Volker; Hofmann, Martin R. [Photonics and Terahertz Technology, Ruhr University Bochum, Universitätsstr. 150, 44801 Bochum (Germany)
2014-10-15
In this paper, we present a system that allows imaging of cartilage tissue via optical coherence tomography (OCT) during controlled uniaxial unconfined compression of cylindrical osteochondral cores in vitro. We describe the system design and conduct a static and dynamic performance analysis. While reference measurements yield a full scale maximum deviation of 0.14% in displacement, force can be measured with a full scale standard deviation of 1.4%. The dynamic performance evaluation indicates a high accuracy in force controlled mode up to 25 Hz, but it also reveals a strong effect of variance of sample mechanical properties on the tracking performance under displacement control. In order to counterbalance these disturbances, an adaptive feed forward approach was applied which finally resulted in an improved displacement tracking accuracy up to 3 Hz. A built-in imaging probe allows on-line monitoring of the sample via OCT while being loaded in the cultivation chamber. We show that cartilage topology and defects in the tissue can be observed and demonstrate the visualization of the compression process during static mechanical loading.
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
Silicon based ultrafast optical waveform sampling
DEFF Research Database (Denmark)
Ji, Hua; Galili, Michael; Pu, Minhao
2010-01-01
A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four wave mixing based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to 360 fs.
International Nuclear Information System (INIS)
Ranjbar, Navid; Mehrali, Mehdi; Behnia, Arash; Alengaram, U. Johnson; Jumaat, Mohd Zamin
2014-01-01
Highlights: • Results show POFA is adaptable as a replacement in FA-based geopolymer mortar. • An increase in the POFA/FA ratio delays the compressive strength development of the geopolymer. • The density of POFA-based geopolymer is lower than that of FA-based geopolymer mortar. - Abstract: This paper presents the effects and adaptability of palm oil fuel ash (POFA) as a replacement material in fly ash (FA) based geopolymer mortar from the aspects of microstructure and compressive strength. The geopolymers developed were synthesized with a combination of sodium hydroxide and sodium silicate as activator and POFA and FA as high silica–alumina resources. The development of compressive strength of POFA/FA-based geopolymers was investigated using X-ray fluorescence (XRF), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and field emission scanning electron microscopy (FESEM). It was observed that the particle shapes, surface area and chemical composition of POFA and FA affect the density and compressive strength of the mortars. Increasing the percentage of POFA increased the silica/alumina (SiO2/Al2O3) ratio, which reduced the early compressive strength of the geopolymer and delayed the geopolymerization process
Effective Low-Power Wearable Wireless Surface EMG Sensor Design Based on Analog-Compressed Sensing
Directory of Open Access Journals (Sweden)
Mohammadreza Balouchestani
2014-12-01
Full Text Available Surface Electromyography (sEMG) is a non-invasive measurement process, involving no tools or instruments that break the skin or physically enter the body, used to investigate and evaluate the muscular activity produced by skeletal muscles. The main drawbacks of existing sEMG systems are: (1) they are not able to provide real-time monitoring; (2) they suffer from long processing times and low speed; (3) they are not effective for wireless healthcare systems because they consume huge amounts of power. In this work, we present an analog-based Compressed Sensing (CS) architecture, which consists of three novel algorithms for the design and implementation of a wearable wireless sEMG bio-sensor. At the transmitter side, two new algorithms are presented in order to apply analog-CS theory before the Analog to Digital Converter (ADC). At the receiver side, a robust reconstruction algorithm based on a combination of ℓ1-ℓ1 optimization and the Block Sparse Bayesian Learning (BSBL) framework is presented to reconstruct the original bio-signals from the compressed bio-signals. The proposed architecture allows reducing the sampling rate to 25% of the Nyquist Rate (NR). In addition, it reduces the power consumption to 40%, the Percentage Residual Difference (PRD) to 24%, the Root Mean Squared Error (RMSE) to 2%, and the computation time from 22 s to 9.01 s, which provides a good basis for establishing wearable wireless healthcare systems. The proposed architecture achieves robust performance at low Signal-to-Noise Ratio (SNR) in the reconstruction process.
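The ℓ1 recovery at the core of such reconstructions can be posed as a linear program. The sketch below implements plain basis pursuit with SciPy; the paper's combined ℓ1-ℓ1/BSBL scheme is more elaborate and is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, via the LP: min 1'(p+n) s.t. A(p-n) = y, p,n >= 0."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[[3, 40]] = [2.0, -1.5]
x_hat = basis_pursuit(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))   # expected: indices 3 and 40
```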
Photonic compressive sensing with a micro-ring-resonator-based microwave photonic filter
DEFF Research Database (Denmark)
Chen, Ying; Ding, Yunhong; Zhu, Zhijing
2015-01-01
A novel approach to realize photonic compressive sensing (CS) with a multi-tap microwave photonic filter is proposed and demonstrated. The system takes both advantages of CS and photonics to capture wideband sparse signals with sub-Nyquist sampling rate. The low-pass filtering function required...
Petrov, Mikhail A.; Kosatchyov, Nikolay V.; Petrov, Pavel A.
2016-10-01
The paper presents the results of a study investigating the influence of the filling grade (material density) on the force characteristic during uniaxial compression tests of cylindrical polymer probes produced by FDM-based additive technology. The authors have shown that increasing the filling grade leads to an increase in the deformation forces. However, the dependency is not a linear function and is characterized by a soft-elastic model of material behaviour, which is typical for polymers with a partly crystallized structure.
An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF
Directory of Open Access Journals (Sweden)
Shikang Kong
2017-02-01
Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the theory of the NMF algorithm is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is completed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. Compressed images are received from the ordinary nodes by the cluster head node and transmitted to the station, which performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme achieves a higher quality of recovered images and lower total node energy consumption. It is beneficial in reducing the burden of energy consumption and prolonging the life of the whole network system, which has great significance for practical applications of WMSNs.
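A minimal NMF compression round-trip with scikit-learn, illustrating why transmitting the two factors is cheaper than the image itself; the rank and sizes below are arbitrary assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
img = rng.random((64, 64))                   # stand-in for a captured frame

model = NMF(n_components=8, init="nndsvda", max_iter=500)
W = model.fit_transform(img)                 # 64 x 8 factor
H = model.components_                        # 8 x 64 factor; W and H are transmitted
recovered = W @ H                            # station-side restoration

ratio = img.size / (W.size + H.size)         # ~4x fewer values to send here
print(ratio, np.abs(img - recovered).mean())
```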
The Formation and Evolution of Shear Bands in Plane Strain Compressed Nickel-Base Superalloy
Directory of Open Access Journals (Sweden)
Bin Tang
2018-02-01
Full Text Available The formation and evolution of shear bands in Inconel 718 nickel-base superalloy under plane strain compression was investigated in the present work. It is found that the propagation of shear bands under plane strain compression is more intense in comparison with conventional uniaxial compression. The morphology of the shear bands was identified to generally fall into two categories: an "S" shape at severe conditions (low temperatures and high strain rates) and an "X" shape at mild conditions (high temperatures and low strain rates). However, uniform deformation at the mesoscale without shear bands was also obtained by compressing at 1050 °C/0.001 s−1. By using the finite element method (FEM), the formation mechanism of the shear bands in the present study was explored for the special deformation mode of plane strain compression. Furthermore, the effect of the processing parameters, i.e., strain rate and temperature, on the morphology and evolution of shear bands was discussed following a phenomenological approach. The plane strain compression attempt in the present work yields important information for processing parameter optimization and failure prediction under plane strain loading conditions of the Inconel 718 superalloy.
Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling
Directory of Open Access Journals (Sweden)
Saeed Mian Qaisar
2009-01-01
Full Text Available The recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal are focused on improving embedded system design and battery technology, but very few studies target exploiting the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive-rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics (which are usually never considered) to filter only the relevant signal parts, employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, when compared to classical techniques.
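The LCSS idea, keeping a sample only when the signal moves by a quantization level, can be sketched as follows; a uniform input grid and a single level spacing are assumed for simplicity.

```python
import numpy as np

def level_crossing_sample(t, x, delta=0.1):
    """Keep a sample whenever the signal moves by one quantization level
    (delta) relative to the last kept sample: activity-dependent sampling
    instead of a fixed clock."""
    keep_t, keep_x = [t[0]], [x[0]]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - keep_x[-1]) >= delta:
            keep_t.append(ti); keep_x.append(xi)
    return np.array(keep_t), np.array(keep_x)

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)   # signal activity decays over time
ts, xs = level_crossing_sample(t, x)
print(len(ts), "samples kept out of", len(t))    # few samples where the signal is quiet
```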
International Nuclear Information System (INIS)
Trillo, C; Doval, A F; Deán-Ben, X L; López-Vázquez, J C; Fernández, J L; Hernández-Montes, S
2011-01-01
This paper describes a technique that numerically reconstructs the complex acoustic amplitude (i.e. the acoustic amplitude and phase) of a compression acoustic wave in the interior volume of a specimen from a set of full-field optical measurements of the instantaneous displacement of the surface. The volume of a thick specimen is probed in transmission mode by short bursts of narrowband compression acoustic waves generated at one of its faces. The temporal evolution of the displacement field induced by the bursts emerging at the opposite surface is measured by pulsed digital holographic interferometry (pulsed TV holography). A spatio-temporal 3D Fourier transform processing of the measured data yields the complex acoustic amplitude at the plane of the surface as a sequence of 2D complex-valued maps. Finally, a numerical implementation of the Rayleigh–Sommerfeld diffraction formula is employed to reconstruct the complex acoustic amplitude at other planes in the interior volume of the specimen. The whole procedure can be regarded as a combination of optical digital holography and acoustical holography methods. The technique was successfully tested on aluminium specimens with and without an internal artificial defect and sample results are presented. In particular, information about the shape and position of the defect was retrieved in the experiment performed on the flawed specimen, which indicates the potential applicability of the technique for the nondestructive testing of materials
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of the above two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to that of decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.
Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas
2013-01-01
sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size...
The Physics of Compressive Sensing and the Gradient-Based Recovery Algorithms
Dai, Qi; Sha, Wei
2009-01-01
The physics of compressive sensing (CS) and the gradient-based recovery algorithms are presented. First, the different forms of CS are summarized. Second, the physical meanings of coherence and measurement are given. Third, the gradient-based recovery algorithms and their geometric explanations are provided. Finally, we conclude the report and give some suggestions for future work.
Chloride transport under compressive load in bacteria-based self-healing concrete
Binti Md Yunus, B.; Schlangen, E.; Jonkers, H.M.
2015-01-01
An experiment was carried out in this study to investigate the effect of compressive load on chloride penetration in self-healing concrete containing a bacteria-based healing agent. A bacteria-based healing agent with a particle size fraction of 2 mm – 4 mm was used in this contribution. ESEM
International Nuclear Information System (INIS)
Tang Jie; Nett, Brian E; Chen Guanghong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
OTDM-WDM Conversion Based on Time-Domain Optical Fourier Transformation with Spectral Compression
DEFF Research Database (Denmark)
Mulvad, Hans Christian Hansen; Palushani, Evarist; Galili, Michael
2011-01-01
We propose a scheme enabling direct serial-to-parallel conversion of OTDM data tributaries onto a WDM grid, based on optical Fourier transformation with spectral compression. Demonstrations on 320 Gbit/s and 640 Gbit/s OTDM data are shown.
Beam steering performance of compressed Luneburg lens based on transformation optics
Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun
2018-06-01
In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.
Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain
Directory of Open Access Journals (Sweden)
Hsi-Chin Hsin
2012-01-01
Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly such that the burden of decompression computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.
Medical image compression based on vector quantization with variable block sizes in wavelet domain.
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Directory of Open Access Journals (Sweden)
Huiyan Jiang
2012-01-01
Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
International Nuclear Information System (INIS)
Leihong, Zhang; Dong, Liang; Bei, Li; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma
2015-01-01
In this article, compressive sensing is used to improve the imaging resolution and realize ghost imaging of a phase object, based on a theoretical analysis of lensless Fourier imaging within the framework of ghost imaging via phase-shifting digital holography. The algorithm uses a bucket detector to measure the total light intensity of the interference, and the four-step phase-shifting method is used to obtain the total light intensity of the differential interference light. The experimental platform is built following software simulation, and the experimental results show that ghost imaging via compressive sensing based on phase-shifting digital holography can obtain a high-resolution phase distribution figure of the phase object. With the same number of samplings, the phase clarity of the phase distribution figure obtained with compressive sensing is higher than that obtained by ghost imaging based on phase-shifting digital holography alone. This study further extends the application range of ghost imaging and obtains the phase distribution of the phase object. (letter)
Energy Technology Data Exchange (ETDEWEB)
Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)
2016-01-11
Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known algorithms in the reconstruction process. An additional potential benefit of reducing the number of projections is a shorter acquisition time, leaving less opportunity for motion artifacts if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
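For readers unfamiliar with one of the two building blocks, a minimal randomized Kaczmarz iteration for a linear system Ax = b is sketched below; the paper combines such row-action steps with Douglas–Rachford splitting and a total-variation term, which the sketch omits:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    """Project the iterate onto one randomly chosen row constraint per step,
    sampling rows with probability proportional to their squared norm."""
    rng = np.random.default_rng(seed)
    row_norms = np.einsum('ij,ij->i', A, A)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(len(b), p=row_norms / row_norms.sum())
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```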
Jang, Hwanchol; Yoon, Changhyeong; Choi, Wonshik; Eom, Tae Joong; Lee, Heung-No
2016-03-01
We provide an approach to improve the quality of image reconstruction in wide-field imaging through turbid media (WITM). In WITM, a calibration stage, which measures the transmission matrix (TM), the set of responses of the turbid medium to a set of plane waves with different incident angles, precedes the image recovery. The TM is then used for estimation of the object image in the image recovery stage. In this work, we aim to estimate a highly resolved angular spectrum and use it for high quality image reconstruction. To this end, we propose to perform dense sampling for the TM measurement in the calibration stage, with finer incident angle spacing. In conventional approaches, the incident angle spacing is made large enough that the columns in the TM are outside the memory effect of the turbid medium; otherwise, the columns in the TM are correlated and the inversion becomes difficult. We employ compressed sensing (CS) for successful high resolution angular spectrum recovery with the densely sampled TM. CS is a relatively new information acquisition and reconstruction framework and has been shown to provide superb performance in ill-conditioned inverse problems. We observe that image quality metrics such as contrast-to-noise ratio and mean squared error are improved, and the perceptual image quality is improved with reduced speckle noise in the reconstructed image. These results show that WITM performance can be improved simply by executing dense sampling in the calibration stage together with an efficient signal reconstruction framework, without elaborating the overall optical imaging system.
Hardware compression using common portions of data
Chang, Jichuan; Viswanathan, Krishnamurthy
2015-03-24
Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.
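The patent text stays at this level of generality; a toy sketch of the idea, with the sampling stride and prefix length chosen purely for illustration, might look like:

```python
from collections import Counter

def compress_chunks(chunks, sample_every=4, prefix_len=8):
    """Toy sketch: sample some chunks, take the most frequent prefix among
    the samples as the 'common portion', and store only remainders for the
    chunks that share it (stride and prefix length are illustrative)."""
    sampled = chunks[::sample_every]
    common = Counter(c[:prefix_len] for c in sampled).most_common(1)[0][0]
    stored = [(True, c[len(common):]) if c.startswith(common) else (False, c)
              for c in chunks]
    return common, stored

def decompress_chunks(common, stored):
    return [common + rest if shared else rest for shared, rest in stored]
```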
Directory of Open Access Journals (Sweden)
A. Schroeder
2012-09-01
Full Text Available This paper proposes a compression of far field matrices in the fast multipole method and its multilevel extension for electromagnetic problems. The compression is based on a spherical harmonic representation of radiation patterns in conjunction with a radiating mode expression of the surface current. The method is applied to study near field effects and the far field of an antenna placed on a ship surface. Furthermore, the electromagnetic scattering of an electrically large plate is investigated. It is demonstrated that the proposed technique leads to significant memory savings, making multipole algorithms even more efficient without compromising the accuracy.
On the implicit density based OpenFOAM solver for turbulent compressible flows
Fürst, Jiří
The contribution deals with the development of a coupled implicit density based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. Therefore we extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel method (LU-SGS), whereas the turbulence model variables are updated using the implicit Euler method.
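LU-SGS amounts to a forward and a backward Gauss-Seidel sweep over an approximately factored implicit operator. A generic sketch of that sweep pattern on a plain linear system (not the solver's block-coupled flow equations) reads:

```python
import numpy as np

def sgs_sweeps(A, b, x, n_sweeps=10):
    """Symmetric Gauss-Seidel: a forward (lower) then backward (upper)
    sweep per iteration - the basic pattern behind LU-SGS implicit updates."""
    n = len(b)
    for _ in range(n_sweeps):
        for i in list(range(n)) + list(reversed(range(n))):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```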
A Proposal for Kelly Criterion-Based Lossy Network Compression
2016-03-01
detection applications. Most of these applications only send alerts to the central analysis servers. These alerts do not provide the forensic capability...based intrusion detection systems. These systems tend to examine the individual system's audit logs looking for intrusive activity. The notable
USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
S. Ebenezer Juliet
2011-08-01
Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than the pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly over standard JPEG and JPEG-2000 while maintaining competitive compression ratios.
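A minimal sketch of the kind of per-block DCT-energy test such a classification step can use; the 4x4 block size follows the paper, while the threshold and the exact energy measure are illustrative assumptions:

```python
import numpy as np
from scipy.fftpack import dct

def classify_blocks(img, thresh=500.0):
    """Label each 4x4 block: high AC energy -> text/graphics,
    low AC energy -> picture/background (thresh is illustrative)."""
    h, w = img.shape
    labels = np.zeros((h // 4, w // 4), dtype=bool)
    for by in range(h // 4):
        for bx in range(w // 4):
            block = img[4*by:4*by+4, 4*bx:4*bx+4].astype(float)
            c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            labels[by, bx] = (c ** 2).sum() - c[0, 0] ** 2 > thresh
    return labels
```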
Adaptive bit plane quadtree-based block truncation coding for image compression
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
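For reference, the AMBTC building block that the quadtree scheme adapts encodes each block as two reconstruction levels plus a bit plane; a minimal generic sketch:

```python
import numpy as np

def ambtc_encode(block):
    """AMBTC: one bit per pixel plus the means of the pixels at or above
    and below the block mean (generic textbook form)."""
    bitplane = block >= block.mean()
    hi = block[bitplane].mean() if bitplane.any() else block.mean()
    lo = block[~bitplane].mean() if (~bitplane).any() else block.mean()
    return hi, lo, bitplane

def ambtc_decode(hi, lo, bitplane):
    return np.where(bitplane, hi, lo)
```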
A high capacity text steganography scheme based on LZW compression and color coding
Directory of Open Access Journals (Sweden)
Aruna Malik
2017-02-01
Full Text Available In this paper, capacity and security issues of text steganography have been considered by employing the LZW compression technique and a color coding based approach. The proposed technique uses the forward mail platform to hide the secret data. This algorithm first compresses the secret data and then hides the compressed secret data in the email addresses and also in the cover message of the email. The secret data bits are embedded in the message (or cover text) by making it colored using a color coding table. Experimental results show that the proposed method not only produces a high embedding capacity but also reduces computational complexity. Moreover, the security of the proposed method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparison with recently developed existing techniques.
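The compression stage is standard LZW; a textbook sketch of the dictionary-growing encoder (the color-coding and email-embedding stages are specific to the paper and omitted here):

```python
def lzw_compress(text):
    """Textbook LZW: grow a dictionary of seen substrings and emit one
    code per longest match."""
    table = {chr(i): i for i in range(256)}
    w, codes = "", []
    for ch in text:
        if w + ch in table:
            w += ch
        else:
            codes.append(table[w])
            table[w + ch] = len(table)   # register the new substring
            w = ch
    if w:
        codes.append(table[w])
    return codes
```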
A method of vehicle license plate recognition based on PCANet and compressive sensing
Ye, Xianyi; Min, Feng
2018-03-01
The manual feature extraction of traditional methods for vehicle license plates is not robust to diverse changes, and the high feature dimension extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensions of the extracted features. Finally, a Support Vector Machine (SVM) is used to train and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method performs better than a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with omitting compressed sensing, the proposed method has a lower feature dimension, increasing efficiency.
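The abstract identifies the measurement matrix only as very sparse and RIP-compliant. One standard family matching that description, not necessarily the authors' own, is the ternary Achlioptas-style projection sketched here; all sizes are illustrative:

```python
import numpy as np

def sparse_measurement_matrix(m, n, s=3, seed=0):
    """Ternary random projection: entries +sqrt(s), 0, -sqrt(s) with
    probabilities 1/(2s), 1-1/s, 1/(2s); most entries are zero."""
    rng = np.random.default_rng(seed)
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(m, n),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return vals / np.sqrt(m)

# Illustrative use: reduce a hypothetical 1024-dim feature vector to 128 dims.
phi = sparse_measurement_matrix(128, 1024)
reduced = phi @ np.random.randn(1024)
```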
Owodunni, Damilola S.; Ali, Anum Z.; Quadeer, Ahmed Abdul; Al-Safadi, Ebrahim B.; Hammi, Oualid; Al-Naffouri, Tareq Y.
2014-01-01
-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional
Arapov, K.; Bex, G.; Hendriks, R.; Rubingh, E.; Abbel, R.; de With, G.; Friedrich, H.
2016-01-01
This paper describes a combination of photonic annealing and compression rolling to improve the conductive properties of printed binder-based graphene inks. High-density light pulses result in temperatures up to 500 °C that, along with a decrease of resistivity, lead to layer expansion. The structural
Edge-based compression of cartoon-like images with homogeneous diffusion
DEFF Research Database (Denmark)
Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim
2011-01-01
Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capacity. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and numerical results for a pseudo-sine absorption problem, a two-cube problem, and a two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
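Orthogonal matching pursuit, the reconstruction routine the solver relies on, is compact enough to sketch generically; this is a textbook version for measurements y = Ax with a k-sparse x, not the solver's own code:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then refit by least squares on the support."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```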
International Nuclear Information System (INIS)
Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu
2013-01-01
To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and indexing rule for polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images were analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the image definition, conversion description and conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small animal PET scanner, a compression ratio of 12.6:1 in system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot rod phantom was detected and reconstructed based on these two grids with reasonable time cost. The image resolution of the reconstructed images was 1.35 mm in both cases. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)
Verification-Based Interval-Passing Algorithm for Compressed Sensing
Wu, Xiaofu; Yang, Zhen
2013-01-01
We propose a verification-based Interval-Passing (IP) algorithm for iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...
Directory of Open Access Journals (Sweden)
Jacek Hunicz
2015-01-01
Full Text Available In this study we summarize and analyze experimental observations of cyclic variability in homogeneous charge compression ignition (HCCI combustion in a single-cylinder gasoline engine. The engine was configured with negative valve overlap (NVO to trap residual gases from prior cycles and thus enable auto-ignition in successive cycles. Correlations were developed between different fuel injection strategies and cycle average combustion and work output profiles. Hypothesized physical mechanisms based on these correlations were then compared with trends in cycle-by-cycle predictability as revealed by sample entropy. The results of these comparisons help to clarify how fuel injection strategy can interact with prior cycle effects to affect combustion stability and so contribute to design control methods for HCCI engines.
EP-based wavelet coefficient quantization for linear distortion ECG data compression.
Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan
2014-07-01
Reconstruction quality maintenance is of the essence for ECG data compression due to the desire for diagnostic use. Quantization schemes with non-linear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of multi-resolution levels. The stationary property implies that multi-level quantization scales can be controlled with a single variable. This hypothesis can lead to a simple design of linear distortion control with 3-D curve fitting technology. In addition, a competitive strategy is applied to alleviate the data dependency effect. Using the ECG signals stored in the MIT and PTB databases, many experiments were undertaken to evaluate compression performance, quality control efficiency, and data dependency influence. The experimental results show that the new EP-based quantization scheme can obtain high compression performance while efficiently maintaining linear distortion behavior. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
A blended pressure/density based method for the computation of incompressible and compressible flows
International Nuclear Information System (INIS)
Rossow, C.-C.
2003-01-01
An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation
Urban, K.; Sicakova, A.
2017-10-01
The paper deals with the use of alternative powder additives (fly ash and a fine fraction of recycled concrete) to improve recycled concrete aggregate, applied directly in the concrete mixing process. A specific mixing process (the triple mixing method) is applied, as it is favourable for this goal. Results of compressive strength after 2 and 28 days of hardening are given. Generally, using the powder additives to coat the coarse recycled concrete aggregate in the first stage of triple mixing resulted in a decrease of compressive strength compared to coating with cement. There is no significant difference between samples based on recycled concrete aggregate and those based on natural aggregate as long as cement is used for coating. When using either the fly ash or the recycled concrete powder, the kind of aggregate causes more significant differences in compressive strength, with samples based on recycled concrete aggregate performing worse.
VLSI-based video event triggering for image data compression
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
Holland, Katharina; Sechopoulos, Ioannis; Mann, Ritse M; den Heeten, Gerard J; van Gils, Carla H; Karssemeijer, Nico
2017-11-28
In mammography, breast compression is applied to reduce the thickness of the breast. While it is widely accepted that firm breast compression is needed to ensure acceptable image quality, guidelines remain vague about how much compression should be applied during mammogram acquisition. A quantitative parameter indicating the desirable amount of compression is not available. Consequently, little is known about the relationship between the amount of breast compression and breast cancer detectability. The purpose of this study is to determine the effect of breast compression pressure in mammography on breast cancer screening outcomes. We used digital image analysis methods to determine breast volume, percent dense volume, and pressure from 132,776 examinations of 57,179 women participating in the Dutch population-based biennial breast cancer screening program. Pressure was estimated by dividing the compression force by the area of the contact surface between breast and compression paddle. The data were subdivided into quintiles of pressure, and the numbers of screen-detected cancers, interval cancers, false positives, and true negatives were determined for each group. Generalized estimating equations were used to account for correlation between examinations of the same woman and for the effect of breast density and volume when estimating sensitivity, specificity, and other performance measures. Sensitivity was computed using interval cancers occurring between two screening rounds and using interval cancers within 12 months after screening. Pair-wise testing for significant differences was performed. Percent dense volume increased with increasing pressure, while breast volume decreased. Sensitivity in quintiles with increasing pressure was 82.0%, 77.1%, 79.8%, 71.1%, and 70.8%. Sensitivity based on interval cancers within 12 months was significantly lower in the highest pressure quintile compared to the third (84.3% vs 93.9%, p = 0.034). Specificity was lower in the
Sample Based Unit Liter Dose Estimates
International Nuclear Information System (INIS)
JENSEN, L.
2000-01-01
The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)
Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Yang Jun
2016-02-01
Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely expressed. The designs of the sparse matrix and the measurement matrix are accomplished by expressing the echo signal sparsely, and subsequently, the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, a particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bounds (PCRB) of the tracking accuracy are derived, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.
Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza
2017-01-01
Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited by the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...
Directory of Open Access Journals (Sweden)
V. A. Mubassarova
2014-01-01
Full Text Available Results of uniaxial compression tests of rock samples in electromagnetic fields are presented. The experiments were performed in the Laboratory of Basic Physics of Strength, Institute of Continuous Media Mechanics, Ural Branch of RAS (ICMM). Deformation of samples was studied, and acoustic emission (AE) signals were recorded. During the tests, loads varied by stages. Specimens of granite from the Kainda deposit in Kyrgyzstan (similar to samples tested at the Research Station of RAS, hereafter RS RAS) were subjected to electric pulses at specified levels of compression load. The electric pulses were supplied galvanically; two graphite electrodes were fixed at opposite sides of each specimen. The multichannel Amsy-5 Vallen System was used to record AE signals in the six-channel mode, which provided for determination of the spatial locations of AE sources. Strain of the specimens was studied with application of original methods of strain computation based on analyses of optical images of deformed specimen surfaces in the LaVISION Strain Master System. Acoustic emission experiment data were interpreted on the basis of analyses of the AE activity in time, i.e. the number of AE events per second, and analyses of the signals' energy and the AE sources' locations, i.e. defects. The experiment was conducted at ICMM with the use of a set of equipment with advanced diagnostic capabilities (as compared to earlier experiments described in [Zakupin et al., 2006a, 2006b; Bogomolov et al., 2004]). It can provide new information on the properties of acoustic emission and the deformation responses of loaded rock specimens to external electric pulses. The research task also included verification of the reproducibility of the effect (AE activity when fracturing rates responded to electrical pulses), which was revealed earlier in studies conducted at RS RAS. In terms of the principle of randomization, such verification is methodologically significant as new effects, i.e. physical laws, can be considered
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
A new chest compression depth feedback algorithm for high-quality CPR based on smartphone.
Song, Yeongtak; Oh, Jaehoon; Chee, Youngjoon
2015-01-01
Although many smartphone application (app) programs provide education and guidance for basic life support, they do not commonly provide feedback on the chest compression depth (CCD) and rate, and validation of their accuracy has not been reported to date. This study assessed the feasibility of using a smartphone as a CCD feedback device. We propose a new real-time CCD estimation algorithm using a smartphone and evaluate its accuracy. Using double integration of the acceleration signal obtained from the accelerometer in the smartphone, we estimated the CCD in real time. Based on its periodicity, we removed the bias error from the accelerometer. To evaluate the instrument's accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four types of grasping orientations with various compression directions. We used the difference between the reference measurement and the estimated depth as the error, calculated for each compression. When chest compressions were performed with adequate depth for a patient lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm. When the patient was lying on an oblique floor, the mean (standard deviation) of the errors was 3.13 (1.88) mm. The error of the CCD estimation was tolerable for the algorithm to be used in the smartphone-based CCD feedback app to compress more than 51 mm, which is the 2010 American Heart Association guideline.
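A simplified sketch of the depth estimation: double integration with plain mean removal standing in for the paper's periodicity-based bias correction (the sampling rate, units, and windowing are assumptions):

```python
import numpy as np

def peak_to_peak_depth(acc, fs):
    """Double-integrate accelerometer samples (m/s^2) to displacement (m),
    subtracting the mean before each integration as a crude bias fix;
    the paper's algorithm exploits compression periodicity instead."""
    dt = 1.0 / fs
    vel = np.cumsum(acc - acc.mean()) * dt    # acceleration -> velocity
    disp = np.cumsum(vel - vel.mean()) * dt   # velocity -> displacement
    return disp.max() - disp.min()            # depth over the analysis window
```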
Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics
Kohira, K.; Masuda, H.
2017-09-01
A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
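A minimal sketch of the mapping-and-encoding idea, with illustrative time/angle resolutions and centimetre range quantization (the paper publishes no code, so every parameter here is an assumption):

```python
import numpy as np
from PIL import Image

def pointcloud_to_png(times, angles_deg, ranges_m, path,
                      t_res=1e-4, a_res=0.1):
    """Place each range sample in a 2-D grid indexed by GPS time (columns)
    and scanner angle (rows), quantize to 16 bits, and rely on PNG's
    lossless coder to remove the redundancy. All resolutions illustrative."""
    times, angles_deg = np.asarray(times), np.asarray(angles_deg)
    cols = ((times - times.min()) / t_res).astype(int)
    rows = (angles_deg / a_res).astype(int) % int(360 / a_res)
    img = np.zeros((int(360 / a_res), cols.max() + 1), dtype=np.uint16)
    img[rows, cols] = np.clip(np.asarray(ranges_m) * 100,  # metres -> cm
                              0, 65535).astype(np.uint16)
    Image.fromarray(img).save(path)    # 16-bit grayscale PNG
```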
Owodunni, Damilola S.
2014-04-01
In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique involves an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performances of all the proposed techniques are evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions. © 2013 Elsevier B.V.
An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier
Directory of Open Access Journals (Sweden)
Xiong Jintao
2016-01-01
Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty dealing with factors such as occlusion, appearance changes, and pose variation. There are two reasons: first, although the naive Bayes classifier is fast to train, it is not robust to noise; second, the parameters must be tuned to each unique environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory regarding the weighted random projection to exploit both local and discriminative information of the object. Second, we use an online random forest classifier for online tracking, which is demonstrated to be more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better in the presence of occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.
POINT-CLOUD COMPRESSION FOR VEHICLE-BASED MOBILE MAPPING SYSTEMS USING PORTABLE NETWORK GRAPHICS
Directory of Open Access Journals (Sweden)
K. Kohira
2017-09-01
Full Text Available A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activities, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve the BCI performance, with an accuracy increase of approximately 3.5% over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the characteristics of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and shown to provide a more than twofold increase in average CR compared to the SHQ mode without introducing visible distortions with respect to SHQ compressed images.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress” for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non repetitive DNA sequence. Our ...
Sample Based Unit Liter Dose Estimates
International Nuclear Information System (INIS)
JENSEN, L.
1999-01-01
The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks
A compressed sensing based method with support refinement for impulse noise cancelation in DSL
Quadeer, Ahmed Abdul
2013-06-01
This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.
Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang
2016-09-22
To adapt to sense signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.
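The update rules themselves are not reproduced in the abstract; the sketch below shows only the shape of one two-stage alternation, using soft-thresholded least squares as a stand-in sparse-coding step and a gradient step with a self-coherence penalty (lam, mu, and lr are illustrative):

```python
import numpy as np

def odl_step(D, X, lam=0.1, mu=0.05, lr=0.01):
    """One alternation: sparse-code the signal columns of X against
    dictionary D, then nudge D down the gradient of the fit error plus a
    self-coherence penalty proportional to ||D^T D - I||, renormalizing."""
    S = np.linalg.lstsq(D, X, rcond=None)[0]
    S = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)       # sparse codes
    grad = (D @ S - X) @ S.T + mu * D @ (D.T @ D - np.eye(D.shape[1]))
    D = D - lr * grad
    return D / np.linalg.norm(D, axis=0, keepdims=True), S  # unit-norm atoms
```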
An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Donghao Wang
2016-09-01
Full Text Available To adapt to sense signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.
Abramoff, M.D.
2006-01-01
Knowledge of the effect of compression of ophthalmic images on diagnostic reading is essential for effective tele-ophthalmology applications. It was therefore with great anticipation that I read the article “The Effect of Compression on Clinical Diagnosis of Glaucoma Based on Non-analyzed Confocal
Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression
Directory of Open Access Journals (Sweden)
Bekhtin Yury
2016-01-01
Full Text Available A new method for combining semisoft thresholding rules during wavelet-based compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration the errors of quantization. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration according to criteria such as PSNR and SSIM.
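Semisoft (firm) thresholding, the family of rules being fused, interpolates between hard and soft thresholding using two thresholds t1 < t2; a generic sketch:

```python
import numpy as np

def semisoft_threshold(w, t1, t2):
    """Zero wavelet coefficients below t1, keep those above t2 unchanged,
    and shrink linearly in between (generic rule, assuming t1 < t2)."""
    a = np.abs(w)
    mid = np.sign(w) * t2 * (a - t1) / (t2 - t1)
    return np.where(a <= t1, 0.0, np.where(a > t2, w, mid))
```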
Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance
Kato, H.; Ito, K.
2009-01-01
A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input-multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells
Energy Technology Data Exchange (ETDEWEB)
Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan); Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203 (Bangladesh); Kishi, Naoki; Soga, Tetsuo [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan)
2016-08-15
Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment named hot-compress for flexible zinc oxide-based dye-sensitized solar cells. This postdeposition treatment includes the application of compression pressure at an elevated temperature. The optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance compared to the conventional cells. The aptness of this method was confirmed by investigating scanning electron microscopy images, X-ray diffraction, current-voltage and electrochemical impedance spectroscopy analysis of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room-temperature compressed cell.
Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud
2018-03-01
Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression induced oscillations has been previously reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. Presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection, and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
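The paper's bit-code tables for repeat fragments are not reproduced in the abstract; as a point of reference, the following minimal Python sketch shows the baseline fixed 2-bits-per-base packing that such repeat-aware schemes refine. The codebook here is an illustrative assumption, not the paper's table.

# Fixed binary coding of DNA bases: 2 bits per base, 4 bases per byte.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    # Pack a DNA string into 2 bits per base.
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))    # left-align a short final group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    # Recover the first n_bases bases from packed bytes.
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n_bases])

assert unpack(pack("ACGTTGCA"), 8) == "ACGTTGCA"   # 8 bases fit in 2 bytes

Plain 2-bit packing already yields 2 bits/base; the repeat-aware bit codes described above are what push the ratio down toward the reported 1.58 bits/base.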
Balouchestani, Mohammadreza
2017-05-01
Network traffic or data traffic in a Wireless Local Area Network (WLAN) is the amount of network packets moving across a wireless network from one wireless node to another, and it determines the sampling load in the network. WLAN network traffic is the main component for network traffic measurement, network traffic control, and simulation. Traffic classification is an essential tool for improving the Quality of Service (QoS) in different wireless networks and in complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component in products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN makes it possible to see what kinds of traffic are present in each part of the network, to organize the various kinds of traffic on each path into different classes, and to generate a network traffic matrix in order to identify and organize network traffic, which is an important key to improving the QoS feature. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture reduces the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. The proposed architecture also increases the accuracy of wireless transmission by 10%, which provides a good foundation for establishing high-quality wireless local area networks.
Ma, Lihong; Jin, Weimin
2018-01-01
A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer-generated holography. In this method there are six encryption keys, among which the two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has the features of both symmetric and asymmetric cryptography. On the other hand, because computer-generated holography can flexibly digitalize the encrypted information, compressive sensing can significantly reduce the data volume and, what is more, the final encrypted image is a real-valued function obtained by phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.
Directory of Open Access Journals (Sweden)
J. Obedt Figueroa-Cavazos
2016-01-01
Full Text Available This work explores the viability of 3D-printed intervertebral lumbar cages based on biocompatible polycarbonate (PC-ISO®) material. Several design concepts are proposed for the generation of patient-specific intervertebral lumbar cages. The 3D-printed material achieved a compressive yield strength of 55 MPa under a specific combination of manufacturing parameters. The literature recommends a reference load of 4,000 N for the design of intervertebral lumbar cages. Under compression testing conditions, the proposed design concepts withstand between 7,500 and 10,000 N of load before yielding. Although some stress concentration regions were found during analysis, the overall viability of the proposed design concepts was validated.
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Directory of Open Access Journals (Sweden)
X.Z. Jiang
2014-07-01
Full Text Available Over the past few decades, wireless sensor networks have been widely used in the field of structural health monitoring of civil, mechanical, and aerospace systems. Currently, most wireless sensor networks are battery-powered, and maintenance is costly and unsustainable because of the requirement for frequent battery replacement. In an attempt to address this issue, this article theoretically and experimentally studies a compression-based piezoelectric energy harvester using a multilayer stack configuration, which is suitable for civil infrastructure applications where large compressive loads occur, such as heavy vehicular loads acting on pavements. In this article, we first present analytical and numerical modeling of the piezoelectric multilayer stack under axial compressive loading, based on the linear theory of piezoelectricity. A two-degree-of-freedom electromechanical model, considering both the mechanical and electrical aspects of the proposed harvester, was developed to characterize the harvested electrical power under an external electrical load. Exact closed-form expressions of the electromechanical models have been derived to analyze the mechanical and electrical properties of the proposed harvester. The theoretical analyses are validated through several experiments on a test prototype under harmonic excitation. The test results exhibit very good agreement with the analytical analyses and numerical simulations over a range of resistive loads and input excitation levels.
An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System
Directory of Open Access Journals (Sweden)
Hamza Djelouat
2017-01-01
Full Text Available The last decade has witnessed tremendous efforts to shape Internet of Things (IoT) platforms to be well suited for healthcare applications. These platforms consist of a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleep disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and the robustness of the recovery process.
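For readers unfamiliar with subspace pursuit, a minimal single-channel NumPy sketch of the SP recovery step is given below. The generic Gaussian sensing matrix and known sparsity K are simplifying assumptions; the paper's adaptive selection and joint multichannel recovery are not reproduced here.

import numpy as np

def subspace_pursuit(Phi, y, K, max_iter=50):
    # Recover a K-sparse x from y = Phi @ x (Dai & Milenkovic's SP).
    m, n = Phi.shape
    support = np.argsort(np.abs(Phi.T @ y))[-K:]           # initial support guess
    x_ls, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    resid = y - Phi[:, support] @ x_ls
    for _ in range(max_iter):
        # Expand the support with the K columns most correlated with the residual.
        extra = np.argsort(np.abs(Phi.T @ resid))[-K:]
        cand = np.union1d(support, extra)
        x_ls, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        # Prune back to the K largest coefficients and re-solve.
        keep = cand[np.argsort(np.abs(x_ls))[-K:]]
        x_k, *_ = np.linalg.lstsq(Phi[:, keep], y, rcond=None)
        new_resid = y - Phi[:, keep] @ x_k
        if np.linalg.norm(new_resid) >= np.linalg.norm(resid):
            break                                          # no further improvement
        support, resid = keep, new_resid
    x = np.zeros(n)
    x_fin, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x[support] = x_fin
    return x

Unlike greedy one-at-a-time selection, SP refines a whole K-element support per iteration, which is what the paper's adaptive-selection modification builds on.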
DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS
Directory of Open Access Journals (Sweden)
J. Augustin Jacob
2012-11-01
Full Text Available This paper proposes a new three-dimensional discrete cosine transform (3D-DCT) based video compression algorithm that selects the optimal cube size based on the motion content of the video sequence. The motion content is determined by finding normalized pixel difference (NPD) values; by categorizing cubes as "low" or "high" motion, a suitable cube size of dimension either [16×16×8] or [8×8×8] is chosen instead of a fixed cube size. To evaluate the performance of the proposed algorithm, test sequences with different motion levels are chosen. A rate vs. distortion analysis determines the level of compression that can be achieved and the quality of the reconstructed video sequence, which are compared against the fixed-cube-size algorithm. Peak signal-to-noise ratio (PSNR) is used to measure video quality. Experimental results show that varying the cube size with reference to the motion content of video frames gives better performance in terms of compression ratio and video quality.
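A minimal sketch of the motion-adaptive cube selection is given below; the NPD computation is paraphrased from the abstract and the threshold is an assumed placeholder rather than the paper's calibrated categorization.

import numpy as np

def npd(frames: np.ndarray) -> float:
    # Mean absolute inter-frame pixel difference, normalized to [0, 1] for 8-bit video.
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return float(diffs.mean() / 255.0)

def choose_cube(frames: np.ndarray, threshold: float = 0.02):
    # Low motion -> large [16x16x8] cubes; high motion -> small [8x8x8] cubes.
    return (8, 8, 8) if npd(frames) > threshold else (16, 16, 8)

gop = np.random.randint(0, 256, size=(8, 64, 64), dtype=np.uint8)
print(choose_cube(gop))   # random noise reads as high motion -> small cubes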
Toward topology-based characterization of small-scale mixing in compressible turbulence
Suman, Sawan; Girimaji, Sharath
2011-11-01
Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.
Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing
2014-07-01
Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM) and is widely used for imaging soft samples thanks to many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by going beyond the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has a further unique application in that it enables real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments are carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
Stratified sampling design based on data mining.
Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung
2013-09-01
To explore classification rules based on data mining methodologies to be used in defining strata in stratified sampling of healthcare providers, with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. This study demonstrated that data mining methods can be used to design efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
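The two-stage design (cluster the providers, then explain the clusters with a tree) maps directly onto standard scikit-learn components. The sketch below uses fabricated provider features as stand-ins for the paper's variables.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Columns: inpatients per specialist, number of beds (assumed stand-in features).
X = rng.gamma(shape=2.0, scale=10.0, size=(500, 2))

# Stage 1: group providers with similar characteristics.
clusters = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(X)

# Stage 2: a shallow tree on the cluster labels yields readable stratification rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, clusters)
print(export_text(tree, feature_names=["inpatients_per_specialist", "beds"]))

The printed threshold rules are exactly the kind of strata definitions a survey designer can apply directly to the sampling frame.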
International Nuclear Information System (INIS)
Vlasik, K.F.; Grachev, V.M.; Dmitrenko, V.V.; Sokolov, D.V.; Ulin, S.E.; Uteshev, Z.M.
2000-01-01
The paper describes an algorithm to detect and identify radionuclides on the basis of γ-spectra obtained with a compressed-xenon-based γ-spectrometer. The algorithm is based on comparing the measured γ-spectra with tabulated radionuclide data. Comparison criteria are formulated. A software package implementing the algorithm and supporting the complete γ-spectra processing workflow has been developed. The algorithm was evaluated using real spectra. Its applicability and efficiency are demonstrated [ru]
Directory of Open Access Journals (Sweden)
Noor D. N.
2016-01-01
Full Text Available Conventional air conditioners, or vapour compression systems, are the main contributors to energy consumption in modern buildings. Common environmental issues arise from vapour compression systems, such as greenhouse gas emission and heat wastage. These problems can be reduced by integrating solar energy components into the vapour compression system. However, the intermittent input of daily solar radiation is the main issue with solar energy systems. This paper presents recent studies on hybrid air conditioning systems. In addition, the basic vapour compression system and the components involved in solar air conditioning systems are discussed. Introducing low-temperature storage can be an attractive and economical solution, enabling different modes of operating strategy. Yet, very few studies have examined optimal operating strategies for the hybrid system. Finally, the findings of this review will help suggest optimizations of solar-absorption and vapour-compression-based hybrid air conditioning systems for future work, considering both economic and environmental factors.
Web-based tool for subjective observer ranking of compressed medical images
Langer, Steven G.; Stewart, Brent K.; Andrew, Rex K.
1999-05-01
In the course of evaluating various compression schemes for ultrasound teleradiology applications, it became obvious that paper-based methods of data collection were time-consuming and error-prone. A method was sought that allowed participating radiologists to view the ultrasound video clips (compressed to varying degrees) at their desks. Furthermore, the method should allow observers to enter their evaluations and, when finished, automatically submit the data to our statistical analysis engine. We found that the World Wide Web offered a ready solution. A web page was constructed that contains 18 embedded AVI video clips. The 18 clips represent 6 distinct anatomical areas, compressed by various methods and amounts, and randomly distributed through the web page. To the right of each video, a series of questions asks the observer to rank (1-5) his/her ability to answer diagnostically relevant questions. When completed, the observer presses 'Submit' and a file of tab-delimited text is created, which can then be imported into an Excel workbook. Kappa analysis is then performed and the resulting plots demonstrate observer preferences.
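The kappa step at the end of this pipeline is a standard computation. A minimal sketch for two observers' 1-5 rankings might look as follows; the ratings are fabricated placeholders for values read from the tab-delimited submissions.

import numpy as np

def cohens_kappa(r1, r2, levels=5):
    # Cohen's kappa between two raters with ordinal categories 1..levels.
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    confusion = np.zeros((levels, levels))
    for a, b in zip(r1, r2):
        confusion[a - 1, b - 1] += 1
    p_obs = np.trace(confusion) / n                        # observed agreement
    p_exp = (confusion.sum(0) @ confusion.sum(1)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

print(cohens_kappa([5, 4, 4, 3, 2, 5], [5, 4, 3, 3, 2, 4]))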
A new DWT/MC/DPCM video compression framework based on EBCOT
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and its investigation is still ongoing. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to video. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.
Directory of Open Access Journals (Sweden)
Tinghua Zhang
2018-02-01
Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but noise and compression ratio impose limits on reconstruction quality. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on a maximum a posteriori probability and Markov random field (MAP-MRF) model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters, to achieve robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficients for different regularizations and frames are resolved from the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of our new optimization framework and the proposed reconstruction approach.
Quark enables semi-reference-based compression of RNA-seq data.
Sarkar, Hirak; Patro, Rob
2017-11-01
The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data is expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Benkert, Thomas; Feng, Li; Sodickson, Daniel K; Chandarana, Hersh; Block, Kai Tobias
2017-08-01
Conventional fat/water separation techniques require that patients hold their breath during abdominal acquisitions, which often fails and limits the achievable spatial resolution and anatomic coverage. This work presents a novel approach for free-breathing volumetric fat/water separation. Multiecho data are acquired using a motion-robust radial stack-of-stars three-dimensional GRE sequence with bipolar readout. To obtain fat/water maps, a model-based reconstruction is used that accounts for the off-resonant blurring of fat and integrates both compressed sensing and parallel imaging. The approach additionally enables generation of respiration-resolved fat/water maps by detecting motion from k-space data and reconstructing different respiration states. Furthermore, an extension is described for dynamic contrast-enhanced fat-water-separated measurements. Uniform and robust fat/water separation is demonstrated in several clinical applications, including free-breathing noncontrast abdominal examination of adults and a pediatric subject with both motion-averaged and motion-resolved reconstructions, as well as in a noncontrast breast exam. Furthermore, dynamic contrast-enhanced fat/water imaging with high temporal resolution is demonstrated in the abdomen and breast. The described framework provides a viable approach for motion-robust fat/water separation and promises particular value for clinical applications that are currently limited by the breath-holding capacity or cooperation of patients. Magn Reson Med 78:565-576, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
The Effect of Alkaline Activator Ratio on the Compressive Strength of Fly Ash-Based Geopolymer Paste
Lăzărescu, A. V.; Szilagyi, H.; Baeră, C.; Ioani, A.
2017-06-01
Alkaline activation of fly ash is a particular procedure in which ash resulting from a power plant, combined with a specific alkaline activator, creates a solid material when dried at a certain temperature. In order to obtain desirable compressive strengths, the mix design of fly ash-based geopolymer pastes should be explored comprehensively. To determine the preliminary compressive strength of fly ash-based geopolymer paste using a Romanian material source, various ratios of Na2SiO3 solution to NaOH solution were produced, keeping the fly ash/alkaline activator ratio constant. All the mixes were then cured at 70 °C for 24 hours and tested at 2 and 7 days, respectively. The aim of this paper is to present the preliminary compressive strength results for producing fly ash-based geopolymer paste using Romanian material sources, the effect of the alkaline activator ratio on the compressive strength, and directions for future research.
Mining compressing sequential problems
Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.
2012-01-01
Compression-based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
Directory of Open Access Journals (Sweden)
Manzini Giovanni
2007-07-01
Full Text Available Abstract Background Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness is tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
Ferragina, Paolo; Giancarlo, Raffaele; Greco, Valentina; Manzini, Giovanni; Valiente, Gabriel
2007-07-13
Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness is tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC (Receiver Operating Curve) analysis, aims at
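Of the three USM approximations named in these records, NCD is the easiest to illustrate. A minimal Python sketch using zlib as the compressor is given below; in the studies above, any of the 25 tested compressors could be swapped in for C.

import zlib

def C(data: bytes) -> int:
    # Compressed size of `data` in bytes (zlib stands in for any real compressor).
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTACGT" * 50
b = b"TTTTGGGGCCCC" * 50
print(ncd(a, a), ncd(a, b))   # similar pairs score lower

Because the concatenation of two similar strings compresses nearly as well as either string alone, low NCD values indicate shared structure without any alignment step.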
Bulk and microscale compressive behavior of a Zr-based metallic glass
International Nuclear Information System (INIS)
Lai, Y.H.; Lee, C.J.; Cheng, Y.T.; Chou, H.S.; Chen, H.M.; Du, X.H.; Chang, C.I.; Huang, J.C.; Jian, S.R.; Jang, J.S.C.; Nieh, T.G.
2008-01-01
Micropillars with diameters of 3.8, 1 and 0.7 μm were fabricated from a two-phase Zr-based metallic glass using focused ion beam (FIB) milling, and then tested in compression at strain rates from 1 × 10⁻⁴ to 1 × 10⁻² s⁻¹. The apparent yield strength of the micropillars ranges from 1992 to 2972 MPa, a 25-86% increase over that of the bulk specimens. This strength increase can be rationalized by Weibull statistics for brittle materials.
Directory of Open Access Journals (Sweden)
Ling Yongfa
2016-01-01
Full Text Available The paper proposes a mobile control sink node data collection method for wireless sensor networks based on compressive sensing. In this method, the sink follows a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a huge amount of data with balanced energy consumption in the network.
pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data
Directory of Open Access Journals (Sweden)
Ardita Shkurti
2016-01-01
Full Text Available The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data being generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is a Unix-based open-source software package (BSD licensed) written in Python.
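The PCA idea behind such trajectory compression can be sketched briefly in NumPy: project mean-centered frames onto the leading principal components and store only those scores. The frame counts and retained-variance target below are assumptions, and this is not pyPcazip's actual code path.

import numpy as np

rng = np.random.default_rng(2)
traj = rng.normal(size=(1000, 150))            # 1000 frames, 50 atoms x 3 coords

mean = traj.mean(axis=0)
centered = traj - mean
# SVD of the centered frames: rows of Vt are the principal components.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

ratio = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(ratio, 0.90)) + 1      # components covering ~90% variance
scores = centered @ Vt[:k].T                   # compressed representation
recon = scores @ Vt[:k] + mean                 # approximate decompression
print(k, float(np.abs(recon - traj).max()))

Storing k scores per frame instead of 150 coordinates is where the compression comes from; collective-motion analysis falls out of the same scores for free.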
Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation
International Nuclear Information System (INIS)
Takeda, Koujin; Kabashima, Yoshiyuki
2013-01-01
We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated by theoretical arguments. We also discuss the relation between the belief-propagation-based reconstruction algorithm introduced in preceding works and our approach.
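The posterior-maximization algorithm itself is not reproduced in the abstract. As a point of comparison for the same ℓ1-minimization task, the following minimal ISTA sketch (a standard iterative alternative to linear programming, not the paper's method) also costs O(N²) matrix work per iteration.

import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    # Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))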
DESIGN AND IMPLEMENTATION OF A VHDL PROCESSOR FOR DCT BASED IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
Md. Shabiul Islam
2017-11-01
Full Text Available This paper describes the design and implementation of a VHDL processor for performing the 2D Discrete Cosine Transform (DCT) for use in image compression applications. The design flow starts from the system specification and proceeds to implementation on silicon, and the entire process is carried out using an advanced workstation-based design environment for digital signal processing. The software allows bit-true analysis to ensure that the designed VLSI processor satisfies the required specifications. The bit-true analysis is performed on all levels of abstraction (behavior, VHDL, etc.). The motivations behind the work are smaller chip area, faster processing, and reduced chip cost.
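A floating-point software reference for the 2D-DCT stage is useful when verifying such a hardware design against golden vectors. The sketch below uses scipy's separable type-II DCT with orthonormal scaling; the paper's fixed-point arithmetic is not specified in the abstract and is not modeled here.

import numpy as np
from scipy.fft import dctn, idctn

block = np.arange(64, dtype=np.float64).reshape(8, 8)   # one 8x8 image block
coeffs = dctn(block, norm="ortho")                      # forward 2D-DCT
restored = idctn(coeffs, norm="ortho")                  # inverse transform
assert np.allclose(block, restored)                     # perfect-reconstruction check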
Yavorovich, L. V.; Bespal`ko, A. A.; Fedotov, P. I.
2018-01-01
Parameters of electromagnetic responses (EMRe) generated during uniaxial compression of rock samples under excitation by deterministic acoustic pulses are presented and discussed. Such physical modeling in the laboratory makes it possible to reveal the main regularities of electromagnetic signal (EMS) generation in rock massifs. The influence of the samples' mechanical properties on the parameters of the EMRe excited by an acoustic signal during uniaxial compression is considered. It has been established that sulfides and quartz in the rocks of the Tashtagol iron ore deposit (Western Siberia, Russia) contribute to the conversion of mechanical energy into the energy of the electromagnetic field, which is expressed as an increase in the EMS amplitude. A decrease in the EMS amplitude as the stress-strain state of the sample changes during uniaxial compression is observed when the amount of conductive magnetite contained in the rock increases. The obtained results are important for the physical substantiation of testing methods and for monitoring changes in the stress-strain state of rock massifs from the parameters of electromagnetic signals and the characteristics of electromagnetic emission.
Design-based estimators for snowball sampling
Shafie, Termeh
2010-01-01
Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
OpenCL-based vicinity computation for 3D multiresolution mesh compression
Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri
2017-03-01
3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time. Therefore, performance is becoming constrained by material resource usage and the overall computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. To address this, this work exploits the GPU to accelerate the computation using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.
Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF
Directory of Open Access Journals (Sweden)
Yu Ding
2018-01-01
Full Text Available Hydraulic servo systems play an important role in electromechanical systems and are crucial to mechanical systems like engineering machinery, metallurgical machinery, ships, and other equipment. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for hydraulic servo systems using the compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain. Meanwhile, CRSR substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method is able to enhance the robustness of the feature information against interference while selecting the feature combination with balanced information-expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model is constructed using HyPneu and Simulink, and three fault modes are injected to generate the validation data.
Energy Technology Data Exchange (ETDEWEB)
AlAfeef, Ala, E-mail: a.al-afeef.1@research.gla.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Bobynko, Joanna [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Cockshott, W. Paul. [School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Craven, Alan J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Zuazo, Ian; Barges, Patrick [ArcelorMittal Maizières Research, Maizières-lès-Metz 57283 (France); MacLaren, Ian, E-mail: ian.maclaren@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)
2016-11-15
We have investigated the use of DualEELS in elementally sensitive tilt series tomography in the scanning transmission electron microscope. A procedure is implemented using deconvolution to remove the effects of multiple scattering, followed by normalisation by the zero loss peak intensity. This is performed to produce a signal that is linearly dependent on the projected density of the element in each pixel. This method is compared with one that does not include deconvolution (although normalisation by the zero loss peak intensity is still performed). Additionally, we compare the 3D reconstruction using a new compressed sensing algorithm, DLET, with the well-established SIRT algorithm. VC precipitates, which are extracted from a steel on a carbon replica, are used in this study. It is found that the use of this linear signal results in a very even density throughout the precipitates. However, when deconvolution is omitted, a slight density reduction is observed in the cores of the precipitates (a so-called cupping artefact). Additionally, it is clearly demonstrated that the 3D morphology is much better reproduced using the DLET algorithm, with very little elongation in the missing wedge direction. It is therefore concluded that reliable elementally sensitive tilt tomography using EELS requires the appropriate use of DualEELS together with a suitable reconstruction algorithm, such as the compressed sensing based reconstruction algorithm used here, to make the best use of the limited data volume and signal to noise inherent in core-loss EELS. - Highlights: • DualEELS is essential for chemically sensitive electron tomography using EELS. • A new compressed sensing based algorithm (DLET) gives high fidelity reconstruction. • This combination of DualEELS and DLET will give reliable results from few projections.
Medical Image Compression Based on Region of Interest, With Application to Colon CT Images
National Research Council Canada - National Science Library
Gokturk, Salih
2001-01-01
...., in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...
Compressive Strength of EN AC-44200 Based Composite Materials Strengthened with α-Al2O3 Particles
Directory of Open Access Journals (Sweden)
Kurzawa A.
2017-06-01
Full Text Available The paper presents results of compressive strength investigations of EN AC-44200 aluminum alloy-based composite materials reinforced with aluminum oxide particles, at ambient temperature and at temperatures of 100, 200 and 250°C. They were manufactured by squeeze casting of porous preforms made of α-Al2O3 particles with liquid aluminum alloy EN AC-44200. The composite materials were reinforced with preforms characterized by porosities of 90, 80, 70 and 60 vol.%, thus the alumina content in the composite materials was 10, 20, 30 and 40 vol.%. The results of the compressive strength of the manufactured materials are presented and, based on microscopic observations, the effect of the volume content of strengthening alumina particles on the cracking mechanisms during compression at the indicated temperatures is shown and discussed. The highest compressive strength at ambient temperature, 470 MPa, was shown by composite materials strengthened with 40 vol.% of α-Al2O3 particles.
Okasha , Nader M
2017-01-01
Concrete is recognized as the second most consumed product in our modern life after water. The variability in concrete properties is inevitable. The concrete mix is designed for a compressive strength that is different from, typically higher than, the value specified by the structural designer. Ways to calculate the compressive strength to be used in the mix design are provided in building and structural codes. These ways are all based on criteria related purely and on...
Holland, Katharina; Sechopoulos, Ioannis; Mann, Ritse M.; Den Heeten, Gerard J.; van Gils, Carla H.; Karssemeijer, Nico
2017-01-01
Background: In mammography, breast compression is applied to reduce the thickness of the breast. While it is widely accepted that firm breast compression is needed to ensure acceptable image quality, guidelines remain vague about how much compression should be applied during mammogram acquisition. A
Xu, Li; Shan, Lin; Adachi, Fumiyuki
2014-01-01
In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by very few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without reporting the posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing any computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance when compared with conventional SCE methods. PMID:24983012
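The OMP baseline named above admits a compact sketch for sparse channel estimation. The pilot matrix, tap count, and noise level below are illustrative assumptions, and the Bayesian refinements of the proposed BSCE method are not reproduced.

import numpy as np

def omp(Phi, y, K):
    # Greedy recovery of a K-sparse channel h from y = Phi @ h + noise.
    resid, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ resid))))
        h_ls, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ h_ls
    h = np.zeros(Phi.shape[1], dtype=complex)
    h[support] = h_ls
    return h

rng = np.random.default_rng(4)
# Hypothetical pilot observation matrix and a 4-tap sparse channel.
Phi = (rng.normal(size=(32, 128)) + 1j * rng.normal(size=(32, 128))) / 8
h_true = np.zeros(128, dtype=complex)
h_true[rng.choice(128, 4, replace=False)] = rng.normal(size=4) + 1j * rng.normal(size=4)
y = Phi @ h_true + 0.01 * (rng.normal(size=32) + 1j * rng.normal(size=32))
print(np.linalg.norm(omp(Phi, y, 4) - h_true))

The vulnerability discussed in the abstract is visible here: a noisy residual or highly correlated columns of Phi can steer the argmax toward a wrong tap, and plain OMP gives no uncertainty report on the taps it picks.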
Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.
Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan
2015-11-01
Wireless sensor networks are engaged in various data gathering applications. The major bottleneck in wireless data gathering systems is the finite energy of the sensor nodes. By conserving on-board energy, the lifespan of a wireless sensor network can be well extended. Since data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction serves better in conserving nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce data communications. Forming data-similar clusters is an effective way to exploit spatial correlation among neighboring sensors. Sending only a subset of the data and estimating the rest from this subset is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. Intra-cluster communication is reduced using an adaptive normalized least-mean-squares based dual prediction framework. The cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both intra-cluster and inter-cluster communications, while maintaining optimal accuracy of the collected data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
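The dual-prediction idea can be sketched compactly: node and cluster head run identical NLMS predictors, and a reading is transmitted only when the node's prediction misses by more than a tolerance, so both ends stay synchronized. The filter order, step size, and tolerance below are assumed values, not the paper's tuning.

import numpy as np

def nlms_dual_prediction(readings, order=4, mu=0.5, eps=1e-6, tol=0.1):
    # Return the indices of samples the node actually has to transmit.
    w = np.zeros(order)                     # shared filter state on both ends
    history = np.zeros(order)
    sent = []
    for i, x in enumerate(readings):
        pred = w @ history
        if abs(x - pred) > tol:             # prediction failed: transmit the sample
            sent.append(i)
            err = x - pred                  # head receives x, so both ends update
            w += mu * err * history / (eps + history @ history)
            value = x
        else:
            value = pred                    # both ends substitute the prediction
        history = np.roll(history, 1)
        history[0] = value
    return sent

t = np.linspace(0, 8 * np.pi, 400)
sent = nlms_dual_prediction(np.sin(t) + 0.01 * np.random.randn(400))
print(f"transmitted {len(sent)} of 400 samples")

Because the filter is only updated on transmitted samples, the cluster head can replay exactly the same updates and the two predictors never diverge.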
Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.
Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L
2015-07-01
The current development of cloud computing is completely changing the paradigm of knowledge extraction in huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM) is created for low computational burden, the so-called weighted fast compression distance, which provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in both cases exceeding the classification provided by the majority class. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge on patient diagnosis.
Secure biometric image sensor and authentication scheme based on compressed sensing.
Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2013-11-20
It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
A Novel Object Tracking Algorithm Based on Compressed Sensing and Entropy of Information
Directory of Open Access Journals (Sweden)
Ding Ma
2015-01-01
Full Text Available Object tracking has always been a hot research topic in the field of computer vision; its purpose is to track objects with specific characteristics or representations and estimate information about the objects, such as their locations, sizes, and rotation angles, in the current frame. Object tracking in complex scenes usually encounters various challenges, such as location change, dimension change, illumination change, perception change, and occlusion. This paper proposes a novel object tracking algorithm based on compressed sensing and information entropy to address these challenges. First, objects are characterized by Haar-like and ORB features. Second, the dimensionality of the computation space of the Haar-like and ORB features is effectively reduced through compressed sensing. Then the above-mentioned features are fused based on information entropy. Finally, in the particle filter framework, an object location is obtained by selecting candidate object locations in the current frame from the local context neighboring the optimal locations in the last frame. Our extensive experimental results demonstrate that this method is able to effectively address the challenges of perception change, illumination change, and large-area occlusion, achieving better performance than existing approaches such as MIL and CT.
Kim, Dong-Sun; Kwon, Jin-San
2014-09-18
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72%, compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor.
International Nuclear Information System (INIS)
Xu, Yun-Chao; Chen, Qun
2013-01-01
Vapor-compression refrigeration systems have been one of the essential energy conversion systems for humankind and nowadays consume huge amounts of energy. Many effective optimization methods exist for promoting the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the problem. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis for the condenser and evaporator, through introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, which describes the relation between all the unknown parameters and the known conditions and makes theoretical global optimization possible. With the aid of mathematical conditional extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are demonstrated. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases
Fast and low-dose computed laminography using compressive sensing based technique
Abbas, Sajid; Park, Miran; Cho, Seungryong
2015-03-01
Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL can be one of the viable options to overcome such issues. In this work, a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing-inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed from the sparse-view data are visually comparable with the images reconstructed from the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes, in the images reconstructed from the sparsely sampled data are comparable with the corresponding structures in the fully sampled case. A blurring effect can be seen near the edges of a few pores at the bottom of the images reconstructed from the sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.
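The flavor of TV-minimization reconstruction used here can be illustrated on a toy 2D problem. In the sketch below, a random sensing matrix stands in for the actual laminographic projection geometry (which is not modeled), and plain gradient descent on a smoothed TV penalty replaces the paper's unspecified solver.

import numpy as np

def tv_grad(img, eps=1e-8):
    # Gradient of a smoothed (isotropic) total-variation penalty on `img`.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

rng = np.random.default_rng(5)
n = 32
phantom = np.zeros((n, n))
phantom[8:24, 10:22] = 1.0                                # simple piecewise-flat object
A = rng.normal(size=(n * n // 3, n * n)) / n              # ~3x undersampled measurements
b = A @ phantom.ravel()

x = np.zeros(n * n)
step, lam = 0.1, 0.02
for _ in range(300):
    grad = A.T @ (A @ x - b) + lam * tv_grad(x.reshape(n, n)).ravel()
    x -= step * grad
print(np.linalg.norm(x - phantom.ravel()) / np.linalg.norm(phantom))

The TV term is what lets a piecewise-flat object survive heavy undersampling; the same principle carries over when A encodes the 40-view oblique-scan geometry.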
Upgrade of the SLAC SLED II Pulse Compression System Based on Recent High Power Tests
International Nuclear Information System (INIS)
Vlieks, A.E.; Fowkes, W.R.; Loewen, R.J.; Tantawi, S.G.
2011-01-01
In the Next Linear Collider (NLC) it is expected that the high power rf components will be able to handle peak power levels in excess of 400 MW. We present recent results of high power tests designed to investigate the RF breakdown limits of the X-band pulse compression system (SLED-II) used at SLAC. Results of these tests show that both the TE01-TE10 mode converter and the 4-port hybrid have a maximum useful power limit of 220-250 MW. Based on these tests, modifications of these components have been undertaken to improve their peak field handling capability. Results of these modifications will be presented. As part of an international effort to develop a new 0.5-1.5 TeV electron-positron linear collider for the 21st century, SLAC has been working towards a design, referred to as 'The Next Linear Collider' (NLC), which will operate at 11.424 GHz and utilize 50-75 MW klystrons as rf power sources. One of the major challenges in this design, or any other design, is how to generate and efficiently transport extremely high rf power from a source to an accelerator structure. SLAC has been investigating various methods of 'pulse compressing' a relatively wide rf pulse (≥1 μs) from a klystron into a narrower, but more intense, pulse. Currently a SLED-II pulse compression scheme is being used at SLAC in the NLC Test Accelerator (NLCTA) and in the Accelerator Structures Test Area (ASTA) to provide high rf power for accelerator and component testing. In ASTA, a 1.05 μs pulse from a 50 MW klystron was successfully compressed to 205 MW with a pulse width of 150 ns. Since operation in the NLC will require generating and transporting rf power in excess of 400 MW, it was decided to test the breakdown limits of the SLED-II rf components in ASTA with rf power up to the maximum available of 400 MW. This required combining the power from two 50 MW klystrons and feeding the summed power into the SLED-II pulse compressor. Results from this experiment demonstrated that two of
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, based on a row-scanning compressive ghost imaging scheme. In the encryption process, a scrambling operation is applied to the sparse images obtained by LWT, the XOR operation is then performed on the scrambled images, and the resulting XOR images are compressed by row-scanning compressive ghost imaging, through which the ciphertext images are detected by bucket detector arrays. During decryption, a participant who possesses the correct key-group can successfully reconstruct the corresponding plaintext image by measurement-key regeneration, compression-algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
Directory of Open Access Journals (Sweden)
Sheng Bi
2016-03-01
Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that multi-frame motion estimation improves the quality of recovered videos. To further reduce the motion estimation time, a block-matching algorithm is used; experiments demonstrate that it reduces motion estimation time by 30%.
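As a minimal sketch of the block-matching step, here assumed to use a sum-of-absolute-differences criterion over a small exhaustive search window (block and window sizes are illustrative, not the paper's values):

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=8, search=4):
    """Find the motion vector of one block by exhaustive SAD search."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cur = np.roll(ref, (1, 2), axis=(0, 1))   # simulate frame motion of (1, 2)
print(block_match(ref, cur, 16, 16))      # vector into the reference: (-1, -2)
```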
Directory of Open Access Journals (Sweden)
Shailendra Singh Chauhan
2016-09-01
Full Text Available A dew point evaporative-vapour compression based combined air conditioning system for providing good human comfort conditions at low cost is proposed in this paper. The proposed system has been parametrically analysed over a wide range of ambient temperatures and specific humidities under some reasonable assumptions. The proposed system has also been compared with the conventional vapour compression air conditioner on the basis of the cooling load on the cooling coil, assuming operation on 100% fresh air. The saving in cooling load on the coil was found to be maximum, with a value of 60.93%, at 46 °C and 6 g/kg specific humidity, while it was negative for very high ambient humidity, which indicates that the proposed system is suitable for dry and moderately humid conditions but not for very humid conditions. The system performs well, with an average net monthly power saving of 192.31 kW h for hot and dry conditions and 124.38 kW h for hot and moderately humid conditions. It could therefore be a better alternative for dry and moderately humid climates, with a payback period of 7.2 years.
Vishnukumar, S.; Wilscy, M.
2017-12-01
In this paper, we propose a single-image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as the compressed version of the high-resolution (HR) image. Dictionary Training and Sparse Recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image is then reconstructed as the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results, quantitatively as well as qualitatively, on both natural and remote sensing images. The reconstructed images have better visual quality, since edges and other sharp details are preserved.
Novel prediction- and subblock-based algorithm for fractal image compression
International Nuclear Information System (INIS)
Chung, K.-L.; Hsu, C.-H.
2006-01-01
Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each current variable-size range block tries to find the best-matched domain block using the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks. The first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, performance comparisons among our proposed algorithm and two other algorithms, the no-search-based algorithm and the quadtree-based algorithm, are also investigated
Cyclops: single-pixel imaging lidar system based on compressive sensing
Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.
2017-11-01
Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide the very accurate, wide-range, high-resolution distance measurements required by the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although these devices have long been commercially available and used in diverse metrology and ranging applications, their size, mass and power consumption are still far from suitable and attractive for space exploratory missions. Here, we describe a compact single-pixel imaging LIDAR system that is based on a compressive sensing technique. The application of the compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster-scanning and array-based counterparts in terms of noise performance and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large-format photodetector arrays. Furthermore, the failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range finder: outputs the average distance between the system and the area of the target under illumination; attitude meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D imager: produces 3D ranged
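A minimal single-pixel acquisition model, assuming random binary DMD patterns and plain least-squares recovery for a tiny scene (sizes and pattern statistics here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                   # tiny 8x8 scene for illustration
scene = np.zeros((n, n)); scene[2:6, 3:7] = 1.0

m = 40                                  # fewer measurements than 64 pixels
patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)  # DMD masks
y = patterns @ scene.ravel()            # bucket-detector readings

# Minimum-norm least-squares recovery; a real compressive system would use
# a sparsity-promoting solver (e.g., L1 or TV minimization) instead.
x_hat, *_ = np.linalg.lstsq(patterns, y, rcond=None)
print(np.linalg.norm(x_hat - scene.ravel()))  # recovery error
```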
Directory of Open Access Journals (Sweden)
Jerry D. Gibson
2016-06-01
Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
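Since the linear prediction model underlies the coders surveyed above, a minimal LPC analysis step can be sketched as solving the autocorrelation normal equations (the order, frame length and synthetic test frame are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order=10):
    """Estimate LPC coefficients of one speech frame via the
    autocorrelation method (Toeplitz normal equations)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a  # predictor: x[n] ~ sum_k a[k] * x[n-1-k]

# Example: a synthetic voiced-like frame (two sinusoids plus slight noise)
rng = np.random.default_rng(0)
t = np.arange(240) / 8000.0
frame = (np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
         + 0.01 * rng.normal(size=t.size))
print(lpc(frame, order=10))
```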
Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Directory of Open Access Journals (Sweden)
Lingjun Liu
2017-01-01
Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As the iterations increase, IST usually yields an over-smoothed solution and stalls prematurely. To add back more detail, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also adopts nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
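For reference, the plain IST iteration that BAIST builds on can be sketched as a gradient step followed by soft thresholding (the step size rule and threshold below are common defaults, assumed here rather than taken from the paper):

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ist(A, y, tau=0.05, step=None, iters=300):
    """Plain IST for min 0.5*||Ax - y||^2 + tau*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = spectral norm^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + step * A.T @ (y - A @ x), step * tau)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -0.8, 0.6]
x_hat = ist(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovered support ~ [5, 50, 120]
```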
A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations
Energy Technology Data Exchange (ETDEWEB)
Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch [University of Applied Sciences and Arts Northwestern Switzerland FHNW, 5210 Windisch (Switzerland)
2017-11-01
One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of the logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary and serve as the main keys, together with the original POM and the logistic map algorithm coefficient, in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
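The random phase-only masks come from the logistic map; a minimal sketch of such mask generation follows, where the control parameter, seed and burn-in are illustrative key values, not the paper's:

```python
import numpy as np

def logistic_pom(shape, x0=0.3141, mu=3.99, burn_in=100):
    """Generate a phase-only mask from the logistic map x <- mu*x*(1-x)."""
    n = shape[0] * shape[1]
    x = x0
    for _ in range(burn_in):          # discard the transient iterations
        x = mu * x * (1.0 - x)
    vals = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        vals[i] = x
    phase = 2.0 * np.pi * vals.reshape(shape)
    return np.exp(1j * phase)         # unit-modulus phase-only mask

pom = logistic_pom((64, 64))
print(np.allclose(np.abs(pom), 1.0))  # True: the mask is phase-only
```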
A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI
International Nuclear Information System (INIS)
Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D
2011-01-01
Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
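The OMP building block itself, independent of the masked/standard split described above, can be sketched as a generic greedy solver (not the authors' cardiac-specific implementation):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of A to fit y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # re-fit y on all selected atoms and update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1]); x[support] = coef
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(50, 150)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(150); x_true[[7, 42, 99]] = [1.2, -0.7, 0.9]
print(np.flatnonzero(omp(A, A @ x_true, k=3)))   # -> [ 7 42 99]
```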
Methods of compression of digital holograms, based on 1-level wavelet transform
International Nuclear Information System (INIS)
Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N
2016-01-01
To reduce the memory required for storing information about 3D scenes and to decrease the hologram transmission rate, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In this paper, the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and the holograms' diffraction efficiencies are compared. (paper)
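A minimal sketch of 1-level wavelet compression of a real-valued hologram using PyWavelets, where keeping only the largest detail coefficients is an assumed, simple stand-in for a real quantization/coding stage:

```python
import numpy as np
import pywt

def compress_1level(hologram, keep=0.1, wavelet="haar"):
    """1-level 2D DWT; zero out small detail coefficients (the
    approximation band is kept intact), then reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(hologram, wavelet)
    flat = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    cH, cV, cD = [np.where(np.abs(c) >= thresh, c, 0.0) for c in (cH, cV, cD)]
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

rng = np.random.default_rng(6)
holo = rng.random((128, 128))          # stand-in for a hologram amplitude
rec = compress_1level(holo)
print(np.mean((rec - holo) ** 2))      # reconstruction MSE
```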
A low-cost hybrid drivetrain concept based on compressed air energy storage
International Nuclear Information System (INIS)
Brown, T.L.; Atluri, V.P.; Schmiedeler, J.P.
2014-01-01
Highlights: • A new pneumatic hybrid concept is introduced. • A proof-of-concept prototype system is built and tested. • The experimental system has a round-trip efficiency of just under 10%. • A thermodynamics model is used to predict the performance of modified designs. • An efficiency of nearly 50% is possible with reasonable design changes. - Abstract: This paper introduces a new low-cost hybrid drivetrain concept based on compressed air energy storage. In contrast to most contemporary approaches to pneumatic hybridization, which require modification of the primary power plant, this concept is based on a stand-alone pneumatic system that could be readily integrated with existing vehicles. The pneumatic system consists of an air tank and a compressor-expander that is coupled to the rest of the drivetrain via an infinitely variable transmission. Rather than incorporating more expensive technologies such as variable valve timing or a variable compression ratio compressor, a fixed valve system consisting of a rotary valve and passive check valves is optimized to operate efficiently over a range of tank pressures. The feasibility of this approach is established by thermodynamic modeling and the construction of a proof-of-concept prototype, which is also used to fine-tune model parameters. While the proof-of-concept system shows a round-trip efficiency of just under 10%, modeling shows that a round-trip efficiency of 26% is possible with a revised design. If waste heat from the engine is used to maintain an elevated tank temperature, efficiencies of nearly 50% may be possible, indicating that the concept could be effective for practical hybridization of passenger vehicles
International Nuclear Information System (INIS)
Manthei, G.; Eisenblaetter, J.; Moriya, H.; Niitsuma, H.; Jones, R.H.
2003-01-01
Collapsing is a relatively new method used for detecting patterns and structures in blurred, cloud-like plots of multiple event locations. In the case described here, the measurements were made in a very small region with a length of only a few decimeters. The events were registered during a triaxial compression experiment on a compact block of rock salt. The collapsing method revealed a cellular structure of the salt block across the whole length of the test piece. The cells had a length of several cm, enclosing several salt grains with an average grain size of less than one cm. In view of the fact that not all cell walls corresponded to acoustic emission events, it was assumed that only those grain boundaries are activated that are oriented at a favourable angle to the stress field of the test piece.
Optimization of compressive strength in admixture-reinforced cement-based grouts
Directory of Open Access Journals (Sweden)
Sahin Zaimoglu, A.
2007-12-01
Full Text Available The Taguchi method was used in this study to optimize the unconfined (7-, 14- and 28-day) compressive strength of cement-based grouts with bentonite, fly ash and silica fume admixtures. The experiments were designed using an L16 orthogonal array in which the three factors considered were bentonite (0%, 0.5%, 1.0% and 3%), fly ash (10%, 20%, 30% and 40%) and silica fume (0%, 5%, 10% and 20%) content. The experimental results, which were analyzed by ANOVA and the Taguchi method, showed that fly ash and silica fume content play a significant role in unconfined compressive strength. The optimum conditions were found to be: 0% bentonite, 10% fly ash, 20% silica fume and 28 days of curing time. The maximum unconfined compressive strength reached under the above optimum conditions was 17.1 MPa.
Template-Based Sampling of Anisotropic BRDFs
Czech Academy of Sciences Publication Activity Database
Filip, Jiří; Vávra, Radomír
2014-01-01
Roč. 33, č. 7 (2014), s. 91-99 ISSN 0167-7055. [Pacific Graphics 2014. Soul, 08.10.2014-10.10.2014] R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S; GA ČR GAP103/11/0335 Institutional support: RVO:67985556 Keywords: BRDF database * material appearance * sampling * measurement Subject RIV: BD - Theory of Information Impact factor: 1.642, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/filip-0432894.pdf
Directory of Open Access Journals (Sweden)
Ersoy Hakan
2012-10-01
Full Text Available
ABSTRACT
Uniaxial compressive strength (UCS) measures a material's ability to withstand axially directed pushing forces and is considered one of the most important mechanical properties of rock materials. However, the UCS test is expensive and very time-consuming to perform in the laboratory, and it requires high-quality core samples of regular geometry. Empirical equations have thus been proposed for predicting UCS as a function of rocks' index properties. A methodology based on the analytic hierarchy process and multiple regression analysis was used (as opposed to traditional linear regression methods) on data sets obtained from carbonate rocks in NE Turkey. Limestone samples ranging from Devonian to late Cretaceous age were chosen; travertine-onyx samples were selected from morphological environments considering their surface environmental conditions. Test results from experiments carried out on about 250 carbonate rock samples were used in deriving the model. While the hierarchy model focused on determining the most important index properties affecting UCS, regression analysis established meaningful relationships between UCS and the index properties; positive correlation coefficients of 0.85 and 0.83 between the variables were determined by regression analysis. The methodology provided an appropriate alternative for the quantitative estimation of UCS, avoiding the need for tedious and time-consuming laboratory testing
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has repeatedly been identified with classical statistics, which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Scout-view assisted interior digital tomosynthesis (iDTS) based on compressed-sensing theory
Park, S. Y.; Kim, G. A.; Cho, H. S.; Seo, C. W.; Je, U. K.; Park, C. K.; Lim, H. W.; Kim, K. S.; Lee, D. Y.; Lee, H. W.; Kang, S. Y.; Park, J. E.; Woo, T. H.; Lee, M. S.
2017-12-01
Conventional digital tomosynthesis (DTS) based on filtered-backprojection (FBP) reconstruction requires a full field-of-view scan and relatively dense projections, which still results in a high dose for medical imaging purposes. In this work, to overcome these difficulties, we propose a new type of DTS examination, the so-called scout-view assisted interior DTS (iDTS), in which the x-ray beam covers only a small region-of-interest (ROI) containing the diagnostic target, while a few scout views are used in the reconstruction to supply information about the interior ROI that is otherwise absent in conventional iDTS reconstruction methods. For more accurate iDTS reconstruction, we considered an effective iterative algorithm based on compressed-sensing theory rather than an FBP-based algorithm. We implemented the proposed algorithm, performed a systematic simulation and experiment, and investigated the image characteristics. Using the proposed method, we successfully reconstructed iDTS images of substantially high accuracy without truncation artifacts, preserving superior image homogeneity, edge sharpness, and in-plane spatial resolution.
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding time or reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched within the selected domain set in which the APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
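The key fact used above, that affine block similarity reduces to the absolute Pearson correlation, is easy to state in code (block size and test data are illustrative):

```python
import numpy as np

def apcc(range_block, domain_block):
    """Absolute value of Pearson's correlation coefficient between two
    equally sized image blocks (flattened)."""
    return abs(np.corrcoef(range_block.ravel(), domain_block.ravel())[0, 1])

rng = np.random.default_rng(7)
d = rng.random((8, 8))
r = 0.75 * d + 0.2                   # affine transform of the domain block
print(apcc(r, d))                    # -> 1.0: a perfect affine match
print(apcc(rng.random((8, 8)), d))   # typically much smaller
```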
Directory of Open Access Journals (Sweden)
Wei Ke
2017-01-01
Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at only a few points of the discrete spatial domain, the method exploits this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT)-based feature extraction approach is utilized to cover both the short-time and long-time properties of acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes in the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0-norm minimization to enhance reconstruction performance for sparse signals under low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.
Directory of Open Access Journals (Sweden)
Siqi Ying
2018-04-01
Full Text Available Nickel superalloys play a pivotal role in enabling power-generation devices on land, at sea, and in the air. They derive their strength from coherent cuboidal precipitates of the ordered γ' phase, which differs from the γ matrix in composition, structure and properties. In order to reveal the correlation between elemental distribution, dislocation glide and the plastic deformation of micro- and nano-sized volumes of a nickel superalloy, a combined in situ nanoindentation compression study was carried out in a scanning electron microscope (SEM) on micro- and nano-pillars fabricated by focused ion beam (FIB) milling of the Ni-base superalloy CMSX4. The observed mechanical response (hardening followed by softening) was correlated with the progression of crystal slip, which was revealed using FIB nano-tomography and energy-dispersive spectroscopy (EDS) elemental mapping. A hypothesis was put forward that the dependence of material strength on the sample size (micropillar diameter) is correlated with the characteristic dimension of the structural units (γ' precipitates). Two new dislocation-based models were proposed, and the results were found to be described well by a new parameter-free Hall–Petch equation.
Andreatta, Pamela; Gans-Larty, Florence; Debpuur, Domitilla; Ofosu, Anthony; Perosky, Joseph
2011-10-01
Maternal mortality from postpartum hemorrhage remains high globally, in large part because women give birth in rural communities where unskilled providers (traditional birth attendants) care for delivering mothers. Traditional attendants are neither trained nor equipped to recognize or manage postpartum hemorrhage as a life-threatening emergent condition. Recommended treatment includes using uterotonic agents and physical manipulation to aid uterine contraction. In resource-limited areas where obstetric first aid may be the only care option, physical methods such as bimanual uterine compression are easily taught, highly practical and, if performed correctly, highly effective. A simulator with objective performance feedback was designed to teach skilled and unskilled birth attendants to perform the technique. The aim was to evaluate the impact of simulation-based training on the ability of birth attendants to correctly perform bimanual compression in response to postpartum hemorrhage from uterine atony. Simulation-based training was conducted for skilled (N=111) and unskilled (N=14) birth attendants at two regional (Kumasi, Tamale) and two district (Savelugu, Sene) medical centers in Ghana. Training was evaluated using Kirkpatrick's 4-level model. All participants significantly increased their bimanual uterine compression skills after training (p=0.000). There were no significant differences between 2-week delayed post-test performances, indicating retention (p=0.52). Applied behavioral and clinical outcomes were reported over 9 months from a subset of birth attendants in Sene District: among 425 births, 13 postpartum hemorrhages were reported, without concomitant maternal mortality. The results of this study suggest that simulation-based training for skilled and unskilled birth attendants to perform bimanual uterine compression as postpartum hemorrhage obstetric first aid leads to improved applied procedural skills. Results from a smaller subset of the sample suggest that these skills
Compression for radiological images
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
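As a generic illustration of DCT-based compression of local brightness variations (the 8x8 block and keep-count are conventional JPEG-style assumptions, not this paper's exact scheme):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_block(block, keep=10):
    """Keep only the `keep` largest-magnitude DCT coefficients of one block."""
    c = dct2(block)
    thresh = np.sort(np.abs(c).ravel())[-keep]
    return idct2(np.where(np.abs(c) >= thresh, c, 0.0))

rng = np.random.default_rng(8)
img = rng.random((8, 8)) + np.linspace(0, 4, 8)   # smooth gradient + noise
rec = compress_block(img)
print(np.abs(rec - img).max())   # error from keeping 10 of 64 coefficients
```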
Leung, Chung Ming; Or, Siu Wing; Ho, S L
2013-12-01
A force sensing device capable of sensing dc (or static) compressive forces is developed based on a NAS106N stainless steel compressive spring, a sintered NdFeB permanent magnet, and a coil-wound Tb0.3Dy0.7Fe1.92/Pb(Zr,Ti)O3 magnetostrictive/piezoelectric laminate. The dc compressive force sensing in the device is evaluated theoretically and experimentally and is found to originate from a unique force-induced, position-dependent, current-driven dc magnetoelectric effect. The sensitivity of the device can be increased by increasing the spring constant of the compressive spring, the size of the permanent magnet, and/or the driving current for the coil-wound laminate. Devices of low-force (20 N) and high-force (200 N) types, showing high output voltages of 262 and 128 mV peak, respectively, are demonstrated at a low driving current of 100 mA peak by using different combinations of compressive spring and permanent magnet.
ECF2: A pulsed power generator based on magnetic flux compression for K-shell radiation production
International Nuclear Information System (INIS)
L'Eplattenier, P.; Lassalle, F.; Mangeant, C.; Hamann, F.; Bavay, M.; Bayol, F.; Huet, D.; Morell, A.; Monjaux, P.; Avrillaud, G.; Lalle, B.
2002-01-01
The ECF2 generator, which stores 3 MJ of energy, is being developed at Centre d'Etudes de Gramat, France, for K-shell radiation production. This generator is based on microsecond LTD stages as primary generators and on a magnetic flux compression scheme for power amplification from the microsecond to the 100 ns regime. This paper presents a general overview of the ECF2 generator. The flux compression stage, a key component, is studied in detail, and its advantages and drawbacks are presented. We then present the first experimental and numerical results, which show the improvements that have already been made on this scheme
Sample classroom activities based on climate science
Miler, T.
2009-09-01
We present several activities developed for middle school education based on climate science. The first activity was designed to teach about ocean acidification. A simple experiment can show that the absorption of CO2 in water increases its acidity. A liquid pH indicator is suitable for the demonstration in a classroom. The second activity uses data containing the coordinates of a hurricane's position. Pupils draw the path of a hurricane eye on a tracking chart (a map of the Atlantic ocean). They calculate the average speed of the hurricane and investigate the development of its direction and intensity. The third activity uses pictures of the Arctic ocean in September, when the ice extent is usually at its lowest. Students measure the ice extent for several years using a square grid printed on a plastic foil. Then they plot a graph and discuss the results. All these activities can be used to improve natural science education and increase climate change literacy.
Low-latency video transmission over high-speed WPANs based on low-power video compression
DEFF Research Database (Denmark)
Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann
2010-01-01
This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on a MINMAX quality criterion is introduced. Practical...
Resource efficient data compression algorithms for demanding, WSN based biomedical applications.
Antonopoulos, Christos P; Voros, Nikolaos S
2016-02-01
During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network (WSN) technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms are a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, so the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency as opposed to the increased compression rate. The proposed schemes managed to offer considerable advantages, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm managed to combine a highly competitive compression level with minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the
Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo
2018-01-01
An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, corrosion and expansion operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
Radiological Image Compression
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body scans and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
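For reference, a common form of the NMSE quality measure described above (the dissertation's exact normalization may differ; this is the usual definition):

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error between an image and its reconstruction:
    ||x - x_hat||^2 / ||x||^2 (one common normalization)."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.sum(diff ** 2) / np.sum(original.astype(float) ** 2)

rng = np.random.default_rng(9)
img = rng.random((512, 512))
noisy = img + 0.01 * rng.normal(size=img.shape)  # stand-in reconstruction
print(nmse(img, noisy))
```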
Study on the compressive strength of fly ash based geo polymer concrete
Anand Khanna, Pawan; Kelkar, Durga; Papal, Mahesh; Sekar, S. K.
2017-11-01
Introduction of alternative materials for the complete replacement of cement in ordinary concrete will play an important role in controlling greenhouse gas emissions and their effects. The 100% replacement of binder with fly ash (in combination with potassium hydroxide (KOH) and potassium silicate (K2SiO3) solutions) in concrete provides a significant alternative to conventional cement concrete. This paper focuses on the effect of the alkaline solutions KOH and K2SiO3 on the strength properties of fly ash based geopolymer concrete (FGPC), comparing the strength at different molarities of the alkaline activator KOH and at different curing temperatures. Fly ash based geopolymer concrete was produced from low-calcium fly ash, activated by the addition of KOH and K2SiO3 solution, with a superplasticizer added for suitable workability. KOH molarities of 8 M, 10 M and 12 M were used at curing temperatures of 60 °C, 70 °C and 80 °C. Results showed that, for the given proportions, the optimum molarity of the alkaline solution for maximum compressive strength is 12 M and the optimum curing temperature is 70 °C.
An effective approach to attenuate random noise based on compressive sensing and curvelet transform
International Nuclear Information System (INIS)
Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang
2016-01-01
Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate the random noise attenuation problem as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparse transform in the optimization problem to regularize the sparse coefficients in order to separate signal and noise, and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem, which offers an easy implementation and fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edges of seismic events during the noise attenuation process and has high computational efficiency compared with the traditional curvelet thresholding and iterative soft thresholding based denoising methods. Besides, compared with f-x deconvolution, the proposed denoising method is capable of eliminating the random noise more effectively while preserving more useful signals. (paper)
Hejranfar, Kazem; Parseh, Kaveh
2017-09-01
The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in the generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or the Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by the fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of AC parameter in the flow field and also at the far-field boundary is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL) and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and also a 3-D wavy cylinder are simulated and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to the simplified boundary conditions and the non-preconditioned characteristic boundary conditions. It is indicated that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions and the computational costs are significantly decreased.
Energy Analysis of Decoders for Rakeness-Based Compressed Sensing of ECG Signals.
Pareschi, Fabio; Mangia, Mauro; Bortolotti, Daniele; Bartolini, Andrea; Benini, Luca; Rovatti, Riccardo; Setti, Gianluca
2017-12-01
In recent years, compressed sensing (CS) has proved to be effective in lowering the power consumption of sensing nodes in biomedical signal processing devices. This is due to the fact that CS is capable of reducing the amount of data to be transmitted while ensuring correct reconstruction of the acquired waveforms. Rakeness-based CS has been introduced to further reduce the amount of transmitted data by exploiting the uneven distribution of the sensed signal's energy. Yet, so far no thorough analysis exists of the impact of its adoption on CS decoder performance. The latter point is of great importance, since body-area sensor network architectures may include intermediate gateway nodes that receive and reconstruct signals to provide local services before relaying data to a remote server. In this paper, we fill this gap by showing that rakeness-based design also improves reconstruction performance. We quantify these findings for ECG signals and for a variety of reconstruction algorithms running either on a low-power microcontroller or on a heterogeneous mobile computing platform.
A Test Data Compression Scheme Based on Irrational Numbers Stored Coding
Directory of Open Access Journals (Sweden)
Hai-feng Wu
2014-01-01
Full Text Available The test problem has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting a floating-point number to an irrational number precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
International Nuclear Information System (INIS)
Yao, Erren; Wang, Huanran; Wang, Ligang; Xi, Guang; Maréchal, François
2017-01-01
Highlights: • A novel tri-generation based compressed air energy storage system. • Trade-off between efficiency and cost to highlight the best compromise solution. • Components with largest irreversibility and potential improvements highlighted. - Abstract: Compressed air energy storage technologies can improve the supply capacity and stability of the electricity grid, particularly when fluctuating renewable energies are massively connected, while incorporating combined cooling, heating and power systems into compressed air energy storage can achieve stable operation as well as efficient energy utilization. In this paper, a novel combined cooling, heating and power based compressed air energy storage system is proposed. The system combines a gas engine, supplemental heat exchangers and an ammonia-water absorption refrigeration system. The design trade-off between the thermodynamic and economic objectives, i.e., the overall exergy efficiency and the total specific cost of product, is investigated by an evolutionary multi-objective algorithm for the proposed combined system. It is found that, as the exergy efficiency increases, the total product unit cost is little affected at first, but rises substantially afterwards. The best trade-off solution is selected, with an overall exergy efficiency of 53.04% and a total product unit cost of 20.54 cent/kWh. The variation of the decision variables with the exergy efficiency indicates that the compressor, the turbine and the heat exchanger preheating the turbine inlet air are the key equipment for cost-effectively pursuing a higher exergy efficiency. An exergoeconomic analysis also reveals that, for the best trade-off solution, the investment costs of the compressor and of the two heat exchangers recovering compression heat and heating up compressed air for expansion should be reduced (particularly the latter), while the thermodynamic performance of the gas engine needs to be improved
International Nuclear Information System (INIS)
Lv, Song; He, Wei; Zhang, Aifeng; Li, Guiqiang; Luo, Bingqing; Liu, Xianghua
2017-01-01
Highlights: • A new CAES system for trigeneration based on electrical peak load shifting is proposed. • The theoretical models and the thermodynamic processes are established and analyzed. • The relevant parameters influencing its performance are discussed and optimized. • A novel energy and economic evaluation method is proposed to evaluate the performance of the system. - Abstract: Compressed air energy storage (CAES) has made great contributions to both electricity supply and renewable energy. In pursuit of reduced energy consumption and effective relief of power utility pressure, a novel trigeneration system based on CAES for cooling, heating and electricity generation through electrical peak load shifting is proposed in this paper. The cooling power is generated by the direct expansion of compressed air, and the heating power is recovered from the compression and storage processes. Based on the working principle of a typical CAES system, theoretical thermodynamic models are established and the characteristics of the system are analyzed. A novel method for evaluating the energy and economic performance is proposed. A case study is conducted, and the economic, social and technical feasibility of the proposed system is discussed. The results show that the trigeneration system works efficiently at relatively low pressure; the efficiency is expected to reach about 76.3% when air is compressed to and released from 15 bar, and the annual monetary cost saving is about 53.9%. Moreover, general considerations about the proposed system are also presented.
Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager
Duong, Tuan A. (Inventor)
2015-01-01
A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.
Image-Based Compression Method of Three-Dimensional Range Data with Texture
Chen, Xia; Bell, Tyler; Zhang, Song
2017-01-01
Recently, high speed and high accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for its storage and transmission. Methods for compressing scanned 3D data therefore become desired. This paper proposes a novel compression method which stores 3D range data within the c...
On Scientific Data and Image Compression Based on Adaptive Higher-Order FEM
Czech Academy of Sciences Publication Activity Database
Šolín, Pavel; Andrš, David
2009-01-01
Roč. 1, č. 1 (2009), s. 56-68 ISSN 2070-0733 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z20570509 Keywords: data compression * image compression * adaptive hp-FEM Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering http://www.global-sci.org/aamm
Directory of Open Access Journals (Sweden)
John Rojas
2013-01-01
Full Text Available Excipients are widely used to formulate solid drug forms by direct compression. However, the powder-forming and tableting properties of these excipients are affected by the presence of lubricants and active ingredients. In this study, a screening methodology was employed to test the performance of an excipient for direct compression. The effects of three lubricants (magnesium stearate, stearic acid and talc) on the compressibility and compaction of these excipients were assessed by the compressibility index and the lubricant sensitivity ratio, respectively. Likewise, the dilution potential in blends with a poorly compactible drug such as acetaminophen was also assessed. Finally, the elastic recovery of tablets was evaluated five days after production. All lubricants increased the compressibility of these excipients and improved their flowability. However, hydrophobic lubricants such as magnesium stearate had a marked negative effect on compactibility, especially in plastic-deforming and more regularly-shaped materials with a smooth surface such as Starch 1500. Alginic acid, rice and cassava starches had the largest elastic recovery (>5%), indicating a tendency to cap. Moreover, highly plastic-deforming materials such as sorbitol and polyvinylpyrrolidone (PVP-K30) exhibited the best dilution potential (~10%), whereas alginic acid showed a very high value (~70%). In terms of performance, sorbitol, PVP-K30, Avicel PH-101, sodium alginate and pregelatinized starch were the most appropriate excipients for the direct compression of drugs.
Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints
Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena
2012-04-01
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an
Electromechanical-Traffic Model of Compression-Based Piezoelectric Energy Harvesting
Directory of Open Access Journals (Sweden)
Kok B.C.
2016-01-01
Full Text Available Piezoelectric energy harvesting has advantages over other alternative sources due to its large power density, ease of application, and capability to be fabricated at different scales: macro, micro, and nano. This paper presents an electromechanical-traffic model for a roadway compression-based piezoelectric energy harvesting system. A two-degree-of-freedom (2-DOF) electromechanical model has been developed for the piezoelectric energy harvesting unit to define its performance in power generation under a number of external excitations on the road surface. Lead Zirconate Titanate (PZT-5H) is selected as the piezoelectric material to be used in this paper due to its high Piezoelectric Charge Constant (d) and Piezoelectric Voltage Constant (g) values. The main source of vibration energy considered in this paper is the moving vehicle on the road. The effect of various frequencies on the possible generated power, caused by the different vibration characteristics of moving vehicles, has been studied. A single unit of circle-shaped Piezoelectric Cymbal Transducer (PCT) with a diameter of 32 mm and a thickness of 0.3 mm is able to generate about 0.12 mW and 13 mW of electric power under 4 Hz and 20 Hz of excitation, respectively. The estimated power generated by multiple arrays of PCTs is approximately 150 kW/km. Thus, the developed electromechanical-traffic model has enormous potential to be used in estimating the macro-scale roadway power generation system.
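To make the 2-DOF idea above concrete, the following sketch integrates a lumped two-mass system driven by a harmonic traffic load and reads harvested power off an equivalent electrical damper c_e. This is a minimal sketch of a generic 2-DOF harvester model, not the paper's model; every parameter value is an illustrative assumption.

```python
# Toy 2-DOF harvester: pavement mass m1 driven by a traffic force excites
# the harvester mass m2; power is taken as dissipation in the electrical
# damper c_e. All values below are illustrative, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 50.0, 0.05          # pavement layer and harvester masses (kg)
k1, k2 = 2e6, 5e4            # stiffnesses (N/m)
c1, c_e = 200.0, 5.0         # mechanical and electrical damping (N s/m)
F0, f = 500.0, 20.0          # traffic force amplitude (N) and frequency (Hz)

def rhs(t, y):
    x1, v1, x2, v2 = y
    F = F0 * np.sin(2 * np.pi * f * t)      # moving-vehicle load
    a1 = (F - k1 * x1 - c1 * v1 - k2 * (x1 - x2) - c_e * (v1 - v2)) / m1
    a2 = (k2 * (x1 - x2) + c_e * (v1 - v2)) / m2
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0, 2), [0, 0, 0, 0], max_step=1e-3)
v_rel = sol.y[1] - sol.y[3]                  # relative velocity across c_e
print("mean harvested power (W):", np.mean(c_e * v_rel**2))
```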
A diversity compression and combining technique based on channel shortening for cooperative networks
Hussain, Syed Imtiaz
2012-02-01
The cooperative relaying process with multiple relays needs proper coordination among the communicating and relaying nodes. This coordination and the required capabilities may not be available in some wireless systems where the nodes are equipped with very basic communication hardware. We consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination captures the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than the number of relaying nodes L. In this case, the receiver can combine the N strongest signals out of L. The remaining signals are lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening principles. This technique introduces a processing block before Rake reception which compresses the energy of the L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves system resources and makes the received signal compatible with the available hardware. Simulation results show that it outperforms the selection combining scheme. © 2012 IEEE.
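As a toy illustration of the baseline scenario above (N Rake fingers, L > N uncoordinated relay branches), the sketch below selects the N strongest branches and combines them by maximal ratio combining. The flat-fading channel model and all sizes are illustrative assumptions; the channel-shortening pre-processing block itself is not reproduced here.

```python
# Keep the N strongest of L relayed branches, then maximal-ratio combine.
import numpy as np

rng = np.random.default_rng(0)
L, N = 6, 3                              # relay branches vs Rake fingers
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
s = 1.0 + 0.0j                           # unit-energy pilot symbol
noise = 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
r = h * s + noise                        # per-branch received samples

strongest = np.argsort(np.abs(h))[-N:]   # N best branches by channel gain
mrc = np.vdot(h[strongest], r[strongest])  # sum of conj(h_i) * r_i
print("combined decision statistic:", mrc)
```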
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
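The paper solves the l1-regularized least-squares problem with a primal-dual interior point method; as a compact stand-in, the sketch below uses ISTA (iterative shrinkage-thresholding), a simpler solver for the same objective. The operator A, data b and regularization weight lam are illustrative placeholders for the discretized resistivity problem.

```python
# ISTA sketch for  min_x  0.5*||A x - b||_2^2 + lam*||x||_1
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for sparse recovery."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy underdetermined system: 40 measurements, 100 unknowns, 5 nonzeros.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[rng.choice(100, 5, replace=False)] = 1.0
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
print("recovered support:", np.nonzero(np.round(x_hat, 2))[0])
```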
Development of compressible density-based steam explosion simulation code ESE-2
International Nuclear Information System (INIS)
Leskovar, M.
2004-01-01
A steam explosion is a fuel-coolant interaction process by which the energy of the corium is transferred to water on a time-scale smaller than the time-scale for system pressure relief, inducing dynamic loading of surrounding structures. A strong enough steam explosion in a nuclear power plant could jeopardize the containment integrity and so lead to a direct release of radioactive material to the environment. To help find answers to open questions regarding steam explosion understanding and modelling, the steam explosion simulation code ESE-2 is being developed. In contrast to the previously developed simulation code ESE-1, where the multiphase flow equations are solved with pressure-based numerical methods (best suited for incompressible flow), in ESE-2 density-based numerical methods (best suited for compressible flow) are used. ESE-2 will therefore enable an accurate treatment of the whole steam explosion process, which consists of the premixing, triggering, propagation and expansion phases. In the paper the basic characteristics of the mathematical model and the numerical solution procedure in ESE-2 are described. The essence of the numerical treatment is that the convective terms in the multiphase flow equations are calculated with the AUSM+ scheme, which is very time-efficient since no field-by-field wave decomposition is needed, using second-order accurate discretization. (author)
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, form a secret key that is shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can completely recover the original image, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
A compressive sensing based secure watermark detection and privacy preserving storage framework.
Qia Wang; Wenjun Zeng; Jun Tian
2014-03-01
Privacy is a critical issue when the data owners outsource data storage or processing to a third party computing service, such as the cloud. In this paper, we identify a cloud computing application scenario that requires simultaneously performing secure watermark detection and privacy preserving multimedia data storage. We then propose a compressive sensing (CS)-based framework using secure multiparty computation (MPC) protocols to address such a requirement. In our framework, the multimedia data and secret watermark pattern are presented to the cloud for secure watermark detection in a CS domain to protect the privacy. During CS transformation, the privacy of the CS matrix and the watermark pattern is protected by the MPC protocols under the semi-honest security model. We derive the expected watermark detection performance in the CS domain, given the target image, watermark pattern, and the size of the CS matrix (but without the CS matrix itself). The correctness of the derived performance has been validated by our experiments. Our theoretical analysis and experimental results show that secure watermark detection in the CS domain is feasible. Our framework can also be extended to other collaborative secure signal processing and data-mining applications in the cloud.
Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...
Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin
2018-05-01
When faults happen at the gas path components of gas turbines, sparsely-distributed, charged debris is generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a larger piece of debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of the fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately with the electrostatic detection method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. The vector is then reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
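A minimal sketch of the sparse-recovery step follows, using a greedy orthogonal matching pursuit (OMP) solver in place of the paper's l1-constrained formulation; Phi stands in for the discretized HSESCA measurement model, y for the sensor readings, and k for the assumed number of charged-debris mesh cells.

```python
# OMP: recover a k-sparse charge vector x from y = Phi @ x.
import numpy as np

def omp(Phi, y, k):
    """Greedy sparse recovery: grow the support one atom at a time."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # least-squares fit on the enlarged support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```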
Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay
International Nuclear Information System (INIS)
Wang Na; Wu Zhi-Hai; Peng Li
2014-01-01
In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
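For intuition, here is a minimal sketch of a sampled-data consensus protocol with a small input delay tau, restricted to identical first-order agents (the paper's heterogeneous dynamics and its necessary-and-sufficient conditions are not reproduced); the graph, sampling period and delay are illustrative.

```python
# Sampled-data consensus for first-order integrator agents: over each
# period the previous control is still applied for tau seconds (delay),
# then the freshly sampled control takes over.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)     # adjacency of a 4-agent ring
Lap = np.diag(A.sum(1)) - A             # graph Laplacian
h, tau, T = 0.1, 0.03, 200              # sampling period, delay, steps

x = np.array([1.0, -2.0, 0.5, 3.0])     # initial states
u_prev = np.zeros(4)                    # control held from last sample
for _ in range(T):
    u_new = -Lap @ x                    # protocol computed at sample time
    x = x + tau * u_prev + (h - tau) * u_new   # exact for integrators
    u_prev = u_new
print("states after", T, "samples:", x.round(4))  # -> common value
```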
Orubu, Samuel E F; Hobson, Nicholas J; Basit, Abdul W; Tuleu, Catherine
2017-04-01
Dispersible tablets are proposed by the World Health Organization as the preferred paediatric formulation. It was hypothesised that tablets made from a powdered milk-base that disperse in water to form suspensions resembling milk might be a useful platform to improve acceptability in children. Milk-based dispersible tablets containing various types of powdered milk and infant formulae were formulated. The influence of milk type and content on placebo tablet properties was investigated using a design-of-experiments approach. Responses measured included friability, crushing strength and disintegration time. Additionally, the influence of compression force on the tablet properties of a model formulation was studied by compaction simulation. Disintegration times increased as milk content increased. Compaction simulation studies showed that compression force influenced disintegration time. These results suggest that the milk content, rather than type, and compression force were the most important determinants of disintegration. Up to 30% milk could be incorporated to produce 200 mg 10-mm flat-faced placebo tablets by direct compression disintegrating within 3 min in 5-10 ml of water, which is a realistic administration volume in children. The platform could accommodate 30% of a model active pharmaceutical ingredient (caffeine citrate). © 2016 Royal Pharmaceutical Society.
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
Directory of Open Access Journals (Sweden)
Shaowu Pan
2015-04-01
Full Text Available A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
A Simulation-based Randomized Controlled Study of Factors Influencing Chest Compression Depth
Directory of Open Access Journals (Sweden)
Kelsey P. Mayrand
2015-12-01
Full Text Available Introduction: Current resuscitation guidelines emphasize a systems approach with a strong emphasis on quality cardiopulmonary resuscitation (CPR). Despite the American Heart Association (AHA) emphasis on quality CPR for over 10 years, resuscitation teams do not consistently meet recommended CPR standards. The objective is to assess the impact on chest compression depth of factors including bed height, step stool utilization, position of the rescuer's arms and shoulders relative to the point of chest compression, and rescuer characteristics including height, weight, and gender. Methods: Fifty-six eligible subjects, including physician assistant students and first-year emergency medicine residents, were enrolled and randomized to intervention (bed lowered and step stool readily available) and control (bed raised and step stool accessible, but concealed) groups. We instructed all subjects to complete all interventions on a high-fidelity mannequin per AHA guidelines. Secondary end points included subject arm angle, height, weight group, and gender. Results: Using an intention-to-treat analysis, the mean compression depths for the intervention and control groups were not significantly different. Subjects positioning their arms at a 90-degree angle relative to the sagittal plane of the mannequin's chest achieved a mean compression depth significantly greater than those compressing at an angle less than 90 degrees. There was a significant correlation between using a step stool and achieving the correct shoulder position. Subject height, weight group, and gender were all independently associated with compression depth. Conclusion: Rescuer arm position relative to the patient's chest and step stool utilization during CPR are modifiable factors facilitating improved chest compression depth.
A Story-Based Simulation for Teaching Sampling Distributions
Turner, Stephen; Dabney, Alan R.
2015-01-01
Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…
Control charts for location based on different sampling schemes
Mehmood, R.; Riaz, M.; Does, R.J.M.M.
2013-01-01
Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set
Cheng, An; Chao, Sao-Jeng; Lin, Wei-Ting
2013-01-01
Leaching of calcium ions increases the porosity of cement-based materials, consequently resulting in a negative effect on durability since it provides an entry for aggressive harmful ions, causing reinforcing steel corrosion. This study investigates the effects of leaching behavior of calcium ions on the compression and durability of cement-based materials. Since the parameters influencing the leaching behavior of cement-based materials are unclear and diverse, this paper focuses on the influence of added mineral admixtures (fly ash, slag and silica fume) on the leaching behavior of calcium ions regarding compression and durability of cemented-based materials. Ammonium nitrate solution was used to accelerate the leaching process in this study. Scanning electron microscopy, X-ray diffraction analysis, and thermogravimetric analysis were employed to analyze and compare the cement-based material compositions prior to and after calcium ion leaching. The experimental results show that the mineral admixtures reduce calcium hydroxide quantity and refine pore structure through pozzolanic reaction, thus enhancing the compressive strength and durability of cement-based materials. PMID:28809247
Dynamic failure of dry and fully saturated limestone samples based on incubation time concept
Directory of Open Access Journals (Sweden)
Yuri V. Petrov
2017-02-01
Full Text Available This paper outlines the results of an experimental study of dynamic rock failure based on the comparison of dry and saturated limestone samples obtained during dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural-temporal approach, is presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load, which is required to initiate failure in the rock medium, strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reducing the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
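A hedged sketch of two of the ingredients named above: the cutpoint method for generalized sampling of a discrete distribution (a precomputed jump table replaces the sequential CDF scan) and inversion sampling from the exponential distribution. The table size and the example distribution are illustrative.

```python
# Cutpoint method: jump near the answer via a table, then advance a step
# or two, instead of scanning the CDF from the start on every draw.
import numpy as np

def build_cutpoints(pmf, m):
    """Precompute cutpoints: cuts[j] = first index with cdf > j/m."""
    cdf = np.cumsum(pmf)
    return cdf, np.searchsorted(cdf, np.arange(m) / m, side='right')

def sample_discrete(cdf, cuts, rng):
    """O(1) expected lookups per draw."""
    u = rng.random()
    i = cuts[int(u * len(cuts))]
    while cdf[i] < u:
        i += 1
    return i

def sample_exponential(mu, rng):
    """Inversion sampling: x = -mu * ln(u) for mean free path mu."""
    return -mu * np.log(rng.random())

rng = np.random.default_rng(1)
pmf = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
cdf, cuts = build_cutpoints(pmf, m=64)
draws = [sample_discrete(cdf, cuts, rng) for _ in range(10000)]
print(np.bincount(draws) / 10000)       # empirical frequencies ~ pmf
print(sample_exponential(1.0, rng))     # one exponential variate
```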
Applications of wavelet-based compression to multidimensional earth science data
Energy Technology Data Exchange (ETDEWEB)
Bradley, J.N.; Brislawn, C.M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben
2017-09-12
One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. It adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values, less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed performance that satisfies the needs of portable telecardiology systems, such as the limited payload size and low power consumption.
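A compact, hedged sketch of two ingredients of such a scheme follows: a one-level Haar transform (standing in for the paper's DWT) and run-length encoding of thresholded detail coefficients. The bit-field preserving step is omitted, and the threshold and test signal are illustrative.

```python
# One-level Haar DWT plus RLE of the small-coefficient runs.
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail halves."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def rle(symbols):
    """Run-length encode a 1-D sequence into (value, count) pairs."""
    runs, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            runs.append((prev, count)); prev, count = s, 1
    runs.append((prev, count))
    return runs

ecg = np.sin(np.linspace(0, 8 * np.pi, 256))       # stand-in ECG segment
a, d = haar_dwt(ecg)
q = np.where(np.abs(d) < 0.05, 0, np.round(d, 2))  # zero out small details
print("RLE of detail coefficients:", rle(q.tolist())[:5])
```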
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
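The compressed-feature step above can be made concrete with a very sparse random projection of the Achlioptas type, which maps a high-dimensional image-feature vector to a low-dimensional one shared by foreground and background samples. Dimensions and the sparsity parameter s below are illustrative choices, not the paper's settings.

```python
# Very sparse random projection: entries are sqrt(s) * {+1, 0, -1} with
# probabilities 1/(2s), 1 - 1/s, 1/(2s); most entries are exactly zero.
import numpy as np

def sparse_measurement_matrix(n_out, n_in, s=3, rng=None):
    rng = rng or np.random.default_rng()
    u = rng.random((n_out, n_in))
    R = np.zeros((n_out, n_in))
    R[u < 1 / (2 * s)] = np.sqrt(s)
    R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R

rng = np.random.default_rng(0)
R = sparse_measurement_matrix(50, 10000, s=100, rng=rng)  # ~99% zeros
patch_features = rng.random(10000)      # stand-in multiscale features
v = R @ patch_features                  # compressed feature vector
print(v.shape)                          # (50,): low-dimensional features
```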
Energy Technology Data Exchange (ETDEWEB)
Nochaiya, Thanongsak [Department of Physics, Faculty of Science, Naresuan University, Phitsanulok 65000 (Thailand); Sekine, Yoshika [Department of Chemistry, School of Science, Tokai University, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 259-1292 (Japan); Choopun, Supab [Applied Physics Research Laboratory, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Chaipanich, Arnon, E-mail: arnon.chaipanich@cmu.ac.th [Advanced Cement-Based Materials Research Unit, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand)
2015-05-05
Highlights: • Nano zinc oxide was used as an additive material. • Microstructure and phase characterization of pastes were carried out using SEM and XRD. • TGA and FTIR were also used to determine the hydration reaction. • Compressive strength of ZnO mixes was found to increase at 28 days. - Abstract: Zinc oxide nanoparticles as a nanophotocatalyst have great potential for self-cleaning applications in concrete structures; their effects on cement hydration, setting time and compressive strength are also important for use in practice. This paper reports the effects of zinc oxide nanoparticles, as an additive material, on the properties of cement-based materials. Setting time, compressive strength and porosity of mortars were investigated. Microstructure and morphology of pastes were characterized using scanning electron microscopy and X-ray diffraction (XRD), respectively. Moreover, thermogravimetric analysis (TGA) and Fourier-transform infrared spectroscopy (FTIR) were also used to determine the hydration reaction. The results show that Portland cement paste with additional ZnO slightly increased the water requirement, while the setting time was prolonged compared with the control mix. However, the compressive strength of ZnO mixes was found to be up to 15% higher than that of the PC mix (at 28 days) via a filler effect. Microstructure, XRD and TGA results of ZnO pastes show fewer hydration products before 28 days but similar amounts at 28 days. In addition, FTIR results confirmed the retardation when ZnO was partially added to Portland cement pastes.
Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian
2017-09-01
Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers; it is therefore prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.
Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation
Directory of Open Access Journals (Sweden)
Kuo-Kun Tseng
2014-02-01
Full Text Available In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach integrating electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible.
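For reference, minimal implementations of three of the evaluation measures named above (BER, SNR, CR) are sketched below under their usual textbook definitions; the exact definitions used in the paper, in particular CNR, may differ, and the inputs are placeholders rather than ECG records.

```python
# Common evaluation metrics for watermarking/compression experiments.
import numpy as np

def snr_db(original, processed):
    """Signal-to-noise ratio of a watermarked/compressed signal in dB."""
    noise = np.asarray(original) - np.asarray(processed)
    return 10 * np.log10(np.sum(np.asarray(original) ** 2) / np.sum(noise ** 2))

def bit_error_rate(sent_bits, recovered_bits):
    """Fraction of watermark bits recovered incorrectly."""
    return np.mean(np.asarray(sent_bits) != np.asarray(recovered_bits))

def compression_ratio(raw_bytes, compressed_bytes):
    """Ratio of original size to compressed size."""
    return raw_bytes / compressed_bytes
```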
NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator
Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian
2018-04-01
The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time-consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.
Adaptive compressive ghost imaging based on wavelet trees and sparse representation.
Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie
2014-03-24
Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
International Nuclear Information System (INIS)
Yu Xingfu; Tian Sugui; Du Hongqiang; Yu Huichen; Wang Minggang; Shang Lijuan; Cui Shusen
2009-01-01
By pre-compressive creep treatment, the cubical γ' phase in a nickel-base single crystal superalloy is transformed into the P-type rafted structure along the direction parallel to the applied stress axis. The microstructure evolution of the P-type γ' rafted alloy during tensile creep is investigated by means of creep curve measurement and microstructure observation. Results show that the P-type γ' rafted phase in the alloy is transformed into the N-type structure along the direction perpendicular to the applied stress axis in the initial stage of tensile creep. Under tensile stress at high temperature, the equilibrium concentrations of the elements change in the different regions of the P-type γ' rafted phase, which promotes the inhomogeneous coarsening of the P-type γ' phase. The P-type γ' rafted phase then decomposes to form the groove structure. As a result of the directional diffusion of the elements, the decomposition of the P-type γ' rafted phase into the cubical-like structure is attributed to the increase of the chemical potential of the solute elements M(Ta, Al) in the groove regions. Further, the lattice contraction in the horizontal interfaces of the cubical-like γ' phase may expel the Al and Ta atoms with larger radii due to the shearing stress, while the lattice expansion in the upright interfaces of the cubical-like γ' phase, due to the tension stress, may trap the Ta and Al atoms, which promotes the directional growth of the γ' phase into the N-type rafted structure. Therefore, the change of the strain energy density in the different interfaces of the cubical-like γ' phase is thought to be the driving force of element diffusion and the directional coarsening of the γ' phase.
A compressed sensing approach for resolution improvement in fiber-bundle based endomicroscopy
Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.
2018-02-01
Endomicroscopy techniques such as confocal, multi-photon, and wide-field imaging have all been demonstrated using coherent fiber-optic imaging bundles. While the narrow diameter and flexibility of fiber bundles are clinically advantageous, the number of resolvable points in an image is conventionally limited to the number of individual fibers within the bundle. We introduce concepts from the compressed sensing (CS) field to fiber-bundle based endomicroscopy, to allow images to be recovered with more resolvable points than there are fibers in the bundle. The distal face of the fiber bundle is treated as a low-resolution sensor with circular pixels (fibers) arranged in a hexagonal lattice. A spatial light modulator is located conjugate to the object and the distal face, applying multiple high-resolution masks to the intermediate image prior to propagation through the bundle. We acquire images of the proximal end of the bundle for each (known) mask pattern and then apply CS inversion algorithms to recover a single high-resolution image. We first developed a theoretical forward model describing image formation through the mask and fiber bundle. We then imaged objects through a rigid fiber bundle and demonstrated that our CS endomicroscopy architecture can recover intra-fiber details while filling inter-fiber regions with interpolation. Finally, we examined the relationship between reconstruction quality and the ratio of the number of mask elements to the number of fiber cores, finding that images could be generated with approximately 28,900 resolvable points for a 1,000-fiber region in our platform.
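An illustrative version of such a forward model is sketched below: each fiber sums the masked high-resolution scene over its footprint, producing one measurement vector per mask. The grid sizes, the square fiber footprint and the binary mask statistics are simplifying assumptions (real bundles have circular fibers on a hexagonal lattice).

```python
# Toy forward model for mask-based fiber-bundle sensing.
import numpy as np

def bundle_measure(scene, masks, fiber_px):
    """Return (n_masks, n_fibers) measurements for a square toy bundle."""
    H, W = scene.shape
    meas = []
    for M in masks:                        # one SLM pattern per shot
        masked = scene * M
        # sum over each fiber_px x fiber_px fiber footprint
        m = masked.reshape(H // fiber_px, fiber_px,
                           W // fiber_px, fiber_px).sum(axis=(1, 3))
        meas.append(m.ravel())
    return np.stack(meas)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))               # high-resolution object
masks = rng.integers(0, 2, (16, 64, 64))   # 16 binary SLM masks
y = bundle_measure(scene, masks, fiber_px=8)
print(y.shape)                             # (16, 64): 16 shots, 64 fibers
```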
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the fused measurements using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. Comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
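A minimal sketch of the two fusion rules named above, applied to placeholder arrays: the max-absolute rule for the low-rank components and a standard-deviation-weighted rule for the CS measurements of the sparse components. The global SD weighting is an assumption; the paper's exact rule may operate on local windows.

```python
# Fusion rules for RPCA components of registered IR/visible images.
import numpy as np

def fuse_max_abs(low_rank_ir, low_rank_vis):
    """Max-absolute rule: keep the coefficient with larger magnitude."""
    take_ir = np.abs(low_rank_ir) >= np.abs(low_rank_vis)
    return np.where(take_ir, low_rank_ir, low_rank_vis)

def fuse_sd(meas_ir, meas_vis):
    """SD-based rule: weight each measurement vector by its spread."""
    w_ir, w_vis = np.std(meas_ir), np.std(meas_vis)
    return (w_ir * meas_ir + w_vis * meas_vis) / (w_ir + w_vis)
```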
Energy Technology Data Exchange (ETDEWEB)
Wang Weizheng; Kuang Jishun; You Zhiqiang; Liu Peng, E-mail: jshkuang@163.com [College of Information Science and Engineering, Hunan University, Changsha 410082 (China)
2011-07-15
This paper presents a new test scheme based on scan-block encoding in a linear feedback shift register (LFSR) reseeding-based compression environment, together with a novel algorithm for scan-block clustering. The main contribution of this paper is a flexible test-application framework that achieves significant reductions in switching activity during scan shift and in the number of specified bits that need to be generated via LFSR reseeding. Thus, it can significantly reduce the test power and test data volume. Experimental results using the Mintest test set on the larger ISCAS'89 benchmarks show that the proposed method reduces the switching activity significantly, by 72%-94%, and provides a best possible test compression of 74%-94% with little hardware overhead. (semiconductor integrated circuits)
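For background, a minimal Fibonacci LFSR sketch shows why reseeding compresses test data: a short seed deterministically expands into a long pseudo-random scan sequence, so storing seeds (plus the scan-block codes) is cheaper than storing raw test vectors. The register width and tap positions below are illustrative, not taken from the paper.

```python
# Fibonacci LFSR: shift right, feed the XOR of the tapped bits into the MSB.
def lfsr_stream(seed, taps, width, n_bits):
    """Generate n_bits from an LFSR of the given width and tap set."""
    state = seed & ((1 << width) - 1)
    out = []
    for _ in range(n_bits):
        out.append(state & 1)              # output the LSB
        fb = 0
        for t in taps:                     # XOR of the tapped positions
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out

# One 8-bit seed expands into a 16-bit scan pattern.
print(lfsr_stream(seed=0b1010_0110, taps=(0, 2, 3, 4), width=8, n_bits=16))
```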
International Nuclear Information System (INIS)
Lorenzoni, José; David, Philippe; Levivier, Marc
2012-01-01
Purpose: To describe the anatomical characteristics and patterns of neurovascular compression in patients suffering from classic trigeminal neuralgia (CTN), using high-resolution magnetic resonance imaging (MRI). Materials and methods: The analysis of the anatomy of the trigeminal nerve, the brain stem and the vascular structures related to this nerve was performed in 100 consecutive patients treated with Gamma Knife radiosurgery for CTN between December 1999 and September 2004. MRI studies (T1, T1-enhanced and T2-SPIR) with simultaneous axial, coronal and sagittal visualization were dynamically assessed using the GammaPlan™ software. Three-dimensional reconstructions were also developed in some representative cases. Results: In 93 patients (93%), there were one or several vascular structures in contact either with the trigeminal nerve or close to its origin in the pons. The superior cerebellar artery was involved in 71 cases (76%). Other vessels identified were the antero-inferior cerebellar artery, the basilar artery, the vertebral artery, and some venous structures. Vascular compression was found anywhere along the trigeminal nerve. The mean distance between the nerve compression and the origin of the nerve in the brainstem was 3.76 ± 2.9 mm (range 0–9.8 mm). In 39 patients (42%), the vascular compression was located proximally and in 42 (45%) it was located distally. Nerve dislocation or distortion by the vessel was observed in 30 cases (32%). Conclusions: The findings of this study are similar to those reported in surgical and autopsy series. This non-invasive MRI-based approach could be useful for diagnostic and therapeutic decisions in CTN, and it could help in understanding its pathogenesis.
Anisotropic Concrete Compressive Strength
DEFF Research Database (Denmark)
Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao
2017-01-01
When the load carrying capacity of existing concrete structures is (re-)assessed, it is often based on the compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...
Energy Technology Data Exchange (ETDEWEB)
Sidles, John A; Jacky, Jonathan P [Department of Orthopaedics and Sports Medicine, Box 356500, School of Medicine, University of Washington, Seattle, WA, 98195 (United States); Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M [Department of Mechanical Engineering, University of Washington, Seattle, WA 98195 (United States); Harrell, Lee E [Department of Physics, US Military Academy, West Point, NY 10996 (United States); Hero, Alfred O [Department of Electrical Engineering, University of Michigan, MI 49931 (United States); Norman, Anthony G [Department of Bioengineering, University of Washington, Seattle, WA 98195 (United States)], E-mail: sidles@u.washington.edu
2009-06-15
Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
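As a rough, self-contained illustration of the Dantzig-selection step mentioned in the abstract above, the following Python sketch recovers a sparse vector from random projections by posing the Dantzig selector as a linear program; the Gaussian sampling matrix, problem sizes, and tolerance delta are illustrative assumptions, not the authors' deterministic construction.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                                  # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Dantzig selector: minimize ||x||_1 subject to ||A.T (A x - y)||_inf <= delta.
# With variables z = [x, t] this becomes: minimize sum(t) subject to
#   x - t <= 0,  -x - t <= 0,  and  -delta <= G x - A.T y <= delta,  G = A.T A.
delta = 0.05
G, Aty, I = A.T @ A, A.T @ y, np.eye(n)
A_ub = np.block([[ I, -I],
                 [-I, -I],
                 [ G, np.zeros((n, n))],
                 [-G, np.zeros((n, n))]])
b_ub = np.concatenate([np.zeros(2 * n), delta + Aty, delta - Aty])
c = np.concatenate([np.zeros(n), np.ones(n)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
x_hat = res.x[:n]
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])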
Moshina, Nataliia; Sebuødegård, Sofie; Hofvind, Solveig
2017-06-01
We aimed to investigate early performance measures in a population-based breast cancer screening program stratified by compression force and pressure at the time of the mammographic screening examination. Early performance measures included recall rate, rates of screen-detected and interval breast cancers, positive predictive value of recall (PPV), sensitivity, specificity, and histopathologic characteristics of screen-detected and interval breast cancers. Information on 261,641 mammographic examinations from 93,444 subsequently screened women was used for the analyses. The study period was 2007-2015. Compression force and pressure were categorized using tertiles as low, medium, or high. χ² tests, t tests, and tests for trend were used to examine differences between early performance measures across categories of compression force and pressure. We applied generalized estimating equations to identify the odds ratios (OR) of screen-detected or interval breast cancer associated with compression force and pressure, adjusting for fibroglandular and/or breast volume and age. The recall rate decreased, while PPV and specificity increased, with increasing compression force (significant tests for trend). The rates of screen-detected cancer, PPV, sensitivity, and specificity decreased with increasing compression pressure (significant tests for trend). High compression pressure was associated with higher odds of interval breast cancer compared with low compression pressure (OR 1.89; 95% CI 1.43-2.48). High compression force and low compression pressure were associated with more favorable early performance measures in the screening program.
Triangulation based inclusion probabilities: a design-unbiased sampling approach
Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph
2011-01-01
A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of three units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed-area sample plots in a simulation with two forest stands. ...
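As a minimal sketch of how such triangulation-derived inclusion probabilities feed a design-unbiased estimator, the Horvitz-Thompson estimator of a population total is shown below; the three attribute values and probabilities are invented for illustration.

import numpy as np

y = np.array([12.4, 7.1, 30.2])    # attribute of the three selected units
pi = np.array([0.08, 0.03, 0.15])  # their individual inclusion probabilities

# Horvitz-Thompson estimator: weight each observed unit by 1/pi_i.
total_hat = np.sum(y / pi)
print(f"estimated population total: {total_hat:.1f}")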
Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas
2009-07-01
Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.
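A rough sketch of the scalable-compression idea, assuming PyWavelets is available; the synthetic image, the bior4.4 wavelet, and the level budget are illustrative, and a real ROI codec would additionally retain the fine-detail coefficients whose support overlaps the ROI.

import numpy as np
import pywt

img = np.random.default_rng(1).random((256, 256))   # stand-in for a DICOM slice
coeffs = pywt.wavedec2(img, "bior4.4", level=3)     # [cA3, details3, details2, details1]

# Scalability: transmit the coarse approximation plus only the coarsest
# `levels_kept` detail levels; finer levels are dropped (zeroed) here.
levels_kept = 1
slim = [coeffs[0]]
for i, det in enumerate(coeffs[1:], start=1):
    slim.append(det if i <= levels_kept else tuple(np.zeros_like(c) for c in det))
recon = pywt.waverec2(slim, "bior4.4")
print("reconstruction MSE:", float(np.mean((img - recon) ** 2)))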
Directory of Open Access Journals (Sweden)
Rachmad Vidya Wicaksana Putra
2012-09-01
Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the number of shifter-adders needed to substitute multipliers. We call this new algorithm the multiplication from Common Binary Expression (mCBE) algorithm. Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers can retain a good compressed-image quality compared to JPEG recommendations. These ideas lead to our design being small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or 2-input adder). By using the pipelining method, we can achieve a high-speed architecture with latency as a trade-off consideration. The design has been synthesized and can reach a speed of up to 1.41 ns critical path delay (709.22 MHz).
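To make the multiplierless idea concrete, here is a toy Python check that a constant multiplication can be replaced by shifters and adders; the constant 181 (a common fixed-point scaling of sqrt(2) in DCT designs) and its hand-derived decomposition are illustrative, not the paper's mCBE output.

def mul181(x: int) -> int:
    # 181 = 0b10110101 = 128 + 32 + 16 + 4 + 1, i.e. four adders and four shifters
    return (x << 7) + (x << 5) + (x << 4) + (x << 2) + x

assert all(mul181(x) == 181 * x for x in range(-1000, 1000))
print(mul181(7))  # 1267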
Directory of Open Access Journals (Sweden)
Yihang Yin
2015-08-01
Full Text Available Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
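A condensed sketch of the cluster-head compression step, assuming scikit-learn; the synthetic cluster readings and the 95% retained-variance bound are illustrative stand-ins for the paper's error-bound guarantee.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# rows = time samples, columns = the correlated sensors of one cluster
readings = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))
readings += 0.05 * rng.standard_normal(readings.shape)

pca = PCA(n_components=0.95)               # keep components explaining >= 95% variance
compressed = pca.fit_transform(readings)   # what the cluster head would transmit
restored = pca.inverse_transform(compressed)
print(compressed.shape[1], "components kept; MSE:",
      float(np.mean((readings - restored) ** 2)))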
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
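As a single-node sketch of the lifting building block behind such in-cluster transforms, one level of a Haar-like predict/update pair with perfect reconstruction is shown below; the distributed, multi-level aspects are omitted.

import numpy as np

def lift_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even        # predict: detail = odd minus its prediction from even
    s = even + d / 2.0    # update: smooth approximation preserving the mean
    return s, d

def lift_inverse(s, d):
    even = s - d / 2.0
    odd = even + d
    out = np.empty(s.size + d.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([2.0, 4.0, 4.0, 8.0, 6.0, 6.0, 5.0, 9.0])
s, d = lift_forward(x)
assert np.allclose(lift_inverse(s, d), x)   # perfect reconstruction
print("smooth:", s, "detail:", d)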
The Toggle Local Planner for sampling-based motion planning
Denny, Jory; Amato, Nancy M.
2012-01-01
Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5
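A compact sketch of the PRM construction in Python, assuming networkx for the roadmap graph; the 2D disc obstacle, sample count, and connection radius are illustrative stand-ins for the higher-dimensional planning spaces the abstract refers to.

import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
obstacles = [((0.5, 0.5), 0.2)]    # (center, radius) discs in the unit square

def valid(q):
    return all(np.linalg.norm(q - np.array(c)) > r for c, r in obstacles)

def edge_free(a, b, steps=20):     # straight-line local planner
    return all(valid(a + t * (b - a)) for t in np.linspace(0.0, 1.0, steps))

samples = [q for q in rng.random((200, 2)) if valid(q)]
G = nx.Graph()
G.add_nodes_from(range(len(samples)))
for i, qi in enumerate(samples):   # connect nearby, collision-free pairs
    for j in range(i + 1, len(samples)):
        dist = float(np.linalg.norm(qi - samples[j]))
        if dist < 0.15 and edge_free(qi, samples[j]):
            G.add_edge(i, j, weight=dist)
if nx.has_path(G, 0, len(samples) - 1):
    path = nx.shortest_path(G, 0, len(samples) - 1, weight="weight")
    print("roadmap path through", len(path), "milestones")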
Variable screening and ranking using sampling-based sensitivity measures
International Nuclear Information System (INIS)
Wu, Y-T.; Mohanty, Sitakanta
2006-01-01
This paper presents a methodology for screening insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables.
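The following Python sketch conveys the flavor of such sampling-based screening: random inputs are drawn, the model is run once per sample, and a two-sample Kolmogorov-Smirnov test on the conditional input distributions stands in for the paper's CDF-based measures and acceptance limits; the toy model is invented.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
n = 2000
X = rng.random((n, 5))    # five candidate random variables
y = 3 * X[:, 0] + np.sin(6 * X[:, 2]) + 0.01 * rng.standard_normal(n)

hi = y > np.median(y)     # condition on high vs. low response
for i in range(X.shape[1]):
    stat, p = ks_2samp(X[hi, i], X[~hi, i])
    verdict = "significant" if p < 0.01 else "screen out"
    print(f"x{i}: KS={stat:.3f}  p={p:.2e}  -> {verdict}")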
A sampling-based approach to probabilistic pursuit evasion
Mahadevan, Aditya; Amato, Nancy M.
2012-01-01
Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination.
Rojali, Salman, Afan Galih; George
2017-08-01
Along with the development of information technology to meet growing needs, various adverse and difficult-to-avoid actions are emerging. One such action is data theft. Therefore, this study discusses cryptography and steganography, which aim to overcome these problems. The study uses the Modified Vigenere Cipher, Least Significant Bit, and Dictionary Based Compression methods. To determine performance, the Peak Signal-to-Noise Ratio (PSNR) is used as an objective measure and the Mean Opinion Score (MOS) as a subjective one; the performance of this study is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After comparing, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with MSE values ranging from 0.0191622 to 0.05275 and PSNR from 60.909 to 65.306 for a hidden file size of 18 kB, and with MOS values ranging from 4.214 to 4.722, i.e., image quality approaching very good.
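A minimal Python sketch of the Least Significant Bit stage alone; the cipher and dictionary-compression stages of the study are omitted, and the cover image is synthetic.

import numpy as np

def lsb_embed(pixels, payload):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    assert bits.size <= flat.size, "cover image too small"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(pixels.shape)

def lsb_extract(pixels, n_bytes):
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(5).integers(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover.copy(), b"secret")
assert lsb_extract(stego, 6) == b"secret"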
Ma, Longtao; Chen, Shengmei; Pei, Zengxia; Huang, Yan; Liang, Guojin; Mo, Funian; Yang, Qi; Su, Jun; Gao, Yihua; Zapien, Juan Antonio; Zhi, Chunyi
2018-02-27
The exploitation of a highly efficient, low-cost, and stable non-noble-metal-based catalyst for both the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER), as an air electrode material for a rechargeable zinc-air battery, is significantly crucial. Meanwhile, compressible flexibility of a battery is a prerequisite for wearable and/or portable electronics. Herein, we present a strategy via single-site dispersion of an Fe-Nx species on a two-dimensional (2D) highly graphitic porous nitrogen-doped carbon layer to implement superior catalytic activity toward ORR/OER (with a half-wave potential of 0.86 V for ORR and an overpotential of 390 mV at 10 mA·cm⁻² for OER) in an alkaline medium. Furthermore, an elastic polyacrylamide-hydrogel-based electrolyte with the capability to retain great elasticity even under a highly corrosive alkaline environment is utilized to develop a solid-state compressible and rechargeable zinc-air battery. The newly developed battery has a low charge-discharge voltage gap (0.78 V at 5 mA·cm⁻²) and large power density (118 mW·cm⁻²). It can be compressed up to 54% strain and bent up to 90° without charge/discharge performance or output-power degradation. Our results reveal that single-site dispersion of catalytic active sites on a porous support for a bifunctional oxygen catalyst as cathode, integrated with a specially designed elastic electrolyte, is a feasible strategy for fabricating efficient compressible and rechargeable zinc-air batteries, which could enlighten the design and development of other functional electronic devices.
On incomplete sampling under birth-death models and connections to the sampling-based coalescent.
Stadler, Tanja
2009-11-07
The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
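As a toy numerical companion, the following Python sketch runs a Gillespie-style simulation of lineage counts under a constant-rate birth-death process and then applies incomplete (Bernoulli) sampling of the extant tips; the rates and sampling probability are illustrative, and the tree shape analyzed in the paper is not tracked.

import numpy as np

def birth_death_tips(lam=1.0, mu=0.5, t_max=5.0, seed=6):
    # Gillespie simulation of the number of extant lineages.
    rng = np.random.default_rng(seed)
    t, n = 0.0, 1
    while n > 0:
        t += rng.exponential(1.0 / (n * (lam + mu)))
        if t >= t_max:
            return n
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return 0

rho = 0.3   # sampling probability: each tip kept independently
tips = birth_death_tips()
sampled = np.random.default_rng(7).binomial(tips, rho)
print(f"{tips} extant lineages, {sampled} sampled")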
Ali, Anum Z.; Hammi, Oualid; Al-Naffouri, Tareq Y.
2013-01-01
Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power-efficient operation of an orthogonal frequency division multiple access cognitive radio system and the realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Overdriven amplifiers result in in-band distortions and out-of-band interference. The out-of-band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.
Use of magnetic compression based on amorphous alloys as a drive for induction linacs
International Nuclear Information System (INIS)
Birx, D.L.; Cook, E.G.; Hawkins, S.A.; Poor, S.E.; Reginato, L.; Schmidt, J.; Smith, M.W.
1984-01-01
In anticipation of current and future needs of the Particle Beam Program and other programs at the Lawrence Livermore National Laboratory, we are continuing efforts in the development of high-repetition-rate magnetic pulse compressors that use ferromagnetic metallic glasses, both in the linear and very high saturation regimes. These devices are ideally suited as drivers for linear induction accelerators, where duty factor or average repetition rate requirements (hundreds of hertz) exceed the parameters that can be achieved by pulse compression using spark gaps. The technique of magnetic pulse compression has been in use for several decades, but relatively recent developments in rapidly quenched magnetic metals of very thin cross sections have led to the development of state-of-the-art magnetic pulse compressors with very high peak power, repetition rates, and reliability. This paper will describe results of recent experiments and the relevant electrical and mechanical properties of magnetic pulse compressors to achieve high efficiency and reliability
Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer
2011-12-15
Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
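To give a flavor of the recovery side of compressive ranging, here is a Python sketch reconstructing a sparse range profile from pseudorandom binary (+/-1) measurements via orthogonal matching pursuit; the sizes, noise level, and matrix are illustrative, not the optical system described above.

import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then least-squares refit on the chosen support.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(8)
n_bins, n_meas = 256, 48
A = rng.choice([-1.0, 1.0], size=(n_meas, n_bins)) / np.sqrt(n_meas)
profile = np.zeros(n_bins)
profile[[40, 41, 180]] = [1.0, 0.6, 0.8]    # three reflectors
y = A @ profile + 0.01 * rng.standard_normal(n_meas)
print("recovered bins:", np.nonzero(np.abs(omp(A, y, 3)) > 0.1)[0])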
A finite-volume HLLC-based scheme for compressible interfacial flows with surface tension
Energy Technology Data Exchange (ETDEWEB)
Garrick, Daniel P. [Department of Aerospace Engineering, Iowa State University, Ames, IA (United States); Owkes, Mark [Department of Mechanical and Industrial Engineering, Montana State University, Bozeman, MT (United States); Regele, Jonathan D., E-mail: jregele@iastate.edu [Department of Aerospace Engineering, Iowa State University, Ames, IA (United States)
2017-06-15
Shock waves are often used in experiments to create a shear flow across liquid droplets to study secondary atomization. Similar behavior occurs inside of supersonic combustors (scramjets) under startup conditions, but it is challenging to study these conditions experimentally. In order to investigate this phenomenon further, a numerical approach is developed to simulate compressible multiphase flows under the effects of surface tension forces. The flow field is solved via the compressible multicomponent Euler equations (i.e., the five equation model) discretized with the finite volume method on a uniform Cartesian grid. The solver utilizes a total variation diminishing (TVD) third-order Runge–Kutta method for time-marching and second order TVD spatial reconstruction. Surface tension is incorporated using the Continuum Surface Force (CSF) model. Fluxes are upwinded with a modified Harten–Lax–van Leer Contact (HLLC) approximate Riemann solver. An interface compression scheme is employed to counter numerical diffusion of the interface. The present work includes modifications to both the HLLC solver and the interface compression scheme to account for capillary force terms and the associated pressure jump across the gas–liquid interface. A simple method for numerically computing the interface curvature is developed and an acoustic scaling of the surface tension coefficient is proposed for the non-dimensionalization of the model. The model captures the surface tension induced pressure jump exactly if the exact curvature is known and is further verified with an oscillating elliptical droplet and Mach 1.47 and 3 shock-droplet interaction problems. The general characteristics of secondary atomization at a range of Weber numbers are also captured in a series of simulations.
DEFF Research Database (Denmark)
Andersen, Stig Kildegård; Carlsen, Henrik; Thomsen, Per Grove
2006-01-01
We present an approach for modelling unsteady, primarily one-dimensional, compressible flow. The conservation laws for mass, energy, and momentum are applied to a staggered mesh of control volumes and loss mechanisms are included directly as extra terms. Heat transfer, flow friction, and multidim...... are presented. The capabilities of the approach are illustrated with an example solution and an experimental validation of a Stirling engine model....
Alpha Matting with KL-Divergence Based Sparse Sampling.
Karacan, Levent; Erdem, Aykut; Erdem, Erkut
2017-06-22
In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
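A small Python sketch of the dissimilarity idea, using the closed-form KL divergence between univariate Gaussians fitted to features collected around two samples; the Gaussian simplification and the feature values are assumptions for illustration.

import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    # KL( N(mu0, var0) || N(mu1, var1) ) for scalar Gaussians
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0 + np.log(var1 / var0))

patch_a = np.array([0.61, 0.58, 0.66, 0.63])   # e.g. intensities near sample A
patch_b = np.array([0.20, 0.25, 0.22, 0.31])   # e.g. intensities near sample B
d = gaussian_kl(patch_a.mean(), patch_a.var() + 1e-6,
                patch_b.mean(), patch_b.var() + 1e-6)
print(f"KL-based dissimilarity: {d:.2f}")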
Optimum mix for fly ash geopolymer binder based on workability and compressive strength
Arafa, S. A.; Ali, A. Z. M.; Awal, A. S. M. A.; Loon, L. Y.
2018-04-01
The demand for concrete is increasing every day to sustain the development of structures. The production of OPC not only consumes large amounts of natural resources and energy, but also emits a significant quantity of CO2 into the atmosphere. Therefore, it is necessary to find alternatives like geopolymer to make concrete environmentally friendly. Geopolymer is an inorganic alumino-silicate compound produced from fly ash. This paper describes experimental work conducted by casting 40 geopolymer paste mixes, cured at 80°C for 24 h, to evaluate the effect of various parameters affecting workability and compressive strength. The alkaline solution to fly ash ratio and the sodium hydroxide (NaOH) concentration were chosen as the key parameters for strength and workability. Laboratory investigation with different sodium hydroxide concentrations and different alkaline liquid to fly ash ratios reveals that the optimum values are a 10 M NaOH concentration and AL/FA = 0.5. It has generally been found that workability decreased and compressive strength increased with an increase in the concentration of the sodium hydroxide solution. However, workability increased and compressive strength decreased with an increase in the alkaline solution to fly ash ratio.
Directory of Open Access Journals (Sweden)
Tongfeng Zhang
2016-01-01
Full Text Available A one-dimensional (1D) hybrid chaotic system is constructed from three different 1D chaotic maps in a parallel-then-cascade fashion. The proposed chaotic map has a larger key space and exhibits a better uniform distribution property in some parametric ranges compared with existing 1D chaotic maps. Meanwhile, with the combination of compressive sensing (CS) and the Fibonacci-Lucas transform (FLT), a novel image compression and encryption scheme is proposed with the advantages of the 1D hybrid chaotic map. The whole encryption procedure includes compression by compressed sensing (CS), scrambling with FLT, and diffusion after linear scaling. The Bernoulli measurement matrix in CS is generated by the proposed 1D hybrid chaotic map due to its excellent uniform distribution. To enhance security and complexity, the transform kernel of FLT varies in each permutation round according to the generated chaotic sequences. Further, the key streams used in the diffusion process depend on the chaotic map as well as the plain image, which allows the scheme to resist chosen-plaintext attacks (CPA). Experimental results and security analyses demonstrate the validity of our scheme in terms of high security and robustness against noise attack and cropping attack.
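As a rough illustration of chaos-driven measurement matrices, the following Python sketch thresholds a plain logistic-map orbit into a +/-1 Bernoulli matrix; the logistic map and its parameters stand in for the paper's hybrid parallel-then-cascade map.

import numpy as np

def logistic_stream(x0=0.4567, r=3.9999, n=1000, burn=200):
    # Iterate x <- r*x*(1-x), discarding a transient of `burn` steps.
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            out.append(x)
    return np.array(out)

m, n = 32, 64
stream = logistic_stream(n=m * n)
Phi = np.where(stream > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)
print(Phi[:2, :8])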
Jridi, Maher; Alfalou, Ayman
2018-03-01
In this paper, an enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We have used an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and favorably compared to those of the existing crypto-compression system. The proposed method has been found to be digital/optical implementation-friendly, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
International Nuclear Information System (INIS)
Zhen, Xudong; Wang, Yang
2015-01-01
Highlights: • Knock during HCCI in a high compression ratio methanol engine was modeled. • A detailed methanol mechanism was used to simulate the knocking combustion. • Compared with the SI engines, the HCCI knocking combustion burnt faster. • The reaction rate of HCO had two obvious peaks, one positive and one negative. • Compared with the SI engines, the values of the reaction rates of CH2O, H2O2, and HO2 were higher, and they had negative peaks. - Abstract: In this study, knock during HCCI (homogeneous charge compression ignition) was studied based on LES (large eddy simulation) with methanol chemical kinetics (84-reaction, 21-species) in a high compression ratio methanol engine. The non-knocking and knocking combustion of SI (spark ignition) and HCCI engines were compared. The results showed that the auto-ignition spots initially occurred near the combustion chamber wall. The knocking combustion burnt faster in the HCCI than in the SI methanol engine. The HCO reaction rate was different from that of the SI engine: it had two obvious peaks, one positive and one negative. Compared with the SI methanol engine, in addition to the concentration of HCO, the concentrations of the other intermediate products and species such as CO, OH, CH2O, H2O2, and HO2 were increased significantly; the reaction rates of CH2O, H2O2, and HO2 had negative peaks, whose values were several times higher than in the SI methanol engine
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor keff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
Tree compression with top trees
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.
2013-01-01
We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
Tree compression with top trees
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.
2015-01-01
We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
Peller, Joseph; Thompson, Kyle J.; Siddiqui, Imran; Martinie, John; Iannitti, David A.; Trammell, Susan R.
2017-02-01
Pancreatic cancer is the fourth leading cause of cancer death in the US. Currently, surgery is the only treatment that offers a chance of cure; however, accurately identifying tumor margins in real time is difficult. Research has demonstrated that optical spectroscopy can be used to distinguish between healthy and diseased tissue. The design of a single-pixel imaging system for cancer detection is discussed. The system differentiates between healthy and diseased tissue based on differences in the optical reflectance spectra of these regions. In this study, pancreatic tissue samples from 6 patients undergoing Whipple procedures were imaged with the system (the total number of tissue samples imaged was N = 11). Regions of healthy and unhealthy tissue are determined based on spectral angle mapper (SAM) analysis of these spectral images. Hyperspectral imaging results are then compared to white light imaging and histological analysis. Cancerous regions were clearly visible in the hyperspectral images. Margins determined via spectral imaging were in good agreement with margins identified by histology, indicating that the hyperspectral imaging system can differentiate between healthy and diseased tissue. After imaging, the system was able to detect cancerous regions with a sensitivity of 74.50±5.89% and a specificity of 75.53±10.81%. Possible applications of this imaging system include determination of tumor margins during surgery/biopsy and assistance with cancer diagnosis and staging.
[Compression treatment for burned skin].
Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched
2012-02-01
The regularity of a compressive knit is defined as its ability to perform its function on burnt skin. This property is essential to avoid rejection of the material or toxicity problems. Objective: to make knits biocompatible with severely burnt human skin. We fabricate knits from elastic material. To ensure good adhesion to the skin, the elastic material is typically knitted with tight loops. The length of yarn absorbed per stitch and the raw material are changed with each sample. The physical properties of each sample are measured and compared. Surface modifications are made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons; moreover, they recover their compressive properties after washing. The jojoba oil microcapsules hydrate burnt human skin. This moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are biocompatible with burnt skin. The blend of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.
Description of hot compressed hadronic matter based on an effective chiral Lagrangian
Energy Technology Data Exchange (ETDEWEB)
Florkowski, W. [Institute of Nuclear Physics, Cracow (Poland)
1996-11-01
In this report we give a review of recent results obtained in the Nambu-Jona-Lasinio (NJL) model, describing the properties of hot compressed matter. The first large class of problems concerns the behaviour of static meson correlation functions. In particular, this includes the investigation of the screening of meson fields at finite temperature or density. Another wide range of problems presented in our report concerns the formulation of the transport theory for the NJL model and its applications to the description of high-energy nuclear collisions. 86 refs, 35 figs.
Bidirectional Texture Function Compression Based on Multi-Level Vector Quantization
Czech Academy of Sciences Publication Activity Database
Havran, V.; Filip, Jiří; Myszkowski, K.
2010-01-01
Roč. 29, č. 1 (2010), s. 175-190 ISSN 0167-7055 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:EC Marie Curie ERG(CZ) 239294 Institutional research plan: CEZ:AV0Z10750506 Keywords : bidirectional texture function * BRDF * compression * SSIM Subject RIV: BD - Theory of Information Impact factor: 1.455, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/filip-0338804.pdf
Pattern-based compression of multi-band image data for landscape analysis
Myers, Wayne L; Patil, Ganapati P
2006-01-01
This book describes an integrated approach to using remotely sensed data in conjunction with geographic information systems for landscape analysis. Remotely sensed data are compressed into an analytical image-map that is compatible with the most popular geographic information systems as well as freeware viewers. The approach is most effective for landscapes that exhibit a pronounced mosaic pattern of land cover. The image-maps are much more compact than the original remotely sensed data, which enhances their utility on the internet. As value-added products, distribution of image-maps is not affected by copyrights on the original multi-band image data.
Soft magnetic properties of bulk amorphous Co-based samples
International Nuclear Information System (INIS)
Fuezer, J.; Bednarcik, J.; Kollar, P.
2006-01-01
Ball milling of melt-spun ribbons and subsequent compaction of the resulting powders in the supercooled liquid region were used to prepare disc-shaped bulk amorphous Co-based samples. Several bulk samples were prepared by hot compaction with subsequent heat treatment (500-575 °C). The influence of the consolidation temperature and follow-up heat treatment on the magnetic properties of the bulk samples was investigated. The final heat treatment leads to a decrease of the coercivity to values between 7.5 and 9 A/m. (Authors)
A novel PMT test system based on waveform sampling
Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.
2018-01-01
Compared with a traditional test system based on a QDC, TDC, and scaler, a test system based on waveform sampling is constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states, from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW is developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are compared and discussed in detail.
Compressive Strength of EN AC-44200 Based Composite Materials Strengthened with α-Al2O3 Particles
Kurzawa A.; Kaczmar J. W.
2017-01-01
The paper presents results of compressive strength investigations of EN AC-44200 based aluminum alloy composite materials reinforced with aluminum oxide particles, at ambient temperature and at temperatures of 100, 200 and 250°C. They were manufactured by squeeze casting of porous preforms made of α-Al2O3 particles with the liquid aluminum alloy EN AC-44200. The composite materials were reinforced with preforms characterized by porosities of 90, 80, 70 and 60 vol. %, thus the alumina content in the co...
Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil
2018-01-01
With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has a direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy in the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
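A tiny Python sketch of the pre-processing step described above, assuming Pillow is installed; the synthetic frame, target size, and 32-color palette are illustrative choices.

from PIL import Image

img = Image.new("RGB", (640, 480), (30, 120, 200))   # stand-in for a camera frame
small = img.resize((160, 120))                       # ~16x fewer pixels
quant = small.quantize(colors=32)                    # 32-color palette image
quant.save("frame_compressed.png", optimize=True)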
A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking
Directory of Open Access Journals (Sweden)
Jiuqi Han
2018-04-01
Full Text Available Superior feature extraction, channel selection and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by their improper channel selection methods and overly specialized designs, leading to high computational complexity, non-convergent procedures and narrow expansibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centered on EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce the complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signal and endowing it with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial in a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial with the selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework over state-of-the-art methods.
Compressive Sensing in Communication Systems
DEFF Research Database (Denmark)
Fyhn, Karsten
2013-01-01
The need for cheaper, smarter and more energy-efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what ... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class ...
Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun
2006-10-01
With the development of informatization and the separation between data management departments and application departments, spatial data sharing becomes one of the most important objectives for spatial information infrastructure construction, and spatial metadata management systems, data transmission security and data compression are the key technologies to realize spatial data sharing. This paper discusses the key technologies for metadata based on data interoperability, researches data compression algorithms such as the adaptive Huffman algorithm and the LZ77 and LZ78 algorithms, and studies the application of digital signature techniques to secure spatial data, which can not only identify the transmitter of spatial data, but also detect in a timely manner whether the spatial data have been tampered with during network transmission. Based on an analysis of symmetric encryption algorithms, including 3DES and AES, and the asymmetric encryption algorithm RSA, combined with a HASH algorithm, an improved hybrid encryption method for spatial data is presented. Digital signature technology and digital watermarking technology are also discussed. Then, a new solution for spatial data network distribution is put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system, which is efficient and safe, proving the feasibility and validity of the proposed solution.
Directory of Open Access Journals (Sweden)
Laisen Nie
2018-01-01
Full Text Available Wireless mesh networks are prevalent for providing decentralized access for users and other intelligent devices. Meanwhile, they can be employed as the infrastructure for the last few miles of connectivity in various network applications, for example, the Internet of Things (IoT) and mobile networks. The wireless mesh backbone network has obtained extensive attention because of its large capacity and low cost. Network traffic prediction is important for network planning and routing configurations that are implemented to improve the quality of service for users. This paper proposes a network traffic prediction method based on a deep learning architecture and the Spatiotemporal Compressive Sensing method. The proposed method first adopts the discrete wavelet transform to extract the low-pass component of network traffic that describes its long-range dependence. Then, a prediction model is built by learning a deep architecture based on the deep belief network from the extracted low-pass component. For the remaining high-pass component, which expresses the gusty and irregular fluctuations of network traffic, the Spatiotemporal Compressive Sensing method is adopted to predict it. Based on the predictors of the two components, we obtain a predictor of network traffic. In simulations, the proposed prediction method outperforms three existing methods.
Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama
2018-04-19
Bradycardia can be modulated using a cardiac pacemaker, an implantable medical device which sets and balances the patient's cardiac health. The device has been widely used to detect and monitor the patient's heart rate. The data collected hence have the highest authenticity assurance and are convenient for further electric stimulation. In the pacemaker, the ECG detector is one of the most important elements. The device is available in its new digital form, which is more efficient and accurate in performance, with the added advantage of an economical power consumption platform. In this work, a joint algorithm based on the biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and for compressing the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture which consists of a series combination of three lowpass filters with a highpass filter. The lowpass and highpass filters are realized using a linear phase structure, which reduces the hardware cost of the proposed design by approximately 50%. Then, the location of the R-peak is found by comparing the denoised ECG signal with the threshold value. The proposed R-peak detector achieves the highest sensitivity and positive predictivity of 99.75% and 99.98%, respectively, on the MIT-BIH arrhythmia database. Also, the proposed R-peak detector achieves a comparatively low data error rate (DER) of 0.002. The use of RLE for the compression of the detected ECG data achieves a higher compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results have been compared with existing methods, like Huffman coding/simple predictor, Huffman coding/adaptive, and slope predictor/fixed-length packaging.
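A bare-bones Python run-length encoder/decoder of the kind used above for the detected ECG data; real implementations pack the (value, run) pairs into a fixed bit budget, which is omitted here.

def rle_encode(seq):
    out, run = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    if seq:
        out.append((seq[-1], run))
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

data = [0, 0, 0, 1, 1, 0, 0, 0, 0, 2]
enc = rle_encode(data)
assert rle_decode(enc) == data
print(enc)   # [(0, 3), (1, 2), (0, 4), (2, 1)]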
Improved mesh based photon sampling techniques for neutron activation analysis
International Nuclear Information System (INIS)
Relson, E.; Wilson, P. P. H.; Biondo, E. D.
2013-01-01
The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
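A compact Python sketch of the alias method itself (Vose's O(n) table construction, O(1) draws); the four-entry distribution stands in for per-voxel source strengths.

import numpy as np

def build_alias(p):
    p = np.asarray(p, dtype=float)
    n = p.size
    scaled = p / p.sum() * n
    prob, alias = np.zeros(n), np.zeros(n, dtype=int)
    small = [i for i in range(n) if scaled[i] < 1.0]
    large = [i for i in range(n) if scaled[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:       # leftovers have weight (numerically) one
        prob[i] = 1.0
    return prob, alias

def draw(prob, alias, rng):
    i = int(rng.integers(prob.size))         # O(1): pick a column uniformly
    return i if rng.random() < prob[i] else alias[i]

rng = np.random.default_rng(9)
prob, alias = build_alias([0.5, 0.3, 0.15, 0.05])
counts = np.bincount([draw(prob, alias, rng) for _ in range(100_000)], minlength=4)
print(counts / counts.sum())     # ~ [0.5, 0.3, 0.15, 0.05]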
International Nuclear Information System (INIS)
Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo
2004-01-01
This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful even on a wide range of other data reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain
Theory of sampling and its application in tissue based diagnosis
Directory of Open Access Journals (Sweden)
Kayser Gian
2009-02-01
Full Text Available Abstract Background A general theory of sampling and its application in tissue-based diagnosis is presented. Sampling is defined as the extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space, and the search for the presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation-invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationships. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, and numerical, boundary and surface densities. Stratified sampling requires knowledge of the objects (and their features) and evaluates spatial features in relation to
Contingency inferences driven by base rates: Valid by sampling
Directory of Open Access Journals (Sweden)
Florian Kutzner
2011-04-01
Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
Community-based survey versus sentinel site sampling in ...
African Journals Online (AJOL)
rural children. Implications for nutritional surveillance and the development of nutritional programmes. G. C. Solarsh, D. M. Sanders, C. A. Gibson, E. Gouws. A study of the anthropometric status of under-5-year-olds was conducted in the Nqutu district of KwaZulu by means of a representative community-based sample and.
A sampling-based approach to probabilistic pursuit evasion
Mahadevan, Aditya
2012-05-01
Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
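As a concrete illustration of the PRM idea, a minimal sketch (workspace, obstacle, and parameters are invented for the example) samples free configurations and links each node to its k nearest neighbours through collision-checked edges:

```python
import random, math

def prm(n_samples, k, is_free, seed=0):
    """Build a probabilistic roadmap over the unit square.

    is_free(p) -> True if configuration p is collision-free; edge validity
    is checked by sampling intermediate points along the segment."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:              # rejection-sample free space
        p = (rng.random(), rng.random())
        if is_free(p):
            nodes.append(p)

    def edge_free(a, b, steps=10):
        return all(is_free((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
                   for t in (i / steps for i in range(steps + 1)))

    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):              # connect k nearest neighbours
        order = sorted(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))
        for j in order[1:k + 1]:
            if edge_free(p, nodes[j]):
                edges[i].append(j); edges[j].append(i)
    return nodes, edges

# Toy obstacle: a square block in the middle of the workspace.
free = lambda p: not (0.4 < p[0] < 0.6 and 0.4 < p[1] < 0.6)
nodes, edges = prm(200, 5, free)
```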
Quantitative Inspection of Remanence of Broken Wire Rope Based on Compressed Sensing.
Zhang, Juwei; Tan, Xiaojiang
2016-08-25
Most traditional strong magnetic inspection equipment has disadvantages such as bulky excitation devices, high weight, low detection precision, and inconvenient operation. This paper presents the design of a giant magneto-resistance (GMR) sensor array collection system. The remanence signal is collected to acquire two-dimensional magnetic flux leakage (MFL) data on the surface of wire ropes. Through the use of compressed sensing wavelet filtering (CSWF), an image expression of the MFL on the wire rope surface is obtained. This is then taken as the input of a back-propagation (BP) neural network to extract three kinds of MFL image geometry features and seven invariant moments of the defect images. Good results were obtained: the experimental results show that nondestructive inspection using remanence has higher accuracy and reliability compared with traditional inspection devices, along with smaller volume, lighter weight and higher precision.
Toward Wireless Health Monitoring via an Analog Signal Compression-Based Biosensing Platform.
Zhao, Xueyuan; Sadhu, Vidyasagar; Le, Tuan; Pompili, Dario; Javanmard, Mehdi
2018-06-01
Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this paper. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by analog joint source-channel coding (AJSCC). Two circuit designs are discussed, including a stacked voltage-controlled voltage source (VCVS) design with a fixed number of levels, and an improved design that supports a flexible number of AJSCC levels. Experimental results are presented on the wireless biosensor prototype, composed of printed-circuit-board realizations of the stacked-VCVS design. Furthermore, circuit simulation and wireless link simulation results are presented for the improved design. Results indicate that the proposed wireless biosensor is well suited to sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power, low-cost wireless continuous health monitoring applications.
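A minimal numeric sketch of the AJSCC mapping may help: two signals in [0, 1] are compressed into one by quantizing one of them onto discrete levels while keeping the other analog within a level, alternating direction per level for continuity. The level count and which signal is quantized are assumptions here; the paper realizes such a staircase in analog hardware with stacked VCVS circuits.

```python
def ajscc_encode(x, y, levels=8):
    """Map two signals in [0, 1] onto one composite value: x is quantized
    onto `levels` stages, y stays analog within a stage, and the direction
    alternates per stage so the mapping stays continuous."""
    stage = min(int(x * levels), levels - 1)
    frac = y if stage % 2 == 0 else 1.0 - y
    return (stage + frac) / levels

def ajscc_decode(s, levels=8):
    stage = min(int(s * levels), levels - 1)
    frac = s * levels - stage
    y = frac if stage % 2 == 0 else 1.0 - frac
    x = (stage + 0.5) / levels        # x is only recovered to stage precision
    return x, y

print(ajscc_decode(ajscc_encode(0.63, 0.27)))   # x coarse, y near-exact
```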
Ouyang, Bing; Hou, Weilin; Gong, Cuiling; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.
2016-05-01
The Compressive Line Sensing (CLS) active imaging system has been demonstrated to be effective in scattering mediums, such as turbid coastal water, through simulations and test-tank experiments. Since turbulence is encountered in many atmospheric and underwater surveillance applications, a new CLS imaging prototype was developed to investigate the effectiveness of the CLS concept in a turbulence environment. Compared with the earlier optical bench-top prototype, the new system is significantly more robust and compact. A series of experiments were conducted at the Naval Research Lab's optical turbulence test facility with the imaging path subjected to various turbulence intensities. In addition to validating the system design, we obtained some unexpected and exciting results: in the strong turbulence environment, time-averaged measurements using the new CLS imaging prototype improved both the SNR and the resolution of the reconstructed images. We discuss the implications of the new findings, the challenges of acquiring data through a strong turbulence environment, and future enhancements.
Directory of Open Access Journals (Sweden)
Abdulsalam Arafa Salaheddin
2017-01-01
Full Text Available The production of ordinary Portland cement (OPC) consumes considerable natural resources and energy, and it is responsible for the emission of a significant quantity of CO2 into the atmosphere. This pervious geopolymer concrete study aims to explore an alternative binder without OPC. Pervious geopolymer concretes were prepared from fly ash (FA), sodium silicate (Na2SiO3), sodium hydroxide (NaOH) solution, and coarse aggregate (CA). The effects of the mix parameters on water permeability and compressive strength are evaluated. The mix proportions covered FA to CA ratios of 1:6, 1:7, 1:8, and 1:9 by weight, CA sizes of 5–10, 10–14, and 14–20 mm, a constant Na2SiO3/NaOH ratio of 2.5, alkaline liquid to fly ash (AL/FA) ratios of 0.4, 0.5, and 0.6, and NaOH concentrations of 8, 10, and 12 M. A curing temperature of 80 °C for 24 h was used. The results showed that a pervious geopolymer concrete with a CA size of 10 mm achieved a water permeability of 2.3 cm/s and a compressive strength of 20 MPa with an AL/FA ratio of 0.5, a NaOH concentration of 10 M, and FA:CA of 1:7. The resulting material (GEOCRETE) is indicated to have better engineering properties than pervious concrete made of ordinary Portland cement.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treat
Belbasis, Aaron; Fuss, Franz Konstantin
2018-01-01
Muscle activity and fatigue performance parameters were obtained and compared between a smart compression garment and the gold standard, a surface electromyography (EMG) system, during high-speed cycling in seven participants. The smart compression garment, based on force myography (FMG), comprised integrated pressure sensors that were sandwiched between skin and garment, located on five thigh muscles. The muscle activity was assessed by means of crank cycle diagrams (polar plots) that displayed the muscle activity relative to the crank cycle. The fatigue was assessed by means of the median frequency of the power spectrum of the EMG signal; the fractal dimension (FD) of the EMG signal; and the FD of the pressure signal. The smart compression garment returned performance parameters (muscle activity and fatigue) comparable to the surface EMG. The major difference was that the EMG measured the electrical activity, whereas the pressure sensor measured the mechanical activity. As such, there was a phase shift between electrical and mechanical signals, with the electrical signals preceding their mechanical counterparts in most cases. This is specifically pronounced in high-speed cycling. The fatigue trend over the duration of the cycling exercise was clearly reflected in the fatigue parameters (FDs and median frequency) obtained from pressure and EMG signals. The fatigue parameter of the pressure signal (FD) showed a higher time dependency (R² = 0.84) compared to the EMG signal. This reflects that the pressure signal puts more emphasis on fatigue as a function of time rather than on the origin of fatigue (e.g., peripheral or central fatigue). In light of the high-speed activity results, caution should be exerted when using data obtained from EMG for biomechanical models. In contrast to EMG data, activity data obtained from FMG are considered more appropriate and accurate as an input for biomechanical modeling as they truly reflect the mechanical muscle
Directory of Open Access Journals (Sweden)
Armando Arce
2012-01-01
Full Text Available This research paper deals with an innovative way to simplify the design of beam-forming networks (BFNs) for multibeam steerable antenna arrays based on coherently radiating periodic structures (CORPS) technology, using the noniterative matrix pencil method (MPM). This design approach is based on the application of the MPM to linear arrays fed by CORPS-BFN configurations to further reduce the complexity of the beam-forming network. Two 2-beam design configurations of a CORPS-BFN for a steerable linear array are analyzed and compared using this compressive method. Simulation results show the effectiveness and advantages of applying the MPM to BFNs based on CORPS, exploiting the nonuniformity of the antenna elements. Furthermore, the final results show that the integration of CORPS-BFN and MPM reduces the entire antenna system, including the antenna array and the beam-forming network subsystem, resulting in a substantial simplification of such systems.
International Nuclear Information System (INIS)
Marcinkowski, Łukasz; Kloskowski, Adam; Czub, Jacek; Namieśnik, Jacek; Warmińska, Dorota
2015-01-01
Highlights: • In DMSO, both volumes and compressibilities of ionic liquids were studied. • Molecular dynamics simulations were performed for all studied ionic liquids. • V_Φ of DMSO solutions of [Mor1,R][TFSI] decreases with increasing IL concentration. • Results indicate that [Mor1,R][TFSI] are structure breakers in dimethylsulfoxide. • The obtained results are a consequence of the cation size of the ionic liquid. - Abstract: The density and sound velocity of solutions of ionic liquids based on N-alkyl-N-methylmorpholinium cations, N-ethyl-N-methylmorpholinium bis(trifluoromethanesulfonyl)imide, N-butyl-N-methylmorpholinium bis(trifluoromethanesulfonyl)imide, N-methyl-N-octylmorpholinium bis(trifluoromethanesulfonyl)imide and N-decyl-N-methylmorpholinium bis(trifluoromethanesulfonyl)imide in dimethylsulfoxide were measured at T = (298.15 to 318.15) K and at atmospheric pressure. The apparent molar volume and apparent molar compressibility values were evaluated from the density and sound velocity values and fitted to the Masson equation, from which the partial molar volume and partial molar isentropic compressibility of the ILs at infinite dilution were also calculated at the working temperatures. Using the density values, the limiting apparent molar expansibilities were estimated. The effects of the alkyl chain length of the ILs and of the experimental temperature on these thermodynamic properties are discussed. In addition, molecular dynamics simulations were used to interpret the measured properties in terms of the interactions of the ILs with solvent molecules. Both the volumetric measurement results and the molecular dynamics simulations for ionic liquids in dimethylsulfoxide were compared and discussed against results obtained for the same ILs in acetonitrile.
International Nuclear Information System (INIS)
Liu, Jin-Long; Wang, Jian-Hua
2015-01-01
Based on CAES (compressed air energy storage) and a PM (pneumatic motor), a novel tri-generation system (heat energy, mechanical energy and cooling power) is proposed in this paper. Both the cheap electricity generated at night and the excess power from undelivered renewable energy due to instability can be stored as compressed air and hot water by the proposed system. When energy is in great demand, the compressed air stored in the system is released to drive the PM to generate mechanical power. The air discharged from the PM can be further utilized as valuable cooling power. Compared to conventional CAES systems, the most distinctive feature of the proposed system is that the discharged air, usually wasted, is used as cooling power. In order to study the performance of this system, a thermodynamic analysis and an experimental investigation are carried out. The thermodynamic model is validated by the experimental data. Using the validated thermodynamic model, the mechanical energy output, cooling capacity and temperature of the discharged air, as well as the efficiency of the system, are analyzed. The theoretical analysis indicates that the additional use of the discharged air can improve total energy efficiency by 20–30%. Therefore, the system merits consideration and wider adoption. - Highlights: • The proposed system can provide mechanical energy, heat energy and cooling power. • The exhaust air of the pneumatic motor is used as cooling power instead of being discarded. • A thermodynamic model of the proposed system is constructed and validated. • The effects of several parameters on system performance are examined. • The proposed system can improve the total energy efficiency of a CAES system by 20–30%.
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented, quadrilateral-mesh based solution-adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to second-order accuracy by using MUSCL extrapolation. The node, edge and cell objects are arranged in such an object-oriented manner that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects, so that inserting new objects and removing existing ones (nodes, edges and cells) is independent of the number of objects, with a complexity of only O(1). In addition, cells with different levels are stored in different lists, which avoids the recursive calculation of solutions on mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, compared to other cell-edge adaptive methods, the separation of nodes reduces the memory requirement of redundant nodes, especially in cases where the level number is large or the space dimension is three. Five two-dimensional examples are used to examine its performance: a vortex evolution problem, an interface-only problem under structured and unstructured meshes, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with a large density ratio (1000) and a strong shock wave (pressure ratio of 10,000) interacting with the interface.
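The O(1) bookkeeping rests on an intrusive doubly linked list: each mesh object carries its own prev/next pointers, so unlinking needs no search. A minimal sketch of the idea (class names invented for illustration):

```python
class Cell:
    """Mesh object kept in an intrusive doubly linked list, so inserting
    and removing cells during adaptation costs O(1), independent of how
    many cells the mesh currently holds."""
    def __init__(self, data):
        self.data = data
        self.prev = self.next = None

class CellList:
    def __init__(self):
        self.head = self.tail = None
    def append(self, cell):                  # O(1) insert at the tail
        if self.tail is None:
            self.head = self.tail = cell
        else:
            cell.prev, self.tail.next = self.tail, cell
            self.tail = cell
    def remove(self, cell):                  # O(1) unlink, no search needed
        if cell.prev: cell.prev.next = cell.next
        else: self.head = cell.next
        if cell.next: cell.next.prev = cell.prev
        else: self.tail = cell.prev
        cell.prev = cell.next = None

cells = CellList()
a, b = Cell("leaf 0"), Cell("leaf 1")
cells.append(a); cells.append(b)
cells.remove(a)                              # O(1), no traversal
```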
Directory of Open Access Journals (Sweden)
Sitarenios Panagiotis
2016-01-01
Full Text Available The Modified Cam Clay model is extended to account for the behaviour of unsaturated soils using Bishop's stress. To describe the Loading-Collapse behaviour, the model incorporates a compressibility framework with suction- and degree-of-saturation-dependent compression lines. For simplicity, the present paper describes the model in the triaxial stress space, with characteristic simulations of constant-suction compression and triaxial tests, as well as wetting tests. The model reproduces an evolving post-yield compressibility under constant-suction compression and can thus adequately describe a maximum of collapse.
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-04-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and known before each data gathering epoch starts; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we develop an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realize a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean temperature datasets and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme.
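A toy sketch of the feedback idea (the measurement model, OMP recovery, and the stopping rule are simplified stand-ins for the paper's scheme): the sink keeps requesting random measurements until the reconstruction stops changing:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1]); x[support] = coef
    return x

rng = np.random.default_rng(0)
n, k = 100, 5                                # nodes, sparsity of sensed field
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

Phi, y, x_prev = np.empty((0, n)), np.empty(0), np.zeros(n)
for m in range(1, n + 1):                    # sink requests one more each round
    row = rng.normal(size=(1, n)) / np.sqrt(n)
    Phi, y = np.vstack([Phi, row]), np.append(y, row @ x_true)
    x_hat = omp(Phi, y, k)
    if m > k and np.linalg.norm(x_hat - x_prev) < 1e-6:
        break                                # reconstruction stabilised: stop
    x_prev = x_hat
print(m, "measurements,", np.linalg.norm(x_hat - x_true), "error")
```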
International Nuclear Information System (INIS)
Abdukaev, I.Kh.; Kuchinskij, V.G.; Titov, V.I.
1984-01-01
Principles of construction of control and stabilization systems for a compression generator (CG), a source of pulsed power, are considered. The CG is an electromechanical energy converter whose operating principle is based on magnetic flux compression through the periodic change of the mutual inductance of two rotating windings. In each period, as the intrinsic inductance decreases the generator forms the leading edge of the pulse in the load, and as the inductance rises again, the pulse decay. To obtain the same pulse in the following period, the initial value of the magnetic flux must be restored in the generator winding. Problems of attaining pulse shaper amplitude stability are considered. A method of controlling the pulse amplitude in the load by changing the moment at which the capacitive storage is switched to the CG windings is suggested. The block diagram of the stabilization system is presented and its operating principle is described. The control system is assembled using K 155 and K 511 microcircuits and was tested with a CG at pulse energies up to 10 kJ. The tests have shown that already by the third pulse the system provided a well-shaped series of pulses
Research on test of product based on spatial sampling criteria and variable step sampling mechanism
Li, Ruihong; Han, Yueping
2014-09-01
This paper presents an effective approach for online testing of the assembly structures inside products using a multiple-views technique and an X-ray digital radiography system, based on spatial sampling criteria and a variable step sampling mechanism. For each object inside a product there is a maximal rotary step within which the least structural size to be tested remains resolvable. In the offline learning process, the object is rotated by this step and imaged repeatedly until a complete cycle is finished, yielding an image sequence that includes the full structural information for recognition. The maximal rotary step is restricted by the least structural size and the inherent resolution of the imaging system. During the online inspection process, the program first finds the optimum solutions for all the different target parts in the standard sequence, i.e., finds their exact angles in one cycle. Since most of the other targets in the product are larger than the least structure, the paper adopts a variable step-size sampling mechanism that rotates the product through specific angles, with different steps for different objects inside the product, and performs matching. Experimental results show that the variable step-size method can greatly save time compared with the traditional fixed-step inspection method while the recognition accuracy is guaranteed.
Overlapped block-based compressive sensing imaging on mobile handset devices
Directory of Open Access Journals (Sweden)
Irene Manotas Gutiérrez
2014-01-01
Full Text Available Compressive Sensing (CS es una nueva técnica que simultáneamente comprime y muestrea una imagen tomando un conjunto de proyecciones aleatorias de una escena. Un algoritmo de optimización es empleado para reconstruir la imagen utilizando las proyecciones aleatorias. Diferentes algoritmos de optimización se han diseñado para obtener de manera eficiente una correcta reconstrucción de la señal original. En la práctica estos algoritmos se han restringido a implementaciones de CS en arquitecturas de alto rendimiento computacional, como computadores de escritorio o unidades de procesamiento gráfico, debido a el gran número de operaciones requeridas por el proceso de reconstrucción. Este trabajo extiende la aplicación de CS para ser implementado en una arquitectura con memoria y capacidad de procesamiento limitados como un dispositivo móvil. Específicamente, se describe un algoritmo basado en bloques sobrepuestos que permite reconstruir la imagen en un dispositivo móvil y se presenta un análisis del consumo de energía de los algoritmos utilizados. Los resultados muestran el tiempo computacional y la calidad de reconstrucción para imágenes de 128x128 y 256x256 píxeles.
Prediction of compressibility parameters of the soils using artificial neural network.
Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan
2016-01-01
The compression index and recompression index are important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is proposed for predicting compressibility parameters from basic soil properties. For this purpose, the input parameters are the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. The results show that the proposed ANN model predicts the compression index successfully; however, the predicted recompression index values are less satisfactory.
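A minimal sketch of such a network (synthetic training data generated from the classical Cc ≈ 0.009(LL − 10) correlation plus noise, not the study's dataset; network size assumed):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Inputs mirror the paper: natural water content, initial void ratio,
# liquid limit, plasticity index. Outputs: compression index Cc and
# recompression index Cr. Synthetic stand-in data, not the study's.
rng = np.random.default_rng(0)
X = rng.uniform([10, 0.5, 20, 5], [60, 1.5, 80, 40], size=(300, 4))
Cc = 0.009 * (X[:, 2] - 10) + 0.1 * X[:, 1] + rng.normal(0, 0.01, 300)
Cr = 0.15 * Cc + rng.normal(0, 0.005, 300)   # Cr roughly a fraction of Cc
y = np.column_stack([Cc, Cr])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)
print(model.predict(scaler.transform(X[:3])))     # [Cc, Cr] per sample
```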
Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems
Directory of Open Access Journals (Sweden)
Roman Slaby
2013-01-01
Full Text Available The paper focuses on a statistical comparison of selected compression methods used for compressing binary images. The aim is to assess which of the presented compression methods requires the fewest bytes of memory on a low-memory system. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the OCR methods used for digitizing printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers. Saving the data stream is very important for such memory-limited systems, as is the time required to decode the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which depict alphanumeric characters.
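For intuition on why byte counts differ between methods, a run-length encoding sketch (one of the simplest candidates for a low-memory system; the byte format is invented for the example) shows the gain on a sparse binary image:

```python
def rle_encode(bits):
    """Run-length encode a binary image (first byte stores the starting
    bit value; each following byte is one run length, capped at 255)."""
    runs, current, length = [], bits[0], 0
    for b in bits:
        if b == current and length < 255:
            length += 1
        else:
            runs.append(length)
            if b == current:      # run over 255: zero-length run continues it
                runs.append(0)
            current, length = b, 1
    runs.append(length)
    return bytes([bits[0]] + runs)

image = [0] * 200 + [1] * 20 + [0] * 36      # sparse 16x16 glyph, flattened
print(len(image) // 8, "bytes bit-packed vs", len(rle_encode(image)), "bytes RLE")
```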
Al-khattawi, Ali; Alyami, Hamad; Townsend, Bill; Ma, Xianghong; Mohammed, Afzal R.
2014-01-01
The work investigates the adhesive/cohesive molecular and physical interactions together with nanoscopic features of commonly used orally disintegrating tablet (ODT) excipients microcrystalline cellulose (MCC) and D-mannitol. This helps to elucidate the underlying physico-chemical and mechanical mechanisms responsible for powder densification and optimum product functionality. Atomic force microscopy (AFM) contact mode analysis was performed to measure nano-adhesion forces and surface energies between excipient-drug particles (6-10 different particles per each pair). Moreover, surface topography images (100 nm2–10 µm2) and roughness data were acquired from AFM tapping mode. AFM data were related to ODT macro/microscopic properties obtained from SEM, FTIR, XRD, thermal analysis using DSC and TGA, disintegration testing, Heckel and tabletability profiles. The study results showed a good association between the adhesive molecular and physical forces of paired particles and the resultant densification mechanisms responsible for mechanical strength of tablets. MCC micro roughness was 3 times that of D-mannitol which explains the high hardness of MCC ODTs due to mechanical interlocking. Hydrogen bonding between MCC particles could not be established from both AFM and FTIR solid state investigation. On the contrary, D-mannitol produced fragile ODTs due to fragmentation of surface crystallites during compression attained from its weak crystal structure. Furthermore, AFM analysis has shown the presence of extensive micro fibril structures inhabiting nano pores which further supports the use of MCC as a disintegrant. Overall, excipients (and model drugs) showed mechanistic behaviour on the nano/micro scale that could be related to the functionality of materials on the macro scale. PMID:25025427
Energy Technology Data Exchange (ETDEWEB)
Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)
2012-07-01
Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)
Using machine learning to accelerate sampling-based inversion
Valentine, A. P.; Sambridge, M.
2017-12-01
In most cases, a complete solution to a geophysical inverse problem (including a robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
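A toy sketch of the surrogate-accelerated sampler (1-D misfit, thresholds, and kernel defaults are all assumptions): a Gaussian Process replaces the expensive forward evaluation inside a Metropolis loop and is refined whenever its predictive uncertainty is too large:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def misfit(m):
    """Stand-in for the expensive forward solve plus data misfit."""
    return np.sin(3 * m) + 0.5 * m ** 2

rng = np.random.default_rng(0)
M = rng.uniform(-2, 2, size=(8, 1))          # a few exact evaluations to start
F = misfit(M).ravel()
gp = GaussianProcessRegressor().fit(M, F)

m, f_m, chain = 0.0, misfit(0.0), []
for _ in range(2000):
    prop = m + rng.normal(0, 0.5)
    mu, sd = gp.predict(np.array([[prop]]), return_std=True)
    f_prop = mu[0]
    if sd[0] > 0.05:                         # surrogate too uncertain here:
        f_prop = misfit(prop)                # pay for one exact evaluation
        M = np.vstack([M, [[prop]]])
        F = np.append(F, f_prop)
        gp = GaussianProcessRegressor().fit(M, F)   # refine the surrogate
    if np.log(rng.uniform()) < f_m - f_prop:        # Metropolis on exp(-misfit)
        m, f_m = prop, f_prop
    chain.append(m)
print(len(M), "exact solves for", len(chain), "posterior samples")
```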
Patch-based visual tracking with online representative sample selection
Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu
2017-05-01
Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for target template updating. In general, the holistic bounding boxes that contain the tracked results are selected as positive samples. However, when the object is occluded, this simple strategy easily introduces noise into the training data set and the target template, and then leads the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the Object Tracking Benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
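The representative-sample step can be sketched with an off-the-shelf non-negative least squares solver (the dictionary and target here are synthetic; the paper's feature extraction and patch scoring are omitted):

```python
import numpy as np
from scipy.optimize import nnls

# Columns of D are candidate samples (vectorised patches gathered while
# tracking); t is the current tracked target. NNLS picks a non-negative
# combination, and samples with non-zero weight act as the representative
# set used to update the template.
rng = np.random.default_rng(0)
D = rng.random((64, 20))                    # 20 candidates, 64-dim features
t = D[:, [3, 7]] @ np.array([0.6, 0.4])     # target explained by two of them

weights, residual = nnls(D, t)
representatives = np.flatnonzero(weights > 1e-6)
print(representatives, residual)            # ideally picks columns 3 and 7
```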
The RBANS Effort Index: base rates in geriatric samples.
Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G
2011-01-01
The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.
Ultrasonic-based membrane aided sample preparation of urine proteomes.
Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L
2018-02-01
A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, and the proteins are then digested in-membrane using trypsin. The enzymatic digestion is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine from healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins are identified in the urine of males and females, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. Of the 162 common proteins, 13 are present at statistically different levels. The method matches the minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Rachmad Vidya Wicaksana Putra
2013-09-01
Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the number of shifter-adders needed to substitute multipliers. We call this new algorithm the multiplication from Common Binary Expression (mCBE) Algorithm. Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers can retain a good compressed-image quality compared to JPEG recommendations. These ideas lead to a design that is small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or 2-input adder). By using pipelining, we can achieve a high-speed architecture, with latency as a trade-off consideration. The design has been synthesized and can reach a critical path delay as low as 1.41 ns (709.22 MHz).
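The core trick of multiplierless DCT datapaths is expressing each constant as a short sum of powers of two, so multiplications become shifts and adds; mCBE additionally shares common binary subexpressions across coefficients, which this simplified sketch does not attempt:

```python
def shift_add_mul(x, shifts):
    """Multiply an integer sample by a constant written as a sum of powers
    of two: only shifts and adds are used, no hardware multiplier."""
    return sum(x >> s for s in shifts)

# Approximate x * cos(pi/4) = x * 0.7071...; 1/2 + 1/8 + 1/16 + 1/64 = 0.703125
x = 12345
print(shift_add_mul(x, [1, 3, 4, 6]), round(x * 0.70710678))
```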
Directory of Open Access Journals (Sweden)
Li Liechen
2016-02-01
Full Text Available A conformal sparse array based on a combined Barker code is designed for an airship platform. The performance of the designed array, such as its signal-to-noise ratio, is analyzed. Using the hovering characteristics of the airship, an interferometric operation can be applied to the real-aperture imaging results of two pulses, which eliminates the random backscatter phase and makes the image sparse in the transform domain. By building the relationship between the echo and the transform coefficients, Compressed Sensing (CS) theory can be introduced to solve the formulation and achieve imaging. The image quality of the proposed method can reach that of full-array imaging. The simulation results show the effectiveness of the proposed method.
GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING
Directory of Open Access Journals (Sweden)
Christopher Ouma Onyango
2010-09-01
Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.
Discrete Wigner Function Reconstruction and Compressed Sensing
Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin
2011-01-01
A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.
Directory of Open Access Journals (Sweden)
Jin-Yu Zhang
2014-01-01
Full Text Available This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with a differential evolution algorithm. The study benchmarked the fitting compression results and precision of the proposed method against those of traditional methods via experiment, investigated the fitting compression performance under long time series and the improved model, and validated the algorithm by practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method.
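A minimal sketch of the fitting step (scipy's differential_evolution on a synthetic single-pixel cooling curve; bounds and constants are assumptions): each pixel's time series is replaced by four fitted parameters, which is where the compression comes from:

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 2, 200)
pixel = 3.0 * np.exp(-1.5 * t) + 1.2 * np.exp(-6.0 * t)  # one pixel's curve

def loss(p):
    a1, k1, a2, k2 = p
    model = a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
    return np.sum((model - pixel) ** 2)

bounds = [(0, 10), (0, 20), (0, 10), (0, 20)]
result = differential_evolution(loss, bounds, seed=0)
print(result.x)   # 4 parameters replace 200 stored frames for this pixel
```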
Sample-Based Extreme Learning Machine with Missing Data
Directory of Open Access Journals (Sweden)
Hang Gao
2015-01-01
Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Despite these merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms, for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially in the case of higher missing ratios.
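For reference, a plain ELM fits in a few lines: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights. The sample-based, missing-data variants of the paper build on this base (the sketch below assumes complete data):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, output weights
    solved in closed form by least squares (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # training MSE
```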
International Nuclear Information System (INIS)
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-01-01
As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques. (paper)
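The accelerated scheme itself is compact; the sketch below shows FISTA for an l1-regularized least-squares toy problem with a fixed step size (the paper instead uses Fourier-weighted least squares with TV regularization and backtracking line search):

```python
import numpy as np

def fista_lasso(A, y, lam, n_iter=200):
    """FISTA for min 0.5*||Ax - y||^2 + lam*||x||_1, with O(1/k^2) rate.
    The soft-threshold is the proximal step for the l1 penalty."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = z = np.zeros(A.shape[1]); t = 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - y) / L        # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 128)) / np.sqrt(60)
x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
x_hat = fista_lasso(A, A @ x_true, lam=0.02)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # support of the recovery
```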
CEPRAM: Compression for Endurance in PCM RAM
González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco
2017-01-01
We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on existing schemes, but targeting compression for PCM-based systems. We perform a two-level evaluation. First, we quantify the performance of the compression in terms of compressed size, bit-flips and how they are affected by e...
CdTe detector based PIXE mapping of geological samples
Energy Technology Data Exchange (ETDEWEB)
Chaves, P.C., E-mail: cchaves@ctn.ist.utl.pt [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Taborda, A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Oliveira, D.P.S. de [Laboratório Nacional de Energia e Geologia (LNEG), Apartado 7586, 2611-901 Alfragide (Portugal); Reis, M.A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal)
2014-01-01
A sample collected from a borehole drilled approximately 10 km ESE of Bragança, Trás-os-Montes, was analysed by standard and high-energy PIXE at both CTN (previously ITN) PIXE setups. The sample is a fine-grained metapyroxenite, grading to coarse-grained at the base, with disseminated sulphides and fine veinlets of pyrrhotite and pyrite. The matrix composition was obtained at the standard PIXE setup using a 1.25 MeV H+ beam at three different spots. Medium- and high-Z elemental concentrations were then determined using the DT2fit and DT2simul codes (Reis et al., 2008, 2013 [1,2]), on the spectra obtained in the High Resolution and High Energy (HRHE)-PIXE setup (Chaves et al., 2013 [3]) by irradiation of the sample with a 3.8 MeV proton beam provided by the CTN 3 MV Tandetron accelerator. In this paper we present the results, discuss the detection limits of the method, and discuss the added value of the CdTe detector in this context.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and data volumes may exceed the available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Murphy, Fred; Nightingale, Julie; Hogg, Peter; Robinson, Leslie; Seddon, Doreen; Mackay, Stuart
2015-01-01
This research project investigated the compression behaviours of practitioners during screening mammography. The study sought to provide a qualitative understanding of ‘how’ and ‘why’ practitioners apply compression force. With a clear conflict in the existing literature and little scientific evidence base to support the reasoning behind the application of compression force, this research project investigated the application of compression using a phenomenological approach. Following ethical approval, six focus group interviews were conducted at six different breast screening centres in England. A sample of 41 practitioners were interviewed within the focus groups together with six one-to-one interviews of mammography educators or clinical placement co-ordinators. The findings revealed two broad humanistic and technological categories consisting of 10 themes. The themes included client empowerment, white-lies, time for interactions, uncertainty of own practice, culture, power, compression controls, digital technology, dose audit-safety nets, numerical scales. All of these themes were derived from 28 units of significant meaning (USM). The results demonstrate a wide variation in the application of compression force, thus offering a possible explanation for the difference between practitioner compression forces found in quantitative studies. Compression force was applied in many different ways due to individual practitioner experiences and behaviour. Furthermore, the culture and the practice of the units themselves influenced beliefs and attitudes of practitioners in compression force application. The strongest recommendation to emerge from this study was the need for peer observation to enable practitioners to observe and compare their own compression force practice to that of their colleagues. The findings are significant for clinical practice in order to understand how and why compression force is applied
Chemometric classification of casework arson samples based on gasoline content.
Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J
2014-02-01
Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which the IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization were automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, provided classification of 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
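A schematic PLS-DA pipeline on synthetic stand-in chromatograms (sklearn's PLSRegression on a 0/1 class code; the alignment and BE/FS variable selection steps are omitted) might look like:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: aligned chromatographic profiles, one row per debris extract;
# y encodes the class (1 = gasoline present, 0 = absent). Synthetic
# stand-in data; casework profiles would go here after alignment.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 300))
y = np.repeat([0, 1], 20)
X[y == 1, 50:60] += 2.0                  # class-specific peak region

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.predict(X).ravel()
pred = (scores > 0.5).astype(int)        # threshold the latent projection
print((pred == y).mean())                # training accuracy of the toy model
```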
Mammographic compression in Asian women.
Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong
2017-01-01
To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.
Compressive sensing for urban radar
Amin, Moeness
2014-01-01
With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and toward effectively addressing logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking…
Compressed Sensing for Space-Based High-Definition Video Technologies, Phase I
National Aeronautics and Space Administration — Space-based imaging sensors are important for NASA's mission in both performing scientific measurements and producing literature and documentary cinema. The recent...
International Nuclear Information System (INIS)
Liao, Pingping; Cai, Maolin; Shi, Yan; Fan, Zichuan
2013-01-01
The conventional ultrasonic method for compressed air leak detection utilizes a directivity-based ultrasonic leak detector (DULD) to locate the leak. The location accuracy of this method is low due to the limit of the nominal frequency and the size of the ultrasonic sensor. In order to overcome this deficiency, a method based on time delay estimation (TDE) is proposed. The method utilizes three ultrasonic sensors arranged in an equilateral triangle to simultaneously receive the ultrasound generated by the leak. The leak can be located according to time delays between every two sensor signals. The theoretical accuracy of the method is analyzed, and it is found that the location error increases linearly with delay estimation error and the distance from the leak to the sensor plane, and the location error decreases with the distance between sensors. The average square difference function delay estimator with parabolic fitting is used and two practical techniques are devised to remove the anomalous delay estimates. Experimental results indicate that the location accuracy using the TDE-based ultrasonic leak detector is 6.5–8.3 times as high as that using the DULD. By adopting the proposed method, the leak can be located more accurately and easily, and then the detection efficiency is improved. (paper)
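For illustration, here is a minimal numpy sketch of an average square difference function (ASDF) delay estimator with parabolic fitting, assuming two synchronized sensor signals; the triangular sensor arrangement, anomaly rejection, and the final leak-location geometry are beyond this sketch.

```python
import numpy as np

def asdf_delay(x, y, max_lag, fs):
    """Estimate the delay of y relative to x (in seconds) by minimizing the
    average square difference function (ASDF), refined by a parabolic fit."""
    lags = np.arange(-max_lag, max_lag + 1)
    d = np.empty(lags.size)
    for k, lag in enumerate(lags):
        if lag >= 0:
            a, b = x[:x.size - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:y.size + lag]
        d[k] = np.mean((a - b) ** 2)            # ASDF value at this lag
    k0 = int(np.argmin(d))
    frac = 0.0
    if 0 < k0 < lags.size - 1:                  # 3-point parabolic interpolation
        denom = d[k0 - 1] - 2.0 * d[k0] + d[k0 + 1]
        if denom != 0.0:
            frac = 0.5 * (d[k0 - 1] - d[k0 + 1]) / denom
    return (lags[k0] + frac) / fs

# Toy check: y is x delayed by 25 samples plus a little noise.
rng = np.random.default_rng(0)
fs = 100_000.0
x = rng.standard_normal(4096)
y = np.roll(x, 25) + 0.05 * rng.standard_normal(4096)
print(asdf_delay(x, y, 100, fs) * fs)           # close to 25 samples
```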
Gu, Xiangping; Zhou, Xiaofeng; Sun, Yanjing
2018-02-28
Compressive sensing (CS)-based data gathering is a promising method to reduce energy consumption in wireless sensor networks (WSNs). Traditional CS-based data-gathering approaches require a large number of sensor nodes to participate in each CS measurement task, resulting in high energy consumption, and do not guarantee load balance. In this paper, we propose a sparsifying analysis based on modified diffusion wavelets, which exploits the spatial correlation of sensor readings in WSNs. In particular, a novel data-gathering scheme with joint routing and CS is presented. A modified ant colony algorithm is adopted, where next-hop node selection takes a node's residual energy and path length into consideration simultaneously. Moreover, in order to speed up the convergence rate and avoid local optima of the algorithm, an improved pheromone impact factor is put forward. More importantly, theoretical proof is given that the equivalent sensing matrix generated satisfies the restricted isometry property (RIP). The simulation results demonstrate that the modified diffusion wavelets sparsify the sensor signal more effectively and achieve better reconstruction performance than the DFT basis. Furthermore, our data gathering with joint routing and CS can dramatically reduce the energy consumption of WSNs, balance the load, and prolong the network lifetime in comparison to state-of-the-art CS-based methods.
Solution-based targeted genomic enrichment for precious DNA samples
Directory of Open Access Journals (Sweden)
Shearer Aiden
2012-05-01
Full Text Available Abstract Background Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.
Preview-based sampling for controlling gaseous simulations
Huang, Ruoguan
2011-01-01
In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.
Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.
Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel
2017-06-01
Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
Soil classification based on the spectral characteristics of topsoil samples
Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle
2016-04-01
Soil taxonomy plays an important role in soil utility and management, but China has only a coarse soil map created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study tries to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and the laboratory spectral reflectance in the visible and near infrared region (400-2500 nm) was processed with weighted moving average, a resampling technique, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the second absorption positions of the spectral curve, the first absorption valley's area, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. Then K-means clustering and a decision tree were used respectively to build the soil classification model. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm respectively; 2) the spectral curve of Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy: the accuracies for Black soil, Chernozem, Blown soil and Meadow soil are 100%, 88%, 97%, and 50% respectively, and the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the area of the first two valleys) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.
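As a rough sketch of the final classification step, the snippet below fits a decision tree to a table of spectral indices; the feature values and labels are random placeholders standing in for the indices named in the abstract, so only the workflow, not the reported accuracies, is reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Placeholder table: one row per topsoil sample, columns standing in for the
# extracted indices (second absorption position, first absorption valley area,
# slopes at 500-600 nm and 1340-1360 nm). Real values would come from the
# processed reflectance curves.
X = rng.random((148, 4))
y = rng.integers(0, 4, 148)   # 0-3: Black soil, Chernozem, Blown soil, Meadow soil

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(tree.score(X_te, y_te))  # per-class accuracy would come from a confusion matrix
```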
Compression of Infrared images
DEFF Research Database (Denmark)
Mantel, Claire; Forchhammer, Søren
2017-01-01
best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp....
Weston, Brian; Nourgaliev, Robert; Delplanque, Jean-Pierre
2017-11-01
We present a new block-based Schur complement preconditioner for simulating all-speed compressible flow with phase change. The conservation equations are discretized with a reconstructed Discontinuous Galerkin method and integrated in time with fully implicit time discretization schemes. The resulting set of non-linear equations is converged using a robust Newton-Krylov framework. Due to the stiffness of the underlying physics associated with stiff acoustic waves and viscous material strength effects, we solve for the primitive-variables (pressure, velocity, and temperature). To enable convergence of the highly ill-conditioned linearized systems, we develop a physics-based preconditioner, utilizing approximate block factorization techniques to reduce the fully-coupled 3×3 system to a pair of reduced 2×2 systems. We demonstrate that our preconditioned Newton-Krylov framework converges on very stiff multi-physics problems, corresponding to large CFL and Fourier numbers, with excellent algorithmic and parallel scalability. Results are shown for the classic lid-driven cavity flow problem as well as for 3D laser-induced phase change. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
A threshold-based fixed predictor for JPEG-LS image compression
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
In JPEG-LS, the fixed predictor based on the median edge detector (MED) only detects horizontal and vertical edges, and thus produces large prediction errors in the locality of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges, but also diagonal edges. For certain thresholds, the proposed scheme can be simplified to other existing schemes, so it can also be regarded as the integration of these existing schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
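For reference, the textbook JPEG-LS MED rule that the proposed scheme generalizes fits in a few lines; this sketch shows only the standard predictor, not the paper's threshold-based extension.

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector (MED) fixed predictor.
    a = left neighbour, b = above neighbour, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)   # vertical/horizontal edge suspected: predict across it
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

print(med_predict(100, 110, 90))   # 110: c below both neighbours, edge assumed
```

Replacing the hard min/max comparisons with threshold tests is, per the abstract, what lets the detector also respond to diagonal edges.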
Model-Based Photoacoustic Image Reconstruction using Compressed Sensing and Smoothed L0 Norm
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-01-01
Photoacoustic imaging (PAI) is a novel medical imaging modality that uses the advantages of the spatial resolution of ultrasound imaging and the high contrast of pure optical imaging. Analytical algorithms are usually employed to reconstruct the photoacoustic (PA) images as a result of their simple implementation. However, they provide a less accurate image. Model-based (MB) algorithms are used to improve the image quality and accuracy while a large number of transducers and data acquisition a...
Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.
Ojeda, Mario Miguel; Sahai, Hardeo
2002-01-01
Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…
Directory of Open Access Journals (Sweden)
Shailesh Kamble
2017-08-01
Full Text Available The major challenge with the fractal image/video coding technique is that it requires long encoding time. Therefore, how to reduce the encoding time remains the key research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for the coding as it behaves like Fractal Coding (FC). WFA represents an image (frame) or motion-compensated prediction error based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentations and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on the combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated on the basis of performance evaluation parameters, i.e. the mean absolute difference (MAD) and the average search points required per frame. The MAD distortion function is used as the block distortion measure (BDM). Finally, the developed approaches, namely, MTSS and WFA, MTSS and FC, and plain FC (applied on every frame), are compared with each other. The experimentations are carried out on standard uncompressed video databases, namely, akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc. Developed…
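For context, here is a minimal numpy sketch of the classic three-step search that MTSS modifies; the rectangular-plus-hexagonal pattern of MTSS itself is not reproduced here.

```python
import numpy as np

def mad(b1, b2):
    """Mean absolute difference, used here as the block distortion measure."""
    return np.mean(np.abs(b1.astype(np.float64) - b2.astype(np.float64)))

def three_step_search(cur, ref, bx, by, bs=16, step=4):
    """Motion vector of the bs x bs block at (bx, by) in `cur`, found by the
    classic TSS: probe a 3x3 grid at a coarse step, recentre on the best
    candidate, halve the step, and repeat down to step 1."""
    block = cur[by:by + bs, bx:bx + bs]
    cx, cy = bx, by
    while step >= 1:
        best, best_cost = (cx, cy), np.inf
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = cx + dx, cy + dy
                if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                    cost = mad(block, ref[y:y + bs, x:x + bs])
                    if cost < best_cost:
                        best, best_cost = (x, y), cost
        cx, cy = best
        step //= 2
    return cx - bx, cy - by
```

With the default starting step of 4 this probes roughly 25 candidate positions per block, which is the source of the speed-up over exhaustive search.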
Durability Testing of Biomass Based Oxygenated Fuel Components in a Compression Ignition Engine
Energy Technology Data Exchange (ETDEWEB)
Ratcliff, Matthew A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); McCormick, Robert L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgardner, Marc E. [Gonzaga University; Lakshminarayanan, Arunachalam [Colorado State University; Olsen, Daniel B. [Colorado State University; Marchese, Anthony J. [Colorado State University
2017-10-18
Blending cellulosic biofuels with traditional petroleum-derived fuels results in transportation fuels with reduced carbon footprints. Many cellulosic fuels rely on processing methods that produce mixtures of oxygenates which must be upgraded before blending with traditional fuels. Complete oxygenate removal is energy-intensive and it is likely that such biofuel blends will necessarily contain some oxygen content to be economically viable. Previous work by our group indicated that diesel fuel blends with low levels (<4%-vol) of oxygenates resulted in minimal negative effects on short-term engine performance and emissions. However, little is known about the long-term effects of these compounds on engine durability issues such as the impact on fuel injection, in-cylinder carbon buildup, and engine oil degradation. In this study, four of the oxygenated components previously tested were blended at 4%-vol in diesel fuel and tested with a durability protocol devised for this work consisting of 200 hrs of testing in a stationary, single-cylinder, Yanmar diesel engine operating at constant load. Oil samples, injector spray patterns, and carbon buildup from the injector and cylinder surfaces were analyzed. It was found that, at the levels tested, these fuels had minimal impact on the overall engine operation, which is consistent with our previous findings.
Compressed normalized block difference for object tracking
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking, and compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers were based on compressed Haar-like features, and how to compress other, more expressive high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, a normalized block difference feature extends the two pixels in the original formula of the NPD to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and Precision.
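The compression step is a single random projection. Here is a sketch under the assumption that the normalized block difference feature has already been stacked into a vector x (the NBD construction itself follows the paper and is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 50            # raw feature length vs. compressed length

x = rng.random(n)            # stand-in for a high-dimensional NBD feature

# Sparse random Gaussian measurement matrix: only a small fraction of the
# entries are non-zero, so the projection y = phi @ x stays cheap.
density = 0.01
mask = rng.random((m, n)) < density
phi = np.where(mask, rng.standard_normal((m, n)), 0.0)

y = phi @ x                  # compressed feature fed to the tracker's classifier
print(y.shape)               # (50,)
```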
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
Full Text Available Monte Carlo (MC) simulation is a popular model of photon propagation in turbid media, but its main problem is cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast version of the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. Also, we present a reconstruction approach to estimate the position of the fluorescent source, based on trial-and-error theory, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
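The generic trick behind TBRS, replacing per-sample computation with a query into a precomputed inverse-CDF table, can be sketched as follows; the exponential free-path distribution is only an illustration, not the actual multi-step scattering table of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build the lookup table once: tabulate the inverse CDF of the step-length
# distribution (here an exponential free path with attenuation mu_t).
mu_t = 10.0
u = np.linspace(1e-6, 1.0 - 1e-6, 4096)
table = -np.log(1.0 - u) / mu_t

def sample_steps(n):
    """Draw n step lengths by random table querying instead of evaluating
    the sampling formula per photon."""
    return table[rng.integers(0, table.size, n)]

print(sample_steps(100_000).mean())   # close to 1/mu_t = 0.1
```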
Rank-defective millimeter-wave channel estimation based on subspace-compressive sensing
Directory of Open Access Journals (Sweden)
Majid Shakhsi Dastgahian
2016-11-01
Full Text Available Millimeter-wave communication (mmWC) is considered one of the pioneer candidates for 5G indoor and outdoor systems in E-band. To subdue the channel propagation characteristics in this band, high-dimensional antenna arrays need to be deployed at both the base station (BS) and mobile sets (MS). Unlike conventional MIMO systems, millimeter-wave (mmW) systems avoid employing power-hungry equipment such as an ADC or RF chain in each branch of the MIMO system because of hardware constraints. Such systems resort to the hybrid precoding (combining) architecture for downlink deployment. Because there is a large array at the transceiver, it is impossible to estimate the channel by conventional methods. This paper develops a new algorithm to estimate the mmW channel by exploiting the sparse nature of the channel. The main contribution is the representation of a sparse channel model and the exploitation of a modified approach based on the Multiple Measurement Vector (MMV) greedy sparse framework and the subspace method of Multiple Signal Classification (MUSIC), which work together to recover the indices of the non-zero elements of an unknown channel matrix when the rank of the channel matrix is defective. In practical rank-defective channels, MUSIC fails, and we need to propose new extended MUSIC approaches based on subspace enhancement to compensate for the limitation of MUSIC. Simulation results indicate that our proposed extended MUSIC algorithms have proper performance and moderate computational speed, and that they are even able to work in channels with an unknown sparsity level.
Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.
Dołęgowska, Sabina
2016-11-01
In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m², whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple-rinsed with deionized water, dried, milled, and digested (8 mL HNO3 (1:1) + 1 mL 30% H2O2) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality. For normally distributed elements (Cu from Piaski, Zn from Posłowice, and Fe and Zn from Wierna Rzeka), the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with the different statistical methods ranged from 4.1 to 22%.
Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua
2018-02-01
In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve encryption efficiency, the volume of data of the original image is reduced by compressed sensing. The compacted image is then encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences, generated by a three-dimensional chaos map, are used as the measurement matrix of compressed sensing and the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies the storage and transmission of keys, but also enhances the nonlinearity of our cryptosystem to resist some common attacks. Further, holograms, which are obtained by two-step-only quadrature phase-shifting interference, make our cryptosystem immune to noise and occlusion attacks. The compression and encryption are achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.
DEFF Research Database (Denmark)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.
2017-01-01
Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
Biomedical sensor design using analog compressed sensing
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps, to reduce storage usage, transmission times, and power consumption in order to expand current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with Percentage Residual Difference (PRD) = 26% and Root Mean Squared Error (RMSE) = 3%.
On the stability and compressive nonlinearity of a physiologically based model of the cochlea
Energy Technology Data Exchange (ETDEWEB)
Nankali, Amir [Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan (United States); Grosh, Karl [Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan (United States); Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan (United States)
2015-12-31
Hearing relies on a series of coupled electrical, acoustical (fluidic) and mechanical interactions inside the cochlea that enable sound processing. A positive feedback mechanism within the cochlea, called the cochlear amplifier, provides amplitude and frequency selectivity in the mammalian auditory system. The cochlear amplifier and stability are studied using a nonlinear, micromechanical model of the Organ of Corti (OoC) coupled to the electrical potentials in the cochlear ducts. It is observed that the mechano-electrical transduction (MET) sensitivity and somatic motility of the outer hair cell (OHC) control cochlear stability. Increasing MET sensitivity beyond a critical value, while the electromechanical coupling coefficient is within a specific range, causes instability. We show that instability in this model is generated through a supercritical Hopf bifurcation. A reduced-order model of the system is derived, and it is shown that the effect of the tectorial membrane (TM) transverse mode on the dynamics is significant, while the radial mode can be eliminated from the equations. The cochlear amplifier in this model exhibits good agreement with the experimental data. A comprehensive 3-dimensional model based on the cross-sectional model is simulated and the results are compared. It is indicated that the global model qualitatively inherits some characteristics of the local model, but the longitudinal coupling along the cochlea shifts the stability boundary (i.e., the Hopf bifurcation point) and enhances stability.
Beni, Yaghoub Tadi; Zeverdejani, M Karimi; Mehralian, Fahimeh
2017-10-01
Protein microtubules (MTs) are important intracellular components and have a vital role in the stability and strength of cells. Under external loads, protein microtubules may undergo buckling. Given the role of protein microtubules in cell processes, it is important to determine their critical buckling load. Considering the nature of protein microtubules, various parameters affect microtubule buckling. The small size of microtubules and the lack of uniformity of MT properties in different directions necessitate accurate analysis of these bio-structures. In fact, microtubules must be considered as size-dependent cylinders that behave as an orthotropic material. Hence, in the present work, using a first-order shear deformation model (FSDT), the buckling equations of anisotropic MTs are derived based on the new modified couple stress theory (NMCST). After solving the stability equations, the influences of various parameters on the MT critical buckling load are measured. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Rui Zhang
2018-01-01
Full Text Available This paper proposes a novel tamper detection, localization, and recovery scheme for encrypted images with Discrete Wavelet Transformation (DWT) and Compressive Sensing (CS). The original image is first transformed into the DWT domain and divided into an important part, that is, the low-frequency part, and an unimportant part, that is, the high-frequency part. Since the low-frequency part contains the main information of the image, traditional chaotic encryption is employed for it. Then, the high-frequency part is encrypted with CS to vacate space for the watermark. The scheme takes the processed original image content as the watermark, from which the characteristic digest values are generated. Compared with existing image authentication algorithms, the proposed scheme can realize not only tamper detection and localization but also tamper recovery. Moreover, tamper recovery is based on block division, and the recovery accuracy varies with the contents that are possibly tampered. If either the watermark or the low-frequency part is tampered with, the recovery accuracy is 100%. The experimental results show that the scheme can not only distinguish the type of tamper and find the tampered blocks but also recover the main information of the original image. With great robustness and security, the scheme can adequately meet the need of secure image transmission under unreliable conditions.
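A sketch of the first step, splitting an image into the "important" low-frequency subband and the high-frequency remainder with a one-level 2-D DWT (using PyWavelets); the chaotic encryption, CS encoding, and watermark embedding stages are not shown.

```python
import numpy as np
import pywt

img = np.random.default_rng(0).random((256, 256))   # stand-in for the image

# LL is the low-frequency part carrying the main content; LH/HL/HH hold the
# high-frequency details that the scheme encrypts with CS to vacate space
# for the watermark.
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')

rec = pywt.idwt2((LL, (LH, HL, HH)), 'haar')        # inverse transform
print(np.allclose(rec, img))                        # True: the split is lossless
```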
Trowbridge, Kelly; Mische Lawson, Lisa; Andrews, Stephanie; Pecora, Jodi; Boyd, Sabra
2017-11-01
Mindfulness practices, including mindfulness meditation, show promise for decreasing stress among health care providers. This exploratory study investigates the feasibility of a two-day compressed mindfulness-based stress reduction (cMBSR) course provided in the hospital workplace with pediatric health care social workers. The standard course of Jon Kabat-Zinn's MBSR requires a participant commitment of eight weeks of instruction consisting of one 2.5-hour class per week, a single day retreat, and 45 minutes of practice on six of seven days each week. Commitments to family, work, caregiving, education, and so on, as well as limitations such as distance, may prevent health care providers from participating in a standard MBSR course. Using t tests, researchers measured the effect of cMBSR on (a) positive and negative experiences in pediatric social work, (b) perceived stress, (c) mindfulness, and (d) caring self-efficacy (as a component of patient- and family-centered care). Results included significant differences between the pre- and post-intervention outcome variables on the Professional Quality of Life Secondary Traumatic Stress subscale, the Mindful Attention and Awareness Scale, and the Caring Efficacy Scale. The findings provided adequate evidence for the feasibility of the cMBSR design and for the need for a more rigorous study of the effects of the cMBSR intervention. © 2017 National Association of Social Workers.
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in a wireless body sensor network (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, the orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
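The first phase of vOMMP is plain orthogonal matching pursuit; a compact numpy version of OMP is sketched below (the OMMP rescue phase and the MIF/QR hardware simplifications are beyond this sketch).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: at each step, add the column of A most
    correlated with the residual, then re-fit the coefficients by least squares."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy check: recover a 5-sparse vector from 64 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x0 = np.zeros(256)
x0[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
print(np.allclose(omp(A, A @ x0, 5), x0, atol=1e-6))   # True with high probability
```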
Liu, Shun; Xu, Jinglei; Yu, Kaikai
2017-06-01
This paper proposes an improved approach for extraction of pressure fields from velocity data, such as obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The principle of this approach is derived from Navier-Stokes equations, assuming adiabatic condition and neglecting viscosity of flow field boundaries measured by PIV. The computing method is based on MacCormack's technique in computational fluid dynamics. Thus, this approach is called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, the spatial integration and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than other approaches, and its advantages become more remarkable with shock strengthening. Furthermore, the performance of the MacCormack method is also validated by using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is highly significant for the studies on aerospace engineering, especially the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.
OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform
Nan, F.; Xu, Y.
2017-12-01
OBS (Ocean Bottom Seismometer) data denoising is an important step of OBS data processing and inversion; it is necessary to obtain clearer seismic phases for further velocity structure analysis. Traditional methods for OBS data denoising include the band-pass filter, Wiener filter and deconvolution, etc. (Liu, 2015). Most of these filtering methods are based on the Fourier Transform (FT). Recently, multi-scale transform methods such as the wavelet transform (WT) and Curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT and CvT can represent a signal sparsely and separate noise in the transform domain, and they suit different cases. Compared with the Curvelet transform, the FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multi-scale, but it has poor orientation selectivity and cannot handle curve discontinuities well. The CvT is a multiscale directional transform that can represent curves with only a small number of coefficients. It provides an optimal sparse representation of objects with singularities along smooth curves, which is suitable for seismic data processing. Different seismic phases in OBS data appear as discontinuous curves in the time domain. Hence, we propose to analyze the OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem using modified iterative thresholding. Results show that the proposed method can suppress the noise well and gives sparse results in the Curvelet domain. Figure 1 compares the Curvelet denoising method with the Wavelet method at the same iterations and threshold on a synthetic example (panels: (a) original data; (b) noisy data; (c) data denoised with the CvT; (d) data denoised with the WT). The CvT eliminates the noise well and gives a better result than the WT. Further, we applied the CvT denoising method to the OBS data processing. Figure 2a
The Toggle Local Planner for sampling-based motion planning
Denny, Jory
2012-05-01
Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used for validation of simple paths between two configurations. The most common is the straight-line local planner which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections could be found by Toggle LP than by the straight-line planner, resulting in better connected roadmaps. As shown in our results, this is the case, and additionally, the extra cost, in terms of time or storage, for Toggle LP is minimal. Additionally, our experimental analysis of the planner shows the benefit for a wide array of robots, with DOF as high as 70. © 2012 IEEE.
Efficient two-dimensional compressive sensing in MIMO radar
Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad
2017-12-01
Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals, and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using the gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as the Gaussian random measurement matrix.
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables the implementation of any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
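A minimal numpy sketch of the SVD-based compression step on a stand-in dictionary; real MRF dictionaries are simulated signal evolutions, and the transceiver phases are assumed already unwound as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 5000                       # time frames x dictionary atoms

# Stand-in dictionary with low temporal rank, mimicking the strong
# redundancy of real MRF dictionaries.
D = rng.standard_normal((T, 6)) @ rng.standard_normal((6, N))

U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 6                                   # compressed time frames to keep
Ur = U[:, :r]                           # T x r compression operator
D_c = Ur.T @ D                          # r x N compressed dictionary

def match(signal):
    """Normalized dot-product matching of one fingerprint in compressed space."""
    s_c = Ur.T @ signal
    corr = (D_c.T @ s_c) / (np.linalg.norm(D_c, axis=0) * np.linalg.norm(s_c))
    return int(np.argmax(np.abs(corr)))

print(match(D[:, 123]))                 # 123: the noiseless atom matches itself
```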
Predicting Drug-Target Interactions Based on Small Positive Samples.
Hu, Pengwei; Chan, Keith C C; Hu, Yanxing
2018-01-01
evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Energy Technology Data Exchange (ETDEWEB)
Bonne, François; Bonnay, Patrick [INAC, SBT, UMR-E 9004 CEA/UJF-Grenoble, 17 rue des Martyrs, 38054 Grenoble (France); Alamir, Mazen [Gipsa-Lab, Control Systems Department, CNRS-University of Grenoble, 11, rue des Mathématiques, BP 46, 38402 Saint Martin d' Hères (France); Bradu, Benjamin [CERN, CH-1211 Genève 23 (Switzerland)
2014-01-29
In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection, such as that induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to precisely control every pressure in normal operation, or to stabilize and control the cryoplant under high variation of thermal loads (such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors like the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details how to set up the WCS model to synthesize the Linear Quadratic optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller was implemented on a Schneider PLC and fully tested, first on CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of starts and stops of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
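For the synthesis step, a Linear Quadratic optimal feedback gain can be computed from a linear plant model with SciPy's Riccati solver; the A, B, Q, R below are small hypothetical placeholders, not the WCS model of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized plant x' = A x + B u (placeholder values).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # input weighting

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x
print(K)
```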
Directory of Open Access Journals (Sweden)
Aref Al-Swaidani
2017-01-01
Full Text Available Natural pozzolan is widely used as a cement replacement. Despite the economic, ecological, and technical benefits of its addition, it is often associated with shortcomings such as the need for moist-curing for a longer time and lower early strength. This study is an attempt to investigate the effect of adding limestone filler on the compressive strength and durability of mortars/concrete containing scoria. Sixteen types of binders with different replacement levels of scoria (0%, 10%, 20%, and 30%) and limestone (0%, 5%, 10%, and 15%) were prepared. The development of the compressive strength of mortar/concrete specimens was investigated after 2, 7, 28, and 90 days' curing. In addition, the acid resistance of the 28-day cured mortars was evaluated after 90 days' exposure to 5% H2SO4. Concrete permeability was also evaluated after 2, 7, 28, and 90 days' curing. Test results revealed an increase in the early-age compressive strength and a decrease in water penetration depths with the addition of limestone filler. Contrary to expectation, the best acid resistance to the 5% H2SO4 solution was noted in the mortars containing 15% limestone. Based on the results obtained, an empirical equation was derived to predict the compressive strength of the mortars.
Directory of Open Access Journals (Sweden)
Ming-Kai Hsieh
2013-08-01
Full Text Available Vertebral compression fractures constitute a major health care problem, not only because of their high incidence but also due to both direct and indirect consequences on health-related quality of life and health care expenditures. The mainstay of management for symptomatic vertebral compression fractures is targeted medical therapy, including analgesics, bed rest, external fixation, and rehabilitation. However, anti-inflammatory drugs and certain types of analgesics can be poorly tolerated by elderly patients, and surgical fixation often fails due to the poor quality of osteoporotic bone. Balloon kyphoplasty and vertebroplasty are two minimally invasive percutaneous surgical approaches that have recently been developed for the management of symptomatic vertebral compression fractures. The purpose of this study was to perform a comprehensive review of the literature and conduct a meta-analysis to compare clinical outcomes of pain relief and function, radiographic outcomes of the restoration of anterior vertebral height and kyphotic angles, and subsequent complications associated with these two techniques.
Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein
2016-08-22
Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.
International Nuclear Information System (INIS)
Hedayati Dezfuli, F; Alam, M Shahria
2015-01-01
Smart lead rubber bearings (LRBs), in which a shape memory alloy (SMA) is used in the form of wires, are a new generation of elastomeric isolators with improved performance in terms of recentering capability and energy dissipation capacity. It is of great interest to implement SMA wire-based lead rubber bearings (SMA-LRBs) in bridges; however, currently there is no appropriate hysteresis model for accurately simulating the behavior of such isolators. A constitutive model for SMA-LRBs is proposed in this study. An LRB is equipped with a double cross configuration of SMA wires (DC-SMAW) and subjected to compression and unidirectional shear loadings. Due to the complexity of the shear behavior of the SMA-LRB, a hysteresis model is developed for the DC-SMAWs and then combined with the bilinear kinematic hardening model, which is assumed for the LRB. Comparing the hysteretic response of decoupled systems with that of the SMA-LRB shows that the high recentering capability of the DC-SMAW model with zero residual deformation could noticeably reduce the residual deformation of the LRB. The developed constitutive model for DC-SMAWs is characterized by three stiffnesses when the shear strain exceeds a starting limit at which the SMA wires are activated due to phase transformation. An important point is that the shear hysteresis of the DC-SMAW model looks different from the flag-shaped hysteresis of the SMA because of the specific arrangement of wires and its effect on the resultant forces transferred from the wires to the rubber bearing. (paper)
Wang, Gang; Zhao, Zhikai; Ning, Yongjie
2018-05-28
With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps generate ever-increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
Digital cinema video compression
Husak, Walter
2003-05-01
The Motion Picture Industry began a transition from film-based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offer the prospect of increasing the quality of the theatrical experience for the audience, reducing distribution costs for distributors, and creating new business opportunities for theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for content owners and theater operators. Distribution of content to theaters via electronic means is unlike any traditional application of video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications, which is discussed in this paper.
Directory of Open Access Journals (Sweden)
Elias Chaibub Neto
Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward implementation based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased cost of generating the weight matrices via multinomial sampling.
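The multinomial-weighting idea transcribes directly into matrix code. Below is a minimal NumPy sketch for Pearson's correlation (an assumed re-implementation, not the authors' R code): one B × n matrix of multinomial counts is drawn, and all B replications follow from five matrix-vector products, since every bootstrap moment is a weighted sum of the observed data.

```python
import numpy as np

def bootstrap_pearson(x, y, n_boot=10_000, seed=0):
    """Bootstrap replications of Pearson's r via multinomial weighting."""
    rng = np.random.default_rng(seed)
    n = x.size
    # B x n matrix of multinomial counts: each row encodes one bootstrap resample
    w = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)
    # Weighted sums of the required sample moments, one matrix product per moment
    sx, sy = w @ x, w @ y
    sxx, syy, sxy = w @ (x * x), w @ (y * y), w @ (x * y)
    num = n * sxy - sx * sy
    den = np.sqrt((n * sxx - sx**2) * (n * syy - sy**2))
    return num / den

# Usage on synthetic data: percentile confidence interval for the correlation
x = np.random.default_rng(1).standard_normal(100)
y = 0.6 * x + 0.8 * np.random.default_rng(2).standard_normal(100)
reps = bootstrap_pearson(x, y)
print(np.percentile(reps, [2.5, 97.5]))
```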
Finding metastabilities in reversible Markov chains based on incomplete sampling
Directory of Open Access Journals (Sweden)
Fackeldey Konstantin
2017-01-01
Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains, one needs the corresponding transition matrix P. In many applications, such as molecular simulation and drug design, the entries of P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding, so it is worthwhile to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy that provides a partial sampling of only a subset of the rows of the matrix P. Our approach is well suited to stochastic processes stemming from the simulation of molecular systems or random walks on graphs, and it differs from matrix-completion approaches, which approximate the transition matrix under a low-rank assumption. It is shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+) to identify metastabilities using this subspace.
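For orientation, the fully sampled baseline that this paper relaxes is easy to state in code: given a complete row-stochastic matrix P, the stationary distribution and the leading invariant subspace follow from one eigendecomposition. The NumPy sketch below uses an assumed toy chain with two metastable blocks; the point of the paper is precisely to obtain these quantities without sampling all rows of P.

```python
import numpy as np

# A small reversible chain with two metastable blocks (illustrative values)
P = np.array([[0.90, 0.09, 0.01, 0.00],
              [0.09, 0.90, 0.00, 0.01],
              [0.01, 0.00, 0.90, 0.09],
              [0.00, 0.01, 0.09, 0.90]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
evals_l, evecs_l = np.linalg.eig(P.T)
pi = np.real(evecs_l[:, np.argmax(np.real(evals_l))])
pi /= pi.sum()

# Leading invariant subspace: right eigenvectors for the eigenvalues closest
# to 1; a spectral gap after the k-th eigenvalue indicates k metastable sets
# (k = 2 for this chain).
evals_r, evecs_r = np.linalg.eig(P)
order = np.argsort(-np.real(evals_r))
subspace = np.real(evecs_r[:, order[:2]])

print("stationary distribution:", np.round(pi, 3))
print("leading eigenvalues:", np.round(np.real(evals_r[order[:3]]), 3))
```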
Yoon, Jeong Hee; Yu, Mi Hye; Chang, Won; Park, Jin-Young; Nickel, Marcel Dominik; Son, Yohan; Kiefer, Berthold; Lee, Jeong Min
2017-10-01
The purpose of the study was to investigate the clinical feasibility of free-breathing dynamic T1-weighted imaging (T1WI) using Cartesian sampling, compressed sensing, and iterative reconstruction in gadoxetic acid-enhanced liver magnetic resonance imaging (MRI). This retrospective study was approved by our institutional review board, and the requirement for informed consent was waived. A total of 51 patients at high risk of breath-holding failure underwent dynamic T1WI in a free-breathing manner using a volumetric interpolated breath-hold (BH) examination with compressed sensing reconstruction (CS-VIBE) and hard gating. Timing, motion artifacts, and image quality were evaluated by 4 radiologists on a 4-point scale. For patients with low image quality scores, extra-dimensional (XD) reconstruction was additionally performed and reviewed in the same manner. In addition, in the 68.6% (35/51) of patients who had previously undergone liver MRI, image quality and motion artifacts on dynamic phases using CS-VIBE were compared with previous BH-T1WIs. In all patients, adequate arterial-phase timing was obtained at least once. Overall image quality of free-breathing T1WI was 3.30 ± 0.59 on precontrast images and 2.68 ± 0.70, 2.93 ± 0.65, and 3.30 ± 0.49 on early arterial, late arterial, and portal venous phases, respectively. In the 13 patients with lower than average image quality, XD-reconstructed CS-VIBE significantly reduced motion artifacts. Compared with previous BH-T1WIs, XD reconstruction showed fewer motion artifacts and better image quality on precontrast, arterial, and portal venous phases (P < 0.0001-0.013). Volumetric interpolated breath-hold examination with compressed sensing has the potential to provide consistent, motion-corrected free-breathing dynamic T1WI for liver MRI in patients at high risk of breath-holding failure.
Directory of Open Access Journals (Sweden)
Ruixiong Li
2016-12-01
Full Text Available The compressed air energy storage (CAES) system, considered one method for peak shaving and load-levelling of the electricity system, has excellent energy storage and utilization characteristics. However, because heat is wasted in the compressed air during the charge stage and in the exhaust gas during the discharge stage, the efficient operation of the conventional CAES system has been greatly restricted. The Kalina cycle (KC) and the organic Rankine cycle (ORC) have been proven to be two worthwhile technologies for recovering residual heat in energy systems. To capture and reuse the waste heat from the CAES system, two systems (the CAES system combined with the KC and the ORC, respectively) are proposed in this paper. The sensitivity analysis shows the effect of the compression ratio and the exhaust temperature on system performance: the KC-CAES system can operate more efficiently than the ORC-CAES system at the same exhaust-gas temperature, and a larger compression ratio leads to higher efficiency for the KC-CAES system than for the ORC-CAES system at a constant exhaust-gas temperature. In addition, an evolutionary multi-objective algorithm is applied to the thermodynamic and economic performances to find the optimal parameters of the two systems. The optimum results indicate that solutions with exergy efficiencies of around 59.74% and 53.56% are promising for practical designs of the KC-CAES and ORC-CAES systems, respectively.
Reliability assessment based on small samples of normal distribution
International Nuclear Information System (INIS)
Ma Zhibo; Zhu Jianshi; Xu Naixin
2003-01-01
When the pertinent parameter in the reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior on (μ, h). The desired estimates are obtained either from the prior or from the posterior formed by combining the prior with the sampling data. Computing methods are described and examples are presented as demonstrations
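The conjugate machinery invoked here is the textbook normal-gamma update, which can be sketched directly. In the fragment below the hyperparameters and data are assumed illustrative values, not the paper's maximum-entropy prior: with a NG(μ0, κ0, α0, β0) prior on (μ, h), the posterior after n normal observations is available in closed form.

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, x):
    """Closed-form posterior of a Normal-Gamma prior on (mu, h)
    after observing i.i.d. normal data x (standard conjugate update)."""
    n, xbar = x.size, x.mean()
    ssd = np.sum((x - xbar) ** 2)                    # sum of squared deviations
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ssd + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

# Small-sample demonstration with a weakly informative prior (assumed values)
x = np.array([9.8, 10.4, 10.1, 9.9, 10.2])
mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(10.0, 1.0, 2.0, 1.0, x)
print("posterior mean of mu:", mu_n)
print("posterior mean of precision h:", alpha_n / beta_n)
```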
Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong
2018-03-01
We propose and demonstrate an approach that achieves high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression, so that single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber lengths in the architecture are numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment for N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.
Directory of Open Access Journals (Sweden)
Xiangwei Li
2014-12-01
Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images that takes the distinctive features of CSI into account. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements that requires no a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity.
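A toy end-to-end pipeline in the same spirit may help fix ideas. Everything in the sketch below is an assumption standing in for the paper's components: a Gaussian matrix in place of the adaptive acquisition, a uniform scalar quantizer in place of the universal quantization, and orthogonal matching pursuit (OMP) in place of the proposed reconstruction.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y ≈ A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef                    # orthogonalize residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)         # CS measurement matrix

y = A @ x_true
step = 0.05                                          # quantizer step (assumed)
y_q = step * np.round(y / step)                      # uniform scalar quantization
x_hat = omp(A, y_q, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```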
Directory of Open Access Journals (Sweden)
Jijian Lian
2017-05-01
Full Text Available A better understanding of the complex mechanical properties of ice is the foundation for predicting the ice failure process and avoiding potential ice threats. In the present study, the uniaxial compressive strength and fracture mode of natural lake ice are investigated over a moderate strain-rate range of 0.4–10 s−1 at −5 °C and −10 °C. The digital speckle correlation method (DSCM) is used for deformation measurement by constructing an artificial speckle pattern on the ice sample surface in advance, and two dynamic load cells are employed to monitor the equilibrium of the forces at the two ends under high-speed loading. The relationships between uniaxial compressive strength and strain-rate, temperature, loading direction, and air porosity are investigated, and the fracture mode of ice at moderate rates is also discussed. The experimental results show a significant difference between the true strain-rate and the nominal strain-rate derived from the actuator displacement under dynamic loading conditions. Over the employed strain-rate range, the dynamic uniaxial compressive strength of lake ice shows positive strain-rate sensitivity and decreases with increasing temperature. Ice is stronger when its air porosity is lower and when it is loaded vertically. The fracture mode of ice appears to be a combination of splitting failure and crushing failure.
Context-Aware Image Compression.
Directory of Open Access Journals (Sweden)
Jacky C K Chan
Full Text Available We describe a physics-based data compression method inspired by photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than would otherwise be possible. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without phase recovery. We present a rate-distortion analysis and show an improvement in PSNR compared with compression via uniform downsampling.
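A crude 1D analogue of warped-stretch coding can be written in a few lines. The sketch below is only a software caricature of the photonic scheme, with assumed choices throughout: the sampling density is taken proportional to the local gradient magnitude (standing in for the group-velocity-dispersion warp), and decoding is plain linear interpolation rather than the paper's decoder.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4096)
# Test signal: smooth background plus a sharp, information-rich burst
s = (np.sin(2 * np.pi * 3 * t)
     + np.exp(-((t - 0.5) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t))

n_keep = 256                                        # sample budget (assumed)

# Warped sampling grid: density proportional to local gradient magnitude
grad = np.abs(np.gradient(s, t))
density = grad + 1e-3 * grad.max()                  # keep density strictly positive
cdf = np.cumsum(density); cdf /= cdf[-1]
t_warp = np.interp(np.linspace(0, 1, n_keep), cdf, t)   # invert the warp
s_warp = np.interp(t_warp, t, s)

# Uniform downsampling baseline at the same budget
t_uni = np.linspace(0.0, 1.0, n_keep)
s_uni = np.interp(t_uni, t, s)

# Decode both by linear interpolation back to the dense grid and compare PSNR
for name, tk, sk in (("warped", t_warp, s_warp), ("uniform", t_uni, s_uni)):
    rec = np.interp(t, tk, sk)
    mse = np.mean((rec - s) ** 2)
    print(name, "PSNR:", 10 * np.log10(np.max(s**2) / mse))
```

At the same sample budget, the warped grid spends most of its samples on the burst and typically reports a higher PSNR than the uniform baseline, mirroring the paper's comparison against uniform downsampling.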
Je, U K; Cho, H M; Hong, D K; Cho, H S; Park, Y O; Park, C K; Kim, K S; Lim, H W; Kim, G A; Park, S Y; Woo, T H; Cho, S I
2016-01-01
In this work, we propose a practical method that combines dental panoramic and cone-beam CT (CBCT) functionality by using a single panoramic detector. We implemented a CS-based reconstruction algorithm for the proposed method and performed a systematic simulation to demonstrate its viability for 3D dental X-ray imaging. We successfully reconstructed volumetric images of considerably high accuracy by using a panoramic detector with an active area of 198.4 mm × 6.4 mm and evaluated the reconstruction quality as a function of the pitch (p) and the angle step (Δθ). Our simulation results indicate that the CS-based reconstruction almost completely recovered the phantom structures, as in CBCT, for p ≤ 2.0 and Δθ ≤ 6°, indicating that the approach is promising for accurate image reconstruction even for large-pitch and few-view data. We expect the proposed method to be applicable to the development of a cost-effective, volumetric dental X-ray imaging system.