Sample records for rdtc optimized compression

  1. Bit-Optimal Lempel-Ziv compression

    Ferragina, Paolo; Venturini, Rossano


    One of the most famous and investigated lossless data-compression schemes is the one introduced by Lempel and Ziv about 40 years ago. This compression scheme is known as "dictionary-based compression" and consists of squeezing an input string by replacing some of its substrings with (shorter) codewords, which are actually pointers to a dictionary of phrases built as the string is processed. Surprisingly enough, although many fundamental results are nowadays known about upper bounds on the speed and effectiveness of this compression process, ``we are not aware of any parsing scheme that achieves optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length'' [N. Rajpoot and C. Sahinalp. Handbook of Lossless Data Compression, chapter: Dictionary-based data compression. Academic Press, 2002, p. 159]. Here optimality means achieving the minimum number of bits in compressing each individual input string, without any assumption on its ge...
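    The parsing step this abstract refers to can be illustrated with a deliberately naive greedy LZ77 parser (a hypothetical Python sketch, not the authors' bit-optimal algorithm; greedy phrase selection is exactly the kind of strategy whose bit-suboptimality motivates the paper):

```python
def lz77_greedy_parse(s):
    """Greedy LZ77 parse: at each position emit the longest (offset, length)
    copy from the already-seen prefix, or a single literal."""
    phrases, i = [], 0
    while i < len(s):
        best_len, best_off = 0, 0
        for j in range(i):  # scan the whole prefix (naive O(n^2) search)
            l = 0
            while i + l < len(s) and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= 2:
            phrases.append(("copy", best_off, best_len))
            i += best_len
        else:
            phrases.append(("lit", s[i]))
            i += 1
    return phrases

def lz77_decode(phrases):
    """Rebuild the string; copies go byte-by-byte so overlaps work."""
    out = []
    for p in phrases:
        if p[0] == "lit":
            out.append(p[1])
        else:
            _, off, length = p
            for _ in range(length):
                out.append(out[-off])
    return "".join(out)
```

    A bit-optimal parser would instead weigh each candidate phrase by the actual codeword length of its (offset, length) pair rather than by its raw character length.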

  2. Cloud Optimized Image Format and Compression

    Becker, P.; Plesea, L.; Maurer, T.


    Cloud-based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  3. Information optimal compressive sensing: static measurement design.

    Ashok, Amit; Huang, Liang-Chih; Neifeld, Mark A


    The compressive sensing paradigm exploits the inherent sparsity/compressibility of signals to reduce the number of measurements required for reliable reconstruction/recovery. In many applications additional prior information beyond signal sparsity, such as structure in sparsity, is available, and current efforts are mainly limited to exploiting that information exclusively in the signal reconstruction problem. In this work, we describe an information-theoretic framework that incorporates the additional prior information as well as appropriate measurement constraints in the design of compressive measurements. Using a Gaussian binomial mixture prior, we design and analyze the performance of optimized projections relative to random projections under two specific design constraints and different operating measurement signal-to-noise ratio (SNR) regimes. We find that the information-optimized designs yield significant, in some cases nearly an order of magnitude, improvements in the reconstruction performance with respect to the random projections. These improvements are especially notable in the low measurement SNR regime where the energy-efficient design of optimized projections is most advantageous. In such cases, the optimized projection design departs significantly from random projections in terms of their incoherence with the representation basis. In fact, we find that maximizing the incoherence of projections with the representation basis is not necessarily optimal in the presence of additional prior information and finite measurement noise/error. We also apply the information-optimized projections to the compressive image formation problem for natural scenes, and the improved visual quality of reconstructed images with respect to random projections and other compressive measurement designs affirms the overall effectiveness of the information-theoretic design framework.

  4. Near-Optimal Compressive Binary Search

    Malloy, Matthew L.; Nowak, Robert D.


    We propose a simple modification to the recently proposed compressive binary search. The modification removes an unnecessary and suboptimal factor of log log n from the SNR requirement, making the procedure optimal (up to a small constant). Simulations show that the new procedure performs significantly better in practice as well. We also contrast this problem with the better-known problem of noisy binary search.

  5. Optimized Projection Matrix for Compressive Sensing

    Jianping Xu


    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have generally assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
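    The mutual coherence this design minimizes is simple to compute directly; a minimal sketch (the helper name is hypothetical, and this is only the metric, not the paper's ETF-based optimizer):

```python
import math

def mutual_coherence(phi, psi):
    """Maximum absolute inner product between the normalized rows of the
    projection matrix phi and the normalized columns of the sparsifying
    matrix psi (lists of lists)."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    rows = [[x / norm(r) for x in r] for r in phi]
    cols = [[psi[i][j] for i in range(len(psi))] for j in range(len(psi[0]))]
    cols = [[x / norm(c) for x in c] for c in cols]
    return max(abs(sum(a * b for a, b in zip(r, c)))
               for r in rows for c in cols)
```

    Lower coherence means fewer samples suffice for recovery, which is why optimizing phi against a fixed psi (rather than drawing phi at random) can pay off.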

  6. Inverse lithography source optimization via compressive sensing.

    Song, Zhiyang; Ma, Xu; Gao, Jie; Wang, Jie; Li, Yanqiu; Arce, Gonzalo R


    Source optimization (SO) has emerged as a key technique for improving lithographic imaging over a range of process variations. Current SO approaches are pixel-based, where the source pattern is designed by solving a quadratic optimization problem using gradient-based algorithms or solving a linear programming problem. Most of these methods, however, are either computational intensive or result in a process window (PW) that may be further extended. This paper applies the rich theory of compressive sensing (CS) to develop an efficient and robust SO method. In order to accelerate the SO design, the source optimization is formulated as an underdetermined linear problem, where the number of equations can be much less than the source variables. Assuming the source pattern is a sparse pattern on a certain basis, the SO problem is transformed into a l1-norm image reconstruction problem based on CS theory. The linearized Bregman algorithm is applied to synthesize the sparse optimal source pattern on a representation basis, which effectively improves the source manufacturability. It is shown that the proposed linear SO formulation is more effective for improving the contrast of the aerial image than the traditional quadratic formulation. The proposed SO method shows that sparse-regularization in inverse lithography can indeed extend the PW of lithography systems. A set of simulations and analysis demonstrate the superiority of the proposed SO method over the traditional approaches.

  7. Optimization of PERT Network and Compression of Time

    Li Ping; Hu Jianbing; Gu Xinyi


    In the traditional methods of program evaluation and review technique (PERT) network optimization and compression of a project's time limit, the uncertainty of free float and total float, as well as the associated schedule risk, is not considered. The authors of this paper use the theory of dependent-chance programming to establish a new model for compressing project time and for multi-objective network optimization, which overcomes the shortcomings of the traditional methods and realizes the optimization of the PERT network directly. By calculating an example with genetic algorithms, the following conclusions are drawn: (1) compression of time is restricted by the cost ratio and the completion probability of the project; (2) activities with the maximal standard deviation of duration and minimal cost are compressed in order of precedence; (3) there are no optimal solutions but only noninferior solutions trading off chance against cost, and the optimal node time depends on the decision-maker's preference.
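    For context, the deterministic quantities that the dependent-chance model generalizes, earliest finish times and the project makespan of a PERT/CPM activity network, follow from a short longest-path recursion; a minimal sketch with a hypothetical helper name and toy network:

```python
def pert_times(tasks):
    """tasks: {name: (duration, [predecessors])}. Returns the earliest
    finish time of each task and the project makespan (deterministic
    PERT/CPM forward pass, memoized recursion over predecessors)."""
    finish = {}
    def ef(t):
        if t not in finish:
            dur, preds = tasks[t]
            finish[t] = dur + max((ef(p) for p in preds), default=0)
        return finish[t]
    for t in tasks:
        ef(t)
    return finish, max(finish.values())
```

    The paper's point is that when durations are uncertain, such single-number floats hide risk, which the dependent-chance formulation models explicitly.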

  8. Squish: Near-Optimal Compression for Archival of Relational Datasets

    Gao, Yihan; Parameswaran, Aditya


    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets.

  9. Optimizing compressive strength characteristics of hollow building ...


    This paper evaluates the compressive strength of sandcrete hollow building blocks when its sand fraction is partially replaced ... defines sandcrete blocks as composite materials made .... industry as well as the economy of Nigeria, if there is no.

  10. Compressive MUSIC with optimized partial support for joint sparse recovery

    Kim, Jong Min; Ye, Jong Chul


    The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. MMV problems have traditionally been addressed either by sensor array signal processing or by compressive sensing. However, recent breakthroughs in this area, such as compressive MUSIC (CS-MUSIC) or subspace-augmented MUSIC (SA-MUSIC), optimally combine compressive sensing (CS) and array signal processing such that $k-r$ supports are first found by CS and the remaining $r$ supports are determined by a generalized MUSIC criterion, where $k$ and $r$ denote the sparsity and the number of independent snapshots, respectively. Even though such a hybrid approach significantly outperforms the conventional algorithms, its performance heavily depends on the correct identification of the $k-r$ partial support by the compressive sensing step, which often deteriorates the overall performance. The main contribution of this paper is, therefore, to show that as long as $k-r+1$ correct supports are included in any $k$...

  11. Space, time, error, and power optimization of image compression transforms

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.


    The implementation of an image compression transform on one or more small, embedded processors typically involves stringent constraints on power consumption and form factor. Traditional methods of optimizing compression algorithm performance typically emphasize joint minimization of space and time complexity, often without significant consideration of arithmetic accuracy or power consumption. However, small autonomous imaging platforms typically require joint optimization of space, time, error (or accuracy), and power (STEP) parameters, which the authors call STEP optimization. In response to implementational constraints on space and power consumption, the authors have developed systems and techniques for STEP optimization that are based on recent research in VLSI circuit design, as well as extensive previous work in system optimization. Building on the authors' previous research in embedded processors as well as adaptive or reconfigurable computing, it is possible to produce system-independent STEP optimization that can be customized for a given set of system-specific constraints. This approach is particularly useful when algorithms for image and signal processing (ISP), computer vision (CV), or automated target recognition (ATR), expressed in a machine-independent notation, are mapped to one or more heterogeneous processors (e.g., digital signal processors or DSPs, SIMD mesh processors, or reconfigurable logic). Following a theoretical summary, this paper illustrates various STEP optimization techniques via case studies, for example, real-time compression of underwater imagery on board an autonomous vehicle. Optimization algorithms are taken from the literature, and error profiling/analysis methodologies developed in the authors' previous research are employed. This yields a more rigorous basis for the simulation and evaluation of compression algorithms on a wide variety of hardware models. In this study, image algebra is employed as the notation of choice.

  12. Compressed Air System Optimization: Case Study Food Industry in Indonesia

    Widayati, Endang; Nuzahar, Hasril


    Compressors and compressed air systems are among the most important utilities in industry: approximately 10% of the cost of electricity in industry is used to produce compressed air, so the potential for energy savings in compressors and compressed air systems is substantial. This study was conducted in an Indonesian food factory. Compressed air system optimization is a technique for determining the optimal operating conditions of compressors and compressed air systems; it includes evaluating energy needs, adjusting supply, eliminating or reconfiguring inefficient uses and operation, changing or supplementing equipment, and improving operating efficiency. This technique yields significant energy and cost savings. The potential savings identified by this study through measurement and optimization include: lowering the system pressure from 7.5 barg to 6.8 barg, which would reduce energy consumption and running costs by approximately 4.2%; switching off the GA110 and GA75 compressors, for annual savings of USD 52,947 (≈455,714 kWh); running the GA75 at light load or unloaded, for annual savings of USD 31,841 (≈270,685 kWh); and installing new 2 x 132 kW compressors and a 1 x 132 kW VSD compressor, for annual savings of USD 108,325 (≈928,500 kWh). A follow-up investment grade audit of the technical energy-saving potential and a cost-benefit analysis are still needed. This study is a best-practice example of how to save energy and improve energy performance in compressors and compressed air systems.
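    As a quick sanity check on the reported figures, the three USD/kWh pairs quoted in the abstract imply a mutually consistent electricity tariff of roughly USD 0.12 per kWh (a back-of-the-envelope check, not part of the study itself):

```python
# (annual USD savings, annual kWh savings) pairs quoted in the study
savings = [(52_947, 455_714), (31_841, 270_685), (108_325, 928_500)]

# Implied electricity price for each measure, in USD per kWh
rates = [usd / kwh for usd, kwh in savings]
```

    All three implied rates agree to within about one percent, which supports the internal consistency of the savings estimates.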

  13. On optimally partitioning a text to improve its compression

    Ferragina, Paolo; Venturini, Rossano


    In this paper we investigate the problem of partitioning an input string T in such a way that compressing its parts individually via a base compressor C yields a compressed output that is shorter than applying C over the entire T at once. This problem was introduced in the context of table compression, and then further elaborated and extended to strings and trees. Unfortunately, the literature offers poor solutions: namely, we know either a cubic-time algorithm for computing the optimal partition based on dynamic programming, or a few heuristics that do not guarantee any bounds on the efficacy of their computed partition, or algorithms that are efficient but work only in some specific scenarios (such as the Burrows-Wheeler Transform) and achieve compression performance that might be worse than the optimal partitioning by an $\Omega(\sqrt{\log n})$ factor. Therefore, computing the optimal solution efficiently is still open. In this paper we provide the first algorithm which is guaranteed to compute in $O(n \log_{1+\eps}...
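    The cubic-time dynamic program mentioned above is easy to state; a toy sketch, under the assumption that the base compressor C is zlib and the cost of a part is its zlib-compressed length (there are O(n^2) subproblems, each invoking a linear-time compressor, hence cubic overall):

```python
import zlib

def optimal_partition(t, cost=lambda s: len(zlib.compress(s))):
    """Minimum total compressed size over all ways of cutting the byte
    string t into contiguous parts, each compressed independently."""
    n = len(t)
    best = [0] + [None] * n        # best[i] = optimal cost of t[:i]
    cut = [0] * (n + 1)            # cut[i]  = start of the last part of t[:i]
    for i in range(1, n + 1):
        for j in range(i):
            c = best[j] + cost(t[j:i])
            if best[i] is None or c < best[i]:
                best[i], cut[i] = c, j
    parts, i = [], n               # walk the cut points back to recover parts
    while i > 0:
        parts.append(t[cut[i]:i])
        i = cut[i]
    return best[n], parts[::-1]
```

    Since j = 0 is always a candidate cut, the optimum is never worse than compressing the whole string at once; the paper's contribution is getting near this optimum far faster than this cubic baseline.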

  14. Thermoeconomic optimization of subcooled and superheated vapor compression refrigeration cycle

    Selbas, Resat; Kizilkan, Önder; Sencan, Arzu [Technical Education Faculty, Department of Mechanical Education, Süleyman Demirel University, Isparta 32260 (Turkey)]


    An exergy-based thermoeconomic optimization application is applied to a subcooled and superheated vapor compression refrigeration system. The advantage of using the exergy method of thermoeconomic optimization is that various elements of the system - i.e., condenser, evaporator, subcooling and superheating heat exchangers - can be optimized on their own. The application consists of determining the optimum heat exchanger areas with the corresponding optimum subcooling and superheating temperatures. A cost function is specified for the optimum conditions. All calculations are made for three refrigerants: R22, R134a, and R407c. Thermodynamic properties of refrigerants are formulated using the Artificial Neural Network methodology. (author)


    Yang Guoan; Zheng Nanning; Guo Shugang


    A new approach for designing the Biorthogonal Wavelet Filter Bank (BWFB) for the purpose of image compression is presented in this letter. The approach is decomposed into two steps. First, an optimal filter bank is designed in a theoretical sense, based on Vaidyanathan's coding gain criterion for SubBand Coding (SBC) systems. Then the above filter bank is optimized based on the criterion of Peak Signal-to-Noise Ratio (PSNR) in the JPEG2000 image compression system, resulting in a BWFB in a practical application sense. With this approach, a series of BWFBs for a specific class of applications related to image compression, such as remote sensing images, can be designed quickly. Here, new 5/3 and 9/7 BWFBs are presented, based on the above approach, for remote sensing image compression applications. Experiments show that the two filter banks perform comparably to the CDF 9/7 and LT 5/3 filters in the JPEG2000 standard; at the same time, the coefficients and the lifting parameters of the lifting scheme are all rational, which brings computational advantages and ease of VLSI implementation.

  16. Optimization of wavelet decomposition for image compression and feature preservation.

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T


    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern, such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet, whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269, produces the best preservation outcomes for all tested microcalcification features, including the peak signal-to-noise ratio, the contrast, and the figure of merit, in the wavelet lossy compression scheme. By analyzing the spectrum of the wavelet filters, we can characterize the compression outcomes and feature-preservation behavior as a function of the wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  17. Optimizing measurements for feature-specific compressive sensing.

    Mahalanobis, Abhijit; Neifeld, Mark


    While the theory of compressive sensing has been very well investigated in the literature, comparatively little attention has been given to the issues that arise when compressive measurements are made in hardware. For instance, compressive measurements are always corrupted by detector noise. Further, the number of photons available is the same whether a conventional image is sensed or multiple coded measurements are made in the same interval of time. Thus it is essential that the effects of noise and the constraint on the number of photons be taken into account in the analysis, design, and implementation of a compressive imager. In this paper, we present a methodology for designing a set of measurement kernels (or masks) that satisfy the photon constraint and are optimum for making measurements that minimize the reconstruction error in the presence of noise. Our approach finds the masks one at a time, by determining the vector that yields the best possible measurement for reducing the reconstruction error. The subspace represented by the optimized mask is removed from the signal space, and the process is repeated to find the next best measurement. Results of simulations are presented that show that the optimum masks always outperform reconstructions based on traditional feature measurements (such as principal components), and are also better than the conventional images in high-noise conditions.

  18. Optimization of compressive strength of zirconia based dental composites

    U V Hambire; V K Tripathi


    Dental composites are tooth-coloured restorative material used by dentists for various applications. Restoration of a lost tooth structure requires a material having mechanical as well as aesthetic properties similar to that of tooth. This poses challenges to engineers and the dentist alike. Dental composites consist of a matrix and a dispersed phase called filler, which are mainly responsible for its mechanical properties. Most commonly used matrix is bisphenol glycidyl methacrylate (Bis-GMA) and triethylene glycol dimethacrylate (TEGMA). Silica and glass are conventional fillers used in the past. Recently, zirconia is being used due to its improved mechanical properties. A study was conducted to evaluate the contribution of zirconia to the mechanical properties in general and compressive strength in particular. We have attempted to make an experimental dental composite with a conglomerate of nanofillers, namely, zirconia, glass and silica, and optimize this filler volume percentage and obtain an optimum compressive strength for the experimental dental composite.

  19. Toeplitz block circulant matrix optimized with particle swarm optimization for compressive imaging

    Tao, Huifeng; Yin, Songfeng; Tang, Cong


    Compressive imaging is an imaging approach based on compressive sensing theory, which can capture a high-resolution image through a small set of measurements. At the core of compressive imaging, the design of the measurement matrix is essential to ensuring that the image can be recovered from the measurements. Due to its fast computation and easy hardware implementation, the Toeplitz block circulant matrix is proposed to realize the encoded samples. The measurement matrix is usually optimized to improve image reconstruction quality. However, existing optimization methods easily destroy the matrix structure when applied to the Toeplitz block circulant matrix, and their deterministic iterative processes are inflexible, because they require the optimization task to satisfy certain mathematical properties. To overcome this problem, a novel method for optimizing the Toeplitz block circulant matrix, based on the particle swarm optimization intelligent algorithm, is proposed in this paper. The objective function is established by approaching the target matrix, namely the Gram matrix truncated by the Welch threshold. The optimized object is the vector composed of the free entries, instead of the Gram matrix itself. The experimental results indicate that our method can optimize the Toeplitz block circulant measurement matrix while preserving the matrix structure, resulting in improved reconstruction quality.

  20. Split Bregman's optimization method for image construction in compressive sensing

    Skinner, D.; Foo, S.; Meyer-Bäse, A.


    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the l1 and l2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the problems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper will demonstrate the effectiveness of the Split Bregman method on sonar images.
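    The "decoupling" referred to above works because the l1 part, once split from the l2 part, has a closed-form minimizer: the soft-thresholding (shrink) operator applied element-wise. A generic sketch of that standard step (not the paper's full sonar pipeline; the helper name is hypothetical):

```python
def shrink(x, lam):
    """Soft-thresholding: the exact minimizer over d of
    lam*|d| + 0.5*(d - x)**2, used to solve the l1 subproblem
    in each Split Bregman sweep."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

    Each Bregman sweep alternates this cheap closed-form l1 step with an l2 (least-squares) step, which is what makes the overall method fast.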

  1. Data compression on board the PLANCK Satellite Low Frequency Instrument: optimal compression rate

    Gaztañaga, E; Romeo, A; Fosalba, P; Elizalde, E


    Data on board the future PLANCK Low Frequency Instrument (LFI), to measure the Cosmic Microwave Background (CMB) anisotropies, consist of $N$ differential temperature measurements, spanning a range of values we shall call $R$. Preliminary studies and telemetry allocation indicate the need to compress these data by a ratio of $c_r \gtrsim 10$. Here we present a study of entropy for (correlated multi-Gaussian discrete) noise, showing how the optimal compression $c_{r,opt}$, for a linearly discretized data set with $N_{bits}=\log_2 N_{max}$ bits, is given by $c_r \simeq N_{bits}/\log_2(\sqrt{2\pi e}\,\sigma_e/\Delta)$, where $\sigma_e \equiv (\det C)^{1/2N}$ is an effective noise rms given by the covariance matrix $C$, and the digitization step $\Delta$ needs to be as small as the instrumental white-noise RMS: $\Delta \simeq \sigma_T$ (after averaging). Within the currently proposed $N_{bits}=16$ representation, a linear analogue-to-digital converter (ADC) will allow the digital storage of a large dynamic range of differential temperature $R= N_{max} ...
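    The quoted rate formula is straightforward to evaluate; for instance, with a 16-bit representation and a digitization step equal to the effective noise rms, it predicts a compression ratio of roughly 7.8 (a hypothetical numerical illustration; the function name is ours):

```python
import math

def optimal_cr(n_bits, sigma_e, delta):
    """c_r ~ N_bits / log2(sqrt(2*pi*e) * sigma_e / Delta): the optimal
    lossless compression ratio for linearly quantized Gaussian noise."""
    return n_bits / math.log2(math.sqrt(2 * math.pi * math.e) * sigma_e / delta)
```

    Note that a finer digitization step (smaller Delta) raises the per-sample entropy and therefore lowers the achievable compression ratio.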

  2. Optimization Study on a Single-cylinder Compressed Air Engine

    YU Qihui; CAI Maolin; SHI Yan; XU Qiyue


    Current research on the compressed air engine (CAE) has mainly focused on simulations and system integration. However, the energy efficiency and output torque of the CAE are limited, which restricts its application and popularization. In this paper, the working principles of the CAE are briefly introduced. To set a foundation for the study of CAE optimization, a basic mathematical model of the working processes is set up. A pressure-compensated valve which can reduce the inertia force of the valve is proposed. To verify the mathematical model, a prototype with the newly designed pressure-compensated intake valve is built and experiments are carried out; simulation and experimental results of the CAE are compared, and the pressures inside the cylinder and the output torque of the CAE are obtained. Orthogonal design and grey relational analysis are utilized to optimize the structural parameters. The experimental and optimized results show that, first, the pressure inside the cylinder has the same trend in both the simulation and experimental curves. Second, the highest average output torque is obtained at the highest intake pressure and the lowest rotation speed. Third, the optimization of the single-cylinder CAE improves the working efficiency from an original 21.95% to 50.1%, an overall increase of 28.15 percentage points, and the average output torque also increases, from 22.0475 N·m to 22.439 N·m. This research designs a single-cylinder CAE with a pressure-compensated intake valve and proposes a structural parameter design method that improves single-cylinder CAE performance.

  3. Optimizing chest compressions during delivery-room resuscitation.

    Wyckoff, Myra H; Berg, Robert A


    There is a paucity of data to support the recommendations for cardiac compressions for the newly born. Techniques, compression to ventilation ratios, hand placement, and depth of compression guidelines are generally based on expert consensus, physiologic plausibility, and data from pediatric and adult models.


    Nishat Kanvel


    This paper presents an adaptive lifting scheme with a Particle Swarm Optimization (PSO) technique for image compression. PSO is used to improve the accuracy of the prediction function used in the lifting scheme. This scheme is applied to image compression, and parameters such as PSNR, compression ratio, and the visual quality of the image are calculated. The proposed scheme is compared with the existing methods.

  5. An Optimal Seed Based Compression Algorithm for DNA Sequences

    Pamela Vinitha Eric


    This paper proposes a seed-based lossless compression algorithm to compress a DNA sequence, using a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms.
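    For scale, any repeat-exploiting DNA compressor like the one described here is measured against the trivial 2-bits-per-base packing of the {A, C, G, T} alphabet (a baseline sketch, not the paper's seed-based algorithm; helper names are ours):

```python
def pack_dna(seq):
    """Pack an A/C/G/T string into an integer at 2 bits per base,
    most significant base first. Returns (length, packed_bits)."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    bits = 0
    for b in seq:
        bits = (bits << 2) | code[b]
    return len(seq), bits

def unpack_dna(n, bits):
    """Invert pack_dna: recover the n-base string from the packed integer."""
    inv = "ACGT"
    return "".join(inv[(bits >> (2 * (n - 1 - i))) & 3] for i in range(n))
```

    Dictionary-based schemes beat this 2-bit floor precisely by replacing long repeats (with a few tolerated mismatches) with short references.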

  6. Adjoint-based Optimal Flow Control for Compressible DNS

    Otero, J Javier; Sandberg, Richard D


    A novel adjoint-based framework oriented to optimal flow control in compressible direct numerical simulations is presented. Also, a new formulation of the adjoint characteristic boundary conditions is introduced, which enhances the stability of the adjoint simulations. The flow configuration chosen as a case study consists of a two-dimensional open cavity flow with aspect ratio $L/H=3$ and Reynolds number $Re=5000$. This flow configuration is of particular interest, as the turbulent and chaotic nature of separated flows pushes the adjoint approach to its limit. The target of the flow actuation, defined as cost, is the reduction of the pressure fluctuations at the sensor location. To exploit the advantages of the adjoint method, a large number of control parameters is used. The control consists of an actuating sub-domain where a two-dimensional body force is applied at every point within the sub-volume. This results in a total of $2.256 \cdot 10^6$ control parameters. The final actuation achieved a successful ...

  7. Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz.

    Wang, Qingzhu; Chen, Xiaoming; Wei, Mengying; Miao, Zhuang


    The existing techniques for simultaneous encryption and compression of images rely on lossy compression. Their reconstruction performance does not meet the accuracy required for medical images because most of them are not applicable to three-dimensional (3D) medical image volumes, which are intrinsically represented by tensors. We propose a tensor-based algorithm using tensor compressive sensing (TCS) to address these issues. Alternating least squares is further used to optimize the TCS with measurement matrices encrypted by discrete 3D Lorenz. The proposed method preserves the intrinsic structure of tensor-based 3D images and achieves a better balance of compression ratio, decryption accuracy, and security. Furthermore, the characteristic of the tensor product can be used as additional keys to make unauthorized decryption harder. Numerical simulation results verify the validity and the reliability of this scheme.

  8. Information Content in Uniformly Discretized Gaussian Noise:. Optimal Compression Rates

    Romeo, August; Gaztañaga, Enrique; Barriga, Jose; Elizalde, Emilio

    We approach the theoretical problem of compressing a signal dominated by Gaussian noise. We present expressions for the compression ratio which can be reached, in the light of Shannon's noiseless coding theorem, for a linearly quantized stochastic Gaussian signal (noise). The compression ratio decreases logarithmically with the amplitude of the frequency spectrum P(f) of the noise. Entropy values and compression rates are shown to depend on the shape of this power spectrum, given different normalizations. The cases of white noise (w.n.), 1/f^n power-law noise (including 1/f noise), (w.n. + 1/f) noise, and piecewise (w.n. + 1/f | w.n. + 1/f^2) noise are discussed, while quantitative behaviors and useful approximations are provided.
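The entropy-based compression ratio for a linearly quantized Gaussian signal can be sketched numerically as follows; the reference word size `b = 16` bits and the quantization step are assumptions for illustration, not values from the paper:

```python
import numpy as np
from math import erf, sqrt

def quantized_gaussian_entropy(sigma, step):
    """Shannon entropy (bits/sample) of a zero-mean Gaussian signal after
    uniform (linear) quantization with the given step."""
    kmax = int(np.ceil(8 * sigma / step)) + 1           # cover +/- 8 sigma
    edges = (np.arange(-kmax, kmax + 1) - 0.5) * step   # quantizer bin edges
    cdf = np.array([0.5 * (1 + erf(e / (sigma * sqrt(2)))) for e in edges])
    p = np.diff(cdf)                                    # per-bin probabilities
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Compression ratio relative to a fixed b-bit raw representation.
b = 16
H = quantized_gaussian_entropy(sigma=1.0, step=0.05)
ratio = b / H
```

For a small step the entropy approaches the fine-quantization limit 0.5*log2(2*pi*e*sigma^2) - log2(step), so the achievable ratio grows only logarithmically as the noise amplitude shrinks, matching the abstract's observation.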

  9. Optimized Speech Compression Algorithm Based on Wavelets Techniques and its Real Time Implementation on DSP

    Noureddine Aloui


    This paper presents an optimized speech compression algorithm using the discrete wavelet transform, and its real-time implementation on a fixed-point digital signal processor (DSP). The optimized speech compression algorithm has the advantages of low complexity, low bit rate, and high speech coding efficiency, achieved by adding a voice activity detector (VAD) module before the application of the discrete wavelet transform. The VAD module avoids computing the discrete wavelet coefficients during inactive voice segments. In addition, a real-time implementation of the optimized speech compression algorithm is performed on a fixed-point processor. The optimized and the original algorithms are evaluated and compared in terms of CPU time (sec), cycle count (MCPS), memory consumption (KB), compression ratio (CR), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and normalized root mean square error (NRMSE).
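A toy sketch of the VAD-before-DWT idea, assuming a simple frame-energy detector and a one-level Haar transform (the paper's VAD and wavelet choices may differ, and the frame length and thresholds here are arbitrary):

```python
import numpy as np

def haar_dwt(frame):
    """One-level Haar DWT: approximation and detail coefficients."""
    even, odd = frame[0::2], frame[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def compress_frames(signal, frame_len=256, energy_thresh=1e-3):
    """Energy-threshold VAD: silent frames are skipped entirely, so no
    wavelet coefficients are computed for them; active frames get a Haar
    DWT whose small details are zeroed for later entropy coding."""
    coded = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        if np.mean(frame ** 2) < energy_thresh:
            coded.append(None)               # inactive: no DWT computed
        else:
            approx, detail = haar_dwt(frame)
            detail[np.abs(detail) < 0.01] = 0.0
            coded.append((approx, detail))
    return coded

# toy input: 256 samples of silence followed by a 440 Hz tone at 8 kHz
signal = np.concatenate([np.zeros(256),
                         np.sin(2 * np.pi * 440 * np.arange(256) / 8000)])
coded = compress_frames(signal)
```

Skipping the transform on inactive frames is exactly where the claimed CPU-time saving comes from on a fixed-point DSP.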

  10. Optimal context quantization in lossless compression of image data sequences

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl


    In image compression context-based entropy coding is commonly used. A critical issue to the performance of context-based image coding is how to resolve the conflict of a desire for large templates to model high-order statistic dependency of the pixels and the problem of context dilution due to in...


    Anna, Mîra; Carton, Ann-Katherine; Muller, Serge; Payan, Yohan


    The aim of this work is to develop a biomechanical Finite Element (FE) breast model in order to analyze different breast compression strategies and their impact on image quality. Large breast deformations will be simulated using this FE model. Particular attention will be given to the computation of the initial stress in the model due to gravity and to the boundary conditions imposed by the thorax anatomy. Finally, the model will be validated by comparing the estimated...

  12. Parameter optimization of pulse compression in ultrasound imaging systems with coded excitation.

    Behar, Vera; Adam, Dan


    A linear array imaging system with coded excitation is considered, where the proposed excitation/compression scheme maximizes the signal-to-noise ratio (SNR) and minimizes sidelobes at the output of the compression filter. A pulse with linear frequency modulation (LFM) is used for coded excitation. The excitation/compression scheme is based on fast digital mismatched filtering. The parameter optimization of the excitation/compression scheme includes (i) choice of an optimal filtering function for the mismatched filtering; (ii) choice of an optimal window function for tapering of the chirp amplitude; (iii) optimization of the chirp-to-transducer bandwidth ratio; (iv) choice of an appropriate n-bit quantizer. The simulation results show that the excitation/compression scheme can be implemented as a Dolph-Chebyshev filter including amplitude tapering of the chirp with a Lanczos window. An example of such an optimized system is given where the chirp bandwidth is chosen to be 2.5 times the transducer bandwidth and equals 6 MHz: the sidelobes are suppressed to -80 dB for a central frequency of 4 MHz, and to -94 dB for a central frequency of 8 MHz. The corresponding improvement of the SNR is 18 and 21 dB, respectively, when compared to a conventional short-pulse imaging system. Simulation of B-mode images demonstrates the advantage of coded excitation systems in detecting regions with low contrast.
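The chirp-plus-mismatched-filter chain can be sketched as follows; the sampling rate and pulse duration are assumptions, and numpy's Hamming window stands in for the Dolph-Chebyshev/Lanczos tapers discussed in the abstract:

```python
import numpy as np

fs = 40e6                        # sampling rate (assumed)
T, B = 4e-6, 6e6                 # pulse duration (assumed) and the 6 MHz bandwidth
n = round(T * fs)                # 160 samples
t = np.arange(n) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)    # linear FM (LFM) pulse

# Mismatched compression filter: time-reversed conjugate chirp tapered by a
# window to trade a slightly wider mainlobe for much lower sidelobes.
h = np.conj(chirp[::-1]) * np.hamming(n)
compressed = np.convolve(chirp, h)               # filter output on receive
mainlobe = np.max(np.abs(compressed))
```

The compressed pulse concentrates the chirp's energy near zero lag; in a real system the window shape and chirp-to-transducer bandwidth ratio are then tuned jointly, as items (ii) and (iii) above describe.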

  13. Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging.

    Ke, Jun; Lam, Edmund Y


    Compressive measurements benefit low-light-level imaging (L3-imaging) due to the significantly improved measurement signal-to-noise ratio (SNR). However, as with other compressive imaging (CI) systems, compressive L3-imaging is slow. To accelerate the data acquisition, we develop an algorithm to compute the optimal binary sensing matrix that can minimize the image reconstruction error. First, we make use of the measurement SNR and the reconstruction mean square error (MSE) to define the optimal gray-value sensing matrix. Then, we construct an equality-constrained optimization problem to solve for a binary sensing matrix. From several experimental results, we show that the latter delivers a similar reconstruction performance as the former, while having a smaller dynamic range requirement to system sensors.
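A minimal sketch of binary-matrix compressive measurement, with a random 0/1 matrix as a placeholder for the paper's optimized design and pseudoinverse recovery standing in for their reconstruction algorithm; all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                    # scene size and number of measurements (toy)

# Random 0/1 sensing matrix: a placeholder for the optimised binary matrix,
# whose entries the paper chooses to minimise reconstruction MSE.
Phi = rng.integers(0, 2, size=(m, n)).astype(float)

x = np.zeros(n)                  # sparse scene: a few bright pixels
x[[5, 20, 41]] = [1.0, 0.7, 0.4]
y = Phi @ x                      # compressive measurements (m << n)

# Minimum-norm least-squares recovery as a simple stand-in for the
# reconstruction step; sparse solvers would do better in practice.
x_hat = np.linalg.pinv(Phi) @ y
```

Binary weights matter in hardware because a 0/1 mask is directly realizable on a spatial light modulator, while a gray-value matrix demands a larger sensor dynamic range, which is the trade-off the abstract highlights.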

  14. Optimal Design and Experimental characterisation of short optical pulse compression using CDPF

    Yujun, Qian; Quist, S.


    We present the optimal design and experimental characterisation of optical pulse compression using a comb-like dispersion-profiled fibre (CDPF). A pulse train at 10 GHz with a pulse width of 1 ps and side-lobe suppression of 30 dB can be obtained.

  15. Optimization of composite sandwich cover panels subjected to compressive loadings

    Cruz, Juan R.


    An analysis and design method is presented for composite sandwich cover panels that includes transverse shear effects and damage tolerance considerations. This method is incorporated into a sandwich optimization computer program entitled SANDOP. As a demonstration of its capabilities, SANDOP is used in the present study to design optimized composite sandwich cover panels for transport aircraft wing applications. The results of this design study indicate that optimized composite sandwich cover panels have approximately the same structural efficiency as stiffened composite cover panels designed to satisfy individual constraints. The results also indicate that in-plane stiffness requirements have a large effect on the weight of these composite sandwich cover panels at higher load levels. Increasing the maximum allowable strain and the upper percentage limit of the 0 degree and +/- 45 degree plies can yield significant weight savings. The results show that the structural efficiency of these optimized composite sandwich cover panels is relatively insensitive to changes in core density. Thus, core density should be chosen by criteria other than minimum weight (e.g., damage tolerance, ease of manufacture, etc.).

  16. Optimization of the dye-sensitized solar cell performance by mechanical compression.

    Meen, Teen Hang; Tsai, Jenn Kai; Tu, Yu Shin; Wu, Tian Chiuan; Hsu, Wen Dung; Chang, Shoou-Jinn


    In this study, a P25 titanium dioxide (TiO2) nanoparticle (NP) thin film was coated on a fluorine-doped tin oxide (FTO) glass substrate by the doctor blade method. The film was then compressed mechanically to form the photoanode of dye-sensitized solar cells (DSSCs). Various compression pressures on the TiO2 NP film were tested to optimize the performance of the DSSCs. The mechanical compression reduces the TiO2 inter-particle distance, improving the electron transport efficiency. UV-vis spectrophotometry and electrochemical impedance spectroscopy (EIS) were employed to quantify the light-harvesting efficiency and the charge transport impedance at the various interfaces in the DSSC, respectively. The incident photon-to-current conversion efficiency was also monitored. The results show that when the DSSC was fabricated with the TiO2 NP thin film compressed at a pressure of 279 kg/cm², the minimum resistance of 9.38 Ω at the dye/TiO2 NP/electrolyte interfaces, a maximum short-circuit photocurrent density of 15.11 mA/cm², and a photoelectric conversion efficiency of 5.94% were observed. Compared to a DSSC fabricated with a non-compressed TiO2 NP thin film, the overall conversion efficiency is improved by over 19.5%. The study proves that under suitable compression pressure the performance of DSSCs can be optimized.

  17. Improved cuckoo search with particle swarm optimization for classification of compressed images

    Vamsidhar Enireddy; Reddi Kiran Kumar


    The need for a general-purpose Content Based Image Retrieval (CBIR) system for huge image databases has attracted information-technology researchers and institutions to CBIR technique development. These techniques include image feature extraction, segmentation, feature mapping, representation, semantics, indexing and storage, and image similarity-distance measurement and retrieval, making CBIR system development a challenge. Since medical images are large, running to megabits of data, they are compressed to reduce their size for storage and transmission. This paper investigates the medical image retrieval problem for compressed images. An improved image classification algorithm for CBIR is proposed. In the proposed method, raw images are compressed using the Haar wavelet. Features are extracted using a Gabor filter and a Sobel edge detector. The extracted features are classified using a Partial Recurrent Neural Network (PRNN). Since training the parameters of a neural network is NP-hard, a hybrid Particle Swarm Optimization (PSO) – Cuckoo Search (CS) algorithm is proposed to optimize the learning rate of the neural network.

  18. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin


    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of the reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction, for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.


    令玉林; 周建红; 李国斌; 刘立华


    A dithiocarbamate (DTC) heavy metal chelator (RDTC) was prepared from tetraethylenepentamine, 1,2-dichloroethane and piperazine. Its structure was characterized by IR spectra. The effects of dosage, initial pH value and turbidity-causing substances on Cu(II) removal from simulated copper-containing wastewater were investigated. The sedimentation property of the polymer precipitates was determined by turbidity measurement, and the stability of the sediment was studied by a leaching test. Results showed that in 250 mL of 50 mg·L⁻¹ free Cu²⁺, CuCA and CuEDTA wastewater, with additions of 3.4, 3.6 and 3.8 mL of RDTC respectively, the Cu(II) removal rates were all above 99.5%. More than 99.3% of Cu(II) was removed at pH values between 3 and 11. Turbidity-causing substances were favorable for the removal of Cu²⁺, and the turbidity of the wastewater decreased to less than 10 NTU after 10 minutes of deposition, which indicated that the formed floc had excellent sedimentation properties. Moreover, the sediment had good stability at pH values of no less than 5.

  20. Lossless image data sequence compression using optimal context quantization

    Forchhammer, Søren; WU, Xiaolin; Andersen, Jakob Dahl


    ...conditioning states. A solution giving the minimum adaptive code length for a given data set is presented (when the cost of the context quantizer is neglected). The resulting context quantizers can be used for sequential coding of the sequence X0, X1, X2, .... A coding scheme based on binary decomposition and context quantization for coding the binary decisions is presented and applied to digital maps and α-plane sequences. The optimal context quantization is also used to evaluate existing heuristic context quantizations.

  1. Optimal Numerical Schemes for Compressible Large Eddy Simulations

    Edoh, Ayaboe; Karagozian, Ann; Sankaran, Venkateswaran; Merkle, Charles


    The design of optimal numerical schemes for subgrid scale (SGS) models in LES of reactive flows remains an area of continuing challenge. It has been shown that significant differences in solution can arise due to the choice of the SGS model's numerical scheme and its inherent dissipation properties, which can be exacerbated in combustion computations. This presentation considers the individual roles of artificial dissipation, filtering, secondary conservation (Kinetic Energy Preservation), and collocated versus staggered grid arrangements with respect to the dissipation and dispersion characteristics and their overall impact on the robustness and accuracy for time-dependent simulations of relevance to reacting and non-reacting LES. We utilize von Neumann stability analysis in order to quantify these effects and to determine the relative strengths and weaknesses of the different approaches. Distribution A: Approved for public release, distribution unlimited. Supported by AFOSR (PM: Dr. F. Fahroo).



    Excellent anti-compression mechanical properties, i.e. a high collapse pressure, have become an essential feature of new coronary stents. Determining the design parameters of the stent is thus the key to improving stent quality. An integrated approach using a radial basis function neural network (RBFNN) and a genetic algorithm (GA) for optimizing the anti-compression mechanical properties of stents is presented in this paper. First, finite element simulation and the RBFNN are used to map the complex non-linear relationship between the collapse pressure and the stent design parameters. Then the GA is employed, with a fitness function based on the RBFNN model, to arrive at an optimum stent configuration by maximizing the collapse pressure. The results of a numerical experiment demonstrate that the combination of RBFNN and GA is an effective approach for optimizing the mechanical properties of stents.

  3. Compressed Air Energy Storage: Optimal Performance and Techno-Economical Indices

    Peter Vadasz


    A thermodynamic and techno-economical analysis of a Compressed Air Energy Storage system subjected to an exogenous periodic electricity price function of the interconnection is presented. The fundamental parameters affecting the thermodynamic performance and the techno-economical cost-benefit indices are identified, and corresponding optimisation problems are formulated. The results of the analysis make it possible to obtain the optimal values of the fundamental plant parameters to be used in the design process.

  4. Optimal Numerical Schemes for Time Accurate Compressible Large Eddy Simulations: Comparison of Artificial Dissipation and Filtering Schemes



  5. Optimal Acoustic Attenuation of Weakly Compressible Media Permeated with Air Bubbles

    LIANG Bin; CHENG Jian-Chun


    Based on fuzzy logic (FL) and a genetic algorithm (GA), we present an optimization method to obtain the optimal acoustic attenuation of a longitudinal acoustic wave propagating in a weakly compressible medium permeated with air bubbles. In the optimization, the parameters of the size distribution of bubbles in the medium are optimized to provide uniformly high acoustic attenuation in the frequency band of interest. Compared with other traditional optimization methods, the unique advantage of the present method is that it can locate the global optimum quickly and effectively without the need to know the mathematical model precisely. As illustrated by a numerical simulation, the method is effective and essential in enhancing the acoustic attenuation of such a medium in an optimal manner. The bubbly medium with optimized structural parameters can effectively attenuate longitudinal waves at intermediate frequencies, with an acoustic attenuation approximating a constant value of 10 dB/cm. Such bubbly media with optimal acoustic attenuation may be applied to design acoustic absorbents with a broader attenuation band and higher efficiency.

  6. Optimization of Channel Coding for Transmitted Image Using Quincunx Wavelets Transforms Compression

    Mustapha Khelifi


    Many images you see on the Internet today have undergone compression for various reasons. Image compression can benefit users by having pictures load faster and webpages use less space on a web host. Image compression does not reduce the pixel dimensions of an image but instead compresses the data that makes up the image into a smaller size. In the case of image transmission, noise degrades the quality of the received image, which obliges us to use channel coding techniques to protect the data against channel noise. The Reed-Solomon code is one of the most popular channel coding techniques used to correct errors in many systems (wireless or mobile communications, satellite communications, digital television/DVB, high-speed modems such as ADSL, xDSL, etc.). Since there are many possible choices for the input parameters of an RS code, the question arises of the optimum input that can protect the data with the minimum number of redundant bits. In this paper we use a genetic algorithm to optimize the selection of the input parameters of the RS code according to the channel conditions, which reduces the number of bits needed to protect the data while maintaining high quality of the received image.
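The parameter-selection problem can be sketched with an exhaustive search standing in for the genetic algorithm: for a fixed RS block length, pick the largest message size k whose t = (n - k)/2 correction capability meets a target block error rate. The symbol error rate and the target below are assumptions for illustration:

```python
from math import comb

def block_error_prob(n, t, p_sym):
    """Probability an RS(n, k) block is uncorrectable: more than t of the
    n symbols are in error on a memoryless channel."""
    ok = sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i) for i in range(t + 1))
    return 1.0 - ok

def best_k(n=255, p_sym=0.01, target=1e-6):
    """Largest k (fewest parity symbols) meeting the target block error
    rate; n - k is kept even so every parity symbol buys correction."""
    for k in range(n - 2, 0, -2):
        t = (n - k) // 2
        if block_error_prob(n, t, p_sym) <= target:
            return k
    return None

k = best_k()
```

A GA becomes worthwhile when n, k and the modulation/channel model vary jointly and the search space is too large to enumerate; the objective is the same: minimum redundancy subject to the quality target.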

  7. Optimal rate allocation for joint compression and classification in JPEG 2000

    Tabesh, Ali; Marcellin, Michael W.; Neifeld, Mark A.


    We present a framework for optimal rate allocation to image subbands to minimize the distortion in the joint compression and classification of JPEG2000-compressed images. The distortion due to compression is defined as a weighted linear combination of the mean-square error (MSE) and the loss in the Bhattacharyya distance (BD) between the class-conditional distributions of the classes. Lossy compression with JPEG2000 is accomplished via deadzone uniform quantization of wavelet subbands. Neglecting the effect of the deadzone, expressions are derived for the distortion in the case of two classes with generalized Gaussian distributions (GGDs), based on the high-rate analysis of Poor. In this regime, the distortion function takes the form of a weighted MSE (WMSE) function, which can be minimized using reverse water-filling. We present experimental results based on synthetic data to evaluate the efficacy of the proposed rate allocation scheme. The results indicate that by varying the weight factor balancing the MSE and the Bhattacharyya distance, we can control the trade-off between these two terms in the distortion function.
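The reverse water-filling step mentioned above can be sketched as follows (a generic textbook implementation, not the paper's code; the subband variances and rate target in the example are arbitrary):

```python
import numpy as np

def reverse_waterfill(variances, R):
    """Allocate a total rate R (bits) across subbands by the classic
    reverse water-filling rule: D_i = min(theta, sigma_i^2) and
    R_i = 0.5 * log2(sigma_i^2 / D_i), with theta found by bisection."""
    v = np.asarray(variances, dtype=float)

    def total_rate(theta):
        return np.sum(0.5 * np.log2(np.maximum(v / theta, 1.0)))

    lo, hi = 1e-12, float(v.max())
    for _ in range(200):                 # bisect on the water level theta
        mid = np.sqrt(lo * hi)           # geometric midpoint: theta spans decades
        if total_rate(mid) > R:
            lo = mid
        else:
            hi = mid
    theta = np.sqrt(lo * hi)
    rates = 0.5 * np.log2(np.maximum(v / theta, 1.0))
    return rates, theta

# three subbands: the weakest one falls below the water level and gets 0 bits
rates, theta = reverse_waterfill([4.0, 1.0, 0.25], R=2.0)
```

Weighting each variance (as the WMSE formulation in the abstract does) only rescales the inputs, so the same routine covers the joint MSE/Bhattacharyya objective.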

  8. Development of optimization models for the set behavior and compressive strength of sodium activated geopolymer pastes

    Fillenwarth, Brian Albert

    As large countries such as China begin to industrialize and concerns about global warming continue to grow, there is an increasing need for more environmentally friendly building materials. One promising material known as a geopolymer can be used as a portland cement replacement and in this capacity emits around 67% less carbon dioxide. In addition to potentially reducing carbon emissions, geopolymers can be synthesized with many industrial waste products such as fly ash. Although the benefits of geopolymers are substantial, there are a few difficulties with designing geopolymer mixes which have hindered widespread commercialization of the material. One such difficulty is the high variability of the materials used for their synthesis. In addition to this, interrelationships between mix design variables and how these interrelationships impact the set behavior and compressive strength are not well understood. A third complicating factor with designing geopolymer mixes is that the role of calcium in these systems is not well understood. In order to overcome these barriers, this study developed predictive optimization models through the use of genetic programming with experimentally collected set times and compressive strengths of several geopolymer paste mixes. The developed set behavior models were shown to predict the correct set behavior from the mix design over 85% of the time. The strength optimization model was shown to be capable of predicting compressive strengths of geopolymer pastes from their mix design to within about 1 ksi of their actual strength. In addition to this the optimization models give valuable insight into the key factors influencing strength development as well as the key factors responsible for flash set and long set behaviors in geopolymer pastes. A method for designing geopolymer paste mixes was developed from the generated optimization models. 
This design method provides an invaluable tool for use in future geopolymer research as well as

  9. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    Boulgouris, N V; Tzovaras, D; Strintzis, M G


    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.

  10. Design, Optimization, and Evaluation of Al-2139 Compression Panel with Integral T-Stiffeners

    Mulani, Sameer B.; Havens, David; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert


    A T-stiffened panel was designed and optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis and design tool named EBF3PanelOpt. The panel was designed for a compression loading configuration, a realistic load case for a typical aircraft skin-stiffened panel. The panel was integrally machined from 2139 aluminum alloy plate and was tested in compression. The panel was loaded beyond buckling and strains and out-of-plane displacements were extracted from 36 strain gages and one linear variable displacement transducer. A digital photogrammetric system was used to obtain full field displacements and strains on the smooth (unstiffened) side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high-fidelity nonlinear finite element analysis.

  11. An optimal order interior penalty discontinuous Galerkin discretization of the compressible Navier Stokes equations

    Hartmann, Ralf; Houston, Paul


    In this article we propose a new symmetric version of the interior penalty discontinuous Galerkin finite element method for the numerical approximation of the compressible Navier-Stokes equations. Here, particular emphasis is devoted to the construction of an optimal numerical method for the evaluation of certain target functionals of practical interest, such as the lift and drag coefficients of a body immersed in a viscous fluid. With this in mind, the key ingredients in the construction of the method include: (i) an adjoint consistent imposition of the boundary conditions; (ii) an adjoint consistent reformulation of the underlying target functional of practical interest; (iii) design of appropriate interior penalty stabilization terms. Numerical experiments presented within this article clearly indicate the optimality of the proposed method when the error is measured in terms of both the L2-norm, as well as for certain target functionals. Computational comparisons with other discontinuous Galerkin schemes proposed in the literature, including the second scheme of Bassi and Rebay, cf. [F. Bassi, S. Rebay, GMRES discontinuous Galerkin solution of the compressible Navier-Stokes equations, in: B. Cockburn, G. Karniadakis, C.-W. Shu (Eds.), Discontinuous Galerkin Methods, Lecture Notes in Comput. Sci. Engrg., vol. 11, Springer, Berlin, 2000, pp. 197-208; F. Bassi, S. Rebay, Numerical evaluation of two discontinuous Galerkin methods for the compressible Navier-Stokes equations, Int. J. Numer. Methods Fluids 40 (2002) 197-207], the standard SIPG method outlined in [R. Hartmann, P. Houston, Symmetric interior penalty DG methods for the compressible Navier-Stokes equations. I: Method formulation, Int. J. Numer. Anal. Model. 3(1) (2006) 1-20], and an NIPG variant of the new scheme will be undertaken.


    M.Mohamed Ismail,


    This paper presents an image compression scheme using a particle swarm optimization technique for clustering. The PSO technique is a powerful general-purpose optimization technique that uses the concept of fitness. It provides a mechanism by which individuals in the swarm communicate and exchange information, similar to the social behaviour of insects and human beings. By mimicking this social sharing of information, PSO directs particles to search the solution space more efficiently. PSO is like a GA in that the population is initialized with random potential solutions. The adjustment towards the best individual experience (PBEST) and the best social experience (GBEST) is conceptually similar to the crossover operation of the GA. However, it is unlike a GA in that each potential solution, called a particle, flies through the solution space with a velocity. Moreover, the particles and the swarm have memory, which does not exist in the population of a GA. This optimization technique is used in image compression, and better results have been obtained in terms of PSNR, CR and the visual quality of the image when compared to other existing methods.
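The PBEST/GBEST update described above can be sketched as a minimal PSO; the objective is a toy sphere function standing in for the clustering distortion, and the hyperparameters are conventional defaults rather than the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO: each velocity is pulled toward the
    particle's own best (PBEST) and the swarm's best (GBEST)."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                  # particle memory update
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()     # swarm memory update
    return g, f(g)

# Toy objective standing in for the clustering distortion of the paper.
best, best_val = pso(lambda p: np.sum((p - 1.0) ** 2), dim=4)
```

In the compression setting each particle would encode a codebook (cluster centers) and `f` would be the quantization distortion over the image blocks.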

  13. Optimization of the output power of a pulsed gas laser by using magnetic pulse compression

    Louhibi, D.; Ghobrini, Mourad; Bourai, K.


    In pulsed gas lasers, the excitation of the active medium is produced through the discharge of a storage capacitor. The performance of these lasers is essentially linked to the type of switch used and to its mode of operation. Thyratrons are the most common switches. Nevertheless, their technological limitations do not allow the high repetition rates necessary for optimizing the output power of this type of laser. These limitations can be surpassed by combining the thyratron with a one-stage magnetic pulse compression (MPC) circuit. The MPC driver can improve the rise time of the laser excitation pulse and increase the repetition rate, increasing the output power of pulsed gas lasers such as nitrogen, excimer and copper vapor lasers. We propose in this paper a new configuration of magnetic pulse compression: the magnetic switch is placed in our case in the charge circuit, while in the typical utilization of magnetic pulse compression it is placed in the discharge circuit. In this paper, we are particularly interested in the design and modeling of the saturable inductance that constitutes the magnetic switch in the proposed thyratron-MPC circuit combination.

  14. Optimal decay rates of classical solutions for the full compressible MHD equations

    Gao, Jincheng; Tao, Qiang; Yao, Zheng-an


    In this paper, we are concerned with optimal decay rates for higher-order spatial derivatives of classical solutions to the full compressible MHD equations in three-dimensional whole space. If the initial perturbation is small in the $H^3$-norm and bounded in the $L^q$-norm with $q \in [1, 6/5)$, we apply the Fourier splitting method of Schonbek (Arch Ration Mech Anal 88:209-222, 1985) to establish optimal decay rates for the second-order spatial derivatives of solutions and the third-order spatial derivatives of the magnetic field in the $L^2$-norm. These results improve the work of Pu and Guo (Z Angew Math Phys 64:519-538, 2013).

  15. An improved partial SPIHT with classified weighted rate-distortion optimization for interferential multispectral image compression

    Keyan Wang; Chengke Wu; Fanqiang Kong; Lei Zhang


    Based on a property analysis of interferential multispectral images, a novel compression algorithm of partial set partitioning in hierarchical trees (SPIHT) with classified weighted rate-distortion optimization is presented. After wavelet decomposition, partial SPIHT is applied to each zero tree independently by adaptively selecting one of three coding modes according to the probability of significant coefficients in each bitplane. Meanwhile, the interferential multispectral image is partitioned into two kinds of regions in terms of luminous intensity, and the rate-distortion slopes of the zero trees are then lifted with classified weights according to their distortion contribution to the reconstructed spectrum. Finally, a global rate-distortion optimization truncation is performed. Compared with conventional methods, the proposed algorithm not only improves the performance in the spatial domain but also reduces the distortion in the spectral domain.

  16. Optimization of dedicated scintimammography procedure using detector prototypes and compressible phantoms

    Majewski, S R; Curran, E; Keppel, C E; Kross, B J; Palumbo, A; Popov, V; Wisenberger, A G; Welch, B; Wojcik, R; Williams, M B; Goode, A R; More, M; Zhang, G


    Results are presented on the optimization of the design and use of dedicated compact scintimammography gamma cameras. Prototype imagers with a field of view (FOV) of 5×5 cm², 10×10 cm² and 15×20 cm² were used either in a dual-modality mode as an adjunct technique to digital X-ray mammography imagers or as stand-alone instruments such as dedicated breast SPECT and planar imagers. Experimental data were acquired to select the best imaging modality (SPECT or planar) to detect small lesions using Tc-99m radiolabeled pharmaceuticals. In addition, studies were performed to optimize the imaging geometry. Results suggest that the preferred imaging geometry is planar imaging with two opposing detector heads while the breast is under compression. However, further study of dedicated breast SPECT is warranted. (24 refs).

  17. Integrated modeling for optimized regional transportation with compressed natural gas fuel

    Hossam A. Gabbar


    Transportation represents a major energy consumer, with fuel as its primary energy source. Recent developments in vehicle technology have revealed possible economic improvements when using natural gas as a fuel source instead of traditional gasoline. There are several fuel alternatives, such as electricity, which has shown potential for future long-term transportation. However, moving from the current situation, in which gasoline vehicles dominate, shows high cost compared to compressed natural gas vehicles. This paper presents a modeling and simulation methodology to optimize transportation performance based on a quantitative study of the risk-based performance of regional transportation. An emission estimation method is demonstrated and used to optimize transportation strategies based on life cycle costing. Different fuel supply scenarios are synthesized and evaluated, showing the strategic use of natural gas as a fuel supply.

  18. Simulation-Based Optimization of Cure Cycle of Large Area Compression Molding for LED Silicone Lens

    Min-Jae Song


    A three-dimensional heat transfer-curing simulation was performed for the curing process, introducing large-area compression molding for simultaneous forming and mass production of the lenses and encapsulants in the LED molding process. A dynamic cure kinetics model for the silicone resin was adopted, and the cure model and analysis results were validated through a temperature measurement experiment on a cylinder geometry. The temperature deviation between lens cavities could be reduced by implementing a simulation model of the large-area compression mold and optimizing the location of the heat source. A two-step cure cycle was constructed to reduce the excessive reaction peak at the initial stage and the cycle time. An optimum cure cycle that reduces cycle time by more than 29% compared to a one-step cure cycle, by adjusting dwell temperature, heating rate, and dwell time, is proposed. It was thus confirmed that optimization of the large-area LED lens molding process is possible using the present experiment and the finite element method.

  19. Optimal Chest Compression Rate and Compression to Ventilation Ratio in Delivery Room Resuscitation: Evidence from Newborn Piglets and Neonatal Manikins

    Solevåg, Anne Lee; Schmölzer, Georg M.


    Cardiopulmonary resuscitation (CPR) duration until return of spontaneous circulation (ROSC) influences survival and neurologic outcomes after delivery room (DR) CPR. High quality chest compressions (CC) improve cerebral and myocardial perfusion. Improved myocardial perfusion increases the likelihood of a faster ROSC. Thus, optimizing CC quality may improve outcomes both by preserving cerebral blood flow during CPR and by reducing the recovery time. CC quality is determined by rate, CC to ventilation (C:V) ratio, and applied force, which are influenced by the CC provider. Thus, provider performance should be taken into account. Neonatal resuscitation guidelines recommend a 3:1 C:V ratio. CCs should be delivered at a rate of 90/min synchronized with ventilations at a rate of 30/min to achieve a total of 120 events/min. Despite a lack of scientific evidence supporting this, the investigation of alternative CC interventions in human neonates is ethically challenging. Also, the infrequent occurrence of extensive CPR measures in the DR makes randomized controlled trials difficult to perform. Thus, many biomechanical aspects of CC have been investigated in animal and manikin models. Despite mathematical and physiological rationales that higher rates and uninterrupted CC improve CPR hemodynamics, studies indicate that provider fatigue is more pronounced when CC are performed continuously compared to when a pause is inserted after every third CC as currently recommended. A higher rate (e.g., 120/min) is also more fatiguing, which affects CC quality. In post-transitional piglets with asphyxia-induced cardiac arrest, there was no benefit of performing continuous CC at a rate of 90/min. Not only rate but also duty cycle, i.e., the duration of CC/total cycle time, is a known determinant of CC effectiveness. However, duty cycle cannot be controlled with manual CC. Mechanical/automated CC in neonatal CPR has not been explored, and feedback systems are under-investigated in this

  20. Real-time inverse high-dose-rate brachytherapy planning with catheter optimization by compressed sensing-inspired optimization strategies

    Guthier, C. V.; Aschenbrenner, K. P.; Müller, R.; Polster, L.; Cormack, R. A.; Hesser, J. W.


    This paper demonstrates that optimization strategies derived from the field of compressed sensing (CS) improve computational performance in inverse treatment planning (ITP) for high-dose-rate (HDR) brachytherapy. Following an approach applied to low-dose-rate brachytherapy, we developed a reformulation of the ITP problem with the same mathematical structure as standard CS problems. Two greedy methods, derived from hard thresholding and subspace pursuit, are presented and their performance is compared to state-of-the-art ITP solvers. Applied to clinical prostate brachytherapy plans, the proposed methods achieve a speed-up by a factor of 56-350 compared to state-of-the-art methods. Based on a Wilcoxon signed rank test, the novel method statistically significantly decreases the final objective function value (p < 0.01). The optimization times were below one second, and thus planning can be considered real-time capable. The novel CS-inspired strategy enables real-time ITP for HDR brachytherapy including catheter optimization. The generated plans are either clinically equivalent or show a better performance with respect to dosimetric measures.
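
    The greedy recovery machinery the paper draws on can be illustrated with a minimal iterative-hard-thresholding loop followed by a least-squares refit on the detected support (in the spirit of subspace pursuit). This is a generic sketch of the CS technique, not the authors' dose-optimization code; the problem sizes and sensing matrix below are invented for illustration.

```python
import numpy as np

def iht_recover(A, y, k, n_iter=500):
    """Greedy k-sparse recovery: iterative hard thresholding, then a
    least-squares refit on the detected support (subspace-pursuit style)."""
    step = 0.9 / np.linalg.norm(A, ord=2) ** 2   # conservative gradient step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)         # gradient step on ||y - Ax||^2
        x[np.argsort(np.abs(x))[:-k]] = 0.0      # keep only the k largest entries
    support = np.flatnonzero(x)
    x_hat = np.zeros_like(x)
    x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[[5, 17, 60]] = [1.5, -2.0, 1.0]           # a 3-sparse ground truth
y = A @ x_true                                   # noiseless measurements
x_hat = iht_recover(A, y, k)
print(np.linalg.norm(x_hat - x_true))            # recovery error
```

The final least-squares step is what makes the noiseless recovery exact once the support is identified; it mirrors the debiasing step of subspace pursuit.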

  1. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.


    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods, which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  2. Real-time inverse high-dose-rate brachytherapy planning with catheter optimization by compressed sensing-inspired optimization strategies.

    Guthier, C V; Aschenbrenner, K P; Müller, R; Polster, L; Cormack, R A; Hesser, J W


    This paper demonstrates that optimization strategies derived from the field of compressed sensing (CS) improve computational performance in inverse treatment planning (ITP) for high-dose-rate (HDR) brachytherapy. Following an approach applied to low-dose-rate brachytherapy, we developed a reformulation of the ITP problem with the same mathematical structure as standard CS problems. Two greedy methods, derived from hard thresholding and subspace pursuit, are presented and their performance is compared to state-of-the-art ITP solvers. Applied to clinical prostate brachytherapy plans, the proposed methods achieve a speed-up by a factor of 56-350 compared to state-of-the-art methods. Based on a Wilcoxon signed rank test, the novel method statistically significantly decreases the final objective function value (p < 0.01). The optimization times were below one second, and thus planning can be considered real-time capable. The novel CS-inspired strategy enables real-time ITP for HDR brachytherapy including catheter optimization. The generated plans are either clinically equivalent or show a better performance with respect to dosimetric measures.

  3. Compressed Biogas-Diesel Dual-Fuel Engine Optimization Study for Ultralow Emission

    Hasan Koten


    The aim of this study is to find out the optimum operating conditions in a diesel engine fueled with compressed biogas (CBG) and pilot diesel dual-fuel. One-dimensional (1D) and three-dimensional (3D) computational fluid dynamics (CFD) codes and a multiobjective optimization code were employed to investigate the influence of CBG-diesel dual-fuel combustion on the performance and exhaust emissions of a diesel engine. In this paper, the 1D engine code and the multiobjective optimization code were coupled and evaluated on about 15000 cases to define the proper boundary conditions. In addition, selected single-fuel diesel (dodecane) and dual-fuel (CBG-diesel) combustion modes were modeled to compare the engine performance and exhaust emission characteristics using the CFD code under various operating conditions. In the optimization study, the start of pilot diesel fuel injection, the CBG-diesel flow rate, and the engine speed were optimized, and selected cases were compared using the CFD code. CBG and diesel fuels were defined as leading reactants using a user-defined code. The results showed that significantly lower NOx emissions were emitted under dual-fuel operation for all cases compared to single-fuel mode at all engine load conditions.

  4. Modeling and Optimization of Compressive Strength of Hollow Sandcrete Block with Rice Husk Ash Admixture


    The paper reports an investigation into model development and optimization of the compressive strength of 55/45 to 70/30 cement/Rice Husk Ash (RHA) hollow sandcrete block. The low cost and local availability of RHA, a pozzolanic material, invite exploitation. The study applies Scheffe's optimization approach to obtain a mathematical model of the form f(x1, x2, x3, x4), where the xi are proportions of the concrete components, viz: cement, RHA, sand and water. Scheffe's experimental design techniques are followed to mould various hollow block samples measuring 450 mm × 225 mm × 150 mm, tested for 28-day strength. The task involved experimentation and design, applying the second-order polynomial characterization process of the simplex lattice method. Model adequacy is checked using control factors. Finally, software is prepared to handle the design computation: given a desired property of the mix, it generates the optimal mix ratios; conversely, any mix ratios can be specified and the attainable strength obtained.
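
    As a rough illustration of the simplex-lattice technique, a second-order Scheffe polynomial for a four-component mixture has 10 coefficients and can be fitted exactly on a {4,2} simplex-lattice design (4 pure blends plus 6 fifty-fifty binary blends). The strength values below are invented placeholders, not the paper's data:

```python
import numpy as np
from itertools import combinations

def scheffe_terms(x):
    """Second-order Scheffe polynomial terms for a 4-component mixture:
    x1..x4 plus all pairwise products xi*xj (no intercept; sum(x) == 1)."""
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j] for i, j in combinations(range(4), 2)]
    return np.concatenate([x, pairs])

# {4,2} simplex-lattice design: 4 pure blends + 6 fifty-fifty binary blends
design = [np.eye(4)[i] for i in range(4)]
for i, j in combinations(range(4), 2):
    pt = np.zeros(4)
    pt[i] = pt[j] = 0.5
    design.append(pt)

# hypothetical 28-day strengths (MPa) measured at the 10 design points
y = np.array([30.1, 24.6, 18.0, 12.3, 28.2, 25.5, 22.4, 21.9, 16.8, 15.1])

X = np.vstack([scheffe_terms(p) for p in design])   # 10x10, invertible
b = np.linalg.solve(X, y)                           # exact fit of 10 coefficients

# predict strength for an arbitrary cement/RHA/sand/water mix (sums to 1)
mix = np.array([0.55, 0.05, 0.30, 0.10])
print(round(float(scheffe_terms(mix) @ b), 2))
```

Because the design is saturated, the fitted polynomial interpolates the measured strengths exactly at the design points; searching this polynomial over the simplex is what yields the optimal mix ratios.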

  5. Film Cooling Optimization Using Numerical Computation of the Compressible Viscous Flow Equations and Simplex Algorithm

    Ahmed M. Elsayed


    Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effect of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness is studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through flat plate hole system is carried out using the “CFDRC package” coupled with the optimization algorithm “simplex” to maximize overall film cooling effectiveness. Unstructured finite volume technique is used to solve the steady, three-dimensional and compressible Navier-Stokes equations. The results are compared with the published numerical and experimental data of a cylindrically round-simple hole, and the results show good agreement. In addition, the results indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle for high blowing ratio and by increasing the lateral and forward diffusion angles. Optimum geometry of the cooling hole on a flat plate is determined. In addition, numerical simulations of film cooling on actual turbine blade are performed using the flat plate optimal hole geometry.

  6. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Zhongguang Fu


    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  7. Optimal Arrays for Compressed Sensing in Snapshot-Mode Radio Interferometry

    Fannjiang, Clara


    Radio interferometry has always faced the problem of incomplete sampling of the Fourier plane. A possible remedy can be found in the promising new theory of compressed sensing (CS), which allows for the accurate recovery of sparse signals from sub-Nyquist sampling given certain measurement conditions. We provide an introductory assessment of optimal arrays for CS in snapshot-mode radio interferometry, using orthogonal matching pursuit (OMP), a widely used CS recovery algorithm similar in some respects to CLEAN. We focus on centrally condensed (specifically, Gaussian) arrays versus uniform arrays, and the principle of randomization versus deterministic arrays such as the VLA. The theory of CS is grounded in (a) sparse representation of signals and (b) measurement matrices of low coherence. We calculate a related quantity, mutual coherence (MC), as a theoretical indicator of arrays' suitability for OMP based on the recovery error bounds in (Donoho et al. 2006). OMP reconstructions of both point and extended o...

  8. Compressed lead-based perovskites reaching optimal Shockley-Queisser bandgap with prolonged carrier lifetime

    Liu, Gang; Gong, Jue; Yang, Wenge; Mao, Ho-kwang; Liu, Zhenxian; Schaller, Richard D; Zhang, Dongzhou; Xu, Tao


    The atomic structure of materials plays a decisive role in the light-matter interaction. Yet, despite unprecedented progress, further efficiency gains in lead-based organic-inorganic perovskite solar cells are hampered by a bandgap greater than the optimum value according to the Shockley-Queisser limit. Here, we report the experimental achievement of bandgap narrowing in formamidinium lead triiodide from 1.489 to 1.337 eV by modulating the lattice constants under hydraulic compression, reaching the optimized bandgap for single-junction solar cells. Strikingly, such bandgap narrowing is accomplished with improved, rather than sacrificed, carrier lifetime. More attractively, the narrowed bandgap is partially retained after the release of pressure. This work opens a new dimension in the basic science understanding of structural photonics and paves an alternative pathway towards more efficient photovoltaic materials.

  9. Generation of stable subfemtosecond hard x-ray pulses with optimized nonlinear bunch compression

    Senlin Huang


    In this paper, we propose a simple scheme that leverages existing x-ray free-electron laser hardware to produce stable single-spike, subfemtosecond x-ray pulses. By optimizing a high-harmonic radio-frequency linearizer to achieve nonlinear compression of a low-charge (20 pC) electron beam, we obtain a sharp current profile possessing a few-femtosecond full-width-at-half-maximum temporal duration. A reverse undulator taper is applied to enable lasing only within the current spike, where longitudinal space charge forces induce an electron beam time-energy chirp. Simulations based on the Linac Coherent Light Source parameters show that stable single-spike x-ray pulses with a duration of less than 200 attoseconds can be obtained.

  10. The preparation technique optimization of epoxy/compressed expanded graphite composite bipolar plates for proton exchange membrane fuel cells

    Du, Chao; Ming, Pingwen; Hou, Ming; Fu, Jie; Fu, Yunfeng; Luo, Xiaokuan; Shen, Qiang; Shao, Zhigang; Yi, Baolian

    A vacuum resin impregnation method has been used to prepare polymer/compressed expanded graphite (CEG) composite bipolar plates for proton exchange membrane fuel cells (PEMFCs). In this research, three preparation techniques for the epoxy/CEG composite bipolar plate (Compression-Impregnation, Impregnation-Compression and Compression-Impregnation-Compression) are optimized against the physical properties of the composite bipolar plates. The optimum conditions and the advantages and disadvantages of the different techniques are discussed respectively. Although they have different characteristics, bipolar plates obtained by these three techniques can all meet the demands of PEMFC bipolar plates as long as the optimum conditions are selected. The Compression-Impregnation-Compression method proves to be the optimum method because of the outstanding properties of the resulting bipolar plates. Moreover, a cell assembled with these optimum composite bipolar plates shows excellent stability over 200 h of durability testing. The composite prepared by the vacuum resin impregnation method is therefore a promising candidate for bipolar plate materials in PEMFCs.

  11. Automated computer evaluation and optimization of image compression of x-ray coronary angiograms for signal known exactly detection tasks

    Eckstein, Miguel P.; Bartroff, Jay L.; Abbey, Craig K.; Whiting, James S.; Bochud, Francois O.


    We compared the ability of three model observers (nonprewhitening matched filter with an eye filter, Hotelling and channelized Hotelling) in predicting the effect of JPEG and wavelet-Crewcode image compression on human visual detection of a simulated lesion in single frame digital x-ray coronary angiograms. All three model observers predicted the JPEG superiority present in human performance, although the nonprewhitening matched filter with an eye filter (NPWE) and the channelized Hotelling models were better predictors than the Hotelling model. The commonly used root mean square error and related peak signal to noise ratio metrics incorrectly predicted a JPEG inferiority. A particular image discrimination/perceptual difference model correctly predicted a JPEG advantage at low compression ratios but incorrectly predicted a JPEG inferiority at high compression ratios. In the second part of the paper, the NPWE model was used to perform automated simulated annealing optimization of the quantization matrix of the JPEG algorithm at 25:1 compression ratio. A subsequent psychophysical study resulted in improved human detection performance for images compressed with the NPWE optimized quantization matrix over the JPEG default quantization matrix. Together, our results show how model observers can be successfully used to perform automated evaluation and optimization of diagnostic performance in clinically relevant visual tasks using real anatomic backgrounds.

  12. Optimal duration of percutaneous microballoon compression for treatment of trigeminal nerve injury

    Fuyong Li; Shuai Han; Yi Ma; Fuxin Yi; Xinmin Xu; Yunhui Liu


    Percutaneous microballoon compression of the trigeminal ganglion is a brand new operative technique for the treatment of trigeminal neuralgia. However, it is unclear how the procedure mediates pain relief, and there are no standardized criteria, such as compression pressure, compression time or balloon shape, for the procedure. In this study, percutaneous microballoon compression was performed on the rabbit trigeminal ganglion at a mean inflation pressure of 1,005 ± 150 mmHg for 2 or 5 minutes. At 1, 7 and 14 days after percutaneous microballoon compression, the large-diameter myelinated nerves displayed axonal swelling, rupture and demyelination under the electron microscope. Fragmentation of myelin and formation of digestion chambers were more evident after 5 minutes of compression. Image analyzer results showed that the diameter of trigeminal ganglion cells remained unaltered after compression. These experimental findings indicate that a 2-minute period of compression can suppress pain transduction. Immunohistochemical staining revealed that vascular endothelial growth factor expression in the ganglion cells and axons was significantly increased 7 days after trigeminal ganglion compression; however, the changes were similar after 2-minute and 5-minute compression. The upregulated expression of vascular endothelial growth factor in the ganglion cells after percutaneous microballoon compression can promote the repair of the injured nerve. These findings suggest that long-term compression is ideal for patients with recurrent trigeminal neuralgia.

  13. Optimization of current waveform tailoring for magnetically driven isentropic compression experiments

    Waisman, E. M.; Reisman, D. B.; Stoltzfus, B. S.; Stygar, W. A.; Cuneo, M. E.; Haill, T. A.; Davis, J.-P.; Brown, J. L.; Seagle, C. T.; Spielman, R. B.


    The Thor pulsed power generator is being developed at Sandia National Laboratories. The design consists of up to 288 decoupled and transit time isolated capacitor-switch units, called "bricks," that can be individually triggered to achieve a high degree of pulse tailoring for magnetically driven isentropic compression experiments (ICE) [D. B. Reisman et al., Phys. Rev. Spec. Top.-Accel. Beams 18, 090401 (2015)]. The connecting transmission lines are impedance matched to the bricks, allowing the capacitor energy to be efficiently delivered to an ICE strip-line load with peak pressures of over 100 GPa. Thor will drive experiments to explore equation of state, material strength, and phase transition properties of a wide variety of materials. We present an optimization process for producing tailored current pulses, a requirement for many material studies, on the Thor generator. This technique, which is unique to the novel "current-adder" architecture used by Thor, entirely avoids the iterative use of complex circuit models to converge to the desired electrical pulse. We begin with magnetohydrodynamic simulations for a given material to determine its time dependent pressure and thus the desired strip-line load current and voltage. Because the bricks are connected to a central power flow section through transit-time isolated coaxial cables of constant impedance, the brick forward-going pulses are independent of each other. We observe that the desired equivalent forward-going current driving the pulse must be equal to the sum of the individual brick forward-going currents. We find a set of optimal brick delay times by requiring that the L2 norm of the difference between the brick-sum current and the desired forward-going current be a minimum. We describe the optimization procedure for the Thor design and show results for various materials of interest.
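
    The delay-time optimization can be pictured with a toy version of the brick-sum idea: choose per-brick trigger delays so that the sum of identical, time-shifted forward-going pulses matches a desired tailored waveform in the L2 sense. Everything below (the Gaussian pulse shape, the units, the simple coordinate-descent search) is an invented stand-in for the authors' procedure, not their actual algorithm or data:

```python
import numpy as np

# time grid (arbitrary units) and a normalized single-brick forward pulse
t = np.linspace(0.0, 1.0, 400)
brick = np.exp(-((t - 0.1) / 0.03) ** 2)

def summed_current(delays):
    """Brick-sum forward current: each brick's pulse shifted by its delay."""
    total = np.zeros_like(t)
    dt = t[1] - t[0]
    for d in delays:
        shift = int(round(d / dt))
        total[shift:] += brick[: len(t) - shift]
    return total

# desired tailored current (hypothetical ramp, standing in for an MHD target)
desired = np.clip((t - 0.1) / 0.5, 0.0, 1.0) * 8.0

# coordinate descent on the L2 misfit: adjust one brick delay at a time
n_bricks = 12
candidates = np.linspace(0.0, 0.6, 121)   # allowed trigger delays
delays = np.zeros(n_bricks)
for _ in range(5):                        # a few sweeps over all bricks
    for i in range(n_bricks):
        errs = [np.linalg.norm(summed_current(np.r_[np.delete(delays, i), c])
                               - desired) for c in candidates]
        delays[i] = candidates[int(np.argmin(errs))]

print(np.linalg.norm(summed_current(delays) - desired))   # final L2 misfit
```

Because every brick contributes the same pulse shape, only the multiset of delays matters, and each coordinate update can only decrease the misfit, so the search is monotone; the real design problem additionally folds in circuit and MHD constraints.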

  14. Optimization of compressive strength in admixture-reinforced cement-based grouts

    Sahin Zaimoglu, A.


    The Taguchi method was used in this study to optimize the unconfined (7-, 14- and 28-day) compressive strength of cement-based grouts with bentonite, fly ash and silica fume admixtures. The experiments were designed using an L16 orthogonal array in which the three factors considered were bentonite (0%, 0.5%, 1.0% and 3%), fly ash (10%, 20%, 30% and 40%) and silica fume (0%, 5%, 10% and 20%) content. The experimental results, which were analyzed by ANOVA and the Taguchi method, showed that fly ash and silica fume content play a significant role in unconfined compressive strength. The optimum conditions were found to be: 0% bentonite, 10% fly ash, 20% silica fume and 28 days of curing time. The maximum unconfined compressive strength reached under these optimum conditions was 17.1 MPa.

  15. Modelling and optimization of seawater desalination process using mechanical vapour compression

    V.P. Kravchenko


    In the conditions of global climate change, a shortage of fresh water is becoming an urgent problem for an increasing number of countries. One of the most promising technologies for desalting sea water is mechanical vapour compression (MVC), which provides low energy consumption through the heat-pump principle. Aim: The aim of this research is to identify the reserves for increasing the efficiency of desalination systems based on mechanical vapour compression by optimizing the scheme and parameters of installations with MVC. Materials and Methods: A new type of desalination installation is offered, whose main element is a latent-heat exchanger. Sea water, after preliminary heating in heat exchangers, comes to the evaporator-condenser, where it receives the main amount of heat from the condensing steam. Part of the sea water evaporates, and the strong salt solution (brine) leaves the evaporator and, after cooling, is dumped back into the sea. The steam formed is compressed by the compressor and comes to the condenser. An essential feature of this scheme is that condensation occurs at a higher temperature than evaporation. Thanks to this, the heat released during condensation is used for the evaporation of sea water. Thereby, this class of desalination installations implements the heat-pump principle. Results: To achieve this goal the following tasks were solved: the mathematical model of installations with MVC was modified and supplemented; the scheme of heat exchanger switching was modified; and the influence of the design data of the desalination installation on the cost of equipment and electric power was investigated. The detailed analysis of the main schemes of the installation and the mathematical model allowed ways of decreasing energy consumption, and the possible gains, to be defined.
The influence of two key parameters, the specific power of the compressor and the specific surface area of the evaporator-condenser, on a

  16. On the Optimality of Successive Decoding in Compress-and-Forward Relay Schemes

    Wu, Xiugang


    In the classical compress-and-forward relay scheme developed by Cover and El Gamal (1979), the decoding process operates in a successive way: the destination first decodes the compressed observation of the relay, and then decodes the original message of the source. Recently, two modified compress-and-forward relay schemes were proposed, and in both of them the destination jointly, rather than successively, decodes the compressed observation of the relay and the original message. Such a modification of the decoding process was motivated by the realization that it is generally easier to decode the compressed observation jointly with the original message, and more importantly, the original message can be decoded even without completely decoding the compressed observation. However, the question remains whether this freedom of choosing a higher compression rate at the relay improves the achievable rate of the original message. It has been shown in (El Gamal and Kim, 2010) that the answer is negative in the single relay ...

  17. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao


    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
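
    The hardware argument can be seen in a tiny sketch: with a Boolean Φ, each compressive measurement y = Φx is just the sum of a random subset of input samples, so acquisition needs adders but no multipliers. The matrix below is the random Boolean baseline the paper improves on, not its data-driven optimized matrix, and the signal is an invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5

# a k-sparse input (stand-in for a biomedical signal in a sparse basis)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# random Boolean sampling matrix: each row selects a random subset of samples
phi = rng.random((m, n)) < 0.5

# acquisition y = phi @ x: every measurement is a plain sum -- no multipliers
y = phi.astype(float) @ x

# sanity check: measurement 0 equals the sum of the samples its row selects
print(np.isclose(y[0], x[phi[0]].sum()))   # prints True
```

A real-valued Φ would require m×n multiply-accumulates per acquisition; the Boolean version replaces every multiply with a conditional add, which is the source of the energy savings the paper quantifies.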

  18. Network dynamics for optimal compressive-sensing input-signal recovery.

    Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David


    By using compressive sensing (CS) theory, a broad class of static signals can be reconstructed through a sequence of very few measurements in the framework of a linear system. For networks with nonlinear and time-evolving dynamics, is it similarly possible to recover an unknown input signal from only a small number of network output measurements? We address this question for pulse-coupled networks and investigate the network dynamics necessary for successful input signal recovery. Determining the specific network characteristics that correspond to a minimal input reconstruction error, we are able to achieve high-quality signal reconstructions with few measurements of network output. Using various measures to characterize dynamical properties of network output, we determine that networks with highly variable and aperiodic output can successfully encode network input information with high fidelity and achieve the most accurate CS input reconstructions. For time-varying inputs, we also find that high-quality reconstructions are achievable by measuring network output over a relatively short time window. Even when network inputs change with time, the same optimal choice of network characteristics and corresponding dynamics apply as in the case of static inputs.

  19. Exergoeconomic optimization of an ammonia–water hybrid absorption–compression heat pump for heat supply in a spray-drying facility

    Jensen, Jonas Kjær; Markussen, Wiebke Brix; Reinholdt, Lars


    load of 6.1 MW. The exhaust air from the drying process is 80 °C. The implementation of an ammonia–water hybrid absorption–compression heat pump to partly cover the heat load is investigated. A thermodynamic analysis is applied to determine optimal circulation ratios for a number of ammonia mass... fractions and heat pump loads. An exergoeconomic optimization is applied to minimize the lifetime cost of the system. Technological limitations are imposed to constrain the solution to commercial components. The best possible implementation is identified in terms of heat load, ammonia mass fraction...

  20. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    Pence, William D.; White, R. L.; Seaman, R.


    We describe a compression method for floating-point astronomical images that gives compression ratios of 6–10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the incompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
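    The quantize-with-dither step can be sketched as follows. This is a schematic illustration of subtractive dithering with assumed scale and dither conventions, not the actual fpack/funpack code:

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.normal(100.0, 5.0, size=10_000)    # toy floating-point "image"
scale = 2.0                                     # quantization step: larger = coarser

# subtractive dithering: add uniform noise before rounding, subtract it on decode
dither = rng.uniform(0.0, 1.0, size=pixels.shape)
quantized = np.round(pixels / scale + dither).astype(np.int64)  # Rice-compressible ints
restored = (quantized - dither) * scale

err = restored - pixels
# per-pixel error is bounded by half a quantization step, and the dither
# decorrelates the error from the signal, so measurements stay unbiased
print(np.abs(err).max() <= 0.5 * scale, abs(err.mean()) < 0.05)
```

    Without the dither, pixels near a quantization boundary would all round the same way, biasing statistics such as the mode or median; subtracting the same dither on decode keeps the error signal-independent.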

  1. Energy Optimization of High-Compression-Ratio Combustion Chambers

    Douaud A.


    Full Text Available A synthesis of research undertaken at the Institut Français du Pétrole on understanding combustion, heat-transfer and knock phenomena, and on mastering them to optimize the efficiency of high-compression-ratio combustion chambers, has led to two proposed implementations: (a) a calm chamber with dual ignition; (b) a turbulent chamber with squish effect. The advantages in principle and the constraints associated with implementing each type of chamber are examined.

  2. Optimal Compression of Floating-point Astronomical Images Without Significant Loss of Information

    Pence, W D; Seaman, R


    We describe a compression method for floating-point astronomical images that gives compression ratios of 6–10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the incompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real...

  3. Thermo-Economic Comparison and Parametric Optimizations among Two Compressed Air Energy Storage System Based on Kalina Cycle and ORC

    Ruixiong Li


    Full Text Available The compressed air energy storage (CAES) system, considered as one method for peak shaving and load-levelling of the electricity system, has excellent characteristics of energy storage and utilization. However, due to the waste heat existing in compressed air during the charge stage and in exhaust gas during the discharge stage, the efficient operation of the conventional CAES system has been greatly restricted. The Kalina cycle (KC) and organic Rankine cycle (ORC) have been proven to be two worthwhile technologies for recovering the residual heat of energy systems. To capture and reuse the waste heat from the CAES system, two systems (the CAES system combined with KC and with ORC, respectively) are proposed in this paper. The sensitivity analysis shows the effect of the compression ratio and the exhaust temperature on system performance: the KC-CAES system can achieve more efficient operation than the ORC-CAES system under the same exhaust-gas temperature; meanwhile, a larger compression ratio leads to higher efficiency for the KC-CAES system than for the ORC-CAES system at a constant exhaust-gas temperature. In addition, an evolutionary multi-objective algorithm is applied to the thermodynamic and economic performances to find the optimal parameters of the two systems. The optimum results indicate that solutions with exergy efficiencies of around 59.74% and 53.56% are promising for practical designs of the KC-CAES and ORC-CAES systems, respectively.

  4. A Neurodynamic Optimization Method for Recovery of Compressive Sensed Signals With Globally Converged Solution Approximating to l0 Minimization.

    Guo, Chengan; Yang, Qingshan


    Finding the optimal solution to the constrained l0-norm minimization problem in the recovery of compressive sensed signals is NP-hard: obtaining the global optimum generally requires intractable combinatorial search, unless one substitutes other objective functions (e.g., the l1 norm or lp norm) for approximate solutions or uses greedy search methods (e.g., orthogonal matching pursuit type algorithms) for locally optimal solutions. In this paper, a neurodynamic optimization method is proposed to solve the l0-norm minimization problem for obtaining the global optimum using a recurrent neural network (RNN) model. For the RNN model, a group of modified Gaussian functions is constructed and their sum is taken as the objective function, approximating the l0 norm for optimization. The constructed objective function sets up a convexity condition under which the neurodynamic system is guaranteed to reach the globally convergent optimal solution. An adaptive adjustment scheme is developed to further improve the performance of the optimization algorithm. Extensive experiments are conducted to test the proposed approach, and the results validate the effectiveness of the new method.
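    The objective construction described above can be sketched directly: a sum of inverted Gaussians tends to the l0 count of nonzeros as sigma shrinks. This only illustrates the surrogate; the paper's RNN dynamics and convexity condition are not reproduced:

```python
import numpy as np

def approx_l0(x, sigma):
    """Smooth surrogate for ||x||_0: each term is ~0 at x_i = 0 and ~1 for |x_i| >> sigma."""
    return float(np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2))))

x = np.array([0.0, 0.0, 3.0, -1.5, 0.0, 0.7])   # true ||x||_0 = 3
for sigma in (1.0, 0.1, 0.01):
    print(sigma, approx_l0(x, sigma))            # tightens toward 3.0 as sigma shrinks
```

    Unlike the l0 count itself, this surrogate is differentiable, which is what makes gradient-driven dynamics such as an RNN applicable.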

  5. Strain distribution in the intervertebral disc under unconfined compression and tension load by the optimized digital image correlation technique.

    Liu, Qing; Wang, Tai-Yong; Yang, Xiu-Ping; Li, Kun; Gao, Li-Lan; Zhang, Chun-Qiu; Guo, Yue-Hong


    The unconfined compression and tension experiments of the intervertebral disc were conducted by applying an optimized digital image correlation technique, and the internal strain distribution was analysed for the disc. It was found that the axial strain values of different positions increased obviously with the increase in loads, while inner annulus fibrosus and posterior annulus fibrosus experienced higher axial strains than the outer annulus fibrosus and anterior annulus fibrosus. Deep annulus fibrosus exhibited higher compressive and tensile axial strains than superficial annulus fibrosus for the anterior region, while there was an opposite result for the posterior region. It was noted that all samples demonstrated a nonlinear stress-strain profile in the process of deforming, and an elastic region was shown once the sample was deformed beyond its toe region.

  6. Exergy efficiency applied for the performance optimization of a direct injection compression ignition (CI) engine using biofuels

    Azoumah, Y. (Laboratoire Biomasse Energie Biocarburant (LBEB), Institut International d'Ingenierie de l'Eau et de l'Environnement (2iE), Rue de la Science, 01 BP 594, Ouagadougou 01, Burkina Faso); Blin, J. (LBEB, 2iE, Ouagadougou, Burkina Faso; Unite Propre de Recherche Biomasse Energie, CIRAD-PERSYST, TA B-42/16t, 73 Avenue J.-F. Breton, 34398 Montpellier Cedex 5, France); Daho, T. (Laboratoire de Physique et de Chimie de l'Environnement (LPCE), Departement de Physique, UFR-SEA, Universite de Ouagadougou, 03 BP 7021, Ouagadougou 03, Burkina Faso)


    The need to decrease the consumption of materials and energy and to promote the use of renewable resources, such as biofuels, stresses the importance of evaluating the performance of engines based on the second law of thermodynamics. This paper suggests the use of exergy analysis (as an environmental assessment tool to account for wastes and determine the exergy efficiency) combined with gas emissions analysis to optimize the performance of a compression ignition (CI) engine using biofuels such as cottonseed and palm oils, pure or blended with diesel, for different engine loads. The results show that the combination of exergy and gas emissions analyses is a very effective tool for evaluating the optimal loads that can be supplied by CI engines. Taking into account the technical constraints of engines, a tradeoff zone of engine loads (60% and 70% of the maximum load) was established between the gas emissions (NO and CO₂) and the exergy efficiency for optimal performance of the CI engine. (author)

  7. Compressive failure modes and parameter optimization of the trabecular structure of biomimetic fully integrated honeycomb plates.

    Chen, Jinxiang; Tuo, Wanyong; Zhang, Xiaoming; He, Chenglin; Xie, Juan; Liu, Chang


    To develop lightweight biomimetic composite structures, the compressive failure and mechanical properties of fully integrated honeycomb plates were investigated experimentally and through the finite element method. The results indicated that fracturing of the fully integrated honeycomb plates primarily occurred in the core layer, including the sealing edge structure. The morphological failures can be classified into two types, dislocations and compactions, and were caused primarily by stress concentrations at the interfaces between the core layer and the upper and lower laminations, and secondarily by the disordered short-fiber distribution in the material. Although the fully integrated honeycomb plates manufactured in this experiment were imperfect, their mass-specific compressive strength was superior to that of similar biomimetic samples. Therefore, the proposed bio-inspired structure possesses good overall mechanical properties, and a range of parameters, such as the diameter of the transition arc, was defined to enhance the design of fully integrated honeycomb plates and improve their compressive mechanical properties.

  8. Compressive Sensing Over Networks

    Feizi, Soheil; Effros, Michelle


    In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information-theoretic techniques in source coding and channel coding. Our results provide an explicit trade-off between the rate and the decoding complexity. The key difference between compressive sensing and traditional information-theoretic approaches lies at the decoding side: although optimal decoders for recovering the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Here, by using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multi...

  9. Compression limits in cascaded quadratic soliton compression

    Bache, Morten; Bang, Ole; Krolikowski, Wieslaw;


    Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.

  10. Direct embeddings of relatively hyperbolic groups with optimal $\\ell^p$ compression exponent

    Hume, David


    We prove that for all $p>1$, every relatively hyperbolic group has $\\ell^p$ compression exponent equal to the minimum of the exponents of its maximal parabolic subgroups. Moreover, this embedding can be explicitly described in terms of given embeddings of the maximal parabolic subgroups.

  11. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Zhongguang Fu; Ke Lu; Yiming Zhu


    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled ...

  12. Analysis and Optimization of a Compressed Air Energy Storage—Combined Cycle System

    Wenyi Liu; Linzhi Liu; Luyao Zhou; Jian Huang; Yuwen Zhang; Gang Xu; Yongping Yang


    Compressed air energy storage (CAES) is a commercial, utility-scale technology that provides long-duration energy storage with fast ramp rates and good part-load operation. It is a promising storage technology for balancing the large-scale penetration of renewable energies, such as wind and solar power, into electric grids. This study proposes a CAES-CC system, which is based on a conventional CAES combined with a steam turbine cycle by waste heat boiler. Simulation and thermodynamic analysis...

  13. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.


    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
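    As a generic member of the matching-pursuit family this abstract invokes, plain orthogonal matching pursuit can be sketched as below. The paper's dose-calculation model (AAPM TG-43) and its specialized solver variant are not reproduced; the dictionary and signal here are toy assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)                   # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.5, 2.0]           # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat))                     # recovered support
```

    The sparsity of the returned solution is the property the abstract exploits: in the planning setting, few selected atoms translate into few needles and seeds.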

  14. Efficient Design and Optimization of a Flow Control System for Supersonic Mixed Compression Inlets Project

    National Aeronautics and Space Administration — SynGenics Corporation proposes a program that unites mathematical and statistical processes, Response Surface Methodology, and multicriterial optimization methods to...

  15. Determination of composition of pozzolanic waste mixtures with optimized compressive strength

    Nardi José Vidal


    Full Text Available The utilization of ceramic wastes with pozzolanic properties along with other compounds to obtain new materials with cementing properties is an alternative for reducing environmental pollution. The acceptance of these new products in the market demands minimal changes in mechanical properties according to their utilization. For a variable range of compositional intervals, attempts were made to establish limiting incorporation proportions that assure minimum pre-established mechanical strength values in the final product; in this case, a minimum compressive strength of 3,000 kPa. A simultaneous association of other properties is also possible.

  16. Analysis and Optimization of a Compressed Air Energy Storage—Combined Cycle System

    Wenyi Liu


    Full Text Available Compressed air energy storage (CAES) is a commercial, utility-scale technology that provides long-duration energy storage with fast ramp rates and good part-load operation. It is a promising storage technology for balancing the large-scale penetration of renewable energies, such as wind and solar power, into electric grids. This study proposes a CAES-CC system, which is based on a conventional CAES combined with a steam turbine cycle by waste heat boiler. Simulation and thermodynamic analysis are carried out on the proposed CAES-CC system. The electricity and heating rates of the proposed CAES-CC system are lower than those of the conventional CAES by 0.127 kWh/kWh and 0.338 kWh/kWh, respectively, because the CAES-CC system recycles high-temperature turbine-exhausting air. The overall efficiency of the CAES-CC system is improved by approximately 10% compared with that of the conventional CAES. In the CAES-CC system, compressing intercooler heat can keep the steam turbine on hot standby, thus improving the flexibility of CAES-CC. This study brought about a new method for improving the efficiency of CAES and provided new thoughts for integrating CAES with other electricity-generating modes.

  17. Optimal operation strategies of compressed air energy storage (CAES) on electricity spot markets with fluctuating prices

    Lund, Henrik; Salgi, Georges; Elmegaard, Brian;


    on electricity spot markets by storing energy when electricity prices are low and producing electricity when prices are high. In order to make a profit on such markets, CAES plant operators have to identify proper strategies to decide when to sell and when to buy electricity. This paper describes three...... plants will not be able to achieve such optimal operation, since the fluctuations of spot market prices in the coming hours and days are not known. Consequently, two simple practical strategies have been identified and compared to the results of the optimal strategy. This comparison shows that...... independent computer-based methodologies which may be used for identifying the optimal operation strategy for a given CAES plant, on a given spot market and in a given year. The optimal strategy is identified as the one which provides the best business-economic net earnings for the plant. In practice, CAES...

  18. Optimal design considering structural efficiency of compressed natural gas fuel storage vessels for automobiles

    Myung-Chang KANG; Hyung Woo LEE; Chul KIM


    The shape and thickness of the dome were investigated with the aim of optimizing type II CNG storage vessels by using a finite element analysis technique. The thickness of the liners and reinforcing materials was optimized based on the requirements of the cylinder and dome parts. In addition, the dome shape most suitable for type II CNG storage vessels was proposed through review and analysis of various existing shapes, and the minimum thickness was established in this sequence: metal liners, composite materials and dome parts. The newly proposed shape gives a mass reduction of 4.8 kg (5.1%).

  19. Thresholded Basis Pursuit: Quantizing Linear Programming Solutions for Optimal Support Recovery and Approximation in Compressed Sensing

    Saligrama, V


    We consider the classical Compressed Sensing problem. We have a large under-determined set of noisy measurements Y=GX+N, where X is a sparse signal and G is drawn from a random ensemble. In this paper we focus on a quantized linear programming solution for support recovery. Our solution of the problem amounts to solving $\\min \\|Z\\|_1 ~ s.t. ~ Y=G Z$, and quantizing/thresholding the resulting solution $Z$. We show that this scheme is guaranteed to perfectly reconstruct a discrete signal or control the element-wise reconstruction error for a continuous signal for specific values of sparsity. We show that in the linear regime when the sparsity, $k$, increases linearly with signal dimension, $n$, the sign pattern of $X$ can be recovered with $SNR=O(\\log n)$ and $m= O(k)$ measurements. Our proof technique is based on perturbation of the noiseless $\\ell_1$ problem. Consequently, the achievable sparsity level in the noisy problem is comparable to that of the noiseless problem. Our result offers a sharp characterizat...
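    The scheme in this abstract, solving the l1 program and then quantizing/thresholding the result, can be sketched on a noiseless toy instance. SciPy's LP solver stands in for whatever solver the authors used, and the problem sizes are assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(G, y):
    """Solve min ||z||_1 subject to G z = y as a linear program in [z; u]."""
    m, n = G.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(u), with u_i >= |z_i|
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # z - u <= 0 and -z - u <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([G, np.zeros((m, n))])         # measurement constraint G z = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

rng = np.random.default_rng(3)
n, m = 30, 18
x = np.zeros(n)
x[[2, 11, 25]] = [1.5, -2.0, 1.0]                  # 3-sparse signal
G = rng.standard_normal((m, n))
z = basis_pursuit(G, G @ x)
support = np.flatnonzero(np.abs(z) > 0.5)          # the quantize/threshold step
print(support)
```

    The final thresholding line is the paper's key addition to plain basis pursuit: the sign/support pattern is read off the quantized solution rather than the raw LP output.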

  20. Partial Data Compression and Text Indexing via Optimal Suffix Multi-Selection

    Franceschini, Gianni; Muthukrishnan, S


    Consider an input text string T[1,N] drawn from an unbounded alphabet. We study partial computation in suffix-based problems for Data Compression and Text Indexing such as (I) retrieve any segment of K<=N consecutive symbols from the Burrows-Wheeler transform of T, and (II) retrieve any chunk of K<=N consecutive entries of the Suffix Array or the Suffix Tree. Prior literature would take O(N log N) comparisons (and time) to solve these problems by solving the total problem of building the entire Burrows-Wheeler transform or Text Index for T, and performing a post-processing to single out the wanted portion. We introduce a novel adaptive approach to partial computational problems above, and solve both the partial problems in O(K log K + N) comparisons and time, improving the best known running times of O(N log N) for K=o(N). These partial-computation problems are intimately related since they share a common bottleneck: the suffix multi-selection problem, which is to output the suffixes of rank r_1,r_2,......

  1. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    Jacob, J Augustin; Kumar, N Senthil


    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation.

  2. Control Optimization of a LHC 18 KW Cryoplant Warm Compression Station Using Dynamic Simulations

    Bradu, B; Niculescu, S I


    This paper addresses the control optimization of a 4.5 K refrigerator used in the cryogenic system of the Large Hadron Collider (LHC) at CERN. First, the compressor station and the cold box were modeled and simulated under PROCOS (Process and Control Simulator), a simulation environment developed at CERN. Next, parameter identification was performed on the simulator to obtain a simplified model of the system, used to design an Internal Model Control (IMC) scheme enhancing the regulation of the high pressure. Finally, a floating high-pressure control is proposed, using a cascade control to reduce operational costs.

  3. Controlled supercontinuum generation for optimal pulse compression: a time-warp analysis of nonlinear propagation of ultra-broad-band pulses

    Spanner, M; Pshenichnikov, M; Olvo; Ivanov, M


    We describe the virtues of the pump-probe approach for controlled supercontinuum generation in nonlinear media, using the example of pulse compression by cross-phase modulation in dielectrics. Optimization of a strong (pump) pulse and a weak (probe) pulse at the input into the medium opens the route

  4. Optimization of a transition radiation detector for the compressed baryonic matter experiment

    Arend, Andreas


    The Transition Radiation Detector (TRD) of the Compressed Baryonic Matter (CBM) experiment at FAIR has to provide electron-pion separation as well as charged-particle tracking. Within this work, thin and symmetric Multi-Wire Proportional Chambers (MWPCs) without an additional drift region were proposed. The proposed prototypes feature a foil-based entrance window to minimize the material budget and to reduce the absorption probability of the generated TR photons. Based on this conceptual design, multiple prototypes were constructed and their performance is presented within this thesis. With the constructed prototypes of generations II and III, the geometries of the wire and cathode planes were determined to be 4+4 mm and 5+5 mm. Based on the results of a test beam campaign performed in 2011 with these prototypes, new prototypes of generation IV were manufactured and tested in a subsequent test beam campaign in 2012. Radiator prototypes were developed together with the MWPC prototypes: along with regular foil radiators, foam-based radiators made of polyethylene foam were utilized, as well as radiators of a sandwich design using different fiber materials confined between solid foam sheets. For the prototypes without a drift region, simulations of the electrostatic and mechanical properties were performed. The GARFIELD software package was used to simulate the electric field and to determine the resulting drift lines of the generated electrons. The mean gas amplification, depending on the gas used and the applied anode voltage, was simulated, and the gas-gain homogeneity was verified. Since the thin foil-based entrance window deforms under pressure differences between the inside and outside of the MWPC, the variation of the gas gain with this deformation was simulated. The mechanical properties, focusing on the stability of the entrance window, were determined with a finite-element...

  5. Revisiting CFHTLenS cosmic shear: optimal E/B mode decomposition using COSEBIs and compressed COSEBIs

    Asgari, Marika; Heymans, Catherine; Blake, Chris; Harnois-Deraps, Joachim; Schneider, Peter; Van Waerbeke, Ludovic


    We present a re-analysis of the CFHTLenS weak gravitational lensing survey using Complete Orthogonal Sets of E/B-mode Integrals, known as COSEBIs. COSEBIs provide a complete set of functions to efficiently separate E-modes from B-modes and hence allow for robust and stringent tests for systematic errors in the data. This analysis reveals significant B-modes on large angular scales that were not previously seen using the standard E/B decomposition analyses. We find that the significance of the B-modes is enhanced when the data are split by galaxy type and analysed in tomographic redshift bins. Adding tomographic bins to the analysis increases the number of COSEBIs modes, which results in a less-accurate estimation of the covariance matrix from a set of simulations. We therefore also present the first compressed COSEBIs analysis of survey data, where the COSEBIs modes are optimally combined based on their sensitivity to cosmological parameters. In this tomographic CCOSEBIs analysis, we find the B-modes to be consistent with zero when the full range of angular scales are considered.

  6. Simulation and optimization of a 10 A electron gun with electrostatic compression for the electron beam ion source.

    Pikin, A; Beebe, E N; Raparia, D


    Increasing the current density of the electron beam in the ion trap of the Electron Beam Ion Source (EBIS) in BNL's Relativistic Heavy Ion Collider facility would confer several essential benefits. They include increasing the ions' charge states, and therefore, the ions' energy out of the Booster for NASA applications, reducing the influx of residual ions in the ion trap, lowering the average power load on the electron collector, and possibly also reducing the emittance of the extracted ion beam. Here, we discuss our findings from a computer simulation of an electron gun with electrostatic compression for electron current up to 10 A that can deliver a high-current-density electron beam for EBIS. The magnetic field in the cathode-anode gap is formed with a magnetic shield surrounding the gun electrodes and the residual magnetic field on the cathode is (5 ÷ 6) Gs. It was demonstrated that for optimized gun geometry within the electron beam current range of (0.5 ÷ 10) A the amplitude of radial beam oscillations can be maintained close to 4% of the beam radius by adjusting the injection magnetic field generated by a separate magnetic coil. Simulating the performance of the gun by varying geometrical parameters indicated that the original gun model is close to optimum and the requirements to the precision of positioning the gun elements can be easily met with conventional technology.

  7. Simulation and optimization of a 10 A electron gun with electrostatic compression for the electron beam ion source

    Pikin, A.; Beebe, E. N.; Raparia, D. [Brookhaven National Laboratory, Upton, New York 11973 (United States)


    Increasing the current density of the electron beam in the ion trap of the Electron Beam Ion Source (EBIS) in BNL's Relativistic Heavy Ion Collider facility would confer several essential benefits. They include increasing the ions' charge states, and therefore, the ions' energy out of the Booster for NASA applications, reducing the influx of residual ions in the ion trap, lowering the average power load on the electron collector, and possibly also reducing the emittance of the extracted ion beam. Here, we discuss our findings from a computer simulation of an electron gun with electrostatic compression for electron current up to 10 A that can deliver a high-current-density electron beam for EBIS. The magnetic field in the cathode-anode gap is formed with a magnetic shield surrounding the gun electrodes and the residual magnetic field on the cathode is (5 ÷ 6) Gs. It was demonstrated that for optimized gun geometry within the electron beam current range of (0.5 ÷ 10) A the amplitude of radial beam oscillations can be maintained close to 4% of the beam radius by adjusting the injection magnetic field generated by a separate magnetic coil. Simulating the performance of the gun by varying geometrical parameters indicated that the original gun model is close to optimum and the requirements to the precision of positioning the gun elements can be easily met with conventional technology.

  8. Optimal Time-decay Estimates for the Compressible Navier-Stokes Equations in the Critical L^p Framework

    Danchin, Raphaël; Xu, Jiang


    The global existence issue for the isentropic compressible Navier-Stokes equations in the critical regularity framework was addressed in Danchin (Invent Math 141(3):579-614, 2000) more than 15 years ago. However, whether (optimal) time-decay rates could be shown in critical spaces has remained an open question. Here we give a positive answer to that issue not only in the L^2 critical framework of Danchin (Invent Math 141(3):579-614, 2000) but also in the general L^p critical framework of Charve and Danchin (Arch Ration Mech Anal 198(1):233-271, 2010), Chen et al. (Commun Pure Appl Math 63(9):1173-1224, 2010), and Haspot (Arch Ration Mech Anal 202(2):427-460, 2011): we show that, under a mild additional decay assumption that is satisfied if, for example, the low frequencies of the initial data are in L^{p/2}(R^d), the L^p norm (in fact the slightly stronger \dot B^0_{p,1} norm) of the critical global solutions decays like t^{-d(1/p-1/4)} as t → +∞, exactly as first observed by Matsumura and Nishida (Proc Jpn Acad Ser A 55:337-342, 1979) in the case p = 2 and d = 3, for solutions with high Sobolev regularity. Our method relies on refined time-weighted inequalities in the Fourier space, and is likely to be effective for other hyperbolic/parabolic systems encountered in fluid mechanics or mathematical physics.

  9. Speech Compressed Sensing Based on Optimized Observation

    徐倩; 季云云


    Compressed sensing (CS), which combines sampling and compression into a single process, has been a research hotspot in recent years. This paper studies new speech-signal processing techniques based on CS theory. The approximate sparsity of speech signals in the discrete cosine transform (DCT) domain is verified. Building on the optimal-observation theory proposed in reference [1], an optimal observation matrix algorithm for speech signals is introduced. Combining the approximate sparsity of speech with this algorithm, a speech CS method based on optimized observation is proposed. The CS reconstruction performance of speech signals in the DCT domain is analyzed experimentally. The experiments show that speech CS based on the optimal observation performs better than the other algorithms compared, supporting the correctness of the theory in reference [1].

  10. Wavelet image compression

    Pearlman, William A


    This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone, uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. Then the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 S
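The null-zone (dead-zone) uniform quantizer mentioned above widens the bin around zero so that small subband coefficients are zeroed. A minimal sketch, assuming a zero bin twice the step width and midpoint reconstruction (both details vary between coders; all names are illustrative):

```python
def deadzone_quantize(x, step, zone_scale=2.0):
    """Dead-zone uniform quantizer: values inside the widened zero bin
    (the 'null zone') map to index 0; outside it, uniform bins of width `step`."""
    half_zone = zone_scale * step / 2.0
    if abs(x) < half_zone:
        return 0
    sign = 1 if x > 0 else -1
    return sign * int((abs(x) - half_zone) // step + 1)

def dequantize(q, step, zone_scale=2.0):
    """Reconstruct at the midpoint of the bin the index refers to."""
    if q == 0:
        return 0.0
    half_zone = zone_scale * step / 2.0
    sign = 1 if q > 0 else -1
    return sign * (half_zone + (abs(q) - 0.5) * step)
```

With step = 1, values in (-1, 1) quantize to index 0, while 1.2 maps to index 1 and reconstructs to 1.5.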

  11. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Lewis, Michael


    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
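The two-stage idea, predict and then entropy-code the residuals, can be sketched as follows. The one-step difference predictor below is a stand-in for the paper's optimized 8-point predictor (whose coefficients are not given here), and all names are illustrative:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a list of symbols."""
    freq = Counter(symbols)
    # Each heap item: (weight, tiebreak, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol alphabet
        return {s: "0" for s in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Predictive step: code differences between neighbouring elevations,
# which are far more concentrated around zero than the raw values.
elevations = [100, 101, 101, 102, 101, 100, 100, 101]
residuals = [elevations[0]] + [b - a for a, b in zip(elevations, elevations[1:])]
code = huffman_code(residuals)
bits = sum(len(code[r]) for r in residuals)
```

The residual distribution is sharply peaked around zero, which is exactly what gives the prediction step its compression advantage over coding raw elevations.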

  12. Global existence and optimal convergence rates of solutions for 3D compressible magneto-micropolar fluid equations

    Wei, Ruiying; Guo, Boling; Li, Yin


    The Cauchy problem for the three-dimensional compressible magneto-micropolar fluid equations is considered. Existence of global-in-time smooth solutions is established under the condition that the initial data are small perturbations of some given constant state. Moreover, we obtain the time decay rates of the higher-order spatial derivatives of the solution by combining the Lp-Lq estimates for the linearized equations and the Fourier splitting method, if the initial perturbation is small in H3-norm and bounded in L1-norm.

  13. Optimization of the start of caving by compressed air techniques; Optimizacion del Arranque en el sutiraje mediante tecnicas de aire comprimido



    High-pressure compressed-air shots have begun to be used for coal caving in horizontal sublevel caving workings as an alternative to explosives, since they do not constrain the winning cycles and cause less deterioration of the vein walls. Despite these advantages, the influence of the different parameters on shot results is not well known. For this reason, a research project has been carried out to improve the high-pressure compressed-air technique, extend the system's implementation, and reduce winning costs in sublevel caving workings. The research consisted of developing a numerical model and performing reduced-scale and real-scale tests. The model describes the fragmentation of brittle material under dynamic loading and has been implemented in a code. The tests allow the influence of the different parameters to be studied and the numerical model to be validated. The main results are, on the one hand, a numerical model that allows the best shot plan to be defined for the user's working conditions and, on the other hand, proof of the strong influence of the air volume on the disruptive strength. (Author)

  14. Thermomechanical process optimization of U-10 wt% Mo - Part 1: high-temperature compressive properties and microstructure

    Joshi, Vineet V.; Nyberg, Eric A.; Lavender, Curt A.; Paxton, Dean; Garmestani, Hamid; Burkes, Douglas E.


    Nuclear power research facilities require alternatives to existing highly enriched uranium alloy fuel. One option for a high density metal fuel is uranium alloyed with 10 wt% molybdenum (U-10Mo). Fuel fabrication process development requires specific mechanical property data that, to date has been unavailable. In this work, as-cast samples were compression tested at three strain rates over a temperature range of 400-800 °C to provide data for hot rolling and extrusion modeling. The results indicate that with increasing test temperature the U-10Mo flow stress decreases and becomes more sensitive to strain rate. In addition, above the eutectoid transformation temperature, the drop in material flow stress is prominent and shows a strain-softening behavior, especially at lower strain rates. Room temperature X-ray diffraction and scanning electron microscopy combined with energy dispersive spectroscopy analysis of the as-cast and compression tested samples were conducted. The analysis revealed that the as-cast samples and the samples tested below the eutectoid transformation temperature were predominantly γ phase with varying concentration of molybdenum, whereas the ones tested above the eutectoid transformation temperature underwent significant homogenization.

  15. A joint application of optimal threshold based discrete cosine transform and ASCII encoding for ECG data compression with its inherent encryption.

    Pandey, Anukul; Singh, Butta; Saini, Barjinder Singh; Sood, Neetu


    In this paper, a joint use of the discrete cosine transform (DCT) and differential pulse code modulation (DPCM) based quantization is presented for predefined quality controlled electrocardiogram (ECG) data compression. The formulated approach exploits the energy compaction property in the transformed domain. The DPCM quantization has been applied to zero-sequence grouped DCT coefficients that were optimally thresholded via the Regula-Falsi method. The generated sequence is encoded using Huffman coding. This encoded series is further converted to a valid ASCII code using the standard codebook for transmission purposes. Such a coded series possesses inherent encryption capability. The proposed technique is validated on all 48 records of the standard MIT-BIH database using different measures for compression and encryption. The acquisition time was chosen in accordance with that used in the literature for fair comparison with contemporary state-of-the-art approaches. The chosen measures are (1) compression ratio (CR), (2) percent root mean square difference (PRD), (3) percent root mean square difference without base (PRD1), (4) percent root mean square difference normalized (PRDN), (5) root mean square (RMS) error, (6) signal to noise ratio (SNR), (7) quality score (QS), (8) entropy, (9) entropy score (ES) and (10) correlation coefficient (r_{x,y}). Prominently, the average values of CR, PRD and QS were 18.03, 1.06, and 17.57 respectively. Similarly, the mean encryption metrics, i.e. entropy, ES and r_{x,y}, were 7.9692, 0.9962 and 0.0113 respectively. The novelty of combining the approaches is justified by the values of these metrics, which are significantly better than those of the comparison counterparts.
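Several of the listed measures are simple to state; here is a minimal sketch of CR, PRD, and QS, assuming the common definitions (PRD computed without baseline removal, QS = CR/PRD). Function names are illustrative, not from the paper:

```python
import math

def prd(original, reconstructed):
    """Percent root-mean-square difference between two equal-length signals."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record divided by size of the coded record."""
    return original_bits / compressed_bits

def quality_score(cr, prd_value):
    """QS = CR / PRD, a combined rate-distortion figure of merit."""
    return cr / prd_value
```

For instance, a record compressed 20:1 with a PRD of 2% scores QS = 10; higher QS means more compression per unit of distortion.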

  16. "Compressed" Compressed Sensing

    Reeves, Galen


    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g. discrete valued vectors or large distortions) the number of samples can be decreased. Interestingly though, it is also shown that in many cases no reduction is possible.

  17. Optimized Scheduling of Vehicle Body CAN Based on Data Compression


    To address the conflict between the growing number of vehicle-body network nodes and the limited bandwidth of low-speed CAN, a data compression technique is introduced to reduce the volume of transmitted data. Based on an analysis of how vehicle-body network data change, a changed-first (CF) optimized scheduling algorithm built on data compression is proposed, and the CF algorithm's compression, decompression, and scheduling principles are described. The CF algorithm was applied to the design of a vehicle-body control system. The results of theoretical analysis and experimental verification indicate that the algorithm markedly reduces the body CAN bus load, improves the real-time performance of the information, and increases the expansion flexibility of the body CAN bus.

  18. [Optimization of technological processes for the preparation of tablets with a low content of warfarin by direct compression].

    Muselík, Jan; Franc, Aleš; Starková, Jana; Matějková, Zuzana


    Warfarin is a drug with a narrow therapeutic index. It is commercially available in the form of immediate-release tablets from many generic manufacturers. Though attempts are being made to replace it with new drugs, it is still the antithrombotic agent of first choice. In the past there were cases when, after replacement of the original brand preparation with a generic one, the patient suffered complications such as loss of anticoagulation control and an increased frequency of visits to the physician. One of the critical parameters of tablets containing an active substance with a narrow therapeutic index (NTI) is content uniformity, which must comply with the pharmacopoeial requirements as well as the strict criteria of regulatory authorities in the validation of the manufacture of the solid dosage form. Content uniformity is affected by a number of factors such as density, particle shape and size distribution, electrostatic charge, and concentration of the individual components. Of the technological parameters, it is mainly the intensity and length of mixing, the shape of the mixing vessel and the mixer, the size of the charge, or the degree of filling of the mixing device, etc. This paper deals with the influence of the mixing time and drug concentration on the content uniformity of warfarin-containing tablets. In mixing solid mixtures in which the active substance is present at a low concentration, so-called mixing-out and segregation of the active substance occur. For this reason it is necessary to optimize the mixing period. This study optimized the mixing time of mixtures prepared using the patented technology of the Veterinary and Pharmaceutical University Brno and further prepared tablets with varying warfarin content (2-10 mg) from a common blend, which fulfil the pharmacopoeial requirements as well as the requirements of regulatory authorities for content

  19. Optimization of Three-Dimensional (3D) Chemical Imaging by Soft X-Ray Spectro-Tomography Using a Compressed Sensing Algorithm.

    Wu, Juan; Lerotic, Mirna; Collins, Sean; Leary, Rowan; Saghi, Zineb; Midgley, Paul; Berejnov, Slava; Susac, Darija; Stumper, Juergen; Singh, Gurvinder; Hitchcock, Adam P


    Soft X-ray spectro-tomography provides three-dimensional (3D) chemical mapping based on natural X-ray absorption properties. Since radiation damage is intrinsic to X-ray absorption, it is important to find ways to maximize signal within a given dose. For tomography, using the smallest number of tilt series images that gives a faithful reconstruction is one such method. Compressed sensing (CS) methods have relatively recently been applied to tomographic reconstruction algorithms, providing faithful 3D reconstructions with a much smaller number of projection images than when conventional reconstruction methods are used. Here, CS is applied in the context of scanning transmission X-ray microscopy tomography. Reconstructions by weighted back-projection, the simultaneous iterative reconstruction technique, and CS are compared. The effects of varying tilt angle increment and angular range for the tomographic reconstructions are examined. Optimization of the regularization parameter in the CS reconstruction is explored and discussed. The comparisons show that CS can provide improved reconstruction fidelity relative to weighted back-projection and simultaneous iterative reconstruction techniques, with increasingly pronounced advantages as the angular sampling is reduced. In particular, missing wedge artifacts are significantly reduced and there is enhanced recovery of sharp edges. Examples of using CS for low-dose scanning transmission X-ray microscopy spectroscopic tomography are presented.

  20. Ultrasound beamforming using compressed data.

    Li, Yen-Feng; Li, Pai-Chi


    The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks and several tiles before JPEG and JPEG2000 data compression is applied, respectively. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression to produce an average error of lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8.

  1. Optimizing Chest Compression to Rescue Ventilation Ratios During One-Rescuer CPR by Professionals and Lay Persons: Children are Not Just Little Adults

    Babbs, Charles F.; Nadkarni, Vinay


    Objective: To estimate the optimum ratio of chest compressions to ventilations for one-rescuer CPR that maximizes systemic oxygen delivery in children. Method: Equations describing oxygen delivery and blood flow during CPR as functions of the number of compressions and the number of ventilations delivered over time were adapted from the former work of Babbs and Kern. These equations were solved explicitly as a function of body weight, using scaling algorithms based upon principles of developme...

  2. Maxwell's Demon and Data Compression

    Hosoya, Akio; Shikano, Yutaka


    In an asymmetric Szilard engine model of Maxwell's demon, we show the equivalence between information-theoretical and thermodynamic entropies when the demon erases information optimally. The work gained by the engine can be exactly canceled out by the work necessary to reset the demon's memory after optimal data compression à la Shannon before the erasure.

  3. Optimism

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.


    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  4. Optimization

    Pearce, Charles


    Focuses on mathematical structure, and on real-world applications. This book includes developments in several optimization-related topics such as decision theory, linear programming, turnpike theory, duality theory, convex analysis, and queuing theory.

  5. Wellhead compression

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)


    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)

  6. Compressive beamforming

    Xenaki, Angeliki; Mosegaard, Klaus


    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  7. Partial transparency of compressed wood

    Sugimoto, Hiroyuki; Sugimori, Masatoshi


    We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path in the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed enough to close the lumina showed optical transparency. Because compressing wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  8. Development of a pressure-measuring device to optimize compression treatment of lymphedema and evaluation of change in garment pressure with simulated wear and tear.

    Brorson, Håkan; Hansson, Emma; Jense, Erik; Freccero, Carolin


    The use of compression garments in treating lymphedema following treatment of genital (penile, testicular, uterine, cervical) and breast cancer is a well-established practice. Although compression garments are classified into compression classes, little is known about the actual subgarment pressure exerted along the extremity. The aims of this study were to establish an in vitro method for measuring subgarment pressure along the extremity and to analyze the initial subgarment pressure, and its change over time, of compression garments from three manufacturers. The measurements were performed with I-scan(®) (Tekscan Inc.) pressure-measuring equipment once a week over a period of 4 weeks. Wear and tear was simulated by washing the garments and putting them on plastic legs every day. There was a statistically significant difference between the garments of some of the manufacturers, but no difference between garments from the same manufacturer. No significant decrease in subgarment pressure was observed during the trial period. The study demonstrated that the Tekscan pressure-measuring equipment can measure subgarment pressure in vitro. The results may indicate that there is a difference in the subgarment pressure exerted by garments from different manufacturers and that there is no clear decrease in subgarment pressure during the first four weeks of use.

  9. models for predicting compressive strength and water absorption of ...


    combine laterite and quarry dust in sandcrete blocks or concrete are few. One of ... model for optimization of compressive strength of sand-laterite blocks using ... compressive strength of Pulverised Fuel Ash-Cement concrete''. IOSR Journal of ...

  10. On Network Functional Compression

    Feizi, Soheil


    In this paper, we consider different aspects of the network functional compression problem where computation of a function (or, some functions) of sources located at certain nodes in a network is desired at receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions, particularly in terms of the network topology, the functions and the characteristics of the sources. In this paper, we present results that significantly relax these assumptions. Firstly, we consider this problem for an arbitrary tree network and asymptotically lossless computation. We show that, for depth one trees with correlated sources, or for general trees with independent sources, a modularized coding scheme based on graph colorings and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds. For a general tree network with independent sources, optimal computation to be performed at intermediate nodes is derived. We introduce a necessary and sufficient condition...

  11. Image Compression Using Harmony Search Algorithm

    Ryan Rey M. Daga


    Image compression techniques are important and useful in data storage and image transmission through the Internet. These techniques eliminate redundant information in an image, which minimizes its physical storage requirement. Numerous image compression algorithms have been developed, but the resulting quality is still less than optimal. The Harmony Search algorithm (HSA), a meta-heuristic optimization algorithm inspired by the music improvisation process of musicians, was applied as the underlying algorithm for image compression. Experimental results show that it is feasible to use the harmony search algorithm for image compression. The HSA-based image compression technique was able to compress colored and grayscale images with minimal visual information loss.
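The Harmony Search loop itself is compact; a minimal sketch follows, using the textbook parameter names (harmony memory size HMS, memory-considering rate HMCR, pitch-adjusting rate PAR) and a toy sphere objective rather than the paper's image-compression objective. All names and parameter values are illustrative:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal Harmony Search sketch minimising f over the box `bounds`."""
    rng = random.Random(seed)
    bw = [0.05 * (hi - lo) for lo, hi in bounds]  # pitch-adjust bandwidth
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=f)  # best harmony first, worst last
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # take a pitch from memory...
                x = rng.choice(memory)[d]
                if rng.random() < par:         # ...and maybe adjust it
                    x += rng.uniform(-bw[d], bw[d])
            else:                              # or improvise a fresh pitch
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        if f(new) < f(memory[-1]):             # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

# Toy use: minimise the 2-D sphere function over [-5, 5]^2
best = harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```

In the image setting, the "harmony" vector would encode the compression parameters being tuned, and f would score the reconstructed image.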

  12. New Theory and Algorithms for Compressive Sensing


    are compressed by a factor of 10 or more when expressed in terms of their largest Fourier or wavelet coefficients. The usual approach to acquiring a... information conversion. 2.2.1 Compressive sensing background: Compressive Sensing (CS) provides a framework for acquisition of an N × 1 discrete-time signal... (1) This optimization problem, also known as Basis Pursuit with Denoising (BPDN) [10], can be solved with traditional convex programming techniques

  13. Evolution Strategies for Laser Pulse Compression

    Monmarché, Nicolas; Fanciulli, Riccardo; Willmes, Lars; Talbi, El-Ghazali; Savolainen, Janne; Collet, Pierre; Schoenauer, Marc; van der Walle, P.; Lutton, Evelyne; Back, Thomas; Herek, Jennifer Lynn


    This study describes first steps taken to bring evolutionary optimization technology from computer simulations to real-world experimentation in physics laboratories. The approach taken considers a well-understood Laser Pulse Compression problem accessible both to simulation and laboratory experiment.

  14. Design Optimization of a Transonic-Fan Rotor Using Numerical Computations of the Full Compressible Navier-Stokes Equations and Simplex Algorithm

    M. A. Aziz


    The design of a transonic-fan rotor is optimized using numerical computations of the full three-dimensional Navier-Stokes equations. The CFDRC-ACE multiphysics module, a pressure-based solver, is used for the numerical simulation, and the code is coupled with a simplex optimization algorithm. The optimization process starts from a suitable design point obtained using low-fidelity analytical methods based on experimental correlations for the pressure losses and blade deviation angle. The fan blade shape is defined by its stacking line and airfoil shape, which are taken as the optimization parameters. The stacking line is defined by lean, sweep, and skew, while the blade airfoil shape is modified through the thickness and camber distributions. The optimization maximizes the rotor total pressure ratio while keeping the rotor efficiency and surge margin above certain required values. The results obtained are verified against the experimental data of Rotor 67. In addition, the results for the optimized fan indicate that the optimum design is leaned in the direction of rotation and has a forward sweep from the hub to the mean section and a backward sweep to the tip. The pressure ratio increases from 1.427 to 1.627 at the design speed and mass flow rate.

  15. Comparison and Analysis of Three Optimized Fractal Image Compression Algorithms



    Fractal image compression (FIC) is an image compression algorithm based on partitioned iterated function systems (PIFS); that is, the self-similarity of natural images is used to compress the data. However, its huge encoding time limits its practical application. The cost of FIC is concentrated in three areas: searching all domain blocks for the best-matching block of every range block; computing, quantizing, and storing all affine transformation parameters; and partitioning the image. To overcome this high computational cost, this paper uses optimization algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO) to reduce the search space for self-similarity in the given image and speed up encoding. Experimental results show that the optimized FIC effectively reduces encoding time while maintaining the peak signal-to-noise ratio.

  16. Effects of Sequence Partitioning on Compression Rate

    Alagoz, B Baykant


    In this paper, a theoretical study is carried out of the effects of splitting a data sequence into packs of data sets. We prove that a partitioning of a data sequence can be found such that the entropy rate of each subsequence is lower than the entropy rate of the source. The effects of sequence partitioning on the overall compression rate are discussed on the basis of partitioning statistics, and an optimization problem for an optimal partition is then defined to improve the overall compression rate of a sequence.
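The core observation, that a partition can yield subsequences whose entropy is below that of the whole sequence, can be illustrated with zeroth-order empirical entropy on a toy string (a simplification of the paper's entropy-rate argument):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Zeroth-order empirical entropy (bits/symbol) of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A sequence whose halves are individually far more predictable
# than the sequence taken as a whole:
seq = "aaaabbbb"
whole = empirical_entropy(seq)                                    # 1.0 bit/symbol
parts = [empirical_entropy(seq[:4]), empirical_entropy(seq[4:])]  # 0.0 each
```

Each half is a constant run (0 bits/symbol), while the whole sequence needs a full bit per symbol, so a coder aware of the partition can do strictly better, at the price of describing the partition itself.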

  17. Novel Optimization Method for the Projection Matrix in Compressed Sensing Theory

    吴光文; 张爱军; 王昌明


    Considering the influence of the projection matrix on Compressed Sensing (CS), a novel method is proposed to optimize the projection matrix. To improve the reconstruction precision and the stability of the projection-matrix optimization, the proposed method uses a differentiable threshold function to shrink the off-diagonal entries of the Gram matrix, which correspond to the mutual coherence between the projection matrix and the sparse dictionary, and introduces a gradient-descent approach based on the Wolfe conditions to solve for the optimal projection matrix. The Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP) algorithms are applied to solve the minimum l0-norm optimization problem, and compressed sensing is used to sense and reconstruct random sparse vectors, wavelet noise test signals, and images. Simulation results show that the proposed projection-matrix optimization substantially improves the quality of the reconstruction.
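For unit-norm columns, the off-diagonal entries of the Gram matrix G = DᵀD are exactly the normalised column inner products whose maximum defines the mutual coherence being shrunk above. A small pure-Python sketch of that quantity (illustrative only, not the paper's optimization algorithm):

```python
import math

def mutual_coherence(matrix):
    """Largest absolute normalised inner product between distinct columns
    of `matrix`, given as a list of rows."""
    ncols = len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(ncols)]
    norms = [math.sqrt(sum(x * x for x in c)) for c in cols]
    best = 0.0
    for i in range(ncols):
        for j in range(i + 1, ncols):
            dot = sum(a * b for a, b in zip(cols[i], cols[j]))
            best = max(best, abs(dot) / (norms[i] * norms[j]))
    return best
```

For the 2×3 example [[1, 0, 1], [0, 1, 1]], the coherence is 1/√2 ≈ 0.707, driven by the third column's overlap with each basis vector; lowering this value is what the shrinkage of the Gram matrix's off-diagonal entries aims for.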

  18. Analysis of Energy-Saving Applications of Optimized Unit Configuration in a Compressed Air System



    The production process of a compressed air system is analyzed for optimal unit configuration. Lean Six Sigma management tools are applied to analyze, control and avoid waste in the process configuration, while also focusing on process improvement and identifying future configuration optimization measures, avoiding an electrical retrofit of the system, and consolidating the energy-saving optimization scheme and the energy savings achieved.

  19. Optimized Local Trigonometric Bases with Nonuniform Partitions

    Qiao Fang LIAN; Yong Ge WANG; Dun Yan YAN


    The authors provide optimized local trigonometric bases with nonuniform partitions which efficiently compress trigonometric functions. Numerical examples demonstrate that in many cases the proposed bases provide better compression than the optimized bases with uniform partitions obtained by Matviyenko.

  20. Satellite data compression

    Huang, Bormin


    Satellite Data Compression covers recent progress in compression techniques for multispectral, hyperspectral and ultra spectral data. A survey of recent advances in the fields of satellite communications, remote sensing and geographical information systems is included. Satellite Data Compression, contributed by leaders in this field, is the first book available on satellite data compression. It covers onboard compression methodology and hardware developments in several space agencies. Case studies are presented on recent advances in satellite data compression techniques via various prediction-

  1. Technique for chest compressions in adult CPR

    Rajab Taufiek K


    Chest compressions have saved the lives of countless patients in cardiac arrest as they generate a small but critical amount of blood flow to the heart and brain. This is achieved by direct cardiac massage as well as a thoracic pump mechanism. In order to optimize blood flow excellent chest compression technique is critical. Thus, the quality of the delivered chest compressions is a pivotal determinant of successful resuscitation. If a patient is found unresponsive without a definite pulse or normal breathing then the responder should assume that this patient is in cardiac arrest, activate the emergency response system and immediately start chest compressions. Contra-indications to starting chest compressions include a valid Do Not Attempt Resuscitation Order. Optimal technique for adult chest compressions includes positioning the patient supine, and pushing hard and fast over the center of the chest with the outstretched arms perpendicular to the patient's chest. The rate should be at least 100 compressions per minute and any interruptions should be minimized to achieve a minimum of 60 actually delivered compressions per minute. Aggressive rotation of compressors prevents decline of chest compression quality due to fatigue. Chest compressions are terminated following return of spontaneous circulation. Unconscious patients with normal breathing are placed in the recovery position. If there is no return of spontaneous circulation, then the decision to terminate chest compressions is based on the clinical judgment that the patient's cardiac arrest is unresponsive to treatment. Finally, it is important that family and patients' loved ones who witness chest compressions be treated with consideration and sensitivity.

  2. Fpack and Funpack User's Guide: FITS Image Compression Utilities

    Pence, William; White, Rick


    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see ...). The associated funpack program restores the compressed image file back to its original state (if a lossless compression algorithm is used). (An experimental method for compressing FITS binary tables is also available; see section 7.) These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression options.

  3. Instability of ties in compression

    Buch-Hansen, Thomas Cornelius


    Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall of a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of them are iterative, since … connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in the case of pure compression, which gives an optimal load-bearing capacity. The model is illustrated with examples from practice.

  4. Perceptually Lossless Wavelet Compression

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John


    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
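The level-to-frequency relation quoted above is a direct one-liner (a transcription of the abstract's formula, nothing more):

```python
# Spatial frequency of DWT level L at display resolution r (pixels/degree),
# per the abstract: f = r * 2**(-L).
def wavelet_frequency(r, L):
    """Spatial frequency (cycles/degree) of DWT level L at resolution r."""
    return r * 2 ** (-L)

# Example: a 32 pixels/degree display; frequency halves per level.
for L in range(1, 5):
    print(L, wavelet_frequency(32, L))  # 16.0, 8.0, 4.0, 2.0
```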

  5. Optimization Design of a Two-Dimensional Hypersonic Curved Compression Surface with Controllable Mach Number Distribution

    翟永玺; 张堃元; 王磊; 李永洲; 张林


    A parametric study of a curved compression surface with controllable Mach number distribution was carried out to find how the design parameters affect the performance of the compression surface. On this basis, a polynomial response-surface proxy model was built for multi-objective optimization, and a hypersonic two-dimensional curved-shock inlet was designed based on the optimization result; its performance was compared with a three-ramp compression inlet designed under the same constraints. Results indicate that among the design parameters, the initial compression angle θ and the factors C and md1 have the greatest effect. The flow coefficient of the new inlet reaches 0.769 at Mach 4. Over the Mach 4-7 range the two inlets have essentially the same mass capture ratio, while the new inlet has higher total pressure recovery at the throat and outlet sections. Compared with the corresponding three-ramp inlet, the total pressure recovery of the throat section of the new inlet increased by 6.5% at Mach 4, 8.4% at Mach 6, and 10.7% at Mach 7.

  6. Optimal Design of the Discharging Device for a Rear-Loading Compressive Garbage Truck Based on ADAMS

    蒲明辉; 苏飞; 李凯; 孙青; 欧洪彪


    In order to obtain the parameters of the discharging device of a rear-loading compressive garbage truck under the minimum discharging-cylinder thrust, while ensuring a given body size and complete unloading of the garbage, the discharging device is simplified, and model analysis and mechanical analysis are carried out. ADAMS software is then applied to establish a parameterized model, and optimal simulation analysis is performed, yielding a rational installation angle for the discharge cylinder, the bending angle of the push board and the depth of the push board. This provides a basis for selecting the geometrical parameters of the push-board mechanism and an appropriate discharge cylinder for rear-loading compressive garbage trucks.

  7. Compressive Sensing and its Application: from Sparse to Low-rank Regularized Optimization

    马坚伟; 徐杰; 鲍跃全; 于四伟


    Compressive sensing/compressive sampling (CS) is a novel information theory proposed recently. CS provides a new sampling theory to reduce data acquisition, stating that sparse or compressible signals can be exactly reconstructed from highly incomplete random sets of measurements. CS breaks through the restriction of the Shannon theorem on the sampling frequency, and can use fewer sampling resources, a higher sampling rate and lower hardware and software complexity to obtain the required measurements. CS has been used widely in many fields, including digital cameras, medical imaging, remote sensing, seismic exploration, multimedia hybrid coding, communications and structural health monitoring. This article first summarizes some key issues in CS, then discusses the development of the optimization algorithms in CS from sparsity constraints to low-rank constraints, and finally reviews several related applications of CS in remote sensing and seismic exploration.
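As a concrete instance of the sparsity-constrained optimization the survey discusses, the following sketch recovers a sparse vector from random Gaussian measurements with ISTA (iterative soft-thresholding) for the lasso problem min ||y − Ax||²/2 + λ||x||₁; the sizes and parameters are arbitrary illustrative choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                       # ambient dim, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # m < n measurements

def ista(A, y, lam=0.01, iters=3000):
    """Iterative soft-thresholding for min ||y - Ax||^2/2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small
```

With m = 96 measurements for an 8-sparse vector in dimension 256, the ℓ1 solution recovers the signal up to the small shrinkage bias introduced by λ.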

  8. Metal Hydride Compression

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)


    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic-liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility to utilize waste industrial heat to power the compressor. Beyond conventional H2 supplies of pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
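The heat-driven pressurization described above is commonly modeled with the van 't Hoff relation ln P = ΔS/R − ΔH/(RT) for the hydride plateau pressure. The sketch below estimates a single-stage compression ratio between an absorption and a desorption temperature, using illustrative AB5-type alloy values that are assumptions, not figures from the report:

```python
from math import exp

R = 8.314  # gas constant, J/(mol*K)

def plateau_pressure(T, dH, dS):
    """Van 't Hoff plateau pressure (bar) at temperature T (K), for a
    hydride with desorption enthalpy dH (J/mol H2) and entropy dS
    (J/(mol H2 * K)). Alloy values below are illustrative only."""
    return exp(dS / R - dH / (R * T))

dH, dS = 28_000.0, 110.0                    # assumed AB5-type alloy
p_cold = plateau_pressure(293.0, dH, dS)    # absorb hydrogen at 20 C
p_hot = plateau_pressure(423.0, dH, dS)     # desorb at 150 C
print(p_cold, p_hot, p_hot / p_cold)        # ~30x ratio from one stage
```

The ratio depends only on exp(ΔH/R · (1/T_cold − 1/T_hot)), which is why multistage designs reach high overall ratios with modest temperature swings per stage.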

  9. Image compression algorithm using wavelet transform

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory


    Within multi-resolution analysis, the image compression algorithm using the Haar wavelet has been studied. We have studied the dependence of the image quality on the compression ratio, and obtained the variation of the compression level of the studied images. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
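A minimal sketch of the underlying idea, assuming a one-level 2D Haar transform (average/difference form) and simple coefficient thresholding; the paper's exact pipeline, threshold and quality metrics are not reproduced here:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (averages/differences), quadrant layout."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical differences
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2  # horizontal averages
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2  # horizontal differences
    return np.hstack([a2, d2])

x = np.linspace(0.0, 1.0, 64)
img = np.outer(x, x)                       # a smooth 64x64 test image
coeffs = haar2d(img)
thr = 0.01
kept = np.abs(coeffs) > thr                # discard near-zero details
ratio = coeffs.size / max(int(kept.sum()), 1)
print(round(ratio, 2))
```

On smooth images the three detail quadrants are almost entirely below the threshold, so the kept coefficients concentrate in the low-pass quadrant and the ratio exceeds 4 from a single level.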

  10. Video Coding Technique using MPEG Compression Standards

    A. J. Falade


    Digital video compression technologies have become part of life, in the way visual information is created, communicated and consumed. Some application areas of video compression focus on the problem of optimizing storage space and transmission bandwidth (BW). The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, and is used in Moving Picture Expert Group (MPEG) encoding standards. Thus, several video compression algorithms have been developed to reduce the data quantity and provide an acceptable quality standard. In the proposed study, the Matlab Simulink Model (MSM) has been used for video coding/compression. The approach is more modern and reduces error-resilience image distortion.

  11. Hyperspectral image data compression based on DSP

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin


    The huge data volume of hyperspectral images challenges their transport and storage. It is necessary to find an effective method to compress the hyperspectral image. Through analysis and comparison of current algorithms, a mixed compression algorithm based on prediction, integer wavelet transform and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopt a high-powered Digital Signal Processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm on the DSP runs much faster than the algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  12. Typical reconstruction performance for distributed compressed sensing based on ℓ2,1-norm regularized least square and Bayesian optimal reconstruction: influences of noise

    Shiraki, Yoshifumi; Kabashima, Yoshiyuki


    A signal model called joint sparse model 2 (JSM-2), or the multiple measurement vector problem, in which all sparse signals share their support, is important for dealing with practical signal processing problems. In this paper, we investigate the typical reconstruction performance of noisy measurement JSM-2 problems for ℓ2,1-norm regularized least square reconstruction and the Bayesian optimal reconstruction scheme in terms of mean square error. Employing the replica method, we show that these schemes, which exploit the knowledge of the sharing of the signal support, can recover the signals more precisely as the number of channels increases. In addition, we compare the reconstruction performance of two different ensembles of observation matrices: one composed of independent and identically distributed random Gaussian entries, and the other designed so that row vectors are orthogonal to one another. As reported for the single-channel case in earlier studies, our analysis indicates that the latter ensemble offers better performance than the former for the noisy JSM-2 problem. The results of numerical experiments with a computationally feasible approximation algorithm we developed for this study agree with the theoretical estimation.
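The ℓ2,1 penalty sums the ℓ2 norms of the rows of the signal matrix (one row per support position, one column per channel); its proximal operator is a row-wise group soft-threshold. A small self-contained sketch of that operator, independent of the paper's replica analysis:

```python
import numpy as np

def prox_l21(X, t):
    """Proximal operator of t*||X||_{2,1}: shrink each row of X toward
    zero by t in l2 norm, zeroing rows whose norm is below t."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3.0, 4.0],     # row norm 5 -> shrunk to norm 4
              [0.1, 0.0]])    # row norm 0.1 < t -> zeroed entirely
print(prox_l21(X, 1.0))
```

Zeroing whole rows at once is exactly how the regularizer enforces the shared support across channels that JSM-2 assumes.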

  13. Blind One-Bit Compressive Sampling


    Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0

  14. Focus on Compression Stockings

    ... the stocking every other day with a mild soap. Do not use Woolite™ detergent. Use warm water ... compression clothing will lose its elasticity and its effectiveness. Compression stockings last for about 4-6 months ...

  15. A Compressive Superresolution Display

    Heide, Felix


    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  16. Microbunching and RF Compression

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.


    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  17. Lossless Compression on MRI Images Using SWT.

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G


    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  18. Hyperspectral data compression

    Motta, Giovanni; Storer, James A


    Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.

  19. Compressed gas manifold

    Hildebrand, Richard J.; Wozniak, John J.


    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  20. Compressing Binary Decision Diagrams

    Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter


    The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD and compression will in many cases reduce the size of the BDD to 1...

  3. An adaptive signal compression system with pre-specified reconstruction quality and compression rate.

    Tümer, M Borahan; Demir, Mert C


    Two essential properties of a signal compression method are the compression rate and the distance between the original signal and the reconstruction from the compressed signal. These two properties are used to assess the performance and quality of the method. In a recent work [B. Tümer, B. Demiröz, Lecture Notes in Computer Science-Computer and Information Sciences, volume 2869, chapter Signal Compression Using Growing Cell Structures: A Transformational Approach, Springer Verlag, 2003, pp. 952-959], an adaptive signal compression system (ACS) is presented which defines the performance of the system as a function of the system complexity, system sensitivity and data size. For a compression method, it is desirable to formulate the performance of the system as a function of the system complexity and sensitivity to optimize the performance of the system. It would be further desirable to express the reconstruction quality in terms of the same system parameters so as to know up front what compression rate to end up with for a specific reconstruction quality. In this work, we modify ACS such that the modified ACS (MACS) estimates the reconstruction quality for a given system complexity and sensitivity. Once this relation is identified it is possible to optimize either compression rate or reconstruction quality with respect to system sensitivity and system complexity while limiting the other one.

  4. Soliton compression to few-cycle pulses using quadratic nonlinear photonic crystal fibers: A design study

    Bache, Morten; Moses, Jeffrey; Lægsgaard, Jesper;


    We show theoretically that high-quality soliton compression from ~500 fs to ~10 fs is possible in poled silica photonic crystal fibers using cascaded χ(2):χ(2) nonlinearities. A moderate group-velocity mismatch optimizes the compression.

  5. Characterization of spectral compression of OFDM symbols using optical time lenses

    Røge, Kasper Meldgaard; Guan, Pengyu; Kjøller, Niels-Kristian


    We present a detailed investigation of a double-time-lens subsystem for spectral compression of OFDM symbols. We derive optimized parameter settings by simulations and experimental characterization. The required chirp for OFDM spectral compression is very large.

  6. Compressive phase-only filtering - pattern recognition at extreme compression rates

    Pastor-Calle, David; Mikolajczyk, Michal; Kotynski, Rafal


    We introduce a compressive pattern recognition method for non-adaptive Walsh-Hadamard or discrete noiselet-based compressive measurements and show that images measured at extremely high compression rates may still contain sufficient information for pattern recognition and target localization. We report on a compressive pattern recognition experiment with a single-pixel detector with which we validate the proposed method. The correlation signals produced with the phase-only matched filter or with the pure-phase correlation are obtained from the compressive measurements through lasso optimization without the need to reconstruct the original image. This is possible owing to the two properties of phase-only filtering: such filtering is a unitary circulant transform, and the correlation plane it produces in pattern recognition applications is usually sparse.

  7. A Fast Compressed Sensing Reconstruction Algorithm Based on Inner Product Optimization

    刘勇; 魏东红; 毛京丽


    Existing reconstruction algorithms in compressed sensing (CS) theory are commonly time-consuming. A novel reconstruction algorithm based on inner-product optimization is proposed to reduce reconstruction time, and a stopping criterion is derived theoretically. The proposed algorithm computes the inner product of the measurement matrix and the residual only in the first iteration of the reconstruction process; in the remaining iterations, inner products of vectors are calculated instead of matrix products. After the iterations stop, a single least-squares calculation reconstructs the signal. Experiments show that the proposed algorithm greatly reduces reconstruction time without degrading the quality of the signal.
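For context, a baseline OMP sketch is shown below; the per-iteration product `Phi.T @ residual` marked in the comment is exactly the computation the proposed algorithm replaces with cached vector updates after the first iteration. Names and sizes here are illustrative, not from the paper:

```python
import numpy as np

def omp(Phi, y, k):
    """Baseline Orthogonal Matching Pursuit: k greedy support selections
    followed by a least-squares fit on the selected columns."""
    m, n = Phi.shape
    residual, support = y.copy(), []
    for _ in range(k):
        corr = Phi.T @ residual            # the product the paper avoids
        support.append(int(np.argmax(np.abs(corr))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 128, 48, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(Phi, Phi @ x_true, k)
print(np.linalg.norm(x_hat - x_true))
```

Note also that this baseline re-solves the least squares in every iteration, whereas the paper defers it to a single solve after the stopping criterion fires.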

  8. Lossless Medical Image Compression

    Nagashree G


    Image compression has become an important process in today's world of information exchange. Image compression helps in effective utilization of high-speed network resources. Medical image compression is very important for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression are proposed. One uses the combination of the 2D-DWT and FELICS algorithms for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the Integer Wavelet Transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and a comparison of both approaches is shown. We observed an increased compression ratio and higher PSNR values.

  9. Fuzzy energy management strategy for HEV based on particle swarm optimization with compressibility factor

    周美兰; 张宇; 杨子发; 康娣


    For a modified HAFEI Saibao hybrid electric vehicle (HEV), a fuzzy energy-management controller is constructed whose quantization factors are optimized by particle swarm optimization (PSO) with a compressibility factor, using the system torque request and the battery state of charge (SOC) as inputs and the engine torque as the output. In this way, the low precision and limited adaptive capability of conventional fuzzy control are overcome, and the robustness and control accuracy of the fuzzy controller are improved. Simulation results show that the optimized fuzzy control strategy improves fuel economy by 14.19% compared with the traditional fuzzy control algorithm. The battery recovery efficiencies, 18.95%, 23.95% and 14.47% before optimization, increase to 29.37%, 26.20% and 20.12%, respectively. The amounts of NOx, CO and HC in the exhaust, 0.202 g/km, 1.1 g/km and 0.282 g/km before optimization, are reduced to 0.104 g/km, 0.46 g/km and 0.279 g/km, respectively.
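The "compressibility factor" in PSO is commonly identified with Clerc's constriction coefficient χ = 2/|2 − φ − sqrt(φ² − 4φ)| with φ = c1 + c2 > 4, which damps the velocity update and guarantees convergence. A generic sketch on a toy 1-D objective (not the paper's fuzzy-controller tuning problem):

```python
import random
from math import sqrt

def constriction(c1=2.05, c2=2.05):
    """Clerc's constriction (compressibility) factor; ~0.7298 for phi=4.1."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - sqrt(phi * phi - 4.0 * phi))

def pso_minimize(f, lo, hi, n=20, iters=100, seed=0):
    rnd = random.Random(seed)
    chi, c1, c2 = constriction(), 2.05, 2.05
    xs = [rnd.uniform(lo, hi) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                 # particle velocities
    pbest = xs[:]                                  # personal bests
    gbest = min(pbest, key=f)                      # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rnd.random(), rnd.random()
            # Constriction-factor velocity update: chi scales the whole sum.
            vs[i] = chi * (vs[i] + c1 * r1 * (pbest[i] - xs[i])
                                 + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(best)  # close to 3.0
```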

  10. Celiac Artery Compression Syndrome

    Mohammed Muqeetadnan


    Celiac artery compression syndrome is a rare disorder characterized by episodic abdominal pain and weight loss. It is the result of external compression of the celiac artery by the median arcuate ligament. We present a case of celiac artery compression syndrome in a 57-year-old male with severe postprandial abdominal pain and 30-pound weight loss. The patient eventually responded well to surgical division of the median arcuate ligament by laparoscopy.

  11. Error Resilient Video Compression Using Behavior Models

    Jacco R. Taal


    Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  12. Compressed sensing & sparse filtering

    Carmi, Avishy Y; Godsill, Simon J


    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity, and to some extent revolutionised signal processing, is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  13. Stiffness of compression devices

    Giovanni Mosti


    Full Text Available This issue of Veins and Lymphatics collects papers from the International Compression Club (ICC) Meeting on Stiffness of Compression Devices, which took place in Vienna in May 2012. Several studies have demonstrated that the stiffness of compression products plays a major role in their hemodynamic efficacy. According to the European Committee for Standardization (CEN), stiffness is defined as the pressure increase produced by medical compression hosiery (MCH) per 1 cm of increase in leg circumference. In other words, stiffness can be defined as the ability of the bandage/stocking to oppose muscle expansion during contraction.

  14. Compression Ratio Adjuster

    Akkerman, J. W.


    A new mechanism alters the compression ratio of an internal-combustion engine according to load so that the engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. The mechanism ensures that engines operate as efficiently under these conditions as they do at high load and high speed.

  15. Spectral Animation Compression

    Chao Wang; Yang Liu; Xiaohu Guo; Zichun Zhong; Binh Le; Zhigang Deng


    This paper presents a spectral approach to compress dynamic animation consisting of a sequence of homeomorphic manifold meshes. Our new approach directly compresses the field of deformation gradients defined on the surface mesh, by decomposing it into rigid-body motion (rotation) and non-rigid-body deformation (stretching) through polar decomposition. It is known that the rotation group has the algebraic topology of a 3D ring, which is different from other operations like stretching. Thus we compress these two groups separately, using the Manifold Harmonics Transform to drop their high-frequency details. Our experimental results show that the proposed method achieves a good balance between reconstruction quality and compression ratio. We compare our results quantitatively with other existing approaches to animation compression, using standard measurement criteria.

  16. Optimization of technical parameters of breaking Macadamia nut shell and finite element analysis of compression characteristics

    涂灿; 杨薇; 尹青剑; 吕俊龙


    Macadamia nuts have been successfully cultivated as crops in Australia and the USA and were introduced to China for experimental planting in the 1980s. The nut is rich in fat and protein. Current production in China is over 700000 tons annually, but processing technology for macadamia nuts is undeveloped, especially shell breaking, so optimizing the technical parameters of breaking the macadamia nut shell has important significance. An orthogonal design was carried out to optimize the technical parameters of breaking the macadamia nut shell. The loading rate, the loading direction, and the moisture content of the macadamia nut shell were selected as factors, and the integrated kernel rate of macadamia nuts was selected as the evaluation index. Macadamia nuts with different moisture contents, obtained by hot air drying at 55℃, were selected as test samples. The breaking experiment was carried out in an electronic tensile testing machine. The results indicated that the moisture content of the shell and the loading direction had a more significant effect on the integrated kernel rate than the loading rate. The optimal combination of technical parameters was a loading rate of 45 mm/min, horizontal loading direction, and a shell moisture content of 6%-9%; in this case, the highest integrated kernel rate was 93%. The compression test was carried out at a shell moisture content of 6%-9% and a loading rate of 45 mm/min. Average shelling forces were 1018, 2274 and 1173 N in the hilum, width and horizontal directions, respectively. The elastic moduli of the shell calculated by the Hertz contact stress theory were 32.24, 68.63 and 39.65 MPa in the hilum, width and horizontal directions, respectively. The results indicated that the macadamia nut is anisotropic: compression capability was the strongest in the width direction and the weakest in the horizontal direction. The shape of macadamia nut was close to

  17. An efficient compression scheme for bitmap indices

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie


    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH-compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH-compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH-compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH-compressed indices is much faster than with BBC-compressed indices, projection indices, and B-tree indices. In addition, we also verified that the average query response time

  18. Vascular compression syndromes.

    Czihal, Michael; Banafsche, Ramin; Hoffmann, Ulrich; Koeppel, Thomas


    Dealing with vascular compression syndromes is one of the most challenging tasks in vascular medicine practice. This heterogeneous group of disorders is characterised by external compression of primarily healthy arteries and/or veins as well as accompanying nerve structures, carrying the risk of subsequent structural vessel wall and nerve damage. Vascular compression syndromes may severely impair health-related quality of life in affected individuals, who are typically young and otherwise healthy. The diagnostic approach has not been standardised for any of the vascular compression syndromes. Moreover, some degree of positional external compression of blood vessels such as the subclavian and popliteal vessels or the celiac trunk can be found in a significant proportion of healthy individuals. This implies important difficulties in differentiating physiological from pathological findings of clinical examination and diagnostic imaging with provocative manoeuvres. The level of evidence on which treatment decisions regarding surgical decompression with or without revascularisation can be based is generally poor, mostly coming from retrospective single-centre studies. Proper patient selection is critical in order to avoid overtreatment in patients without a clear association between vascular compression and clinical symptoms. With a focus on the thoracic outlet syndrome, the median arcuate ligament syndrome and the popliteal entrapment syndrome, the present article gives a selective literature review on compression syndromes from an interdisciplinary vascular point of view.

  19. Critical Data Compression

    Scoville, John


    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than an equal length of JPEG2000. Computer-generated images typically compress better using this method than through direct lossy coding, as do man...
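    The signal/noise split described above can be mimicked with stand-in components. The toy sketch below uses zlib for the exact stage and a uniform random model for the residual bits; the paper's actual pipeline (JPEG2000, PAQ8l, a fitted noise model) is far more elaborate:

```python
import random
import zlib

# Toy sketch of the two-part split: the significant high bits of each
# 8-bit sample are compressed exactly (zlib standing in for PAQ8l), and
# the residual low bits are regenerated from a noise model on decode
# (uniform here; the paper fits a maximally likely model instead).
def split_compress(samples, keep_bits=4):
    high = bytes(s >> (8 - keep_bits) for s in samples)   # significant part
    return zlib.compress(high, 9)

def decompress_with_noise(blob, keep_bits=4, seed=0):
    rng = random.Random(seed)
    shift = 8 - keep_bits
    return [(h << shift) | rng.randrange(1 << shift)     # modeled noise
            for h in zlib.decompress(blob)]
```

    The reconstruction is only statistically similar to the original, but every sample stays within one low-bit quantum of it, which is the sense in which the decoded data resembles the input.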

  20. Multiband and Lossless Compression of Hyperspectral Images

    Raffaele Pizzolante


    Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (e.g., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, other state-of-the-art lossless compression techniques for hyperspectral images.
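    A toy previous-band predictor (my own illustration, not the authors' 3D-MBLP) shows why exploiting the third dimension pays off: adjacent bands are highly correlated, so their residuals are small and compress well. zlib stands in for the entropy coder:

```python
import zlib

# Each band (a flattened list of 0-255 pixel values) is predicted by the
# band before it; only the mod-256 residuals are entropy-coded. The scheme
# is lossless because the wraparound is inverted exactly on decode.
def compress_bands(bands):
    blobs, prev = [], None
    for band in bands:
        residual = band if prev is None else \
            [(v - p) % 256 for v, p in zip(band, prev)]
        blobs.append(zlib.compress(bytes(residual), 9))
        prev = band
    return blobs

def decompress_bands(blobs):
    bands, prev = [], None
    for blob in blobs:
        residual = list(zlib.decompress(blob))
        band = residual if prev is None else \
            [(r + p) % 256 for r, p in zip(residual, prev)]
        bands.append(band)
        prev = band
    return bands
```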

  1. Compressive wavefront sensing with weak values.

    Howland, Gregory A; Lum, Daniel J; Howell, John C


    We demonstrate a wavefront sensor that unites weak measurement and the compressive-sensing, single-pixel camera. Using a high-resolution spatial light modulator (SLM) as a variable waveplate, we weakly couple an optical field's transverse-position and polarization degrees of freedom. By placing random, binary patterns on the SLM, polarization serves as a meter for directly measuring random projections of the wavefront's real and imaginary components. Compressive-sensing optimization techniques can then recover the wavefront. We acquire high quality, 256 × 256 pixel images of the wavefront from only 10,000 projections. Photon-counting detectors give sub-picowatt sensitivity.

  2. An Enhanced Static Data Compression Scheme Of Bengali Short Message

    Arif, Abu Shamim Mohammod; Islam, Rashedul


    This paper concerns a modified approach to compressing short Bengali text messages for small devices. The prime objective of this research is to establish a low-complexity compression scheme suitable for small devices with limited memory and relatively low processing speed. The aim is not to compress text of any size to its maximum level without any constraint on space and time; rather, the target is to compress short messages to an optimal level that needs minimum space, consumes less time, and places a lower demand on the processor. We have implemented character masking, dictionary matching, the associative rule of data mining, and a hyphenation algorithm for syllable-based compression in hierarchical steps to achieve low-complexity lossless compression of text messages for mobile devices. The digrams are chosen on the basis of an extensive statistical model, and the static Huffman coding is done through the same context.

  3. Review on Lossless Image Compression Techniques for Welding Radiographic Images

    B. Karthikeyan


    Full Text Available Recent developments in image processing allow us to apply it in different domains. The radiography image of a weld joint is one area where image processing techniques can be applied; they can be used to identify the quality of the weld joint. For this, the image has to be stored and processed later in the lab. In order to optimize the use of disk space, compression is required. The aim of this study is to find a suitable and efficient lossless compression technique for radiographic weld images. Image compression is a technique by which the amount of data required to represent information is reduced, so it is effectively carried out by removing redundant data. This study compares different ways of compressing the radiography images using combinations of different lossless compression techniques such as RLE and Huffman coding.

  4. Prediction by Compression

    Ratsaby, Joel


    It is well known that text compression can be achieved by predicting the next symbol in the stream of text data based on the history seen up to the current symbol. The better the prediction, the more skewed the conditional probability distribution of the next symbol and the shorter the codeword that needs to be assigned to represent this next symbol. What about the opposite direction? Suppose we have a black box that can compress a text stream. Can it be used to predict the next symbol in the stream? We introduce a criterion based on the length of the compressed data and use it to predict the next symbol. We examine empirically the prediction error rate and its dependency on some compression parameters.
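    This criterion is easy to prototype with any off-the-shelf compressor standing in for the black box. The sketch below is illustrative only, with zlib as the compressor and a caller-supplied alphabet (both my assumptions, not the paper's setup):

```python
import zlib

# Compression-based prediction: the predicted next symbol is the one
# whose appended compressed length is smallest, i.e. the continuation
# the compressor "expects".
def predict_next(history: bytes, alphabet: bytes = b"ab") -> bytes:
    def clen(s: bytes) -> int:
        return len(zlib.compress(s, 9))
    return min((bytes([c]) for c in alphabet),
               key=lambda sym: clen(history + sym))

# On a strongly periodic history, the compressor favors the symbol
# that continues the pattern.
print(predict_next(b"abc" * 100, alphabet=b"abc"))
```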

  5. LZW Data Compression

    Dheemanth H N


    Full Text Available Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. LZW compression is one of the adaptive dictionary techniques: the dictionary is created while the data are being encoded, so encoding can be done on the fly. The dictionary need not be transmitted, since it can be built up at the receiving end on the fly. If the dictionary overflows, we have to reinitialize the dictionary and add a bit to each one of the code words. Choosing a large dictionary size avoids overflow, but spoils compression. A codebook or dictionary containing the source symbols is constructed. For 8-bit monochrome images, the first 256 words of the dictionary are assigned to the gray levels 0-255; the remaining part of the dictionary is filled with sequences of the gray levels. LZW compression works best when applied to monochrome images and text files that contain repetitive text/patterns.

  6. Shocklets in compressible flows

    袁湘江; 男俊武; 沈清; 李筠


    The mechanism of shocklets is studied theoretically and numerically for stationary fluid, uniform compressible flow, and boundary layer flow. The conditions that trigger shock waves for the sound wave, weak discontinuity, and Tollmien-Schlichting (T-S) wave in compressible flows are investigated. The relations between the three types of waves and shocklets are further analyzed and discussed. Different stages of the shocklet formation process are simulated. The results show that the three waves in compressible flows will evolve into shocklets only when the initial disturbance amplitudes are greater than certain threshold values. In compressible boundary layers, the shocklets evolved from the T-S wave exist only in a finite region near the surface instead of the whole wavefront.

  7. Reference Based Genome Compression

    Chern, Bobbie; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy


    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
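    The mapping-plus-entropy-coder pipeline can be caricatured in a few lines. This toy version is my own simplification: it assumes equal-length genomes differing only by substitutions, and uses pickle plus zlib in place of the paper's mapping generator and entropy coder:

```python
import pickle
import zlib

# The "mapping" here is just a list of (position, base) substitution
# edits against the reference, squeezed by a generic entropy stage.
def compress_vs_reference(target: str, reference: str) -> bytes:
    edits = [(i, t) for i, (t, r) in enumerate(zip(target, reference))
             if t != r]
    return zlib.compress(pickle.dumps(edits), 9)

def decompress_vs_reference(blob: bytes, reference: str) -> str:
    seq = list(reference)
    for i, t in pickle.loads(zlib.decompress(blob)):
        seq[i] = t                     # replay the edits onto the reference
    return "".join(seq)
```

    Because two human genomes differ in a tiny fraction of positions, the edit list, and hence the compressed output, is orders of magnitude smaller than the target itself; that asymmetry is what the reported 2991 MB to 6.99 MB reduction exploits.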

  8. Deep Blind Compressed Sensing

    Singh, Shikha; Singhal, Vanika; Majumdar, Angshul


    This work addresses the problem of extracting deeply learned features directly from compressive measurements. There has been no work in this area. Existing deep learning tools only give good results when applied on the full signal, that too usually after preprocessing. These techniques require the signal to be reconstructed first. In this work we show that by learning directly from the compressed domain, considerably better results can be obtained. This work extends the recently proposed fram...

  9. Reference Based Genome Compression

    Chern, Bobbie; Ochoa, Idoia; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy


    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target gen...

  10. Alternative Compression Garments

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.


    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  11. Mixed raster content segmentation, compression, transmission

    Pavlidis, George


    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  12. Pulse temporal compression by two-stage stimulated Brillouin scattering and laser-induced breakdown

    Liu, Zhaohong; Wang, Yulei; Wang, Hongli; Bai, Zhenxu; Li, Sensen; Zhang, Hengkang; Wang, Yirui; He, Weiming; Lin, Dianyang; Lu, Zhiwei


    A laser pulse temporal compression technique combining stimulated Brillouin scattering (SBS) and laser-induced breakdown (LIB) is proposed in which the leading edge of the laser pulse is compressed using SBS, and the low intensity trailing edge of the laser pulse is truncated by LIB. The feasibility of the proposed scheme is demonstrated by experiments in which a pulse duration of 8 ns is compressed to 170 ps. Higher compression ratios and higher efficiency are expected under optimal experimental conditions.

  13. Sagittal sinus compression is associated with neonatal cerebral sinovenous thrombosis.

    Tan, Marilyn; Deveber, Gabrielle; Shroff, Manohar; Moharir, Mahendra; Pontigon, Anne-Marie; Widjaja, Elisa; Kirton, Adam


    Neonatal cerebral sinovenous thrombosis (CSVT) causes lifelong morbidity. Newborns frequently incur positional occipital bone compression of the superior sagittal sinus (SSS). We hypothesized that SSS compression is associated with neonatal CSVT. Our retrospective case-control study recruited neonates with CSVT (SickKids Children's Stroke Program, January 1992-December 2006). Controls were neonates without CSVT undergoing magnetic resonance or computed tomography venography (institutional imaging database, 2002-2005) who were matched 2 per each case patient. Blinded neuroimaging review by 2 experts quantified SSS compression and head position. Effect of SSS compression on the primary outcome of CSVT was determined (logistic regression). Secondary analyses included the relationship of head position to SSS compression (t test) and group comparisons (cases versus controls, cases with and without compression) for demographic, clinical, and CSVT factors (χ² and Wilcoxon Mann-Whitney tests). Case (n = 55) and control (n = 90) patients had similar ages and delivery modes. SSS compression was common (cases: 43%; controls: 41%). Controlling for gender and head position, SSS compression was associated with CSVT (odds ratio: 2.5 [95% confidence interval: 1.07-5.67]). Compression was associated with greater mean (SD) angle toward head flexion (101.2 [15.0] vs 111.5 [9.7]; P infarction, recanalization, and outcome. Many idiopathic cases had SSS compression (79%). Interrater reliability of compression measurements was high (κ = 0.87). Neonatal SSS compression is common, quantifiable, and associated with CSVT. Optimizing head position and/or developing devices to alleviate mechanical SSS compression may represent a novel means to improve outcomes.

  14. Magnetic Compression Experiment at General Fusion

    Dunlea, Carl; Howard, Stephen; Epp, Kelly; Zawalski, Wade; Kim, Charlson; Fusion Team, General


    The magnetic compression experiment at General Fusion was designed as a repetitive non-destructive test to study plasma physics applicable to Magnetized Target Fusion compression. A spheromak compact torus (CT) is formed with a co-axial gun into a containment region with an hour-glass shaped inner flux conserver and an insulating outer wall. The experiment has external coils to keep the CT off the outer wall (levitation) and then rapidly compress it inwards. Experiments used a variety of levitation/compression field profiles. The optimal configuration was seen to improve levitated CT lifetime by around 50% over that with the original design field. Suppression of impurity influx to the plasma is thought to be a significant factor in the improvement, as supported by spectrometer data. An improved levitation field may reduce the amount of edge plasma and current that intersects the insulating outer wall during the formation process. Higher formation current and stuffing field, and correspondingly higher CT flux, were possible with the improved configuration. Significant field and density compression factors were routinely observed. The level of MHD activity was reduced, and lifetime was increased further by matching the decay rate of the levitation field to that of the CT fields. Details of experimental results and comparisons to equilibrium models and MHD simulations will be presented.

  15. MIMO Radar Using Compressive Sampling

    Yu, Yao; Poor, H Vincent


    A MIMO radar system is proposed for obtaining angle and Doppler information on potential targets. Transmitters and receivers are nodes of a small-scale wireless network and are assumed to be randomly scattered on a disk. The transmit nodes transmit uncorrelated waveforms. Each receive node applies compressive sampling to the received signal to obtain a small number of samples, which the node subsequently forwards to a fusion center. Assuming that the targets are sparsely located in the angle-Doppler space, the fusion center formulates an l1-optimization problem based on the samples forwarded by the receive nodes, the solution of which yields target angle and Doppler information. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than required by other approaches. This implies power savings during the communication phase between the receive nodes and the fusion center. Performance in the presence of a jammer is analyzed for the case of slowly moving targets. Issues rel...

  16. Transverse Compression of Tendons.

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B


    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon.

  17. Versor compression
    Li Hongbo


    In an inner-product space, an invertible vector generates a reflection with respect to a hyperplane, and the Clifford product of several invertible vectors, called a versor in Clifford algebra, generates the composition of the corresponding reflections, which is an orthogonal transformation. Given a versor in a Clifford algebra, finding another sequence of invertible vectors of strictly shorter length but whose Clifford product still equals the input versor is called versor compression. Geometrically, versor compression is equivalent to decomposing an orthogonal transformation into a shorter sequence of reflections. This paper proposes a simple algorithm for compressing versors of symbolic form in Clifford algebra. The algorithm is based on computing the intersections of lines with planes in the corresponding Grassmann-Cayley algebra, and is complete in the case of a Euclidean or Minkowski inner-product space.

  18. Image compression for dermatology

    Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.


    Color 35mm photographic slides are commonly used in dermatology for education, and patient records. An electronic storage and retrieval system for digitized slide images may offer some advantages such as preservation and random access. We have integrated a system based on a personal computer (PC) for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved. Methods to reduce storage requirements and access time through image compression are therefore of interest. This paper contains an evaluation of one such compression method that uses the Hadamard transform implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.

  19. Compressive Shift Retrieval

    Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar


    The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
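    The single-coefficient result rests on the DFT shift theorem: a cyclic shift by s multiplies the k-th coefficient by exp(-2πiks/N), so the phase of the ratio of one pair of coefficients reveals s. A noiseless, stdlib-only sketch (assuming the k-th coefficient of the input is nonzero; k = 1 keeps the phase unambiguous for any shift):

```python
import cmath

# One DFT coefficient: X_k = sum_n x[n] * exp(-2*pi*1j*k*n/N).
def dft_coeff(x, k):
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
               for n in range(N))

# Y_k / X_k = exp(-2*pi*1j*k*s/N) for a cyclic shift by s, so the
# phase of the ratio gives the shift directly.
def estimate_shift(x, y, k=1):
    N = len(x)
    ratio = dft_coeff(y, k) / dft_coeff(x, k)
    return round(-cmath.phase(ratio) * N / (2 * cmath.pi * k)) % N
```

    With noise, larger k trades robustness for ambiguity, which is why the paper's conditions on recoverability matter.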

  20. Graph Compression by BFS

    Alberto Apostolico


    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on several datasets achieve space savings of about 10% over existing methods.

  1. Image data compression investigation

    Myrie, Carlos


    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.

  2. Image compression in local helioseismology

    Löptien, Björn; Gizon, Laurent; Schou, Jesper


    Context. Several upcoming helioseismology space missions are very limited in telemetry and will have to perform extensive data compression. This requires the development of new methods of data compression. Aims. We give an overview of the influence of lossy data compression on local helioseismology. We investigate the effects of several lossy compression methods (quantization, JPEG compression, and smoothing and subsampling) on power spectra and time-distance measurements of supergranulation flows at disk center. Methods. We applied different compression methods to tracked and remapped Dopplergrams obtained by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We determined the signal-to-noise ratio of the travel times computed from the compressed data as a function of the compression efficiency. Results. The basic helioseismic measurements that we consider are very robust to lossy data compression. Even if only the sign of the velocity is used, time-distance helioseismology is still...

  3. Advanced Topology Optimization Methods for Conceptual Architectural Design

    Aage, Niels; Amir, Oded; Clausen, Anders


    in topological optimization: Interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension / compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses...

  4. Fingerprints in Compressed Strings

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li


    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries...
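The "strong properties" of Karp-Rabin fingerprints include compositionality: the fingerprint of a concatenation is computable from the fingerprints and lengths of the parts, which is what makes them useful over grammar-compressed strings. A minimal sketch (not the paper's data structure; the prime and base are illustrative choices):

```python
# Karp-Rabin fingerprint sketch: phi(s) = sum_i s[i] * B^i  (mod P).
# Key property used by string algorithms: phi(xy) is computable from
# phi(x), phi(y), and |x| alone.

P = (1 << 61) - 1   # a large Mersenne prime (illustrative choice)
B = 256             # base

def fingerprint(s: str) -> int:
    h = 0
    for ch in reversed(s.encode()):
        h = (h * B + ch) % P
    return h

def concat_fp(fp_left: int, fp_right: int, len_left: int) -> int:
    # phi(xy) = phi(x) + B^|x| * phi(y)  (mod P)
    return (fp_left + pow(B, len_left, P) * fp_right) % P

x, y = "compress", "ed"
assert concat_fp(fingerprint(x), fingerprint(y), len(x)) == fingerprint(x + y)
```

In the grammar-compressed setting, this composition rule lets fingerprints of all grammar symbols be precomputed bottom-up, so a query for the fingerprint of any substring of S can be assembled from a few stored values.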

  5. Multiple snapshot compressive beamforming

    Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.


    For sound fields observed on an array, compressive sensing (CS) reconstructs the multiple source signals at unknown directions-of-arrival (DOAs) using a sparsity constraint. The DOA estimation is posed as an underdetermined problem expressing the field at each sensor as a phase-lagged superposition...

  6. Compressive CFAR radar detection

    Anitori, L.; Otten, M.P.G.; Rossum, W.L. van; Maleki, A.; Baraniuk, R.


    In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate Messag

  7. Compressive CFAR Radar Processing

    Anitori, L.; Rossum, W.L. van; Otten, M.P.G.; Maleki, A.; Baraniuk, R.


    In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Mess

  8. Beamforming Using Compressive Sensing


    dB to align the peak at 7.3°. Comparing peaks to valleys, compressive sensing provides a greater main-to-interference (and noise) ratio...elements. Acknowledgments This research was supported by the Office of Naval Research. The authors would like to especially thank Roger Gauss and Joseph

  9. A Novel Memory Compress Algorithm for Arbitrary Waveform Generator

    吕铁良; 仇玉林


    A memory compression algorithm for a 12-bit Arbitrary Waveform Generator (AWG) is presented and optimized. It can compress the waveform memory for a sinusoid to 16×13 bits with a Spurious-Free Dynamic Range (SFDR) of 90.7 dBc (1/1890 of the uncompressed memory at the same SFDR) and to 8×12 bits with an SFDR of 79 dBc. Its hardware cost is six adders and two multipliers. Exploiting this memory compression technique makes it possible to build a high-performance AWG on a chip.

  10. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Yongjian Nian


    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.
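The near-lossless guarantee above rests on a basic property of uniform scalar quantization: with step size q, per-sample reconstruction error is bounded by q/2. A minimal sketch of that step (illustrative, not the paper's quantizer):

```python
# Uniform scalar quantization sketch: round-to-nearest with step q
# bounds the per-sample reconstruction error by q/2, which is the
# property a near-lossless step-size selection relies on.

def quantize(x: float, q: float) -> int:
    return round(x / q)

def dequantize(k: int, q: float) -> float:
    return k * q

q = 0.5
samples = [0.1, 1.7, -2.3, 3.14]
recon = [dequantize(quantize(x, q), q) for x in samples]
assert all(abs(a - b) <= q / 2 for a, b in zip(samples, recon))
```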

  11. Randomness Testing of Compressed Data

    Chang, Weiling; Yun, Xiaochun; Wang, Shupeng; Yu, Xiangzhan


    Random number generators play a critical role in a number of important applications. In practice, statistical testing is employed to gather evidence that a generator indeed produces numbers that appear to be random. In this paper, we report on studies that were conducted on compressed data produced by 8 compression algorithms or compressors. The test results suggest that the output of these compressors has poor randomness, so compression algorithms are not suitable as random number generators. We also found that, for the same compression algorithm, there is a positive correlation between compression ratio and randomness: increasing the compression ratio increases the randomness of the compressed data. As time permits, additional randomness testing efforts will be conducted.
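The kind of test battery mentioned above can be illustrated with the simplest NIST-style check, the monobit (frequency) test: a truly random bitstream should contain roughly equal numbers of zeros and ones. A hedged sketch applying it to `zlib` output (the paper's 8 compressors and test suite are not specified here):

```python
# Sketch: compress data, then apply a monobit frequency test to the
# compressed bytes. Compressor output retains structure (headers,
# imperfect entropy coding) that such tests can detect.
import math
import zlib

def monobit_pvalue(data: bytes) -> float:
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    n = len(bits)
    s = abs(sum(2 * b - 1 for b in bits))        # |#ones - #zeros|
    return math.erfc(s / math.sqrt(2 * n))

compressed = zlib.compress(b"abab" * 2000, level=9)
p = monobit_pvalue(compressed)
# A p-value below 0.01 would reject the "random" hypothesis for this stream.
assert 0.0 <= p <= 1.0
```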

  12. TEM Video Compressive Sensing

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.


    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental

  13. Compressive Sequential Learning for Action Similarity Labeling.

    Qin, Jie; Liu, Li; Zhang, Zhaoxiang; Wang, Yunhong; Shao, Ling


    Human action recognition in videos has been extensively studied in recent years due to its wide range of applications. Instead of classifying video sequences into a number of action categories, in this paper, we focus on a particular problem of action similarity labeling (ASLAN), which aims at verifying whether a pair of videos contain the same type of action or not. To address this challenge, a novel approach called compressive sequential learning (CSL) is proposed by leveraging the compressive sensing theory and sequential learning. We first project data points to a low-dimensional space by effectively exploring an important property in compressive sensing: the restricted isometry property. In particular, a very sparse measurement matrix is adopted to reduce the dimensionality efficiently. We then learn an ensemble classifier for measuring similarities between pairwise videos by iteratively minimizing its empirical risk with the AdaBoost strategy on the training set. Unlike conventional AdaBoost, the weak learner for each iteration is not explicitly defined and its parameters are learned through greedy optimization. Furthermore, an alternative of CSL named compressive sequential encoding is developed as an encoding technique and followed by a linear classifier to address the similarity-labeling problem. Our method has been systematically evaluated on four action data sets: ASLAN, KTH, HMDB51, and Hollywood2, and the results show the effectiveness and superiority of our method for ASLAN.
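The dimensionality-reduction step described above relies on a very sparse random measurement matrix whose entries are mostly zero, which keeps projection cheap while approximately preserving distances. A minimal sketch in the style of sparse random projections (the parameters and construction are illustrative, not those of the paper):

```python
# Very sparse random projection sketch: entries are +sqrt(s) with
# probability 1/(2s), -sqrt(s) with probability 1/(2s), and 0 otherwise,
# so roughly (1 - 1/s) of the matrix is zero.
import random

def sparse_projection_matrix(rows, cols, s=3, seed=0):
    rng = random.Random(seed)
    mat = []
    for _ in range(rows):
        row = []
        for _ in range(cols):
            r = rng.random()
            if r < 1 / (2 * s):
                row.append(s ** 0.5)
            elif r < 1 / s:
                row.append(-(s ** 0.5))
            else:
                row.append(0.0)
        mat.append(row)
    return mat

def project(mat, vec):
    scale = 1 / len(mat) ** 0.5   # keep expected norms comparable
    return [scale * sum(a * b for a, b in zip(row, vec)) for row in mat]

m = sparse_projection_matrix(5, 100, s=3, seed=0)
low_dim = project(m, [1.0] * 100)   # 100-d point -> 5-d point
assert len(low_dim) == 5
```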

  14. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;


    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  15. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.


    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  16. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.


    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  17. Reinterpreting Compression in Infinitary Rewriting

    Ketema, J.; Tiwari, Ashish


    Departing from a computational interpretation of compression in infinitary rewriting, we view compression as a degenerate case of standardisation. The change in perspective comes about via two observations: (a) no compression property can be recovered for non-left-linear systems and (b) some standar

  18. Lossless Compression of Broadcast Video

    Martins, Bo; Eriksen, N.; Faber, E.


    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...

  19. On Computing Compression Trees for Data Collection in Sensor Networks

    Li, Jian; Khuller, Samir


    We address the problem of efficiently gathering correlated data from a wired or a wireless sensor network, with the aim of designing algorithms with provable optimality guarantees, and understanding how close we can get to the known theoretical lower bounds. Our proposed approach is based on finding an optimal or a near-optimal compression tree for a given sensor network: a compression tree is a directed tree over the sensor network nodes such that the value of a node is compressed using the value of its parent. We consider this problem under different communication models, including the broadcast communication model that enables many new opportunities for energy-efficient data collection. We draw connections between the data collection problem and a previously studied graph concept, called weakly connected dominating sets, and we use this to develop novel approximation algorithms for the problem. We present comparative results on several synthetic and real-world datasets showing that our al...

  20. Blind compressive sensing dynamic MRI.

    Lingala, Sajan Goud; Jacob, Mathews


    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the nonorthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting l1 prior on the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the l1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the l0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our

  1. Building indifferentiable compression functions from the PGV compression functions

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde


    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black...... cipher is ideal. We address the problem of building indifferentiable compression functions from the PGV compression functions. We consider a general form of 64 PGV compression functions and replace the linear feed-forward operation in this generic PGV compression function with an ideal block cipher...... independent of the one used in the generic PGV construction. This modified construction is called a generic modified PGV (MPGV). We analyse indifferentiability of the generic MPGV construction in the ideal cipher model and show that 12 out of 64 MPGV compression functions in this framework...

  2. Compressive Principal Component Pursuit

    Wright, John; Min, Kerui; Ma, Yi


    We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation invariant low-rank recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.

  3. Hamming Compressed Sensing

    Zhou, Tianyi


    Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce \textit{Hamming compressed sensing} (HCS) that directly recovers a k-bit quantized signal of dimension $n$ from its 1-bit measurements via invoking $n$ times of Kullback-Leibler divergence based nearest neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time and requires substantially fewer measurements ($\mathcal O(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and a "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.

  4. Compressive Spectral Renormalization Method

    Bayindir, Cihan


    In this paper a novel numerical scheme for finding the sparse self-localized states of a nonlinear system of equations with missing spectral data is introduced. As in Petviashvili's method and the spectral renormalization method, the governing equation is transformed into the Fourier domain, but the iterations are performed for a far smaller number of spectral components (M) than the classical versions of these methods, which use a higher number of spectral components (N). After the convergence criterion is achieved for the M components, the N-component signal is reconstructed from the M components by using the l1 minimization technique of compressive sampling. This method can be named the compressive spectral renormalization (CSRM) method. The main advantage of the CSRM is that it is capable of finding the sparse self-localized states of the evolution equation(s) with many spectral data missing.

  5. Speech Compression and Synthesis


    phonological rules combined with diphone improved the algorithms used by the phonetic synthesis program for gain normalization and time... phonetic vocoder, spectral template. This report describes our work for the past two years on speech compression and synthesis. Since there was an...from Block 19: speech recognition, phoneme recognition. initial design for a phonetic recognition program. We also recorded and partially labeled a

  6. Time-Space Topology Optimization

    Jensen, Jakob Søndergaard


    A method for space-time topology optimization is outlined. The space-time optimization strategy produces structures with optimized material distributions that vary in space and in time. The method is demonstrated for one-dimensional wave propagation in an elastic bar that has a time-dependent Young's modulus and is subjected to a transient load. In the example an optimized dynamic structure is demonstrated that compresses a propagating Gauss pulse....

  7. Shock compression of nitrobenzene

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi


    The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode in spite of the fact that its calculated heat of detonation, about 1 kcal/g, is similar to that of TNT. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments, with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so it is expected that nitrobenzene detonates in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparing the Hugoniot and the isotherm, nitrobenzene is in the liquid phase under the experimental shock conditions. From the expected phase diagram, shocked nitrobenzene seems to remain a metastable liquid in the solid-phase region of that diagram.

  8. Compressed sensing electron tomography

    Leary, Rowan, E-mail: [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ (United Kingdom); Saghi, Zineb; Midgley, Paul A. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ (United Kingdom); Holland, Daniel J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)


    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform.

  9. Ultraspectral sounder data compression review

    Bormin HUANG; Hunglung HUANG


    Ultraspectral sounders provide an enormous amount of measurements to advance our knowledge of weather and climate applications. The use of robust data compression techniques will be beneficial for ultraspectral data transfer and archiving. This paper reviews the progress in lossless compression of ultraspectral sounder data. Various transform-based, prediction-based, and clustering-based compression methods are covered. Also studied is a preprocessing scheme for data reordering to improve compression gains. All the coding experiments are performed on the ultraspectral compression benchmark dataset collected from the NASA Atmospheric Infrared Sounder (AIRS) observations.

  10. Engineering Relative Compression of Genomes

    Grabowski, Szymon


    Progress in DNA sequencing technology boosts genomic database growth at a faster and faster rate. Compression, accompanied by random access capabilities, is the key to maintaining those huge amounts of data. In this paper we present an LZ77-style compression scheme for relative compression of multiple genomes of the same species. While the solution bears similarity to known algorithms, it offers significantly higher compression ratios at a compression speed over an order of magnitude greater. One of the new successful ideas is augmenting the reference sequence with phrases from the other sequences, making more LZ-matches available.
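The core idea of relative (reference-based) LZ compression can be sketched in a few lines: encode the target genome as (position, length) matches against the reference, plus literals where no match exists. This greedy longest-match toy is far from the engineered scheme in the paper, but it shows why near-identical genomes compress so well against each other:

```python
# Toy relative LZ (RLZ) codec: tokens are either (pos, length) matches
# into the reference or single literal characters. Greedy and O(n*m),
# purely for illustration; real RLZ uses an indexed reference.

def rlz_encode(reference: str, target: str):
    out, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):
            l = 0
            while (j + l < len(reference) and i + l < len(target)
                   and reference[j + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len >= 2:
            out.append((best_pos, best_len))
            i += best_len
        else:
            out.append(target[i])
            i += 1
    return out

def rlz_decode(reference: str, tokens) -> str:
    parts = []
    for t in tokens:
        if isinstance(t, tuple):
            pos, length = t
            parts.append(reference[pos:pos + length])
        else:
            parts.append(t)
    return "".join(parts)

ref = "ACGTACGTGGTACC"
tgt = "ACGTGGTACCTACGT"
tokens = rlz_encode(ref, tgt)
assert rlz_decode(ref, tokens) == tgt          # lossless round trip
assert len(tokens) < len(tgt)                  # fewer tokens than characters
```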

  11. Chronic edema of the lower extremities: international consensus recommendations for compression therapy clinical research trials.

    Stout, N; Partsch, H; Szolnoky, G; Forner-Cordero, I; Mosti, G; Mortimer, P; Flour, M; Damstra, R; Piller, N; Geyer, M J; Benigni, J-P; Moffat, C; Cornu-Thenard, A; Schingale, F; Clark, M; Chauveau, M


    Chronic edema is a multifactorial condition affecting patients with various diseases. Although the pathophysiology of edema varies, compression therapy is a basic tenet of treatment, vital to reducing swelling. Clinical trials are disparate or lacking regarding specific protocols and application recommendations for compression materials and methodology to enable optimal efficacy. Compression therapy is a basic treatment modality for chronic leg edema; however, the evidence base for the optimal application, duration and intensity of compression therapy is lacking. The aim of this document was to present the proceedings of a day-long international expert consensus group meeting that examined the current state of the science for the use of compression therapy in chronic edema. An expert consensus group met in Brighton, UK, in March 2010 to examine the current state of the science for compression therapy in chronic edema of the lower extremities. Panel discussions and open space discussions examined the current literature, clinical practice patterns, common materials and emerging technologies for the management of chronic edema. This document outlines a proposed clinical research agenda focusing on compression therapy in chronic edema. Future trials comparing different compression devices, materials, pressures and parameters for application are needed to enhance the evidence base for optimal chronic oedema management. Important outcome measures and methods of pressure and oedema quantification are outlined. Future trials are encouraged to optimize compression therapy in chronic edema of the lower extremities.

  12. Enhanced pulse compression induced by the interaction between the third-order dispersion and the cross-phase modulation in birefringent fibres

    徐文成; 陈伟成; 张书敏; 罗爱平; 刘颂豪


    In this paper, we report on the enhanced pulse compression due to the interaction between the positive third-order dispersion (TOD) and the nonlinear effect (cross-phase modulation effect) in birefringent fibres. Polarization soliton compression along the slow axis can be enhanced in a birefringent fibre with positive third-order dispersion, while the polarization soliton compression along the fast axis can be enhanced in a fibre with negative third-order dispersion. Moreover, there is an optimal third-order dispersion parameter for obtaining the optimal pulse compression. Redshifted initial chirp is helpful to the pulse compression, while blueshifted chirp is detrimental to the pulse compression. There is also an optimal chirp parameter to reach maximum pulse compression. The optimal pulse compression for TOD parameters under different N-order solitons is also found.

  13. Velocity and Magnetic Compressions in FEL Drivers

    Serafini, L


    We will compare merits and issues of these two techniques suitable for increasing the peak current of high brightness electron beams. The typical range of applicability is low energy for velocity bunching and middle to high energy for magnetic compression. Velocity bunching is free from CSR effects but requires very high RF stability (timing jitter), as well as dedicated additional focusing and great care in the beam transport: it is very well understood theoretically and numerical simulations are pretty straightforward. Several experiments of velocity bunching have been performed in the past few years: none of them, nevertheless, used a photoinjector designed and optimized for that purpose. Magnetic compression is a much more consolidated technique: CSR effects and micro-bunch instabilities are its main drawbacks. There is a large operational experience with chicanes used as magnetic compressors and their theoretical understanding is quite deep, though numerical simulations of real devices are still cha...

  14. Statistical Compressive Sensing of Gaussian Mixture Models

    Yu, Guoshen


    A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the k-best term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the k-best term approximation with probability one, and the ...

  15. Statistical Compressed Sensing of Gaussian Mixture Models

    Yu, Guoshen


    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution, and achieving accurate reconstruction on average, is introduced. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...

  16. Compressed sensing imaging techniques for radio interferometry

    Wiaux, Y; Puy, G; Scaife, A M M; Vandergheynst, P


    Radio interferometry probes astrophysical signals through incomplete and noisy Fourier measurements. The theory of compressed sensing demonstrates that such measurements may actually suffice for accurate reconstruction of sparse or compressible signals. We propose new generic imaging techniques based on convex optimization for global minimization problems defined in this context. The versatility of the framework notably allows introduction of specific prior information on the signals, which offers the possibility of significant improvements of reconstruction relative to the standard local matching pursuit algorithm CLEAN used in radio astronomy. We illustrate the potential of the approach by studying reconstruction performances on simulations of two different kinds of signals observed with very generic interferometric configurations. The first kind is an intensity field of compact astrophysical objects. The second kind is the imprint of cosmic strings in the temperature field of the cosmic microwave backgroun...

  17. Data compression of scanned halftone images

    Forchhammer, Søren; Jensen, Kim S.


    A new method for coding scanned halftone images is proposed. It is information-lossy but preserves image quality; compression rates of 16-35 have been achieved for a typical test image scanned on a high-resolution scanner. The bi-level halftone images are filtered, in phase...... with the halftone grid, and converted to a gray-level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray-level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high-resolution...... scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods, giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding, was modified and optimized...
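The halftone-to-gray conversion described above (filtering in phase with the halftone grid) can be sketched by averaging each grid cell of the bi-level image into one gray value. The 4x4 cell size is an illustrative assumption, not taken from the paper:

```python
# Sketch: convert a bi-level halftone image to gray by averaging each
# grid-aligned cell into one 0-255 value. Assumes the image dimensions
# are multiples of the (hypothetical) cell size.

def halftone_to_gray(bits, cell=4):
    h, w = len(bits), len(bits[0])
    gray = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            s = sum(bits[y + dy][x + dx]
                    for dy in range(cell) for dx in range(cell))
            row.append(round(255 * s / (cell * cell)))
        gray.append(row)
    return gray

# An all-black cell maps to 0, an all-white cell to 255.
assert halftone_to_gray([[1] * 4] * 4) == [[255]]
assert halftone_to_gray([[0] * 4] * 4) == [[0]]
```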

  18. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    Pawar, Kamlesh; Zhang, Jingxin


    The incoherence between the measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI, and presents a method to design the pulse sequence for noiselet encoding. This novel encoding scheme is combined with the multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. An empirical RIP a...

  19. High reflection mirrors for pulse compression gratings.

    Palmier, S; Neauport, J; Baclet, N; Lavastre, E; Dupuy, G


    We report an experimental investigation of high-reflection mirrors used to fabricate gratings for pulse compression at a wavelength of 1.053 μm. Two kinds of mirrors are studied: mixed Metal MultiLayer Dielectric (MMLD) mirrors, which combine a gold metal layer with e-beam-evaporated dielectric bilayers on top, and standard e-beam-evaporated MultiLayer Dielectric (MLD) mirrors. Various samples were manufactured and damage-tested at a pulse duration of 500 fs. Damage sites were subsequently observed by means of Nomarski microscopy and white-light interference microscopy. Comparison of the results shows that while the MMLD design offers damage performance similar to the MLD design, it also exhibits lower stresses, making it an optimal mirror substrate for a pulse compression grating operating under vacuum.

  20. Compressive Background Modeling for Foreground Extraction

    Yong Wang


    Robust and efficient foreground extraction is a crucial topic in many computer vision applications. In this paper, we propose an accurate and computationally efficient background subtraction method. The key idea is to reduce the data dimensionality of the image frame via compressive sensing and, at the same time, apply sparse representation to build the current background from a set of preceding background images. Using greedy iterative optimization, the background image and the background-subtracted image can be recovered from a few compressive measurements. The proposed method is validated on multiple challenging video sequences. Experimental results demonstrate that the performance of our approach is comparable to that of existing classical background subtraction techniques.
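    The compressed-domain idea above can be sketched as follows: random projections approximately preserve the energy of the sparse frame-background difference, so foreground changes remain detectable from far fewer measurements than pixels. All sizes, the Gaussian measurement matrix and the synthetic frames are illustrative assumptions; the paper's greedy sparse-reconstruction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 2500, 250                    # pixels per frame, compressive measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

background = rng.uniform(0, 255, n)
frame = background.copy()
frame[:50] += 120.0                 # a small bright "foreground object"

# Work entirely in the compressed domain: m measurements instead of n pixels.
y_bg = Phi @ background
y_fr = Phi @ frame
y_diff = y_fr - y_bg                # = Phi @ (frame - background)

# The energy of the (sparse) foreground difference is approximately preserved,
# so a simple threshold on ||y_diff|| flags frames containing motion.
fg_energy = np.linalg.norm(y_diff)
true_energy = np.linalg.norm(frame - background)
print(f"compressed-domain estimate {fg_energy:.1f} vs true {true_energy:.1f}")
```

The 10x reduction in data handled per frame is the source of the computational saving; full foreground maps would then be recovered only when this cheap test fires.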

  1. The compression of liquids

    Whalley, E.

    The compression of liquids can be measured either directly, by applying a pressure and noting the volume change, or indirectly, by measuring the magnitude of the fluctuations of the local volume. The methods used in Ottawa for the direct measurement of the compression are reviewed. The mean-square deviation of the volume from the mean at constant temperature can be measured by X-ray and neutron scattering at low angles, and the mean-square deviation at constant entropy can be measured by measuring the speed of sound. The speed of sound can be measured either acoustically, using an acoustic transducer, or by Brillouin spectroscopy. Brillouin spectroscopy can also be used to study the shear waves in liquids if the shear relaxation time is > ∼ 10 ps. The relaxation time of water is too short for the shear waves to be studied in this way, but they do occur in the low-frequency Raman and infrared spectra. The response of the structure of liquids to pressure can be studied by neutron scattering, and recently experiments have been done at Atomic Energy of Canada Ltd, Chalk River, on liquid D2O up to 15.6 kbar. They show that the near-neighbor intermolecular O-D and D-D distances are less spread out and at shorter distances at high pressure. Raman spectroscopy can also provide information on the structural response. It seems that the O-O distance in water decreases much less with pressure than it does in ice. Presumably, the bending of O-O-O angles tends to increase the O-O distance, thus largely compensating the compression due to the direct effect of pressure.
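    The link between the speed of sound and the constant-entropy compressibility mentioned above is the standard relation κ_S = 1/(ρc²). A minimal numerical sketch, using illustrative round-number values for water near 20 °C rather than any data from the paper:

```python
# Adiabatic compressibility from the measured speed of sound:
# kappa_S = 1 / (rho * c^2). Illustrative values for water at ~20 degC.
rho = 998.0    # density, kg/m^3
c = 1482.0     # speed of sound, m/s

kappa_S = 1.0 / (rho * c * c)   # Pa^-1

print(f"kappa_S = {kappa_S:.3e} Pa^-1")   # on the order of 4.6e-10 Pa^-1
```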

  2. Compressive Transient Imaging

    Sun, Qilin


    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing that achieves high-resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we can assemble a complete signal. By inserting a DMD device into the system, we modulate all the frames of data with binary random patterns so that a super-resolution transient/3D image can be reconstructed later. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluate the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  3. Statistical Mechanical Analysis of Compressed Sensing Utilizing Correlated Compression Matrix

    Takeda, Koujin


    We investigate a reconstruction limit of compressed sensing for a reconstruction scheme based on the L1-norm minimization utilizing a correlated compression matrix with a statistical mechanics method. We focus on the compression matrix modeled as the Kronecker-type random matrix studied in research on multi-input multi-output wireless communication systems. We found that strong one-dimensional correlations between expansion bases of original information slightly degrade reconstruction performance.
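    The L1-norm minimization reconstruction analyzed above can be sketched with a plain iterative soft-thresholding (ISTA) solver. Everything here — the problem sizes, the i.i.d. Gaussian compression matrix, the regularization weight — is an illustrative assumption, not the correlated Kronecker-type setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 100, 50, 5            # signal length, measurements, sparsity (toy sizes)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. Gaussian compression matrix
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = 1.0 + rng.random(k)               # k nonzero entries in [1, 2]
y = A @ x_true                                   # noiseless compressed measurements

# ISTA: iterative soft-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the quadratic gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

With m = 50 measurements of a 100-sample, 5-sparse signal the recovery is essentially exact up to the small bias introduced by the L1 penalty.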

  4. Compressive full waveform lidar

    Yang, Weiyi; Ke, Jun


    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full-waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full-waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed for the measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under low-illumination conditions.

  5. Beamforming using compressive sensing.

    Edelmann, Geoffrey F; Gaumond, Charles F


    Compressive sensing (CS) is compared with conventional beamforming using horizontal beamforming of at-sea, towed-array data. They are compared qualitatively using bearing time records and quantitatively using signal-to-interference ratio. Qualitatively, CS exhibits lower levels of background interference than conventional beamforming. Furthermore, bearing time records show increasing, but tolerable, levels of background interference when the number of elements is decreased. For the full array, CS achieves a signal-to-interference ratio of 12 dB, whereas conventional beamforming achieves only 8 dB. The superiority of CS over conventional beamforming is much more pronounced with undersampling.

  6. The Diagonal Compression Field Method using Circular Fans

    Hansen, Thomas


    In a concrete beam with transverse stirrups the shear forces are carried by inclined compression in the concrete. Along the tensile zone and the compression zone of the beam the transverse components of the inclined compressions are transferred to the stirrups, which are thus subjected to tension. Since the eighties the diagonal compression field method has been used to design transverse shear reinforcement in concrete beams. The method is based on the lower-bound theorem of the theory of plasticity, and it has been adopted in Eurocode 2. The paper presents a new design method, which … with low shear stresses. The larger the inclination (the smaller the -value) of the uniaxial concrete stress, the more transverse shear reinforcement is needed; hence it would be optimal if the -value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased …

  7. Adaptive Super-Spatial Prediction Approach For Lossless Image Compression

    Arpita C. Raut,


    Existing prediction-based lossless image compression schemes predict image data from a spatial neighborhood, a technique that cannot predict high-frequency structure components such as edges, patterns, and textures very well, which limits compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed, which adaptively compresses the high-frequency structure components of grayscale images. The motivation behind the proposed prediction approach is taken from motion prediction in video coding: it attempts to find an optimal prediction of structure components within previously encoded image regions. This prediction approach is efficient for image regions with significant structure components, in terms of compression ratio and bit rate, compared to CALIC (context-based adaptive lossless image coding).

  8. All-optical three-dimensional electron pulse compression

    Wong, Liang Jie; Rohwer, Timm; Gedik, Nuh; Johnson, Steven G


    We propose an all-optical, three-dimensional electron pulse compression scheme in which Hermite-Gaussian optical modes are used to fashion a three-dimensional optical trap in the electron pulse's rest frame. We show that the correct choices of optical incidence angles are necessary for optimal compression. We obtain analytical expressions for the net impulse imparted by Hermite-Gaussian free-space modes of arbitrary order. Although we focus on electrons, our theory applies to any charged particle and any particle with non-zero polarizability in the Rayleigh regime. We verify our theory numerically using exact solutions to Maxwell's equations for first-order Hermite-Gaussian beams, demonstrating single-electron pulse compression factors of $>10^{2}$ in both longitudinal and transverse dimensions with experimentally realizable optical pulses. The proposed scheme is useful in ultrafast electron imaging for both single- and multi-electron pulse compression, and as a means of circumventing temporal distortions in ...

  9. Performance Improvement Of Bengali Text Compression Using Transliteration And Huffman Principle

    Md. Mamun Hossain


    In this paper, we propose a new compression technique based on transliteration of Bengali text into English. Compared to Bengali, English is a less symbolic language, so transliterating Bengali text into English reduces the number of characters to be coded. Huffman coding is well known for producing optimal compression. When the Huffman principle is applied to the transliterated text, significant performance improvement is achieved in terms of decoding speed and space requirements compared to Unicode compression.
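    The Huffman stage described above can be sketched in a few lines; the transliteration step is omitted and the sample string is a hypothetical stand-in for transliterated text:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table (symbol -> bit string) for `text`."""
    freq = Counter(text)
    # Heap items: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "this is an example of huffman coding"
codes = huffman_codes(text)
bits = sum(len(codes[ch]) for ch in text)
print(f"{bits} bits vs {8 * len(text)} bits uncompressed")
```

Frequent characters receive short codes, so the fewer distinct, more skewed symbols produced by transliteration translate directly into a shorter bit stream.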

  10. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    Musatenko, Yurij S.; Kurashov, Vitalij N.


    The paper presents an improved version of our method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for this purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and this contribution decreases most quickly among all possible bases. We therefore lossily compress every KL basis function with Embedded Zerotree Wavelet (EZW) coding, with essentially different loss depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm for KL basis construction for compression of correlated image ensembles, which enables our OICKL system to run on common hardware. We also present a procedure for determining the optimal losses of the KL basis functions caused by compression. It uses a modified EZW coder that produces the whole PSNR (bitrate) curve in a single compression pass.
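    At its core, KL basis construction for an image ensemble is an eigen-decomposition of the ensemble covariance, with compression achieved by keeping only the leading components. A minimal sketch on synthetic correlated "images" (all sizes and data are illustrative; the EZW coding stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: 20 "images" (flattened to 64 pixels) that are noisy
# mixtures of 3 underlying patterns, hence strongly correlated.
patterns = rng.standard_normal((3, 64))
weights = rng.standard_normal((20, 3))
images = weights @ patterns + 0.01 * rng.standard_normal((20, 64))

# Karhunen-Loeve basis = eigenvectors of the ensemble covariance.
mean = images.mean(axis=0)
X = images - mean
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
basis = eigvecs[:, ::-1]                    # reorder: descending variance

# Keep only the first r KL components -> lossy ensemble compression.
r = 3
coeffs = X @ basis[:, :r]
recon = coeffs @ basis[:, :r].T + mean
err = np.linalg.norm(recon - images) / np.linalg.norm(images)
print(f"relative error with {r} of 64 components: {err:.4f}")
```

Because the ensemble is driven by 3 patterns, 3 of 64 components already reconstruct it almost perfectly — the correlation is exactly what the KL transform exploits.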

  11. Compressive sensing in medical imaging.

    Graff, Christian G; Sidky, Emil Y


    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  12. Speech Compression Using Multecirculerletet Transform

    Sulaiman Murtadha


    Compressing speech reduces data storage requirements and the time to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation methods introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performance is assessed in terms of compression ratio (CR), mean square error (MSE) and peak signal-to-noise ratio (PSNR). Computer simulation results indicate that the two-dimensional MCT offers a better compression ratio, MSE and PSNR than the DWT.
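    How the CR/MSE/PSNR figures of merit above are obtained can be illustrated with a single-level orthonormal Haar DWT plus coefficient thresholding. The toy signal, the threshold, and the Haar wavelet itself are illustrative assumptions standing in for the GHM/MCT bases:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1024)
speech = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)   # toy "speech" signal

# Forward single-level orthonormal Haar DWT.
approx = (speech[0::2] + speech[1::2]) / np.sqrt(2)
detail = (speech[0::2] - speech[1::2]) / np.sqrt(2)

# Lossy step: zero out small detail coefficients.
detail_c = np.where(np.abs(detail) > 0.05, detail, 0.0)

# Inverse transform from the thresholded coefficients.
recon = np.empty_like(speech)
recon[0::2] = (approx + detail_c) / np.sqrt(2)
recon[1::2] = (approx - detail_c) / np.sqrt(2)

kept = np.count_nonzero(detail_c) + approx.size        # coefficients stored
cr = speech.size / kept
mse = np.mean((recon - speech) ** 2)
psnr = 10 * np.log10(np.max(np.abs(speech)) ** 2 / mse)
print(f"CR = {cr:.2f}, MSE = {mse:.2e}, PSNR = {psnr:.1f} dB")
```

The same three metrics computed here (CR, MSE, PSNR) are the ones used in the paper to compare the DWT against the one- and two-dimensional MCT.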

  13. libpolycomp: Compression/decompression library

    Tomasi, Maurizio


    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
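    The idea behind polynomial compression can be sketched as follows: split a smooth timeline into chunks, fit each chunk with a low-degree polynomial, and store only the coefficients. The chunk size, degree and test signal below are illustrative assumptions, not libpolycomp's actual parameters, and the Fourier-filtering refinement is omitted:

```python
import numpy as np

# Smooth, noise-free timeline (e.g. a telescope pointing angle): the ideal
# input for polynomial compression.
t = np.linspace(0.0, 1.0, 1000)
signal = 0.3 + 1.2 * t - 0.8 * t**2 + 0.1 * np.sin(2 * np.pi * t)

# Store each chunk of 100 samples as 8 polynomial coefficients.
chunk, deg = 100, 7
starts = range(0, t.size, chunk)
polys = [np.polynomial.Polynomial.fit(t[i:i + chunk], signal[i:i + chunk], deg)
         for i in starts]
recon = np.concatenate([p(t[i:i + chunk]) for i, p in zip(starts, polys)])

ratio = signal.size / (len(polys) * (deg + 1))
max_err = np.max(np.abs(recon - signal))
print(f"compression ratio {ratio:.1f}x, max abs error {max_err:.2e}")
```

On smooth, noise-free data the residual is negligible at a 12.5x ratio; on noisy data the polynomial model breaks down, which is why libpolycomp reserves this algorithm for smooth timelines and falls back to RLE, quantization or deflate otherwise.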

  14. Image Compression using GSOM Algorithm



    Conventional techniques such as Huffman coding, the Shannon-Fano method, LZ methods including LZ-77, and run-length encoding are established methods for data compression. A traditional approach to reducing the large amount of data would be to discard some data redundancy and introduce some noise after reconstruction. We present a neural-network-based growing self-organizing map (GSOM) technique that may be a reliable and efficient way to achieve vector quantization; a typical application of such an algorithm is image compression. Moreover, Kohonen networks realize a mapping between an input and an output space that preserves topology. This feature can be used to build new compression schemes that obtain a better compression rate than classical methods such as JPEG without reducing image quality. The experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG and TIFF files.

  15. Data compression on the sphere

    McEwen, J D; Eyers, D M; doi:10.1051/0004-6361/201015728


    Large datasets defined on the sphere arise in many fields. In particular, recent and forthcoming observations of the anisotropies of the cosmic microwave background (CMB) made on the celestial sphere contain approximately three and fifty megapixels, respectively. The compression of such data is therefore becoming increasingly important. We develop algorithms to compress data defined on the sphere. A Haar wavelet transform on the sphere is used as an energy compression stage to reduce the entropy of the data, followed by Huffman and run-length encoding stages. Lossless and lossy compression algorithms are developed. We evaluate compression performance on simulated CMB data, Earth topography data and environmental illumination maps used in computer graphics. The CMB data can be compressed to approximately 40% of its original size for essentially no loss to the cosmological information content of the data, and to approximately 20% if a small cosmological information loss is tolerated. For the topographic and il...
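    The run-length encoding stage mentioned above is simple to sketch. The coefficient sequence below is a made-up example of the mostly-zero data that a wavelet transform followed by quantization typically produces:

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

# Quantized wavelet coefficients are mostly zero, so RLE works well:
coeffs = [0] * 12 + [3] + [0] * 7 + [-1, -1] + [0] * 10
runs = rle_encode(coeffs)
assert rle_decode(runs) == coeffs
print(runs)   # [(0, 12), (3, 1), (0, 7), (-1, 2), (0, 10)]
```

This is why the Haar stage matters: it is what concentrates the energy and produces the long zero runs that RLE (and the subsequent Huffman stage) can exploit.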

  16. Energy transfer in compressible turbulence

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre


    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of a weak compressible turbulence based on Eddy-Damped-Quasi-Normal-Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both inertial and energy containing ranges.

  17. Compressive Sensing DNA Microarrays

    Richard G. Baraniuk


    Full Text Available Compressive sensing microarrays (CSMs are DNA-based sensors that operate using group testing and compressive sensing (CS principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed.

  18. Compressive light field sensing.

    Babacan, S Derin; Ansorge, Reto; Luessi, Martin; Matarán, Pablo Ruiz; Molina, Rafael; Katsaggelos, Aggelos K


    We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise-ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.

  19. Splines in Compressed Sensing

    S. Abhishek


    It is well understood that in any data acquisition system, reducing the amount of data reduces time and energy, but the major trade-off is the quality of the outcome: normally, the less data sensed, the lower the quality. Compressed Sensing (CS) allows a solution for sampling below the Nyquist rate. The challenging problem of increasing the reconstruction quality with fewer samples from an unprocessed data set is addressed here by using representative coordinates selected from splines of different orders. We have made a detailed comparison with 10 orthogonal and 6 biorthogonal wavelets on two sets of data from the MIT Arrhythmia database, and our results show that the spline coordinates work better than the wavelets. The generation of two new types of splines, exponential and double exponential, is also described. We believe this is one of the first attempts at Compressed Sensing-based ECG reconstruction using raw data.

  20. Imaging industry expectations for compressed sensing in MRI

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob


    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 

  1. Computer Modeling of a CI Engine for Optimization of Operating Parameters Such as Compression Ratio, Injection Timing and Injection Pressure for Better Performance and Emission Using Diesel and Diesel-Biodiesel Blends

    M. Venkatraman


    Problem statement: The present work describes a theoretical investigation of the performance of a four-stroke compression ignition engine powered by alternative fuels in the form of diesel and diesel-biodiesel blends. Approach: The developed simulation model was used to estimate the cylinder pressure, heat release rate, brake thermal efficiency, brake specific fuel consumption and engine-out emissions. The simulation model includes Hohenberg's heat transfer correlation and a zero-dimensional combustion model for the prediction of combustion parameters. Results: Experiments were performed on a single-cylinder DI diesel engine fuelled with blends of pungam methyl ester in proportions of PME10, PME20 and PME30 by volume with diesel fuel, to validate the simulated results. Conclusion/Recommendations: Good agreement was observed between simulated and experimental results, which indicates that the developed simulation model predicts the performance and emission characteristics for any biodiesel and diesel fuel and engine specification given as input.

  2. q-ary compressive sensing

    Mroueh, Youssef; Rosasco, Lorenzo


    We introduce q-ary compressive sensing, an extension of 1-bit compressive sensing. We propose a novel sensing mechanism and a corresponding recovery procedure. The recovery properties of the proposed approach are analyzed both theoretically and empirically. Results in 1-bit compressive sensing are recovered as a special case. Our theoretical results suggest a tradeoff between the quantization parameter q, and the number of measurements m in the control of the error of the resulting recovery a...

  3. Introduction to compressible fluid flow

    Oosthuizen, Patrick H


    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices.

  4. Prediction of Concrete Compressive Strength by Evolutionary Artificial Neural Networks

    Mehdi Nikoo


    The compressive strength of concrete has been predicted using evolutionary artificial neural networks (EANNs), a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GA). To construct the models, samples of cylindrical concrete parts with different characteristics were used, comprising 173 experimental data patterns. Water-cement ratio, maximum sand size, amount of gravel, cement, 3/4 sand, 3/8 sand, and the coefficient of soft sand were considered as inputs, and the ANN models calculate the compressive strength of the concrete. Moreover, using GA, the number of layers and nodes and the weights of the ANN models are optimized. To evaluate the accuracy of the model, the optimized ANN model is compared with a multiple linear regression (MLR) model. The simulation results verify that the recommended ANN model offers more flexibility, capability, and accuracy in predicting the compressive strength of concrete.

  5. Fast Adaptive Wavelet for Remote Sensing Image Compression

    Bo Li; Run-Hai Jiao; Yuan-Cheng Li


    Remote sensing images are hard to compress at high ratios because of their rich texture. By analyzing the influence of wavelet properties on image compression, this paper proposes wavelet construction rules and builds a new parameterized biorthogonal wavelet construction model. The model parameters are optimized using a genetic algorithm with energy compaction as the objective function. In addition, to resolve the computational complexity of online construction, wavelets are constructed for different classes of images according to the image classification rule proposed in this paper, implementing the fast adaptive wavelet selection algorithm (FAWS). Experimental results show that the wavelet bases of FAWS achieve better compression performance than Daubechies 9/7.

  6. Compressive sensing of sparse tensors.

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan


    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.
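    The computational advantage of mode-wise sensing over vectorizing rests on the identity (A ⊗ B) vec(X) = vec(A X Bᵀ) for row-major vectorization: measuring each tensor mode with a small matrix is equivalent to one huge Kronecker measurement of the flattened signal, without ever forming the Kronecker matrix. A small numerical check (all sizes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)

# 2-D "tensor" (e.g. an image patch) X, measured mode-by-mode with small
# matrices A and B, versus the equivalent Kronecker measurement of vec(X).
X = rng.standard_normal((16, 16))
A = rng.standard_normal((6, 16))   # mode-1 sampling matrix
B = rng.standard_normal((6, 16))   # mode-2 sampling matrix

Y_modes = A @ X @ B.T                    # GTCS-style: operate on each mode
y_kron = np.kron(A, B) @ X.ravel()       # KCS-style: one 36 x 256 matrix

print(np.allclose(Y_modes.ravel(), y_kron))
```

The mode-wise form touches only 6x16 matrices, while the vectorized form must build and multiply a 36x256 matrix; for higher-order tensors this gap grows exponentially, which is the memory/compute saving the paper's GTCS framework exploits.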

  7. Uncommon upper extremity compression neuropathies.

    Knutsen, Elisa J; Calfee, Ryan P


    Hand surgeons routinely treat carpal and cubital tunnel syndromes, which are the most common upper extremity nerve compression syndromes. However, more infrequent nerve compression syndromes of the upper extremity may be encountered. Because they are unusual, the diagnosis of these nerve compression syndromes is often missed or delayed. This article reviews the causes, proposed treatments, and surgical outcomes for syndromes involving compression of the posterior interosseous nerve, the superficial branch of the radial nerve, the ulnar nerve at the wrist, and the median nerve proximal to the wrist. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Image Compression Algorithms Using Dct

    Er. Abhishek Kaushik


    Full Text Available Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented in Matlab code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, with varying numbers of coefficients retained, displaying the resulting image and the error image relative to the original. Image compression is studied using the 2-D discrete cosine transform: the original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image, with the inverse DCT performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed image) is displayed, and the error value for each image is calculated over the various numbers of DCT coefficients selected by the user, to assess the accuracy and compression of the resulting image; the resulting performance parameter is reported in terms of MSE, i.e. mean square error.
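    The block-DCT pipeline described above (transform an 8-by-8 block, keep a subset of coefficients, inverse transform, inspect the error) can be sketched in NumPy. The orthonormal DCT-II matrix construction is standard; the toy image and the number of retained coefficients are illustrative choices, not the record's:

```python
import numpy as np

N = 8
n = np.arange(N)
# Orthonormal DCT-II basis matrix: rows are cosine basis vectors.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def compress_block(block, keep):
    # 2-D DCT of an 8x8 block, then zero all but the `keep`
    # largest-magnitude coefficients and inverse transform.
    coef = C @ block @ C.T
    thresh = np.sort(np.abs(coef), axis=None)[-keep]
    coef[np.abs(coef) < thresh] = 0.0
    return C.T @ coef @ C

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=1)  # smooth toy block
rec = compress_block(img, keep=10)
mse = np.mean((img - rec) ** 2)   # error left after keeping 10 of 64 coefficients
```

    With all 64 coefficients kept the reconstruction is exact, since the basis is orthonormal; the MSE grows as fewer coefficients are retained, which is the trade-off the record studies.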

  9. Compressive Deconvolution in Medical Ultrasound Imaging.

    Chen, Zhouye; Basarab, Adrian; Kouamé, Denis


    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Depending on the application setup, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.

  10. An underwater acoustic data compression method based on compressed sensing

    郭晓乐; 杨坤德; 史阳; 段睿


    The use of underwater acoustic data has rapidly expanded with the application of multichannel, large-aperture underwater detection arrays. This study presents an underwater acoustic data compression method based on compressed sensing. Underwater acoustic signals are transformed into the sparse domain for data storage at a receiving terminal, and the improved orthogonal matching pursuit (IOMP) algorithm is used to reconstruct the original underwater acoustic signals at a data processing terminal. Although an increase in sidelobe level occasionally causes a direction-of-arrival estimation error, the proposed compression method achieves compression ratios 10 times higher for narrowband signals and 5 times higher for wideband signals than the orthogonal matching pursuit (OMP) algorithm. The IOMP algorithm also reduces the computing time by about 20% compared with the original OMP algorithm. The simulation and experimental results are discussed.
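    The baseline OMP reconstruction that the record's IOMP improves on can be sketched as follows; this is a textbook implementation with illustrative problem sizes, and the IOMP modifications themselves are not reproduced here:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column of A most
    # correlated with the residual, then re-fit least squares on the support.
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
m, n, k = 64, 128, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
y = A @ x_true                                 # compressed measurements
x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    With these sizes the greedy support selection succeeds and the least-squares refit recovers the sparse signal essentially exactly.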

  11. TPC data compression

    Berger, Jens; Frankenfeld, Ulrich; Lindenstruth, Volker; Plamper, Patrick; Roehrich, Dieter; Schaefer, Erich; W. Schulz, Markus; M. Steinbeck, Timm; Stock, Reinhard; Sulimma, Kolja; Vestboe, Anders; Wiebalck, Arne


    In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several ten thousand charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.

  12. TPC data compression

    Berger, Jens; Lindenstruth, Volker; Plamper, Patrick; Röhrich, Dieter; Schafer, Erich; Schulz, M W; Steinbeck, T M; Stock, Reinhard; Sulimma, Kolja; Vestbo, Anders S; Wiebalck, Arne


    In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several ten thousand charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.

  13. TPC data compression

    Berger, Jens; Frankenfeld, Ulrich; Lindenstruth, Volker; Plamper, Patrick; Röhrich, Dieter; Schäfer, Erich; Schulz, Markus W.; Steinbeck, Timm M.; Stock, Reinhard; Sulimma, Kolja; Vestbø, Anders; Wiebalck, Arne


    In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several ten thousand charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.

  14. Waves and compressible flow

    Ockendon, Hilary


    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  15. Central cooling: compressive chillers

    Christian, J.E.


    Representative cost and performance data are provided in a concise, usable form for three types of compressive liquid packaged chillers: reciprocating, centrifugal, and screw. The data are represented in graphical form as well as in empirical equations. Reciprocating chillers are available from 2.5 to 240 tons with full-load COPs ranging from 2.85 to 3.87. Centrifugal chillers are available from 80 to 2,000 tons with full-load COPs ranging from 4.1 to 4.9. Field-assembled centrifugal chillers have been installed with capacities up to 10,000 tons. Screw-type chillers are available from 100 to 750 tons with full-load COPs ranging from 3.3 to 4.5.

  16. Compressive Fatigue in Wood

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben;


    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square-wave-formed fatigue loading at a stress excitation level corresponding to 80% of the short-term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. It is suggested that accumulated creep be identified with damage, and a correlation is observed between stiffness reduction and accumulated creep. A failure model based on the total work during the fatigue life is rejected, and a modified work model based on elastic, viscous and non-recovered viscoelastic work is experimentally supported, with an explanation at a microstructural level...

  17. Compression-based Similarity

    Vitanyi, Paul M B


    First we consider pair-wise distances for literal objects consisting of finite binary files. These files are taken to contain all of their meaning, like genomes or books. The distances are based on compression of the objects concerned, normalized, and can be viewed as similarity distances. Second, we consider pair-wise distances between names of objects, like "red" or "christianity." In this case the distances are based on searches of the Internet. Such a search can be performed by any search engine that returns aggregate page counts. We can extract a code length from the numbers returned, use the same formula as before, and derive a similarity or relative semantics between names for objects. The theory is based on Kolmogorov complexity. We test both similarities extensively experimentally.
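    The compression-based distance this record describes is commonly instantiated as the normalized compression distance (NCD); a minimal sketch follows, using zlib as the stand-in compressor (the choice of compressor and the test strings are illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: approximate Kolmogorov complexity
    # with the length of the zlib-compressed string.
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b2 = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4   # unrelated, high-entropy-ish filler

# Similar strings give a small distance; unrelated data a large one.
d_similar, d_unrelated = ncd(a, b2), ncd(a, c)
```

    The compressor exploits shared structure when the two objects are concatenated, so near-duplicates barely increase the compressed length while unrelated data adds its full compressed size.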

  18. Adaptively Compressed Exchange Operator

    Lin, Lin


    The Fock exchange operator plays a central role in modern quantum chemistry. The large computational cost associated with the Fock exchange operator hinders Hartree-Fock calculations and Kohn-Sham density functional theory calculations with hybrid exchange-correlation functionals, even for systems consisting of hundreds of atoms. We develop the adaptively compressed exchange operator (ACE) formulation, which greatly reduces the computational cost associated with the Fock exchange operator without loss of accuracy. The ACE formulation does not depend on the size of the band gap, and thus can be applied to insulating, semiconducting as well as metallic systems. In an iterative framework for solving Hartree-Fock-like systems, the ACE formulation only requires moderate modification of the code, and can be potentially beneficial for all electronic structure software packages involving exchange calculations. Numerical results indicate that the ACE formulation can become advantageous even for small systems with tens...

  19. Adaptive compressive sensing camera

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold


    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Following the manufacturing design principle, we allow each working component to be altered by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data-storage savings are immense, and the order of magnitude of the saving is inversely proportional to the target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket-pixel level by biasing the charge-transport voltage toward neighboring buckets or not; if not, the charge goes to the ground drainage. Since a snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, or the powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing by FFT with a threshold on significant Fourier-mode components and inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), new-frame selection by the SAH circuitry is determined by the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N: M(t) = K(t) log N(t).

  20. New Regenerative Cycle for Vapor Compression Refrigeration

    Mark J. Bergander


    The main objective of this project is to confirm, on a well-instrumented prototype, the theoretically derived claims of higher efficiency and coefficient of performance for geothermal heat pumps based on a new regenerative thermodynamic cycle, compared with existing technology. In order to demonstrate the improved performance of the prototype, it will be compared to published parameters of commercially available geothermal heat pumps manufactured by US and foreign companies. Other objectives are to optimize the design parameters and to determine the economic viability of the new technology. Background (as stated in the proposal): The proposed technology relates closely to the EERE mission by improving energy efficiency, bringing clean, reliable and affordable heating and cooling to residential and commercial buildings, and reducing greenhouse gas emissions. It can provide the same amount of heating and cooling with considerably less electrical energy and consequently has the potential of reducing our nation's dependence on foreign oil. The theoretical basis for the proposed thermodynamic cycle was previously developed and was originally called the dynamic equilibrium method. This theory considers the dynamic equations of state of the working fluid and proposes methods for modifying the T-S trajectories of adiabatic transformation by changing dynamic properties of the gas, such as flow rate, speed and acceleration. The substance of this proposal is a thermodynamic cycle characterized by the regenerative use of the potential energy of two-phase flow expansion, which in traditional systems is lost in expansion valves. The essential new features of the process are: (1) the application of two-step throttling of the working fluid and two-step compression of its vapor phase; (2) use of a compressor as the initial compression step and a jet device as a second step, where throttling and compression are combined; (3) a controlled ratio of working fluid at the first and

  1. Effect of Functional Nano Channel Structures Different Widths on Injection Molding and Compression Molding Replication Capabilities

    Calaon, M.; Tosello, G.; Garnaes, J.

    The present study investigates the capabilities of the two employed processes, injection molding (IM) and injection compression molding (ICM) on replicating different channel cross sections. Statistical design of experiment was adopted to optimize replication quality of produced polymer parts wit...

  2. Application specific compression : final report.

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.


    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
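    The zero-the-noise-coefficients idea can be sketched with a single-level Haar wavelet transform, a deliberately simple stand-in for the wavelets used in the study; the signal, noise level, and threshold below are illustrative:

```python
import numpy as np

def haar(x):
    # One level of the orthonormal Haar wavelet transform:
    # low-pass averages (approximation) and high-pass differences (detail).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
noise_std = 0.05
signal = np.sin(2 * np.pi * 4 * t) + noise_std * rng.normal(size=t.size)

a, d = haar(signal)
kept = d.copy()
kept[np.abs(kept) < 3 * noise_std] = 0.0  # zero low-amplitude high-frequency detail
denoised = ihaar(a, kept)
fraction_zeroed = np.mean(kept == 0.0)    # coefficients that need not be stored
```

    Because the smooth target lives mostly in the low-pass band, the bulk of the high-frequency detail coefficients are noise-dominated and can be zeroed, which is exactly what makes the subsequent lossless entropy coding effective.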

  3. Streaming Compression of Hexahedral Meshes

    Isenburg, M; Courbet, C


    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  4. Data Compression with Linear Algebra

    Etler, David


    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.

  5. Compressed sensing for body MRI.

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh


    The introduction of compressed sensing for increasing imaging speed in magnetic resonance imaging (MRI) has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This article presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and nonlinear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the article discusses current challenges and future opportunities. J. Magn. Reson. Imaging 2017;45:966-987. © 2016 International Society for Magnetic Resonance in Medicine.

  6. Compression Maps and Stable Relations

    Price, Kenneth L


    Balanced relations were defined by G. Abrams to extend the convolution product used in the construction of incidence rings. We define stable relations, which form a class between balanced relations and preorders. We also define a compression map to be a surjective function between two sets which preserves order, preserves off-diagonal relations, and has the additional property that every transitive triple is the image of a transitive triple. We show that a compression map preserves the balanced and stable properties, but that the compression of a preorder may be stable and not transitive. We also give an example of a stable relation which is not the compression of a preorder. In our main theorem we provide necessary and sufficient conditions for a finite stable relation to be the compression of a preorder.

  7. Strategies for high-performance resource-efficient compression of neural spike recordings.

    Thorbergsson, Palmi Thor; Garwicz, Martin; Schouenborg, Jens; Johansson, Anders J


    Brain-machine interfaces (BMIs) based on extracellular recordings with microelectrodes provide means of observing the activities of neurons that orchestrate fundamental brain function, and are therefore powerful tools for exploring the function of the brain. Due to physical restrictions and risks for post-surgical complications, wired BMIs are not suitable for long-term studies in freely behaving animals. Wireless BMIs ideally solve these problems, but they call for low-complexity techniques for data compression that ensure maximum utilization of the wireless link and energy resources, as well as minimum heat dissipation in the surrounding tissues. In this paper, we analyze the performances of various system architectures that involve spike detection, spike alignment and spike compression. Performance is analyzed in terms of spike reconstruction and spike sorting performance after wireless transmission of the compressed spike waveforms. Compression is performed with transform coding, using five different compression bases, one of which we pay special attention to. That basis is a fixed basis derived, by singular value decomposition, from a large assembly of experimentally obtained spike waveforms, and therefore represents a generic basis specially suitable for compressing spike waveforms. Our results show that a compression factor of 99.8%, compared to transmitting the raw acquired data, can be achieved using the fixed generic compression basis without compromising performance in spike reconstruction and spike sorting. Besides illustrating the relative performances of various system architectures and compression bases, our findings show that compression of spikes with a fixed generic compression basis derived from spike data provides better performance than compression with downsampling or the Haar basis, given that no optimization procedures are implemented for compression coefficients, and the performance is similar to that obtained when the optimal SVD based
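    The fixed, generic SVD-derived compression basis can be sketched as follows; the synthetic spike waveforms, sizes, and the choice of four basis vectors are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a library of recorded spike waveforms (48 samples each):
# random mixtures of two smooth prototype shapes plus measurement noise.
t = np.linspace(-1, 1, 48)
protos = np.stack([np.exp(-t**2 / 0.05), -t * np.exp(-t**2 / 0.1)])
spikes = rng.normal(size=(1000, 2)) @ protos + 0.01 * rng.normal(size=(1000, 48))

# Derive a fixed, generic compression basis from the spike assembly by SVD.
_, _, Vt = np.linalg.svd(spikes, full_matrices=False)
k = 4
B = Vt[:k]                 # (k, 48): keep only the k leading basis waveforms

def compress(w):
    return B @ w           # k transform coefficients instead of 48 samples

def decompress(c):
    return B.T @ c         # reconstructed waveform

recon = spikes @ B.T @ B   # round-trip every spike through the basis
snr_db = 10 * np.log10(np.sum(spikes**2) / np.sum((spikes - recon)**2))
```

    Transmitting k coefficients per spike instead of the raw samples is the transform-coding step; because the basis is learned once offline from representative data, no per-channel optimization is needed at the implant.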

  8. Strategies for high-performance resource-efficient compression of neural spike recordings.

    Palmi Thor Thorbergsson

    Full Text Available Brain-machine interfaces (BMIs based on extracellular recordings with microelectrodes provide means of observing the activities of neurons that orchestrate fundamental brain function, and are therefore powerful tools for exploring the function of the brain. Due to physical restrictions and risks for post-surgical complications, wired BMIs are not suitable for long-term studies in freely behaving animals. Wireless BMIs ideally solve these problems, but they call for low-complexity techniques for data compression that ensure maximum utilization of the wireless link and energy resources, as well as minimum heat dissipation in the surrounding tissues. In this paper, we analyze the performances of various system architectures that involve spike detection, spike alignment and spike compression. Performance is analyzed in terms of spike reconstruction and spike sorting performance after wireless transmission of the compressed spike waveforms. Compression is performed with transform coding, using five different compression bases, one of which we pay special attention to. That basis is a fixed basis derived, by singular value decomposition, from a large assembly of experimentally obtained spike waveforms, and therefore represents a generic basis specially suitable for compressing spike waveforms. Our results show that a compression factor of 99.8%, compared to transmitting the raw acquired data, can be achieved using the fixed generic compression basis without compromising performance in spike reconstruction and spike sorting. Besides illustrating the relative performances of various system architectures and compression bases, our findings show that compression of spikes with a fixed generic compression basis derived from spike data provides better performance than compression with downsampling or the Haar basis, given that no optimization procedures are implemented for compression coefficients, and the performance is similar to that obtained when the

  9. Study on the quality evaluation metrics for compressed spaceborne hyperspectral data

    LI Xiaohui; ZHANG Jing; LI Chuanrong; LIU Yi; LI Ziyang; ZHU Jiajia; ZENG Xiangzhao


    Based on the raw data of spaceborne dispersive and interferometry imaging spectrometers, a set of quality evaluation metrics for compressed hyperspectral data is initially established in this paper. These quality evaluation metrics, which consist of four aspects including compression statistical distortion, sensor performance evaluation, data application performance and image quality, are suited to comprehensive and systematic analysis of the impact of lossy compression on spaceborne hyperspectral remote sensing data quality. Furthermore, the evaluation results will be helpful in the selection and optimization of satellite data compression schemes.

  10. A Tiled-Table Convention for Compressing FITS Binary Tables

    Pence, William; White, Richard L


    This document describes a convention for compressing FITS binary tables that is modeled after the FITS tiled-image compression method (White et al. 2009), which has been in use for about a decade. The input table is first optionally subdivided into tiles, each containing an equal number of rows; then every column of data within each tile is compressed and stored as a variable-length array of bytes in the output FITS binary table. All the header keywords from the input table are copied to the header of the output table and remain uncompressed for efficient access. The output compressed table contains the same number and order of columns as the input uncompressed binary table. There is one row in the output table corresponding to each tile of rows in the input table. In principle, each column of data can be compressed using a different algorithm that is optimized for the type of data within that column; however, in the prototype implementation described here, the gzip algorithm is used to compress every column.
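    The tile-then-compress-per-column layout can be sketched in a few lines, using zlib's deflate as a stand-in for gzip and NumPy arrays as stand-ins for FITS columns; the column contents and tile size are illustrative:

```python
import zlib
import numpy as np

def compress_table(cols, tile_rows):
    # Tile the table by rows, then deflate each column within each tile:
    # one output row per tile, one compressed byte string per column.
    n = len(cols[0])
    tiles = []
    for start in range(0, n, tile_rows):
        tiles.append([zlib.compress(c[start:start + tile_rows].tobytes())
                      for c in cols])
    return tiles

def decompress_table(tiles, dtypes):
    return [np.concatenate([np.frombuffer(zlib.decompress(tile[i]), dtype=dt)
                            for tile in tiles])
            for i, dt in enumerate(dtypes)]

ids = np.arange(10000, dtype=np.int64)       # monotonically increasing IDs
flux = np.round(np.sin(ids / 50.0), 2)       # smooth, repetitive values

tiles = compress_table([ids, flux], tile_rows=1000)
ids2, flux2 = decompress_table(tiles, [np.int64, np.float64])

raw_bytes = ids.nbytes + flux.nbytes
packed_bytes = sum(len(b) for tile in tiles for b in tile)
ratio = raw_bytes / packed_bytes
```

    Compressing column-by-column keeps values of one type and statistical character together, which is why per-column codec choices can pay off, and tiling by rows allows a reader to decompress only the row ranges it needs.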

  11. Compression-compression fatigue of selective electron beam melted cellular titanium (Ti-6Al-4V).

    Hrabe, Nikolas W; Heinl, Peter; Flinn, Brian; Körner, Carolin; Bordia, Rajendra K


    Regular 3D periodic porous Ti-6Al-4V structures intended to reduce the effects of stress shielding in load-bearing bone replacement implants (e.g., hip stems) were fabricated over a range of relative densities (0.17-0.40) and pore sizes (approximately 500-1500 μm) using selective electron beam melting (EBM). Compression-compression fatigue testing (15 Hz, R = 0.1) resulted in normalized fatigue strengths at 10^6 cycles ranging from 0.15 to 0.25, which is lower than the expected value of 0.4 for solid material of the same acicular α microstructure. The three possible reasons for this reduced fatigue lifetime are stress concentrations from closed porosity observed within struts, stress concentrations from observed strut surface features (sintered particles and texture lines), and a microstructure (either acicular α or martensite) with less than optimal high-cycle fatigue resistance. © 2011 Wiley Periodicals, Inc.

  12. Compressive Sensing for Quantum Imaging

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. 
The technique gives a theoretical speedup of N^2/log N for N-dimensional entanglement over the standard raster-scanning technique.
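    Compressive-sensing reconstruction of the kind used throughout this thesis is often posed as l1-regularized least squares; below is a minimal iterative soft-thresholding (ISTA) sketch with illustrative sizes, not the thesis's actual solver:

```python
import numpy as np

def ista(A, y, lam, steps):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    # a basic reconstruction routine for compressive-sensing measurements.
    L = np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(5)
m, n, k = 64, 256, 6
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
y = A @ x_true                             # m << n compressed measurements
x_hat = ista(A, y, lam=0.01, steps=2000)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    The signal is recovered from 64 random projections of a 256-dimensional sparse vector, which is the dramatic undersampling the thesis exploits when detector arrays are unavailable.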

  13. Compression of perceived depth as a function of viewing conditions.

    Nolan, Ann; Delshad, Rebecca; Sedgwick, Harold A


    The magnification produced by a low-vision telescope has been shown to compress perceived depth. Looking through such a telescope, however, also entails monocular viewing and visual field restriction, and these viewing conditions, taken together, were also shown to compress perceived depth. The research presented here quantitatively explores the separate effects of each of these viewing conditions on perceived depth. Participants made verbal estimates of the length, relative to the width, of rectangles presented in a controlled table-top setting. In experiment 1, the rectangles were either in the frontal plane or receding in depth, and they were viewed either binocularly or monocularly with an unrestricted field of view (FOV). In experiment 2, the rectangles were in depth and were viewed monocularly with an unrestricted FOV, a moderately (40 degrees) restricted FOV, or a severely (11.5 degrees) restricted FOV. Viewed in the frontal plane, either monocularly or binocularly, the vertical dimension was expanded by about 10%. Viewed in depth, with an unrestricted FOV, the (projectively vertical) depth dimension was compressed by 12% when seen binocularly or 24% when seen monocularly. A monocular moderately (40 degrees) restricted FOV was very similar to the unrestricted monocular FOV. A severely (11.5 degrees) restricted FOV, however, produced a substantially greater 44% compression of perceived depth. Even under near-optimal binocular viewing conditions, there is some compression of perceived depth. The compression found when viewing through a low-vision telescope has been shown to be substantially greater. In addition to the previously demonstrated contribution of telescopic magnification to this effect, we have now shown that the viewing conditions of monocularity and severely restricted (11.5 degrees) FOV can each produce substantial increments in the compression of perceived depth. 
We found, however, that a moderately restricted (40 degrees) FOV does not significantly increase this compression.

  14. Advances in compressible turbulent mixing

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.]


    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  15. A new compression design that increases proximal locking screw bending resistance in femur compression nails.

    Karaarslan, Ahmet Adnan; Karakaşli, Ahmet; Karci, Tolga; Aycan, Hakan; Yildirim, Serhat; Sesli, Erhan


    The aim is to present our new method of compression, a compression tube instead of the conventional compression screw, and to investigate the difference in proximal locking screw bending resistance between compression screw application (6 mm wide contact) and compression tube application (two contact points with a 13 mm gap). We formed six groups, each consisting of 10 proximal locking screws. On a metal cylinder representing the lesser trochanter level, we performed 3-point bending tests with the compression screw and with the compression tube. We determined the yield points of the screws in the 3-point bending tests using an axial compression testing machine. The yield point of 5 mm screws was 1963±53 N (mean±SD) with the compression screw and 2929±140 N with the compression tube. We found 51% greater locking screw bending resistance with the compression tube than with the compression screw (p<0.001). Compression tubes should therefore be preferred over compression screws in femur compression nails.

  16. Compressed Submanifold Multifactor Analysis.

    Luu, Khoa; Savvides, Marios; Bui, Tien; Suen, Ching


    Although widely used, Multilinear PCA (MPCA), one of the leading multilinear analysis methods, still suffers from four major drawbacks. First, it is very sensitive to outliers and noise. Second, it is unable to cope with missing values. Third, it is computationally expensive since MPCA deals with large multi-dimensional datasets. Finally, it is unable to maintain the local geometrical structures due to the averaging process. This paper proposes a novel approach named Compressed Submanifold Multifactor Analysis (CSMA) to solve the four problems mentioned above. Our approach can deal with the problem of missing values and outliers via SVD-L1. The Random Projection method is used to obtain a fast low-rank approximation of a given multifactor dataset. In addition, it is able to preserve the geometry of the original data. Our CSMA method can be used efficiently for multiple purposes, e.g. noise and outlier removal, estimation of missing values, and biometric applications. We show that the CSMA method achieves good results and is very efficient in the inpainting problem compared to [1], [2]. Our method also achieves higher face recognition rates than LRTC, SPMA, MPCA and some other methods, i.e. PCA, LDA and LPP, on three challenging face databases, i.e. CMU-MPIE, CMU-PIE and Extended YALE-B.

  17. Compressed modes for variational problems in mathematics and physics.

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley


    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.

  18. The OMV Data Compression System Science Data Compression Workshop

    Lewis, Garton H., Jr.


    The Video Compression Unit (VCU), Video Reconstruction Unit (VRU), theory and algorithms for implementation of Orbital Maneuvering Vehicle (OMV) source coding, docking mode, channel coding, error containment, and video tape preprocessed space imagery are presented in viewgraph format.

  19. Wearable EEG via lossless compression.

    Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo


    This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on an algorithm previously reported by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
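The core idea above, coding prediction residuals instead of raw samples to exploit temporal correlation, can be sketched as follows; this uses a first-order predictor and zlib as a stand-in entropy coder, not the authors' actual algorithm:

```python
import zlib
import numpy as np

def compress_channel(samples: np.ndarray) -> bytes:
    """Code first-order prediction residuals, which cluster near zero for
    temporally correlated signals, then entropy-code them (zlib stand-in)."""
    residuals = np.diff(samples.astype(np.int32), prepend=0).astype(np.int16)
    return zlib.compress(residuals.tobytes(), 9)

def decompress_channel(blob: bytes) -> np.ndarray:
    residuals = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    return np.cumsum(residuals.astype(np.int32)).astype(np.int16)

# Smooth, correlated test signal standing in for one EEG channel
t = np.arange(3000)
signal = (200 * np.sin(2 * np.pi * t / 250)).astype(np.int16)
blob = compress_channel(signal)
assert np.array_equal(decompress_channel(blob), signal)   # lossless round trip
print(len(signal.tobytes()) / len(blob))                  # compression ratio
```

Because consecutive samples are close in value, the residual stream is far more compressible than the raw samples, while the cumulative sum reconstructs the signal exactly.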

  20. Context-Aware Image Compression.

    Jacky C K Chan

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  1. Compressive sensing for urban radar

    Amin, Moeness


    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, which also helps address logistical difficulties in data acquisition. Traditionally, these challenges have hindered high-resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking...

  2. Designing experiments through compressed sensing.

    Young, Joseph G.; Ridzal, Denis


    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  3. Compressive myelopathy in fluorosis: MRI

    Gupta, R.K. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Agarwal, P. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Kumar, S. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Surana, P.K. [Department of Neurology, SGPGIMS, Lucknow-226014 (India); Lal, J.H. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Misra, U.K. [Department of Neurology, SGPGIMS, Lucknow-226014 (India)


    We examined four patients with fluorosis, presenting with compressive myelopathy, by MRI, using spin-echo and fast low-angle shot sequences. Cord compression due to ossification of the posterior longitudinal ligament (PLL) and ligamentum flavum (LF) was demonstrated in one and ossification of only the LF in one. Marrow signal was observed in the PLL and LF in all the patients on all pulse sequences. In patients with compressive myelopathy secondary to ossification of PLL and/or LF, fluorosis should be considered as a possible cause, especially in endemic regions. (orig.). With 2 figs., 1 tab.

  4. Compressive phase-only filtering at extreme compression rates

    Pastor-Calle, David; Pastuszczak, Anna; Mikołajczyk, Michał; Kotyński, Rafał


    We introduce an efficient method for the reconstruction of the correlation between a compressively measured image and a phase-only filter. The proposed method is based on two properties of phase-only filtering: such filtering is a unitary circulant transform, and the correlation plane it produces is usually sparse. Thanks to these properties, phase-only filters are perfectly compatible with the framework of compressive sensing. Moreover, the lasso-based recovery algorithm is very fast when phase-only filtering is used as the compression matrix. The proposed method can be seen as a generalization of the correlation-based pattern recognition technique, which is hereby applied directly to non-adaptively acquired compressed data. No prior knowledge of the target object for which the data will be scanned is required at the time of measurement. We show that images measured at extremely high compression rates may still contain sufficient information for target classification and localization, even when the compression rate is so high that visual recognition of the target in the reconstructed image is no longer possible. We have applied the method to highly undersampled measurements obtained from a single-pixel camera, with sampling based on randomly chosen Walsh-Hadamard patterns.
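The phase-only filtering step itself (without the compressive acquisition) can be sketched in a few lines; the array sizes and target location below are illustrative assumptions:

```python
import numpy as np

def phase_only_correlation(image, template):
    """Correlate an image with the phase-only filter of a template:
    keep only conj(F)/|F|, which yields a sharp, sparse correlation peak."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)   # zero-pad to image size
    pof = np.conj(F_tpl) / (np.abs(F_tpl) + 1e-12) # phase-only filter
    return np.real(np.fft.ifft2(F_img * pof))

rng = np.random.default_rng(1)
template = rng.standard_normal((8, 8))
scene = np.zeros((64, 64))
scene[20:28, 30:38] = template                      # plant the target at (20, 30)
corr = phase_only_correlation(scene, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)                                         # the planted target location
```

The correlation plane is nearly zero everywhere except a single sharp peak at the target offset, which is exactly the sparsity property the abstract exploits for compressive recovery.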

  5. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Xiangwei Li


    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining low computational complexity.

  6. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning


    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining low computational complexity.

  7. Compressive Acquisition of Dynamic Scenes

    Sankaranarayanan, Aswin C; Chellappa, Rama; Baraniuk, Richard G


    Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the...

  8. Normalized Compression Distance of Multiples

    Cohen, Andrew R


    Normalized compression distance (NCD) is a parameter-free similarity measure based on compression. The NCD between pairs of objects is not sufficient for all applications. We propose an NCD of finite multisets (multiples) of objects that is metric and is better for many applications. Previously, attempts to obtain such an NCD failed. We use the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated from above by the length of the compressed version of the file involved, using a real-world compression program. We applied the new NCD for multiples to retinal progenitor cell questions that were earlier treated with the pairwise NCD. Here we get significantly better results. We also applied the NCD for multiples to synthetic time sequence data. The preliminary results are as good as the nearest-neighbor Euclidean classifier.
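For reference, the pairwise NCD that the multiset version generalizes can be computed directly with any real-world compressor; this sketch uses zlib as the stand-in:

```python
import zlib

def C(data: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length (zlib stand-in)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Pairwise normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 40
b = b"lorem ipsum dolor sit amet consectetur adipiscing " * 40
print(ncd(a, a), ncd(a, b))   # near 0 for identical inputs, larger for unrelated ones
```

Intuitively, if y shares structure with x, compressing their concatenation costs little more than compressing the larger one alone, so the distance is small.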

  9. Compression fractures of the back

    Taking steps to prevent and treat osteoporosis is the most effective way to prevent compression or insufficiency fractures. Getting regular load-bearing exercise (such as walking) can help you avoid bone loss.

  10. Compressed sensing for distributed systems

    Coluccia, Giulio; Magli, Enrico


    This book presents a survey of the state of the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting the latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  11. Preprocessing of compressed digital video

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.


    Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image and the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.

  12. Compressed gas fuel storage system

    Wozniak, John J. (Columbia, MD); Tiller, Dale B. (Lincoln, NE); Wienhold, Paul D. (Baltimore, MD); Hildebrand, Richard J. (Edgemere, MD)


    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  13. Few-cycle fiber pulse compression and evolution of negative resonant radiation

    McLenaghan, Joanna


    We present numerical simulations and experimental observations of the spectral expansion of fs-pulses compressing in optical fibers. Using the input pulse frequency chirp we are able to scan through the pulse compression spectra and observe in detail the emergence of negative-frequency resonant radiation (NRR), a recently discovered pulse instability coupling to negative frequencies [Rubino et al., PRL 108, 253901 (2012)]. We observe how the compressing pulse is exciting NRR as long as it overlaps spectrally with the resonant frequency. Furthermore, we observe that optimal pulse compression can be achieved at an optimal input chirp and for an optimal fiber length. The results are important for Kerr-effect pulse compressors, to generate novel light sources, as well as for the observation of quantum vacuum radiation.

  14. Shock compression of polyvinyl chloride

    Neogi, Anupam; Mitra, Nilanjan


    This study presents shock compression simulation of atactic polyvinyl chloride (PVC) using ab-initio and classical molecular dynamics. The manuscript also identifies the limits of applicability of classical molecular dynamics based shock compression simulation for PVC. The mechanism of bond dissociation under shock loading and its progression is demonstrated in this manuscript using the density functional theory based molecular dynamics simulations. The rate of dissociation of different bonds at different shock velocities is also presented in this manuscript.

  15. Bayesian online compressed sensing

    Rossi, Paulo V.; Kabashima, Yoshiyuki; Inoue, Jun-ichi


    In this paper, we explore the possibilities and limitations of recovering sparse signals in an online fashion. Employing a mean field approximation to the Bayes recursion formula yields an online signal recovery algorithm that can be performed with a computational cost that is linearly proportional to the signal length per update. Analysis of the resulting algorithm indicates that the online algorithm asymptotically saturates the optimal performance limit achieved by the offline method in the presence of Gaussian measurement noise, while differences in the allowable computational costs may result in fundamental gaps of the achievable performance in the absence of noise.

  16. Bridgman's concern (shock compression science)

    Graham, R. A.


    In 1956 P. W. Bridgman published a letter to the editor in the Journal of Applied Physics reporting results of electrical resistance measurements on iron under static high pressure. The work was undertaken to verify the existence of a polymorphic phase transition at 130 kbar (13 GPa) reported in the same journal and year by the Los Alamos authors, Bancroft, Peterson, and Minshall for high pressure, shock-compression loading. In his letter, Bridgman reported that he failed to find any evidence for the transition. Further, he raised some fundamental concerns as to the state of knowledge of shock-compression processes in solids. Later it was determined that Bridgman's static pressure scale was in error, and the shock observations became the basis for calibration of pressure values in static high pressure apparatuses. In spite of the error in pressure scales, Bridgman's concerns on descriptions of shock-compression processes were perceptive and have provided the basis for subsequent fundamental studies of shock-compressed solids. The present paper, written in response to receipt of the 1993 American Physical Society Shock-Compression Science Award, provides a brief contemporary assessment of those shock-compression issues which were the basis of Bridgman's 1956 concerns.

  17. Hidden force opposing ice compression

    Sun, Chang Q; Zheng, Weitao


    Coulomb repulsion between the unevenly-bound bonding and nonbonding electron pairs in the O:H-O hydrogen-bond is shown to originate the anomalies of ice under compression. Consistency between experimental observations, density functional theory and molecular dynamics calculations confirmed that the resultant force of the compression, the repulsion, and the recovery of electron-pair dislocations differentiates ice from other materials in response to pressure. The compression shortens and strengthens the longer-and-softer intermolecular O:H lone-pair virtual-bond; the repulsion pushes the bonding electron pair away from the H+/p and hence lengthens and weakens the intramolecular H-O real-bond. The virtual-bond compression and the real-bond elongation symmetrize the O:H-O as observed at ~60 GPa and result in the abnormally low compressibility of ice. The virtual-bond stretching phonons stiffened while the real-bond stretching phonons (~3000 cm-1) softened upon compression. The cohesive energy of the real-bond dominates and its loss lowers the critical temperat...


    Wang Wen; Wu Shixiong; Chen Zichen


    NC code or STL files can be generated directly from measurement data in a fast reverse-engineering mode. Compressing the massive data from the laser scanner is the key to the new mode. An adaptive compression method based on a triangulated-surface model is put forward. Normal-vector angles between triangles are computed to find prime vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long, thin triangles, a new re-triangulation approach based on normalized minimum vertex distance is proposed, in which the vertex distance and the interior angles of the triangles are considered. Results indicate that the compression method has high efficiency and reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
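The vertex-selection step described above, flagging vertices whose incident triangle normals are nearly parallel, might be sketched as follows; the mesh layout, function names, and threshold are illustrative assumptions, and re-triangulation is omitted:

```python
import numpy as np

def triangle_normal(pts):
    """Unit normal of a triangle given as a (3, 3) array of vertex positions."""
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    return n / np.linalg.norm(n)

def removal_candidates(vertices, triangles, angle_thresh_deg=5.0):
    """Flag vertices whose incident triangle normals are nearly parallel:
    low-curvature points that are candidates for removal during decimation."""
    normals = np.array([triangle_normal(vertices[t]) for t in triangles])
    incident = {}                                   # vertex -> incident triangle ids
    for ti, tri in enumerate(triangles):
        for v in tri:
            incident.setdefault(v, []).append(ti)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    flat = []
    for v, tris in incident.items():
        ns = normals[tris]
        if (ns @ ns.T).min() >= cos_thresh:         # max pairwise normal angle small
            flat.append(v)
    return flat

# Tiny test mesh: a flat square split into two coplanar triangles
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
tris = [[0, 1, 2], [1, 3, 2]]
print(removal_candidates(verts, tris))              # every vertex lies on a flat patch
```

Vertices on curved regions fail the parallel-normal test and are kept, so removing only the flagged vertices preserves the surface features that carry shape information.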

  19. "Compressing liquid": an efficient global minima search strategy for clusters.

    Zhou, R L; Zhao, L Y; Pan, B C


    In this paper we present a new global search strategy named "compressing liquid" for atomic clusters. In this strategy, a random fragment of liquid structure is adopted as a starting geometry, followed by iterative operations of "compressing" and Monte Carlo adjustment of the atom positions plus structural optimization. It exhibits fair efficiency when applied to seeking the global minima of Lennard-Jones clusters. We also employed it to search for low-lying candidates of medium silicon clusters Si(n) (n=40-60), for which systematic global searches have been lacking. We found the best candidates for most sizes. More importantly, we obtained non-fullerene-based structures for some cluster sizes, which were not found by the endohedral-fullerene strategy. These results indicate that the "compressing-liquid" method is highly efficient for the global minima search of clusters.
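A toy rendition of this strategy, alternating a gentle inward "compression" with Monte Carlo jitter over a Lennard-Jones cluster, might look like the sketch below; greedy acceptance stands in for the paper's structural optimization step, and all parameters are illustrative:

```python
import numpy as np

def lj_energy(pos):
    """Total Lennard-Jones energy of a cluster in reduced units."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    r = d[np.triu_indices(len(pos), k=1)]           # unique pair distances
    return float(np.sum(4.0 * (r ** -12 - r ** -6)))

def compressing_liquid_search(n=7, steps=2000, seed=0):
    """Start from a random liquid-like blob, then repeatedly compress toward
    the centroid and jitter atoms, keeping only energy-lowering moves."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.5, 1.5, size=(n, 3))       # random "liquid" fragment
    e0 = cur_e = lj_energy(pos)
    for _ in range(steps):
        trial = pos * 0.999                          # "compressing" operation
        trial = trial + rng.normal(0.0, 0.02, size=pos.shape)  # MC adjustment
        e = lj_energy(trial)
        if e < cur_e:                                # greedy acceptance
            pos, cur_e = trial, e
    return e0, cur_e

e0, best = compressing_liquid_search()
print(e0, best)
```

By construction the energy never increases, so the final geometry is at least as good as the random liquid start; a real implementation would add a proper local optimizer at each step.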

  20. Structured sublinear compressive sensing via dense belief propagation

    Dai, Wei; Pham, Hoa Vin


    Compressive sensing (CS) is a sampling technique designed for reducing the complexity of sparse data acquisition. One of the major obstacles for practical deployment of CS techniques is the signal reconstruction time and the high storage cost of random sensing matrices. We propose a new structured compressive sensing scheme, based on codes of graphs, that allows for a joint design of structured sensing matrices and logarithmic-complexity reconstruction algorithms. The compressive sensing matrices can be shown to offer asymptotically optimal performance when used in combination with Orthogonal Matching Pursuit (OMP) methods. For more elaborate greedy reconstruction schemes, we propose a new family of dense list decoding belief propagation algorithms, as well as reinforced- and multiple-basis belief propagation algorithms. Our simulation results indicate that reinforced BP CS schemes offer very good complexity-performance tradeoffs for very sparse signal vectors.
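For reference, the OMP baseline mentioned above is a short greedy algorithm; this textbook sketch (dimensions, seed, and coefficient ranges are illustrative) recovers an exactly sparse vector from noiseless random measurements:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the residual, then re-fit the whole support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef         # orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 128, 60, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)        # random sensing matrix
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)
y = A @ x_true                                      # noiseless measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))               # near zero on success
```

The structured sensing matrices proposed in the paper aim to retain this kind of recovery guarantee while replacing the dense random matrix with a cheaper, code-based construction.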

  1. Adaptive interference hyperspectral image compression with spectrum distortion control

    Jing Ma; Yunsong Li; Chengke Wu; Dong Chen


    As one of the next generation of imaging spectrometers, the interferential spectrometer has attracted much attention. With traditional spectrum compression methods, the hyperspectral images generated by an interferential spectrometer can only be protected with good visual quality in the spatial domain, while their optical applications in the Fourier domain are often ignored. The relation between distortion in the Fourier domain and compression in the spatial domain is therefore analyzed in this letter. Based on this analysis, a novel coding scheme is proposed, which can compress data in the spatial domain while reducing distortion in the Fourier domain. The bitstream of set partitioning in hierarchical trees (SPIHT) is truncated by adaptively lifting the rate-distortion slopes of zerotrees according to the priorities of optical path difference (OPD), based on rate-distortion optimization theory. Experimental results show that the proposed scheme achieves better performance in the Fourier domain while maintaining image quality in the spatial domain.

  2. Metal hydride hydrogen compression: recent advances and future prospects

    Yartys, Volodymyr A.; Lototskyy, Mykhaylo; Linkov, Vladimir; Grant, David; Stuart, Alastair; Eriksen, Jon; Denys, Roman; Bowman, Robert C.


    Metal hydride (MH) thermal sorption compression is one of the more important applications of MHs. The present paper reviews recent advances in the field based on an analysis of the fundamental principles of this technology. The performance when boosting hydrogen pressure, along with two- and three-step compression units, is analyzed. The paper also includes theoretical modelling of a two-stage compressor aimed at describing the performance of the experimentally studied systems, their optimization, and the design of more advanced MH compressors. Business developments in the field are reviewed for the Norwegian company HYSTORSYS AS and the South African Institute for Advanced Materials Chemistry. Finally, future prospects are outlined, presenting the role of MH compression in the overall development of hydrogen-driven energy systems. The work is based on an analysis of the development of the technology in Europe, the USA and South Africa.
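The thermodynamic basis of MH thermal compression is the van't Hoff relation for the equilibrium (plateau) pressure: heating a hydride bed from an absorption temperature T_L to a desorption temperature T_H raises its pressure exponentially. With ΔH and ΔS the (positive) desorption enthalpy and entropy, a single stage gives

```latex
\ln\frac{P_{\mathrm{eq}}}{P_0} = -\frac{\Delta H}{RT} + \frac{\Delta S}{R},
\qquad
\ln\frac{P_H}{P_L} = \frac{\Delta H}{R}\left(\frac{1}{T_L} - \frac{1}{T_H}\right)
```

so the achievable compression ratio per stage is set by the hydride's ΔH and the available temperature swing, which is why the multi-step units reviewed here cascade alloys with staggered plateau pressures.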

  3. Drug release mechanisms of compressed lipid implants.

    Kreye, F; Siepmann, F; Siepmann, J


    The aim of this study was to elucidate the mass transport mechanisms controlling drug release from compressed lipid implants. The latter steadily gain in importance as parenteral controlled release dosage forms, especially for acid-labile drugs. A variety of lipid powders were blended with theophylline and propranolol hydrochloride as sparingly and freely water-soluble model drugs. Cylindrical implants were prepared by direct compression and thoroughly characterized before and after exposure to phosphate buffer pH 7.4. Based on the experimental results, an appropriate mathematical theory was identified in order to quantitatively describe the resulting drug release patterns. Importantly, broad release spectra and release periods ranging from 1 d to several weeks could easily be achieved by varying the type of lipid, irrespective of the type of drug. Interestingly, diffusion with constant diffusivities was found to be the dominant mass transport mechanism if the amount of water within the implant was sufficient to dissolve all of the drug. In these cases an analytical solution of Fick's second law could successfully describe the experimentally measured theophylline and propranolol hydrochloride release profiles, even when varying formulation and processing parameters, e.g. the type of lipid, initial drug loading, drug particle size as well as compression force and time. However, based on the available data it was not possible to distinguish between drug diffusion control and water diffusion control. The obtained new knowledge can nevertheless significantly help facilitate the optimization of this type of advanced drug delivery system, in particular if long release periods are targeted, which require time-consuming experimental trials.

  4. Comparing image compression methods in biomedical applications

    Libor Hargas


    Compression methods suitable for image processing in biomedical applications are described in this article. Compression is often realized by reducing irrelevance or redundancy. Lossless and lossy compression methods that can be used to compress images in biomedical applications are described, and these methods are compared on the basis of fidelity criteria.

  5. 29 CFR 1917.154 - Compressed air.


    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  6. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Hanxiao Wu


    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the resolution required of the coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which operate directly on the compressive sampling images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image for foreground detection. For each moving target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented, and an l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that the low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; the Gaussian and Toeplitz masks, however, achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  7. Perceptually tuned JPEG coder for echocardiac image compression.

    Al-Fahoum, Amjed S; Reza, Ali M


    In this work, we propose an efficient framework for compressing and displaying medical images. Image compression for medical applications, due to applicable Digital Imaging and Communications in Medicine requirements, is limited to the standard discrete cosine transform-based Joint Photographic Experts Group (JPEG) scheme. The objective of this work is to develop a set of quantization tables (Q tables) for compression of a specific class of medical image sequences, namely echocardiac. The main issue of concern is to achieve a Q table that matches the specific application and can linearly change the compression rate by adjusting the gain factor. This goal is achieved by considering the region of interest, optimum bit allocation, human visual system constraints, and optimum coding technique. These parameters are jointly optimized to design a Q table that works robustly for a category of medical images. Application of this approach to echocardiac images shows high subjective and quantitative performance. The proposed approach exhibits objectively a 2.16-dB improvement in the peak signal-to-noise ratio and subjectively a 25% improvement over the most widely used compression techniques.

  8. CMOS low data rate imaging method based on compressed sensing

    Xiao, Long-long; Liu, Kun; Han, Da-peng


    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low data rate imaging approach implementing compressed sensing (CS). Under the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical currents output by the pixels in a column are combined, with weights specified by voltages, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM): each element of the VMM takes the total value of its column as input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm; this percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase, before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio while ensuring imaging quality.
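
    The capture-then-recover pipeline described above can be sketched in a few lines, making no assumptions about the actual FG-transistor hardware: a random Gaussian projection stands in for the analog VMM stage, and a greedy orthogonal matching pursuit (a common CS recovery method, not necessarily the one used by the authors) recovers the sparse image vector from fewer measurements than pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "image" vector: n pixels, only k of them nonzero.
n, m, k = 64, 32, 4
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Analog measurement stage modeled as a random Gaussian projection:
# m < n measurements give a compression ratio of m/n.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-atom support."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit the signal on the selected columns by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
```

    With a well-conditioned random projection and sufficiently few nonzeros, the greedy recovery typically finds the exact support; the compression ratio here is m/n = 0.5.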

  9. Word aligned bitmap compression method, data structure, and apparatus

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow


    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern-location operations on large datasets. The technique comprises a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH-compressed bitmap index. Some commercial database products already include a version of a bitmap index, which could be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiency in constructing compressed bitmaps. Taken together, these techniques may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
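
    A toy version of the word-aligned idea can be written down directly. The sketch below is not the actual WAH bit layout (real WAH packs a literal/fill flag into the top bit of each 32-bit word); it just groups a bitmap into 31-bit chunks and run-length-merges the all-zero/all-one chunks, which is the property that makes the compressed index cheap to scan and to combine with logical operations.

```python
WORD = 31  # payload bits per 32-bit word, as in WAH

def wah_encode(bits):
    """Encode a 0/1 list into 'lit' (mixed chunk) and 'fill' (run) words."""
    pad = (-len(bits)) % WORD
    padded = bits + [0] * pad
    chunks = [tuple(padded[i:i + WORD]) for i in range(0, len(padded), WORD)]
    words = []
    for c in chunks:
        if c == (0,) * WORD or c == (1,) * WORD:
            bit = c[0]
            if words and words[-1][0] == 'fill' and words[-1][1] == bit:
                words[-1] = ('fill', bit, words[-1][2] + 1)  # extend the run
            else:
                words.append(('fill', bit, 1))
        else:
            words.append(('lit', c))  # mixed chunk stored verbatim
    return words, pad

def wah_decode(words, pad):
    """Expand the words back into the original 0/1 list."""
    out = []
    for w in words:
        if w[0] == 'lit':
            out.extend(w[1])
        else:
            out.extend([w[1]] * (WORD * w[2]))
    return out[:len(out) - pad] if pad else out

bitmap = [0] * 100 + [1, 0, 1, 1] + [1] * 60
words, pad = wah_encode(bitmap)
```

    A sparse bitmap with long runs collapses to a handful of fill words, which is why WAH-compressed indexes suit the infrequently varying, mostly-uniform columns common in OLAP workloads.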

  10. Phase diagram of matrix compressed sensing

    Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka


    In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.

  11. Designing robust sensing matrix for image compression.

    Li, Gang; Li, Xiao; Li, Sheng; Bai, Huang; Jiang, Qianru; He, Xiongxiong


    This paper deals with designing sensing matrices for compressive sensing systems. Traditionally, the optimal sensing matrix is designed so that the Gram of the equivalent dictionary is as close as possible to a target Gram with small mutual coherence. A novel design strategy is proposed in which, unlike the traditional approaches, the measure considers the mutual coherence behavior of the equivalent dictionary as well as the sparse representation errors of the signals. The optimal sensing matrix is defined as the one that minimizes this measure and hence is expected to be more robust against sparse representation errors. A closed-form solution is derived for the optimal sensing matrix with a given target Gram. An alternating minimization-based algorithm is also proposed for addressing the same problem with the target Gram searched within a set of relaxed equiangular tight frame Grams. Experiments are carried out, and the results show that the sensing matrix obtained using the proposed approach outperforms existing ones that use a fixed dictionary, in terms of signal reconstruction accuracy for synthetic data and peak signal-to-noise ratio for real images.
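
    The quantity being optimized is easy to compute. The sketch below (with arbitrary illustrative sizes, not those from the paper) forms the equivalent dictionary D = ΦΨ, normalizes its columns, and reads the mutual coherence off the Gram matrix; any candidate sensing matrix Φ can be scored the same way, and the Welch bound gives the floor that no design can beat.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20, 50, 80                  # measurements, signal dim, dictionary atoms
Phi = rng.standard_normal((m, n))     # sensing matrix (here: unoptimized, random)
Psi = rng.standard_normal((n, k))     # sparsifying dictionary
D = Phi @ Psi                         # equivalent dictionary

Dn = D / np.linalg.norm(D, axis=0)    # normalize columns to unit norm
G = Dn.T @ Dn                         # Gram matrix of the equivalent dictionary
mu = np.max(np.abs(G - np.eye(k)))    # mutual coherence: largest off-diagonal

# Welch bound: the smallest coherence any k unit vectors in R^m can achieve.
welch = np.sqrt((k - m) / (m * (k - 1)))
```

    A designed Φ would drive mu toward the Welch bound; the paper's point is that minimizing coherence alone is not enough when the dictionary only approximately sparsifies the signals.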

  12. Compressibility, turbulence and high speed flow

    Gatski, Thomas B


    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided and

  13. 30 CFR 75.1730 - Compressed air; general; compressed air systems.


    ... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... pressure has been relieved from that part of the system to be repaired. (d) At no time shall compressed air... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems...

  14. Optimally Stopped Optimization

    Vinci, Walter; Lidar, Daniel A.


    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
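
    The expected-total-cost trade-off is simple to reproduce with a toy solver. In the sketch below the "solver" just draws a uniform random objective value, a run is stopped the first time the value falls below a threshold, and the total cost is calls × cost-per-call plus the accepted objective. Sweeping the threshold exhibits the interior optimum that the paper's optimal-stopping formulation finds analytically (everything here, including the cost-per-call value, is an illustrative assumption, not the paper's benchmark).

```python
import random

random.seed(0)
COST_PER_CALL = 0.1  # illustrative cost charged for each solver invocation

def solver():
    """Toy randomized solver: objective value uniform on [0, 1)."""
    return random.random()

def mean_total_cost(threshold, trials=2000):
    """Average (calls * cost-per-call + accepted objective) over many runs."""
    total = 0.0
    for _ in range(trials):
        calls = 0
        while True:
            calls += 1
            value = solver()
            if value <= threshold:
                break
        total += calls * COST_PER_CALL + value
    return total / trials

# Stopping too eagerly or waiting too long both cost more than a tuned threshold.
costs = {t: mean_total_cost(t) for t in (0.05, 0.45, 0.95)}
```

    For this uniform toy model the expected cost is c/t + t/2 for threshold t, minimized near t = sqrt(2c), which is what the sweep shows empirically.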

  15. Optimization and Optimal Control

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider


    During the last four decades there has been a remarkable development in optimization and optimal control. Due to its wide variety of applications, many scientists and researchers have paid attention to fields of optimization and optimal control. A huge number of new theoretical, algorithmic, and computational results have been observed in the last few years. This book gives the latest advances, and due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: Provides a collection of selected contributions giving a state-of-the-art accou

  16. Compressive Imaging with Iterative Forward Models

    Liu, Hsiou-Yuan; Liu, Dehong; Mansour, Hassan; Boufounos, Petros T


    We propose a new compressive imaging method for reconstructing 2D or 3D objects from their scattered wave-field measurements. Our method relies on a novel, nonlinear measurement model that can account for the multiple scattering phenomenon, which makes the method preferable in applications where linear measurement models are inaccurate. We construct the measurement model by expanding the scattered wave-field with an accelerated-gradient method, which is guaranteed to converge and is suitable for large-scale problems. We provide explicit formulas for computing the gradient of our measurement model with respect to the unknown image, which enables image formation with a sparsity-driven numerical optimization algorithm. We validate the method both analytically and with numerical simulations.

  17. Word-Based Text Compression

    Platos, Jan


    Today there are many universal compression algorithms, but in most cases specific data are better handled by a specific algorithm - JPEG for images, MPEG for movies, etc. For textual documents there are special methods based on the PPM algorithm, as well as methods with non-character access, e.g. word-based compression. In the past, several papers describing variants of word-based compression using Huffman encoding or the LZW method were published. The subject of this paper is the description of a word-based compression variant based on the LZ77 algorithm. The LZ77 algorithm and its modifications are described in this paper. Moreover, various ways of implementing the sliding window and various possibilities for output encoding are described as well. This paper also includes the implementation of an experimental application, testing of its efficiency, and finding the best combination of all parts of the LZ77 coder. This is done to achieve the best compression ratio. In conclusion there is a comparison of this implemented application wi...
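
    The core LZ77 loop that such variants build on fits in a few lines. The sketch below is character-based rather than word-based (the paper's variant tokenizes words first) and uses a naive window search with illustrative window and match-length limits:

```python
def lz77_compress(data, window=255, max_len=15):
    """Tokenize `data` into literals and (distance, length) back-references."""
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        # Search the sliding window for the longest match of the lookahead.
        for j in range(max(0, i - window), i):
            l = 0
            while l < max_len and i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_dist = l, i - j
        if best_len >= 3:                      # short matches cost more than literals
            out.append((best_dist, best_len))  # back-reference token
            i += best_len
        else:
            out.append(data[i])                # literal token
            i += 1
    return out

def lz77_decompress(tokens):
    out = []
    for t in tokens:
        if isinstance(t, tuple):
            dist, length = t
            for _ in range(length):
                out.append(out[-dist])  # copy-by-one handles overlapping matches
        else:
            out.append(t)
    return ''.join(out)
```

    A word-based coder would run the same loop over a sequence of word tokens instead of characters, trading a larger alphabet for much longer effective matches.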

  18. Statistical mechanics analysis of thresholding 1-bit compressed sensing

    Xu, Yingying


    The one-bit compressed sensing framework aims to reconstruct a sparse signal using only the sign information of its linear measurements. To compensate for the loss of scale information, past studies in the area have proposed recovering the signal by imposing an additional constraint on the L2-norm of the signal. Recently, an alternative strategy was advanced that captures scale information by introducing a threshold parameter into the quantization process. In this paper, we analyze the typical behavior of thresholding 1-bit compressed sensing utilizing the replica method of statistical mechanics, so as to gain insight into properly setting the threshold value. Our result shows that fixing the threshold at a constant value yields better statistical performance than varying it randomly, when the constant is optimally tuned. Unfortunately, the optimal threshold value depends on the statistical properties of the target signal, which may not be known in advance. In order to handle this inconvenience, we ...
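
    The measurement model under discussion is one line: keep only the sign of each linear measurement after subtracting a threshold τ. A minimal sketch follows (sizes and τ are illustrative, and the crude back-projection estimate is not the paper's reconstruction; the replica analysis concerns how to choose τ):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 32, 200
x = np.zeros(n)
x[:3] = [1.0, -0.5, 0.8]              # sparse target signal

A = rng.standard_normal((m, n))       # measurement matrix
tau = 0.5                             # quantization threshold
y = np.sign(A @ x - tau)              # 1-bit measurements: only signs survive

# A crude direction estimate: back-project the signs. Unlike the tau = 0
# case, thresholded signs also carry information about the scale of x.
x_hat = A.T @ y / m
```

    Even this naive estimator correlates positively with the true signal direction; proper recovery algorithms (and the optimal choice of tau) improve substantially on it.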

  19. Coding Strategies and Implementations of Compressive Sensing

    Tsai, Tsung-Han

    information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials in compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scene; and distinguishing mixed conversations from independent sources with high audio recognition rate.

  20. MAXAD distortion minimization for wavelet compression of remote sensing data

    Alecu, Alin; Munteanu, Adrian; Schelkens, Peter; Cornelis, Jan P.; Dewitte, Steven


    In the context of compression of high-resolution multi-spectral satellite image data consisting of radiances and top-of-the-atmosphere fluxes, it is vital that image calibration characteristics (luminance, radiance) be preserved within certain limits under lossy image compression. Though existing compression schemes (SPIHT, JPEG2000, SQP) give good results as far as minimization of the global PSNR error is concerned, they fail to guarantee a maximum local error. With respect to this, we introduce a new image compression scheme which guarantees a MAXAD distortion, defined as the maximum absolute difference between original and reconstructed pixel values. In the Lagrangian optimization problem, this translates into minimizing the rate for a given MAXAD distortion. Our approach thus uses the l-infinity distortion measure, applied to the lifting-scheme implementation of the 9-7 floating-point Cohen-Daubechies-Feauveau (CDF) filter. Scalar quantizers, optimal in the D-R sense, are derived for every subband by solving a global optimization problem that guarantees a user-defined MAXAD. The optimization problem has been defined and solved for the case of the 9-7 filter, and we show that our approach is valid and may be applied to any finite wavelet filter synthesized via lifting. The experimental assessment of our codec shows that our technique provides excellent results in applications such as remote sensing, in which reconstruction of image calibration characteristics within a tolerable local error (MAXAD) is perceived as being of crucial importance, compared to merely obtaining an acceptable global error (PSNR), as is the case with existing quantizer design techniques.
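
    The distinction the scheme is built on - a small global PSNR error does not bound the worst pixel - is easy to demonstrate. A minimal sketch with a synthetic "image" (values and sizes are illustrative):

```python
import numpy as np

def psnr(orig, rec, peak=255.0):
    """Global quality: peak signal-to-noise ratio in dB."""
    mse = np.mean((orig - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def maxad(orig, rec):
    """Local guarantee: maximum absolute difference over all pixels."""
    return float(np.max(np.abs(orig - rec)))

orig = np.zeros(16)
rec_spread = orig + 1.0          # error of 1 on every pixel
rec_spike = orig.copy()
rec_spike[0] = 4.0               # error of 4 on one pixel, 0 elsewhere

# Both reconstructions have the same MSE (hence the same PSNR),
# but very different MAXAD - the spike is invisible to PSNR.
```

    This is exactly why a calibration-sensitive application needs a codec that constrains MAXAD directly rather than optimizing PSNR alone.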

  1. Parametric Analysis of Composite Reinforced Wood Tubes Under Axial Compression

    Cabrero, J.; Heiduschke, A.; Haller, P. (P.)


    Wood tubes combine economy, an efficient use of the material, and optimal structural performance. They can optionally be reinforced with technical fibers and/or textiles laminated to the outer wood surface. The paper presents the outcomes of a parametric study on the performance of reinforced wood tubes subjected to axial compression. Simple analytical models were applied to estimate the load-carrying capacity of the tubes and their failure mechanisms. Analytical and numerical models were deve...

  2. Morphological Transform for Image Compression

    Luis Pastor Sanchez Fernandez


    A new method for image compression based on morphological associative memories (MAMs) is presented. We use the MAM to implement a new image transform, applied at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered a subclass of morphological neural networks. The morphological transform (MT) presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using a transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is processing speed, whereas the compression rate and the signal-to-noise ratio are competitive with conventional transforms.

  3. Compressive Sensing in Communication Systems

    Fyhn, Karsten


    Wireless communication is omnipresent today, but this development has led to the frequency spectrum becoming a limited resource. Furthermore, wireless devices are becoming more and more energy-limited, due to the demand for continual wireless communication of higher and higher amounts of information. The need for cheaper, smarter and more energy-efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...

  4. Compressive Sensing for MIMO Radar

    Yu, Yao; Poor, H Vincent


    Multiple-input multiple-output (MIMO) radar systems have been shown to achieve superior resolution compared to traditional radar systems with the same number of transmit and receive antennas. This paper considers a distributed MIMO radar scenario, in which each transmit element is a node in a wireless network, and investigates the use of compressive sampling for direction-of-arrival (DOA) estimation. According to the theory of compressive sampling, a signal that is sparse in some domain can be recovered from far fewer samples than required by the Nyquist sampling theorem. The DOAs of the targets form a sparse vector in the angle space, and therefore compressive sampling can be applied to DOA estimation. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than other approaches. This is particularly useful in a distributed scenario, in which the results at each receive node need to be transmitted to a fusion center for further processing.

  5. Compressive Sensing with Optical Chaos

    Rontani, D.; Choi, D.; Chang, C.-Y.; Locquet, A.; Citrin, D. S.


    Compressive sensing (CS) is a technique to sample a sparse signal below the Nyquist-Shannon limit, yet still enabling its reconstruction. As such, CS permits an extremely parsimonious way to store and transmit large and important classes of signals and images that would be far more data intensive should they be sampled following the prescription of the Nyquist-Shannon theorem. CS has found applications as diverse as seismology and biomedical imaging. In this work, we use actual optical signals generated from temporal intensity chaos from external-cavity semiconductor lasers (ECSL) to construct the sensing matrix that is employed to compress a sparse signal. Since the chaotic time series produced have their relevant dynamics on the 100 ps timescale, our results open the way to ultrahigh-speed compression of sparse signals.

  6. Compressive behavior of fine sand.

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)


    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain the uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  7. Fast, efficient lossless data compression

    Ross, Douglas


    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  8. [Vascular compression of the duodenum].

    Acosta, B; Guachalla, G; Martínez, C; Felce, S; Ledezma, G


    The acute vascular compression of the duodenum is a well-recognized clinical entity, characterized by recurrent vomiting, abdominal distention, weight loss, and postprandial distress. The compression is considered to be an effect of the angle formed between the superior mesenteric vessels (or sometimes one of their first two branches) and the vertebrae and paravertebral muscles; the syndrome can be seen when the angle between the superior mesenteric vessels and the aorta is lower than 18 degrees. Duodenojejunostomy is the best treatment, as it was in our patient.

  9. GPU-accelerated compressive holography.

    Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi


    In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ1 and total variation (TV) regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that GPU-based implementation is about 20 times faster than CPU-based implementation.
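
    The iterative shrinkage-thresholding step that dominates the GPU workload is compact. Below is the plain (un-accelerated) ISTA variant for the ℓ1-regularized problem on a toy dense system - not the authors' FISTA/GPU code, and with an illustrative problem size and regularization weight. Each iteration is just matrix-vector products plus pointwise operations, which is exactly why it maps so well onto data-parallel hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 60, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true                                  # observed data

lam = 0.05                                      # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient

def objective(x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(n)
start = objective(x)
for _ in range(300):
    x = x - (A.T @ (A @ x - y)) / L                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft-threshold (prox)
```

    FISTA adds a momentum term to this loop for a faster convergence rate; with the step size 1/L, the objective is guaranteed to be non-increasing.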

  10. Compressing the Inert Doublet Model

    Blinov, Nikita; Morrissey, David E; de la Puente, Alejandro


    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. We derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.


    Xiao Jiang; Wu Chengke


    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is not performed by lifting the coefficients in the wavelet domain, but is instead accomplished through code-stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive decoding, good robustness against error-bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time, and supports VIsual Progressive (VIP) coding.

  12. DNA Lossless Differential Compression Algorithm based on Similarity of Genomic Sequence Database

    Afify, Heba; Wahed, Manal Abdel


    Modern biological science produces vast amounts of genomic sequence data. This is fuelling the need for efficient algorithms for sequence compression and analysis. Data compression and the associated techniques coming from information theory are often perceived as being of interest for data communication and storage. In recent years, a substantial effort has been made to apply textual data compression techniques to various computational biology tasks, ranging from storage and indexing of large datasets to comparison of genomic databases. This paper presents a differential compression algorithm that is based on producing difference sequences according to an op-code table, in order to optimize the compression of homologous sequences in the dataset. Therefore, the stored data are composed of the reference sequence, the set of differences, and the difference locations, instead of storing each sequence individually. This algorithm does not require a priori knowledge about the statistics of the sequence set. The...
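
    The idea of storing one reference plus per-sequence differences can be sketched for the substitution-only case (the paper's op-code table also covers other edit operations such as insertions and deletions, which this toy version does not handle):

```python
def diff_encode(reference, seq):
    """Positions and bases where seq differs from an equal-length reference."""
    assert len(reference) == len(seq)
    return [(i, b) for i, (a, b) in enumerate(zip(reference, seq)) if a != b]

def diff_decode(reference, diffs):
    """Rebuild the sequence from the reference and its difference list."""
    out = list(reference)
    for i, b in diffs:
        out[i] = b
    return ''.join(out)

reference = "ACGTACGTAC"
seq = "ACGTTCGTAA"
diffs = diff_encode(reference, seq)
```

    Storing only the differences pays off precisely when the sequences are homologous: a database of near-identical genomes collapses to one reference plus short difference lists.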

  13. Optimization of injection pressure for a compression ignition engine ...


    techniques, such as heating of fuel lines, trans-esterification, modification of .... cylinder, 4 stroke, naturally aspirated, direct injection, water cooled, eddy current .... Initially the engine was run with diesel to know the performance at 180 bar .... modeling, Alternate fuels, Heat transfer, Refrigeration and Air-conditioning. She.

  14. Towards optimal cosmological parameter recovery from compressed bispectrum statistics

    Byun, Joyce; Eggemeier, Alexander; Regan, Donough; Seery, David; Smith, Robert E.


    Over the next decade, improvements in cosmological parameter constraints will be driven by surveys of large-scale structure in the Universe. The information they contain can be measured by suitably chosen correlation functions, and the non-linearity of structure formation implies that significant information will be carried by the 3-point function or higher correlators. Extracting this information is extremely challenging, requiring accurate modelling and significant computational resources to estimate the covariance matrix describing correlation between different Fourier configurations. We investigate whether it is possible to reduce this matrix without significant loss of information by using a proxy that aggregates the bispectrum over a subset of configurations. Specifically, we study constraints on ΛCDM parameters from a future galaxy survey combining the power spectrum with (a) the integrated bispectrum, (b) the line correlation function and (c) the modal decomposition of the bispectrum. We include a simple estimate for the degradation of the bispectrum with shot noise. Our results demonstrate that the modal bispectrum has comparable performance to the Fourier bispectrum, even using considerably fewer modes than Fourier configurations. The line correlation function has good performance, but is less effective. The integrated bispectrum is comparatively insensitive to the background cosmology. Addition of bispectrum data can improve constraints on bias parameters and σ8 by a factor between 3 and 5 compared to power spectrum measurements alone. For other parameters, improvements of up to ∼20 per cent are possible. Finally, we use a range of theoretical models to explore the sophistication required to produce realistic predictions for each proxy.

  15. Energy optimization in industrial bakery : refrigeration, compressed air, ovens



    Ontario's $3.3 billion baking industry includes 459 companies with 20,000 employees. Although the industry relies on grain and other raw materials, the location and geographic market potential has been broadened by the use of refrigeration technology. The cost of refrigeration and baking has rendered bakeries energy intensive, requiring gas for baking and electricity for freezers. The rising energy costs are becoming a major portion of the ingredient costs of baked goods. The Ontario Food Industry Cost Reduction Program was prepared to help the food industry reduce their energy, water and sewer costs. This paper describes the participation of Oakrun Farm Bakery in an Industrial Energy Efficiency Audit in the winter of 2001. The audit revealed that the bakery has the potential to reduce natural gas consumption by 17 per cent and reduce electricity consumption by 13 per cent for a potential reduction in greenhouse gas emissions of 800 tonnes per year. The audit identified 4 energy saving opportunities for the refrigeration system: reduce compressor discharge; increase suction pressure; install an evaporative condenser; and, install electronic temperature controls. The company plans to implement the opportunities on a prioritized basis in 2003 following an expansion to the plant. 1 tab., 2 figs.

  16. Wavelet and wavelet packet compression of electrocardiograms.

    Hilton, M L


    Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECG's are clinically useful.
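
    The transform-threshold-encode pipeline underlying such coders can be illustrated with a single-level Haar transform. This is a toy sketch only: real EZW coding uses multi-level decomposition plus zerotree entropy coding, and the signal below is synthetic, not Holter data.

```python
import numpy as np

def haar_compress(signal, keep_ratio=0.5):
    # One-level Haar transform, then zero all but the largest coefficients.
    x = np.asarray(signal, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail band
    coeffs = np.concatenate([avg, diff])
    k = max(1, int(keep_ratio * coeffs.size))
    thresh = np.sort(np.abs(coeffs))[-k]     # k-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0    # discard small coefficients
    return coeffs

def haar_reconstruct(coeffs):
    # Exact inverse of the one-level Haar transform above.
    half = coeffs.size // 2
    avg, diff = coeffs[:half], coeffs[half:]
    x = np.empty(coeffs.size)
    x[0::2] = (avg + diff) / np.sqrt(2)
    x[1::2] = (avg - diff) / np.sqrt(2)
    return x

# Synthetic smooth test signal (a stand-in for an ECG trace).
t = np.linspace(0.0, 1.0, 256)
sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
kept = haar_compress(sig, keep_ratio=0.5)
rec = haar_reconstruct(kept)
rmse = float(np.sqrt(np.mean((sig - rec) ** 2)))
```

    Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients; an entropy coder would then exploit the many zeros to shrink the bitstream.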

  17. Grid-free compressive beamforming

    Xenaki, Angeliki; Gerstoft, Peter


    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high...

  18. LIDAR data compression using wavelets

    Pradhan, B.; Mansor, Shattri; Ramli, Abdul Rahman; Mohamed Sharif, Abdul Rashid B.; Sandeep, K.


    The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LIDAR data compression. A newly developed data compression approach that approximates the LIDAR surface with a series of non-overlapping triangles is presented. A Triangulated Irregular Network (TIN) is the most common form of digital surface model; it consists of elevation values with x, y coordinates that make up triangles. But over the years the TIN data representation has become a case in point for many researchers due to its large data size. Compression of TINs is needed for efficient management of large data and good surface visualization. The approach covers the following steps. First, using a Delaunay triangulation, an efficient algorithm is developed to generate the TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for the TIN is then applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second-generation wavelets. The quality of the geographical surface representation after using the proposed technique is compared with the original LIDAR data. The results show that this method can significantly reduce the data set.

  19. Compressed Blind De-convolution

    Saligrama, V


    Suppose the signal x is realized by driving a k-sparse signal u through an arbitrary unknown stable discrete linear time-invariant system H. These types of processes arise naturally in reflection seismology. In this paper we are interested in several problems: (a) Blind deconvolution: can we recover both the filter H and the sparse signal u from noisy measurements? (b) Compressive sensing: is x compressible in the conventional sense of compressed sensing? Namely, can x, u and H be reconstructed from a sparse set of measurements? We develop novel L1 minimization methods to solve both cases and establish sufficient conditions for exact recovery for the case when the unknown system H is auto-regressive (i.e. all pole) of a known order. In the compressed sensing/sampling setting it turns out that both H and x can be reconstructed from O(k log(n)) measurements under certain technical conditions on the support structure of u. Our main idea is to pass x through a linear time invariant system G and collect O(k lo...

  20. Compressing spatio-temporal trajectories

    Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian


    A trajectory is a sequence of locations, each associated with a timestamp, describing the movement of a point. Trajectory data is becoming increasingly available and the size of recorded trajectories is getting larger. In this paper we study the problem of compressing planar trajectories such tha...
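
    A standard baseline for this problem is Douglas-Peucker line simplification, sketched below. It is illustrative only: the paper's setting also involves timestamps and speed, which this purely geometric version ignores.

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    # Keep endpoints; recurse on the point farthest from the chord
    # whenever that distance exceeds the tolerance.
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, -1.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right    # drop the duplicated split point

# A dense sampled curve standing in for a recorded trajectory.
traj = [(x / 10.0, math.sin(x / 10.0)) for x in range(101)]
simplified = douglas_peucker(traj, 0.05)
```

    The compressed trajectory keeps the original endpoints and far fewer interior points, at the cost of a bounded geometric deviation.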

  1. Range Compressed Holographic Aperture Ladar


    Keywords: digital holography, laser, active imaging, remote sensing, laser imaging. ... slow-speed tunable lasers, while relaxing the need to precisely track the transceiver or target motion. In the following section we describe a scenario ... contrast targets. As shown in Figure 28, augmenting holographic ladar with range compression relaxes the dependence of image reconstruction on ...

  2. Compressive passive millimeter wave imager

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C


    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.

  3. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Gupta, Rajarshi


    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
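
    The core compression step (project aligned beats onto a few principal components, store only the scores, eigenvectors and mean) can be sketched with an SVD. The beat matrix below is simulated from a crude QRS-like template so the sketch stays self-contained; the paper works on real extracted beats and adds quantization plus delta/Huffman coding, which are omitted here.

```python
import numpy as np

# Hypothetical beat matrix: each row is one time-aligned "beat".
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
template = np.exp(-((t - 0.5) ** 2) / 0.002)          # crude QRS-like bump
beats = template + 0.005 * rng.standard_normal((50, 200))

mean = beats.mean(axis=0)
centered = beats - mean
# Principal components via SVD of the centered beat matrix.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 3                                   # keep only k principal components
scores = centered @ Vt[:k].T            # 50 x k: the compressed representation
recon = scores @ Vt[:k] + mean          # reconstruction from k components

# PRD: percentage root-mean-square difference, a common ECG quality metric.
prd = 100.0 * np.linalg.norm(beats - recon) / np.linalg.norm(beats)
stored = scores.size + Vt[:k].size + mean.size   # floats kept vs. beats.size
```

    Raising k lowers the PRD and raises the stored size, which is exactly the bit-rate-versus-error trade-off the BRC and EC criteria navigate.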

  4. Lossless Compression Performance of a Simple Counter-Based Entropy Coder

    Armein Z. R. Langi


    This paper describes the performance of a simple counter-based entropy coder, as compared to other entropy coders, especially the Huffman coder. Lossless entropy coders, such as the Huffman coder and the arithmetic coder, are designed to perform well over a wide range of data entropy. As a result, these coders require significant computational resources that can become the bottleneck of a compression implementation's performance. In contrast, counter-based coders are designed to be optimal on a limited entro...
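
    For reference, the textbook Huffman construction that the counter-based coder is compared against can be written in a few lines (this is the generic algorithm, not the paper's coder):

```python
import heapq
from collections import Counter

def huffman_code(data):
    # Build a Huffman code table {symbol: bitstring} by repeatedly
    # merging the two least frequent subtrees.
    freq = Counter(data)
    if len(freq) == 1:                  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "abracadabra"
table = huffman_code(text)
encoded = "".join(table[ch] for ch in text)
```

    The heap operations and per-symbol table lookups are the computational cost that simpler counter-based schemes try to avoid.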

  5. Development of Experimental Device for Compression Load Deflection of Car Door Seals

    赵建才; 朱训生; 万德安


    A new experimental device has been developed for analyzing the compression load deflection of car door seals using stereovision theory. Precision instruments, an optical grating and a force sensor, are also integrated into the device. The force-displacement response of compression at varied speeds can be measured under control. This lays a solid foundation for characterizing the structure of the car door seal and for its optimized design.

  6. Optimization of plasma amplifiers

    Sadler, James D.; Trines, Raoul M. G. M.; Tabak, Max; Haberberger, Dan; Froula, Dustin H.; Davies, Andrew S.; Bucht, Sara; Silva, Luís O.; Alves, E. Paulo; Fiúza, Frederico; Ceurvorst, Luke; Ratan, Naren; Kasim, Muhammad F.; Bingham, Robert; Norreys, Peter A.


    Plasma amplifiers offer a route to side-step limitations on chirped pulse amplification and generate laser pulses at the power frontier. They compress long pulses by transferring energy to a shorter pulse via the Raman or Brillouin instabilities. We present an extensive kinetic numerical study of the three-dimensional parameter space for the Raman case. Further particle-in-cell simulations find the optimal seed pulse parameters for experimentally relevant constraints. The high-efficiency self-similar behavior is observed only for seeds shorter than the linear Raman growth time. A test case similar to an upcoming experiment at the Laboratory for Laser Energetics is found to maintain good transverse coherence and high energy efficiency. Effective compression of a 10 kJ, nanosecond-long driver pulse is also demonstrated in a 15-cm-long amplifier.

  7. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Kan Ren


    We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  8. Informationally complete measurements from compressed sensing methodology

    Kalev, Amir; Riofrio, Carlos; Kosut, Robert; Deutsch, Ivan


    Compressed sensing (CS) is a technique to faithfully estimate an unknown signal from relatively few data points when the measurement samples satisfy a restricted isometry property (RIP). Recently this technique has been ported to quantum information science to perform tomography with a substantially reduced number of measurement settings. In this work we show that the constraint that a physical density matrix is positive semidefinite provides a rigorous connection between the RIP and the informational completeness (IC) of a POVM used for state tomography. This enables us to construct IC measurements that are robust to noise using tools provided by the CS methodology. The exact recovery no longer hinges on a particular convex optimization program; solving any optimization, constrained on the cone of positive matrices, effectively results in a CS estimation of the state. From a practical point of view, we can therefore employ fast algorithms developed to handle large dimensional matrices for efficient tomography of quantum states of a large dimensional Hilbert space. Supported by the National Science Foundation.
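
    The basic CS recovery task these methods build on (estimate a sparse vector from few linear measurements) can be illustrated with orthogonal matching pursuit, a simple greedy stand-in for the convex programs discussed above. The problem sizes and sensing matrix below are invented for the demo.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily build a k-sparse estimate of x
    # from y = A x by repeatedly picking the column most correlated with
    # the current residual, then re-fitting on the chosen support.
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 64, 4                           # length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true
x_hat = omp(A, y, k)
err = float(np.linalg.norm(x_hat - x_true))
```

    Random Gaussian matrices of this shape satisfy an RIP of suitable order with high probability, which is what makes exact recovery from m << n measurements possible.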

  9. Semantic Source Coding for Flexible Lossy Image Compression

    Phoha, Shashi; Schmiedekamp, Mendel


    Semantic Source Coding for Lossy Video Compression investigates methods for mission-oriented lossy image compression by developing methods to use different compression levels for different portions...

  10. Infraspinatus muscle atrophy from suprascapular nerve compression.

    Cordova, Christopher B; Owens, Brett D


    Muscle weakness without pain may signal a nerve compression injury. Because these injuries should be identified and treated early to prevent permanent muscle weakness and atrophy, providers should consider suprascapular nerve compression in patients with shoulder muscle weakness.

  11. Advanced Reciprocating Compression Technology


    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler


    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which causes reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsations to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than for slow-speed equipment, with the best performance in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

  12. Considerations and Algorithms for Compression of Sets

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  13. Cascaded quadratic soliton compression at 800 nm

    Bache, Morten; Bang, Ole; Moses, Jeffrey;


    We study soliton compression in quadratic nonlinear materials at 800 nm, where group-velocity mismatch dominates. We develop a nonlocal theory showing that efficient compression depends strongly on characteristic nonlocal time scales related to pulse dispersion.

  14. Still image and video compression with MATLAB

    Thyagarajan, K


    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  15. Simultaneous denoising and compression of multispectral images

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.


    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  16. Image quality (IQ) guided multispectral image compression

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik


    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
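
    The RMSE and PSNR metrics used to score decompressed images are straightforward to compute; the sketch below uses a random image and a simulated small coding error rather than a real codec.

```python
import numpy as np

def rmse(ref, test):
    # Root mean square error between two images of equal shape.
    return float(np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2)))

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB, relative to the 8-bit peak value.
    e = rmse(ref, test)
    return float("inf") if e == 0.0 else 20.0 * np.log10(peak / e)

rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
# Simulate a lossily decompressed image: original plus small coding error.
noisy = original.astype(int) + rng.integers(-2, 3, size=(64, 64))
decompressed = np.clip(noisy, 0, 255).astype(np.uint8)
quality_db = psnr(original, decompressed)
```

    Errors of at most two gray levels give a PSNR in the mid-40 dB range, which is why PSNR targets like 50 dB correspond to nearly imperceptible degradation.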

  17. Brain image Compression, a brief survey

    Saleha Masood


    Brain image compression is a subfield of image compression. It allows deep analysis and measurement of brain images in different modes. Brain images are compressed so that they can be analyzed and diagnosed effectively while reducing the image storage space. This survey describes the existing techniques for brain image compression, which fall under different categories, and discusses each category.

  18. Position index preserving compression of text data

    Akhtar, Nasim; Rashid, Mamunur; Islam, Shafiqul; Kashem, Mohammod Abul; Kolybanov, Cyrll Y.


    Data compression offers an attractive approach to reducing communication cost by using available bandwidth effectively. It also secures data during transmission through its encoded form. In this paper an index-based, position-oriented lossless text compression called PIPC (Position Index Preserving Compression) is developed. In PIPC the position of the input word is denoted by its ASCII code. The basic philosophy of the secure compression is to preprocess the text and transform it into some intermedia...

  19. Discontinuity of Gas-dynamic Variables in the Center of the Compression Wave

    Pavel Viktorovich Bulat


    The purpose of this research is to study the flow in the center of centered isentropic compression waves. Gas-dynamic discontinuities include shocks, shock waves, interfaces and slip surfaces, as well as the center of a centered compression wave, both one-dimensional and two-dimensional. For a long time there was no analysis of the shock-wave structures arising in the center of compression waves. At the same time, the development of supersonic and hypersonic air inlets demands consideration of the isentropic compression of the stream. In the three-dimensional case, this problem is connected to the problem of hanging shocks arising inside streams, which, unlike ordinary discontinuities, do not result from the interaction of supersonic streams, waves and discontinuities, but appear as if from nowhere. This study formulates the problem in terms of the developed theory of the interference of gas-dynamic discontinuities and determines the area of existing solutions for the structures of possible types. We obtain the relations describing the parameters in the center of the compression wave. We consider the neutral polar, corresponding to the case when the center of the compression wave contains neither shocks nor rarefaction waves. The analysis of the properties of the centered compression wave adds to the theory of stationary gas-dynamic discontinuities. We specify the borders of the area in which the shock structure exists, which is optimal for the development of supersonic diffusers.

  20. Audio-visual perception of compressed speech by profoundly hearing-impaired subjects.

    Drullman, R; Smoorenburg, G F


    For many people with profound hearing loss conventional hearing aids give only little support in speechreading. This study aims at optimizing the presentation of speech signals in the severely reduced dynamic range of the profoundly hearing impaired by means of multichannel compression and multichannel amplification. The speech signal in each of six 1-octave channels (125-4000 Hz) was compressed instantaneously, using compression ratios of 1, 2, 3, or 5, and a compression threshold of 35 dB below peak level. A total of eight conditions were composed in which the compression ratio varied per channel. Sentences were presented audio-visually to 16 profoundly hearing-impaired subjects and syllable intelligibility was measured. Results show that all auditory signals are valuable supplements to speechreading. No clear overall preference is found for any of the compression conditions, but relatively high compression ratios (> 3-5) have a significantly detrimental effect. Inspection of the individual results reveals that compression may be beneficial for one subject.
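
    The per-channel processing described above (instantaneous compression with a given ratio above a threshold 35 dB below peak) can be modeled in a few lines. This is an illustrative single-channel sketch; the study applies it independently in six octave bands, which are omitted here.

```python
import numpy as np

def compress_channel(x, ratio, threshold_db=-35.0, peak=1.0):
    # Instantaneous amplitude compression: any level above the threshold
    # has its excess (in dB) divided by `ratio`; levels below the
    # threshold pass unchanged.
    eps = 1e-12                                   # avoid log10(0)
    level_db = 20.0 * np.log10(np.abs(x) / peak + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

t = np.linspace(0.0, 0.1, 800)
tone = 0.8 * np.sin(2 * np.pi * 250 * t)   # one band-limited "channel"
out = compress_channel(tone, ratio=3.0)
```

    With a ratio of 3, a tone peaking about 2 dB below full scale is pushed down to roughly 24 dB below full scale, illustrating how compression squeezes the signal into a reduced dynamic range.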

  1. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Yin Zhanping


    This paper presents a novel proxy architecture for wireless application protocol (WAP) 2.0 employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delays (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic data and thus significantly reduces the wireless access time. By using content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem in WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme reduces the wireless access time of WAP 2.0 by over 45% in CDMA2000 1XRTT channels and, in low-speed IS-95 channels, substantially reduces access time to give comparable performance to WAP 1.x. The performance enhancement is mainly contributed by the reply content compression, with ROHC offering further enhancements.
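
    The reply-content-compression leg can be illustrated with stdlib zlib: the proxy compresses the reply body once so far fewer bytes cross the air interface. ROHC header compression is a separate mechanism (RFC 3095) not modeled here, and the payload below is invented for the demo.

```python
import zlib

# A repetitive markup payload, typical of the replies a WAP proxy relays.
reply_body = (b"<html><body>" +
              b"<p>stock quote: 101.25</p>" * 40 +
              b"</body></html>")
compressed = zlib.compress(reply_body, level=9)   # done once at the proxy
ratio = len(reply_body) / len(compressed)         # bytes saved on the air link
restored = zlib.decompress(compressed)            # done on the terminal
```

    Markup with repeated structure compresses heavily, which is why content compression dominates the access-time savings reported above.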

  3. Evaluation and Comparison of Motion Estimation Algorithms for Video Compression

    Avinash Nayak


    Video compression has become an essential component of broadcast and entertainment media. Motion estimation and compensation techniques, which can effectively eliminate temporal redundancy between adjacent frames, have been widely applied in popular video compression coding standards such as MPEG-2 and MPEG-4. Traditional fast block matching algorithms are easily trapped in local minima, resulting in some degradation of video quality after decoding. In this paper, various computing techniques for video compression are evaluated for achieving a globally optimal solution for motion estimation. Zero-motion prejudgment is implemented to find static macro blocks (MB) for which the remaining search can be skipped, reducing the computational cost. The Adaptive Rood Pattern Search (ARPS) motion estimation algorithm is also adopted to reduce the motion vector overhead in frame prediction. The simulation results show that the ARPS algorithm is very effective in reducing the computational overhead and achieves very good Peak Signal to Noise Ratio (PSNR) values. This method significantly reduces the computational complexity involved in frame prediction and also yields the least prediction error across all video sequences. Thus the ARPS technique is more efficient than conventional search algorithms in video compression.
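
    The block-matching primitive that all of these search strategies share is shown below as an exhaustive full search over a small window using the sum of absolute differences (SAD); ARPS visits only a rood-shaped subset of these candidates to cut the computation. The frames here are synthetic.

```python
import numpy as np

def best_match(ref, cur, bx, by, bsize=8, radius=4):
    # Exhaustive block matching: find the displacement (dx, dy) within
    # +/- radius that minimizes the SAD between the current block and
    # the reference frame.
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    h, w = ref.shape
    best, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue                        # candidate out of frame
            cand = ref[y:y + bsize, x:x + bsize].astype(int)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))   # frame moved down 1, right 2
mv, sad = best_match(ref, cur, bx=12, by=12)
```

    The full search evaluates every candidate in the window; fast algorithms such as ARPS aim to find the same motion vector while evaluating far fewer of them.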

  4. A novel psychovisual threshold on large DCT for image compression.

    Abu, Nur Azman; Ernawan, Ferda


    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance to reduce the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order will be the primitive of the psychovisual threshold in image compression. This research study proposes a psychovisual threshold on the large discrete cosine transform (DCT) image block which is used to automatically generate the much-needed quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides significant improvement in the quality of output images. The experimental results on large quantization tables from the psychovisual threshold produce output images largely free of visual artifacts. In addition, the results show that the concept of a psychovisual threshold produces better quality images at higher compression rates than JPEG image compression.
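
    The transform-quantize-reconstruct loop on a large DCT block can be sketched directly; the quantization table below is a made-up ramp (coarser steps at higher frequencies) standing in for one generated from a psychovisual threshold, and the block is random rather than a natural image.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

n = 16                                  # a "large" DCT block (vs. JPEG's 8x8)
D = dct_matrix(n)
rng = np.random.default_rng(4)
block = rng.integers(0, 256, size=(n, n)).astype(float)

coeffs = D @ block @ D.T                # 2-D DCT of the block
# Hypothetical quantization table: step grows with frequency order i + j.
q = 4.0 + 2.0 * np.add.outer(np.arange(n), np.arange(n))
quantized = np.round(coeffs / q)        # the integers that get entropy-coded
recon = D.T @ (quantized * q) @ D       # dequantize and inverse DCT
err = float(np.sqrt(np.mean((block - recon) ** 2)))
```

    Because the transform is orthonormal, the pixel-domain RMSE equals the coefficient-domain quantization error, so shaping the table by frequency directly shapes which distortions the viewer sees.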

  5. Modeling 3D faces from samplings via compressive sensing

    Sun, Qi; Tang, Yanlong; Hu, Ping


    3D data is easier to acquire for family entertainment purposes today because of the mass production, low cost and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike simple frame-fusion super-resolution methods, this approach acquires compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied in future applications, such as access control systems using face recognition, and smart phones with depth cameras, which need high resolution and short measurement times.

  6. Compressive measurement and feature reconstruction method for autonomous star trackers

    Yin, Hang; Yan, Ye; Song, Xin; Yang, Yueneng


    Compressive sensing (CS) theory provides a framework for signal reconstruction using a sub-Nyquist sampling rate. CS theory enables the reconstruction of a signal that is sparse or compressible from a small set of measurements. Current CS applications in the optical field mainly focus on reconstructing the original image using optimization algorithms and conduct data processing on the full-dimensional image, which cannot reduce the data processing rate. This study is based on the spatial sparsity of star images and proposes a new compressive measurement and reconstruction method that extracts the star feature from compressive data and directly reconstructs it to the original image for attitude determination. A pixel-based folding model that preserves the star feature and enables feature reconstruction is presented to encode the original pixel location into the superposed space. A feature reconstruction method is then proposed to extract the star centroid by compensating distortions and to decode the centroid without reconstructing the whole image, which reduces the sampling rate and the data processing rate at the same time. The statistical results report the proportion of star distortions and false matches, which verifies the correctness of the proposed method. The results also verify the robustness of the proposed method to a great extent and demonstrate that its performance can be improved by sufficient measurement in noisy cases. Moreover, the results on real star images ensure correct star centroid estimation for attitude determination and confirm the feasibility of applying the proposed method in a star tracker.

  7. Informational Analysis for Compressive Sampling in Radar Imaging

    Jingxiong Zhang


    Full Text Available Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
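
    The sub-Nyquist sampling rates discussed above follow the classic CS scaling m ≳ C·k·log(n/k) for a k-sparse scene of dimension n. The sketch below is a back-of-envelope illustration of that scaling (the constant C and the scene sizes are assumptions, and the paper's information-theoretic counts additionally depend on SNR):

```python
import math

# Classic CS measurement count: m >= C * k * log(n / k) for a k-sparse scene of
# dimension n. C = 2.0 and the scene sizes below are illustrative assumptions.
def required_measurements(n, k, C=2.0):
    return math.ceil(C * k * math.log(n / k))

n = 4096                        # scene dimension (e.g., range-azimuth cells)
for k in (16, 64, 256):         # scene sparsity
    m = required_measurements(n, k)
    print(f"k={k:4d}: m ~ {m:5d}  ({m / n:.1%} of Nyquist)")
```

Even at k = 256 nonzero scatterers, the estimated measurement count stays well below the n = 4096 samples a Nyquist-rate system would take, which is the efficiency argument the abstract makes.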

  8. A Novel Psychovisual Threshold on Large DCT for Image Compression


    A psychovisual experiment prescribes the quantization values in image compression. The quantization process serves as a threshold of the human visual system's tolerance, reducing the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order is the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks that is used to automatically generate the needed quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides a significant improvement in the quality of output images. In the experiments, the large quantization tables derived from the psychovisual threshold produce output images that are largely free of artifacts. Moreover, the experimental results show that the psychovisual threshold yields better image quality at higher compression rates than JPEG image compression. PMID:25874257
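
    The quantization step the abstract builds on can be illustrated with an 8x8 DCT block: transform, divide by a quantization table, round, and invert. The flat table below is a placeholder assumption; the paper's contribution is precisely the psychovisually derived tables that would replace it.

```python
import numpy as np

# DCT quantization sketch (illustration; the flat table Q is a placeholder,
# not the paper's psychovisual table). Transform an 8x8 block, quantize,
# dequantize, and inverse-transform.
def dct_matrix(N=8):
    # Orthonormal DCT-II basis matrix.
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    M = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    M[0, :] = np.sqrt(1 / N)
    return M

i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 8.0 * (i + j) - 56.0           # smooth gradient block, values -56..56

D = dct_matrix()
coeffs = D @ block @ D.T               # 2-D DCT of the block

Q = np.full((8, 8), 16.0)              # placeholder quantization table
quantized = np.round(coeffs / Q)       # most entries collapse to 0 -> compression
recon = D.T @ (quantized * Q) @ D      # dequantize and inverse transform

err = np.abs(recon - block).max()
print(f"nonzero coeffs: {int(np.count_nonzero(quantized))}/64, max error: {err:.1f}")
```

On a smooth block, nearly all high-frequency coefficients fall below half the quantization step and round to zero, which is exactly what a psychovisually tuned table exploits frequency by frequency.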

  9. H.264/AVC Video Compression on Smartphones

    Sharabayko, M. P.; Markov, N. G.


    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard; however, the most advanced smartphones are already approaching the compression efficiency limit of H.264/AVC.

  10. BPCS steganography using EZW lossy compressed images

    Spaulding, Jeremiah; Noda, Hideki; Shirazi, Mahdad N.; Kawaguchi, Eiji


    This paper presents a steganography method based on an embedded zerotree wavelet (EZW) compression scheme and bit-plane complexity segmentation (BPCS) steganography. The proposed steganography enables us to use lossy compressed images as dummy files in bit-plane-based steganographic algorithms. Large embedding rates of around 25% of the compressed image size were achieved with little noticeable degradation in image quality.

  11. A review on the recent development of solar absorption and vapour compression based hybrid air conditioning with low temperature storage

    Noor D. N.


    Full Text Available Conventional air conditioners, or vapour compression systems, are the main contributors to energy consumption in modern buildings. Common environmental issues emanate from vapour compression systems, such as greenhouse gas emissions and heat wastage. These problems can be reduced by adding solar energy components to the vapour compression system. However, the intermittent input of daily solar radiation is the main issue of solar energy systems. This paper presents recent studies on hybrid air conditioning systems. In addition, the basic vapour compression system and the components involved in solar air conditioning systems are discussed. The introduction of low-temperature storage can be an attractive and economical solution that supports different modes of operating strategies. Yet very few studies have examined optimal operating strategies for the hybrid system. Finally, the findings of this review will help suggest optimizations of solar absorption and vapour compression based hybrid air conditioning systems for future work, considering both economic and environmental factors.

  12. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    HyungJune Lee


    Full Text Available We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as a binary integer program, which provides an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art Collection Tree Protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol.
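
    The selection problem can be written as a binary integer program: one 0/1 variable per node (compress or not), an energy objective, and a latency constraint. A toy brute-force version with invented numbers (not the paper's formulation or data) looks like this:

```python
import itertools

# Toy binary program for compression selection (all numbers invented):
# for each node decide compress (1) or send raw (0) so as to minimize total
# energy subject to a latency budget. Brute force stands in for an ILP solver.
nodes = ["n1", "n2", "n3", "n4"]
e_tx_raw  = {"n1": 9.0, "n2": 7.0, "n3": 8.0, "n4": 6.0}   # energy, raw send
e_tx_comp = {"n1": 4.0, "n2": 5.0, "n3": 3.0, "n4": 5.5}   # energy incl. compression CPU
lat_extra = {"n1": 2.0, "n2": 3.0, "n3": 2.5, "n4": 1.0}   # added latency if compressing
LATENCY_BUDGET = 5.0

best = None
for choice in itertools.product([0, 1], repeat=len(nodes)):
    energy  = sum(e_tx_comp[n] if c else e_tx_raw[n] for n, c in zip(nodes, choice))
    latency = sum(lat_extra[n] for n, c in zip(nodes, choice) if c)
    if latency <= LATENCY_BUDGET and (best is None or energy < best[0]):
        best = (energy, choice)

print("energy:", best[0], "compress:", dict(zip(nodes, best[1])))
```

With these numbers the optimum compresses only n1 and n3: n4 barely saves energy and n2's savings do not fit in the remaining latency budget, which mirrors the selective (rather than all-or-nothing) compression the abstract advocates.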

  13. Multichannel compressive sensing MRI using noiselet encoding.

    Kamlesh Pawar

    Full Text Available The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis comparing the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.

  14. Multichannel compressive sensing MRI using noiselet encoding.

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin


    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis comparing the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.
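
    The incoherence argument in the two records above rests on the mutual coherence mu(Phi, Psi) = sqrt(n) * max |<phi_i, psi_j>|, which ranges from 1 (maximally incoherent) to sqrt(n). A minimal numerical illustration, using the spike/DFT pair as the textbook maximally incoherent example (noiselet and wavelet matrices are omitted for brevity):

```python
import numpy as np

# Mutual coherence mu(Phi, Psi) = sqrt(n) * max |<phi_i, psi_j>| between two
# orthonormal bases. Spike vs. DFT is the textbook maximally incoherent pair
# (mu = 1); a random orthonormal basis lands somewhere between 1 and sqrt(n).
def coherence(Phi, Psi):
    n = Phi.shape[0]
    return np.sqrt(n) * np.abs(Phi.conj().T @ Psi).max()

n = 64
I = np.eye(n)                                   # spike (identity) basis
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT basis
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((n, n)))

print(f"mu(spike, DFT)    = {coherence(I, F):.2f}")   # 1.00: maximally incoherent
print(f"mu(spike, random) = {coherence(I, Q):.2f}")
```

Fewer random samples are needed when mu is small, which is why swapping a poorly matched measurement basis for a maximally incoherent one (noiselets against wavelets, in the paper) pays off.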

  15. Compressive imaging system design using task-specific information.

    Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A


    We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.

  16. Stability of compressible boundary layers

    Nayfeh, Ali H.


    The stability of compressible 2-D and 3-D boundary layers is reviewed. The stability of 2-D compressible flows differs from that of incompressible flows in two important features: there is more than one mode of instability contributing to the growth of disturbances in supersonic laminar boundary layers, and the most unstable first-mode wave is 3-D. Whereas viscosity has a destabilizing effect on incompressible flows, it is stabilizing at high supersonic Mach numbers. Whereas cooling stabilizes first-mode waves, it destabilizes second-mode waves. However, second-mode waves can be stabilized by suction and favorable pressure gradients. The influence of nonparallelism on the spatial growth rate of disturbances is evaluated. The growth rate depends on the flow variable as well as the distance from the body. Floquet theory is used to investigate the subharmonic secondary instability.

  17. Conservative regularization of compressible flow

    Krishnaswami, Govind S; Thyagaraja, Anantanarayanan


    Ideal Eulerian flow may develop singularities in vorticity w. Navier-Stokes viscosity provides a dissipative regularization. We find a local, conservative regularization - lambda^2 w times curl(w) of compressible flow and compressible MHD: a three dimensional analogue of the KdV regularization of the one dimensional kinematic wave equation. The regulator lambda is a field subject to the constitutive relation lambda^2 rho = constant. Lambda is like a position-dependent mean-free path. Our regularization preserves Galilean, parity and time-reversal symmetries. We identify locally conserved energy, helicity, linear and angular momenta and boundary conditions ensuring their global conservation. Enstrophy is shown to remain bounded. A swirl velocity field is identified, which transports w/rho and B/rho generalizing the Kelvin-Helmholtz and Alfven theorems. A Hamiltonian and Poisson bracket formulation is given. The regularized equations are used to model a rotating vortex, channel flow, plane flow, a plane vortex ...

  18. Progressive image data compression with adaptive scale-space quantization

    Przelaskowski, Artur


    Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization and coding in a progressive fashion. Efficient coders with an embedded form of code and rate-fixing abilities, such as Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels and the quantization scheme in the SPIHT algorithm. Additionally, we present the results of best filter-bank selection, testing the most efficient biorthogonal filter banks. A significant efficiency improvement of the SPIHT coder was observed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and progressive in resolution. As a result, improved compression effectiveness is achieved - close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, though sometimes the most efficient solution must be found iteratively. The final results are competitive with the most efficient wavelet coders.

  19. Compressing DNA sequence databases with coil

    Hendy Michael D


    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  20. Antiproton compression and radial measurements

    Andresen, G. B.; Bertsche, W.; Bowe, P. D.; Bray, C. C.; Butler, E.; Cesar, C. L.; Chapman, S.; Charlton, M.; Fajans, J.; Fujiwara, M. C.; Funakoshi, R.; Gill, D. R.; Hangst, J. S.; Hardy, W. N.; Hayano, R. S.; Hayden, M. E.; Humphries, A. J.; Hydomako, R.; Jenkins, M. J.; Jørgensen, L. V.; Kurchaninov, L.; Lambo, R.; Madsen, N.; Nolan, P.; Olchanski, K.; Olin, A.; Page, R. D.; Povilus, A.; Pusa, P.; Robicheaux, F.; Sarid, E.; El Nasr, S. Seif; Silveira, D. M.; Storey, J. W.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Yamazaki, Y.


    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  1. Compressibility effects on turbulent mixing

    Panickacheril John, John; Donzis, Diego


    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence with a focus on the fundamental mechanisms that are responsible for such effects using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6 and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, like shear layers, mixing is reduced as Mach number increases. However, data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  2. Laser Compression of Nanocrystalline Metals

    Meyers, M. A.; Jarmakani, H. N.; Bringa, E. M.; Earhart, P.; Remington, B. A.; Vo, N. Q.; Wang, Y. M.


    Shock compression in nanocrystalline nickel is simulated over a range of pressures (10-80 GPa) and compared with experimental results. Laser compression carried out at Omega and Janus yields new information on the deformation mechanisms of nanocrystalline Ni. Although conventional deformation does not produce hardening, the extreme regime imparted by laser compression generates an increase in hardness, attributed to the residual dislocations observed in the structure by TEM. An analytical model is applied to predict the critical pressure for the onset of twinning in nanocrystalline nickel. The slip-twinning transition pressure is shifted from 20 GPa, for polycrystalline Ni, to 80 GPa, for Ni with a grain size of 10 nm. Contributions to the net strain from the different mechanisms of plastic deformation (partials, perfect dislocations, twinning, and grain boundary shear) were quantified in the nanocrystalline samples through MD calculations. The effect of release, a phenomenon often neglected in MD simulations, on dislocation behavior was established. A large fraction of the dislocations generated at the front are annihilated.

  3. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    Aldossari, M; Alfalou, A; Brosseau, C


    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. The approach is well adapted to the compression and encryption of images of a time-varying scene but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels: first, we add a specific encryption level related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and the compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  4. Image Compression Using Discrete Wavelet Transform

    Mohammad Mozammel Hoque Chowdhury


    Full Text Available Image compression is a key technology in the transmission and storage of digital images because of the vast data associated with them. This research suggests a new image compression scheme with a pruning proposal based on the discrete wavelet transform (DWT). The effectiveness of the algorithm has been justified on real images, and its performance has been compared with other common compression standards. The algorithm has been implemented using Visual C++ and tested on a Pentium Core 2 Duo 2.1 GHz PC with 1 GB RAM. Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared to other compression techniques.
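
    The DWT-plus-thresholding idea underlying such schemes can be sketched with a single-level 2-D Haar transform, hand-rolled so only NumPy is needed. This is a generic illustration of transform coding, not the paper's pruning algorithm:

```python
import numpy as np

# Single-level 2-D Haar wavelet transform (generic transform-coding sketch,
# not the paper's scheme): transform, keep the largest coefficients, invert.
def haar2d(x):
    # Average/difference pairs of rows, then pairs of columns.
    a = (x[0::2] + x[1::2]) / np.sqrt(2); d = (x[0::2] - x[1::2]) / np.sqrt(2)
    x = np.vstack([a, d])
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2); d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    return np.hstack([a, d])

def ihaar2d(c):
    # Undo the column step, then the row step.
    h = c.shape[1] // 2
    a, d = c[:, :h], c[:, h:]
    x = np.empty_like(c)
    x[:, 0::2] = (a + d) / np.sqrt(2); x[:, 1::2] = (a - d) / np.sqrt(2)
    h = c.shape[0] // 2
    a, d = x[:h], x[h:]
    out = np.empty_like(c)
    out[0::2] = (a + d) / np.sqrt(2); out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
img = rng.random((32, 32)).cumsum(axis=0).cumsum(axis=1)   # smooth test "image"
c = haar2d(img)
thresh = np.quantile(np.abs(c), 0.75)                      # drop smallest 75%
c_kept = np.where(np.abs(c) >= thresh, c, 0.0)
recon = ihaar2d(c_kept)
print(f"kept {np.count_nonzero(c_kept)}/{c.size} coeffs, "
      f"rel. error {np.linalg.norm(recon - img) / np.linalg.norm(img):.3%}")
```

Because the test image is smooth, a quarter of the coefficients carries nearly all the energy; real DWT codecs add multiple decomposition levels, quantization and entropy coding on top of this idea.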

  5. Compression Waves and Phase Plots: Simulations

    Orlikowski, Daniel


    Compression wave analysis started nearly 50 years ago with Fowles.[1] Coperthwaite and Williams [2] gave a method that helps identify simple and steady waves. We have been developing a method that describes the non-isentropic character of compression waves in general.[3] One result of that work is a simple analysis tool. Our method helps clearly identify when a compression wave is a simple wave, when it is a steady wave (shock), and when it is in transition. This affects the analysis of compression wave experiments and the resulting extraction of the high-pressure equation of state.

  6. Mathematical theory of compressible fluid flow

    Von Mises, Richard


    Mathematical Theory of Compressible Fluid Flow covers the conceptual and mathematical aspects of theory of compressible fluid flow. This five-chapter book specifically tackles the role of thermodynamics in the mechanics of compressible fluids. This text begins with a discussion on the general theory of characteristics of compressible fluid with its application. This topic is followed by a presentation of equations delineating the role of thermodynamics in compressible fluid mechanics. The discussion then shifts to the theory of shocks as asymptotic phenomena, which is set within the context of

  7. Video compressive sensing using Gaussian mixture models.

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence


    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

  8. Compression therapy in elderly and overweight patients.

    Reich-Schupke, Stefanie; Murmann, Friederike; Altmeyer, Peter; Stücker, Markus


    According to the current demography of the western population, age and weight will have increasing impact on medical therapies. The aim of the analysis was to examine whether there are differences in the use of compression therapy depending on age and BMI. Questioning of 200 consecutive phlebological patients (C2-C6) with a compression therapy time of > 2 weeks; analysis of 110 returned questionnaires. Sub-analysis according to age (≥ 60 years vs. < 60 years) and BMI (≥ 25 kg/m2 vs. < 25 kg/m2). Patients ≥ 60 years even need the help of another person to apply compression. Patients with BMI ≥ 25 kg/m2 have an ulcer stocking significantly more often (15 % vs. 4.3 %, p = 0.05) and need the help of family members to put on the compression therapy (11.7 % vs. 2.1 %, p = 0.04). There is a tendency of patients with BMI ≥ 25 kg/m2 to complain more often about constriction of the compression therapy (35 % vs. 19.2 %, p = 0.06). There are special aspects that have to be considered for compression therapy in elderly and overweight patients. The data should encourage prescribers, sellers and manufacturers of compression therapy to use compression in a differentiated way for these patients and to consider: Is the recommended compression therapy right for this patient (pressure, material, type)? What advice and adjuvants do the patients need to get along more easily with the compression therapy? Are there any new materials or adjuvants that allow these growing groups of patients to get along with compression therapy alone?

  9. Chapter 22: Compressed Air Evaluation Protocol

    Benton, N.


    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  10. Binary-phase compression of stretched pulses

    Lozovoy, Vadim V.; Nairat, Muath; Dantus, Marcos


    Pulse stretching and compression are essential for the energy scale-up of ultrafast lasers. Here, we consider a radical approach using spectral binary phases, containing only two values (0 and π), for stretching and compressing laser pulses. We numerically explore different strategies and present results for pulse compression by factors of up to a million back to the transform limit, and experimentally obtain results for pulse compression by a factor of one hundred, in close agreement with numerical calculations. Imperfections resulting from binary-phase compression are addressed by considering cross-polarized wave generation filtering, and we show that this approach leads to compressed pulses with contrast ratios greater than ten orders of magnitude. This new concept of binary-phase stretching and compression, if implemented in a multi-layer optic, could eliminate the need for traditional pulse stretchers and, more importantly, expensive compressors.
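
    The key property behind binary-phase stretching and compression is that a 0/π spectral phase is its own inverse: applying the identical mask twice multiplies each spectral component by (±1)² = 1. A numerical toy (our construction, not the authors' experimental setup):

```python
import numpy as np

# Binary spectral phase toy model: a transform-limited Gaussian pulse is
# stretched by a random 0/pi spectral phase and then recompressed by applying
# the same mask again, since exp(i*pi) * exp(i*pi) = 1.
n = 4096
t = np.arange(n) - n / 2
field = np.exp(-(t / 10.0) ** 2)                 # transform-limited pulse (assumed shape)

rng = np.random.default_rng(4)
mask = rng.integers(0, 2, n) * np.pi             # binary spectral phase (0 or pi)

def apply_phase(field, phase):
    # Multiply the spectrum by exp(i*phase) and return to the time domain.
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * phase))

stretched  = apply_phase(field, mask)            # energy spread over the window
compressed = apply_phase(stretched, mask)        # same mask -> back to the limit

peak = lambda f: np.abs(f).max()
print(f"peak: original {peak(field):.3f}, stretched {peak(stretched):.3f}, "
      f"recompressed {peak(compressed):.3f}")
```

The stretched pulse's peak drops by an order of magnitude while the recompressed field matches the original to numerical precision; the paper's challenge is designing masks that stretch by controlled, much larger factors.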

  11. Envera Variable Compression Ratio Engine

    Charles Mendler


    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing compression ratio to ~9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC) which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock.
When loads on the engine are low
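
The trade-off the abstract describes follows directly from the definition of compression ratio, CR = (Vd + Vc) / Vc, where Vd is swept (displacement) volume and Vc is clearance volume at TDC. A rough illustration (the numbers are invented, not Envera data):

```python
def clearance_volume(displacement_cc, compression_ratio):
    # CR = (Vd + Vc) / Vc  =>  Vc = Vd / (CR - 1)
    return displacement_cc / (compression_ratio - 1)

vd = 500.0                                  # assumed cc per cylinder
high_cr = clearance_volume(vd, 12.0)        # efficiency setting
low_cr = clearance_volume(vd, 9.0)          # knock-limited high-boost setting
print(round(high_cr, 1), round(low_cr, 1))  # 45.5 62.5
```

Dropping from 12:1 to 9:1 enlarges the TDC volume by over a third, which is why more boosted charge fits in the cylinder without raising peak pressure.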

  12. Initial transient process in a simple helical flux compression generator

    Yang Xian-Jun


    An analytical scheme for the initial transient process in a simple helical flux compression generator, which includes the distributions of both the magnetic field in the hollow of the armature and the conducting current density in the stator, is developed by means of a diffusion equation. A relationship between the frequency of the conducting current, the root of the characteristic function of the Bessel equation, and the decay time in the armature is given. The skin depth in the helical stator is calculated and compared with the approximate one widely used in magnetic diffusion calculations. Our analytical results are helpful for understanding the mechanism of magnetic flux loss in both the armature and the stator and for suggesting an optimal design to improve the performance of the helical flux compression generator.

  13. Empirical data decomposition and its applications in image compression

    Deng Jiaxian; Wu Xiaoqin


    A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is automatically determined by the observed data and is able to implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in observed data. Then, through a discussion of the applications of EDD in image compression, the paper presents a 2-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.

  14. Delivery of Compression Therapy for Venous Leg Ulcers

    Zarchi, Kian; Jemec, Gregor B E


    IMPORTANCE: Despite the documented effect of compression therapy in clinical studies and its widespread prescription, treatment of venous leg ulcers is often prolonged and recurrence rates high. Data on provided compression therapy are limited. OBJECTIVE: To assess whether home care nurses achieve adequate subbandage pressure when treating patients with venous leg ulcers and the factors that predict the ability to achieve optimal pressure. DESIGN, SETTING, AND PARTICIPANTS: We performed a cross-sectional study from March 1, 2011, through March 31, 2012, in home care centers in 2 Danish municipalities. Sixty-eight home care nurses who managed wounds in their everyday practice were included. MAIN OUTCOMES AND MEASURES: Participant-masked measurements of subbandage pressure achieved with an elastic, long-stretch, single-component bandage; an inelastic, short-stretch, single-component bandage...

  15. Compressive SAR imaging with joint sparsity and local similarity exploitation.

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi


    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of those works focus on the scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme by iterative SAR imaging and updating of the weighted autoregressive model to solve this problem. Eventually, experimental results demonstrated the validity and generality of the proposed approach.

  16. Characterization of stock market regimes by data compression

    Vogel, Eugenio E.; Saravia, Gonzalo


    It has been shown that data compression can characterize magnetic phases (Physica A 388 (2009) 4075). In the introduction of this presentation we briefly review this result. We then go on to introduce a new data compressor (wlzip), developed by us to optimize the recognition of meaningful patterns in the compression procedure, yielding sharp transition curves at the magnetic critical temperatures. The advantages of the new compressor, such as better definition and tuning capabilities, are presented. The rest of the talk consists of applying wlzip to the Chilean stock market over several months during 2010. The accumulated daily data allow us to recognize days with different types of activity. Moreover, the data recorded every minute allow analysis of the ``present'' status of the stock market by applying wlzip to the data of the last hour or couple of hours. Possible extensions of this technique to other fields are discussed. Partial support from Fondecyt 1100156, ICM and CEDENNA is acknowledged.
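
wlzip itself is not publicly documented here, but the underlying idea, using compressed size as a measure of regularity in a time series, can be sketched with Python's standard zlib as a stand-in (function name and data are illustrative):

```python
import random
import zlib

def compressibility(series, digits=2):
    """Compressed-to-raw size ratio of a quantized numeric series.
    Lower values indicate more regular, more compressible activity."""
    text = ",".join(f"{x:.{digits}f}" for x in series).encode()
    return len(zlib.compress(text, 9)) / len(text)

random.seed(0)
quiet = [100.0] * 500                                    # perfectly repetitive series
active = [100.0 + random.random() for _ in range(500)]   # fluctuating series
print(compressibility(quiet) < compressibility(active))  # True
```

Applied to per-minute market data in a sliding window, such a ratio distinguishes calm from volatile periods in the same spirit as the wlzip analysis described above.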

  17. Accelerated MR imaging using compressive sensing with no free parameters.

    Khare, Kedar; Hardy, Christopher J; King, Kevin F; Turski, Patrick A; Marinelli, Luca


    We describe and evaluate a robust method for compressive sensing MRI reconstruction using an iterative soft thresholding framework that is data-driven, so that no tuning of free parameters is required. The approach described here combines a Nesterov type optimal gradient scheme for iterative update along with standard wavelet-based adaptive denoising methods, resulting in a leaner implementation compared with the nonlinear conjugate gradient method. Tests with T₂ weighted brain data and vascular 3D phase contrast data show that the image quality of reconstructions is comparable with those from an empirically tuned nonlinear conjugate gradient approach. Statistical analysis of image quality scores for multiple datasets indicates that the iterative soft thresholding approach as presented here may improve the robustness of the reconstruction and the image quality, when compared with nonlinear conjugate gradient that requires manual tuning for each dataset. A data-driven approach as illustrated in this article should improve future clinical applicability of compressive sensing image reconstruction.
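
The core operation in any iterative soft thresholding scheme is the shrinkage (proximal) operator; a minimal sketch follows, omitting the Nesterov acceleration, wavelet transform, and data-driven threshold selection that the paper adds on top:

```python
import math

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: shrink each entry toward zero by lam,
    # zeroing anything whose magnitude falls below the threshold.
    out = []
    for v in x:
        m = abs(v) - lam
        out.append(math.copysign(m, v) if m > 0 else 0.0)
    return out

# Denoising view (identity measurement): strong coefficients survive,
# weak ones are zeroed.
noisy = [0.9, -0.2, 1.5, 0.05, -1.1]
print([round(v, 2) for v in soft_threshold(noisy, 0.3)])  # [0.6, 0.0, 1.2, 0.0, -0.8]
```

In the full reconstruction loop this operator is applied to transform coefficients after each gradient step, with the threshold chosen adaptively rather than fixed as here.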

  18. Implementation of aeronautic image compression technology on DSP

    Wang, Yujing; Gao, Xueqiang; Wang, Mei


    According to the designed characteristics and demands of aeronautic image compression system, lifting scheme wavelet and SPIHT algorithm was selected as the key part of software implementation, which was introduced with details. In order to improve execution efficiency, border processing was simplified reasonably and SPIHT (Set Partitioning in Hierarchical Trees) algorithm was also modified partly. The results showed that the selected scheme has a 0.4dB improvement in PSNR(peak-peak-ratio) compared with classical Shaprio's scheme. To improve the operating speed, the hardware system was then designed based on DSP and many optimization measures were then applied successfully. Practical test showed that the system can meet the real-time demand with good quality of reconstruct image, which has been used in an aeronautic image compression system practically.

  19. Credal Classification based on AODE and compression coefficients

    Corani, Giorgio


    Bayesian model averaging (BMA) is an approach to average over alternative models; yet, it usually gets excessively concentrated around the single most probable model, therefore achieving only sub-optimal classification performance. The compression-based approach (Boulle, 2007) overcomes this problem, averaging over the different models by applying a logarithmic smoothing over the models' posterior probabilities. This approach has shown excellent performance when applied to ensembles of naive Bayes classifiers. AODE is another ensemble of models with high performance (Webb, 2005), based on a collection of non-naive classifiers (called SPODEs) whose probabilistic predictions are aggregated by a simple arithmetic mean. Aggregating the SPODEs via BMA rather than by arithmetic mean deteriorates the performance; instead, we aggregate the SPODEs via the compression coefficients and we show that the resulting classifier obtains a slight but consistent improvement over AODE. However, an important issue in any Bayesian e...

  20. Prior image constrained compressed sensing: a quantitative performance evaluation

    Thériault Lauzier, Pascal; Tang, Jie; Chen, Guang-Hong


    The appeal of compressed sensing (CS) in the context of medical imaging is undeniable. In MRI, it could enable shorter acquisition times while in CT, it has the potential to reduce the ionizing radiation dose imparted to patients. However, images reconstructed using a CS-based approach often show an unusual texture and a potential loss in spatial resolution. The prior image constrained compressed sensing (PICCS) algorithm has been shown to enable accurate image reconstruction at lower levels of sampling. This study systematically evaluates an implementation of PICCS applied to myocardial perfusion imaging with respect to two parameters of its objective function. The prior image parameter α was shown here to yield an optimal image quality in the range 0.4 to 0.5. A quantitative evaluation in terms of temporal resolution, spatial resolution, noise level, noise texture, and reconstruction accuracy was performed.
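
The α parameter evaluated above weights the prior-image term in the PICCS objective. In the form commonly stated in the compressed-sensing literature (notation here is generic, not taken from this record), the reconstruction solves:

```latex
\min_{x} \;\; \alpha \,\bigl\| \Psi_1 (x - x_P) \bigr\|_1 \;+\; (1-\alpha)\,\bigl\| \Psi_2\, x \bigr\|_1
\quad \text{subject to} \quad A x = y,
```

where $x_P$ is the prior image, $\Psi_1$ and $\Psi_2$ are sparsifying transforms (often total-variation operators), $A$ is the system matrix and $y$ the measured data. Here $\alpha \in [0,1]$ trades prior-image influence against plain compressed sensing, consistent with the optimum near 0.4 to 0.5 reported above.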

  1. Compressed Sensing Based Fingerprint Identification for Wireless Transmitters

    Caidan Zhao


    Most existing fingerprint identification techniques are unable to distinguish different wireless transmitters whose emitted signals are highly attenuated, propagate over long distances, and have strongly similar transient waveforms. Therefore, this paper proposes a new method to identify different wireless transmitters based on compressed sensing. A data acquisition system is designed to capture the wireless transmitter signals. A complex analytical wavelet transform is used to obtain the envelope of the transient signal, and the corresponding features are extracted using compressed sensing theory. Feature selection utilizing minimum redundancy maximum relevance (mRMR) is employed to obtain the optimal feature subsets for identification. The results show that the proposed method is more efficient for the identification of wireless transmitters with similar transient waveforms.
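
mRMR proper scores features by mutual information; as a simplified, self-contained sketch of the greedy selection loop, the following uses absolute Pearson correlation as a stand-in for mutual information (data and feature names are invented):

```python
def pearson(a, b):
    # Plain Pearson correlation; stands in for mutual information here.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def mrmr(features, target, k):
    # Greedy mRMR: pick the feature maximizing relevance to the target
    # minus mean redundancy with the already-chosen features.
    chosen, remaining = [], list(range(len(features)))
    while len(chosen) < k and remaining:
        def score(i):
            rel = abs(pearson(features[i], target))
            red = (sum(abs(pearson(features[i], features[j])) for j in chosen)
                   / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

y  = [1.0, 2.0, 3.0, 4.0, 5.0]            # target (invented data)
f0 = [1.1, 2.0, 2.9, 4.2, 5.0]            # informative feature
f1 = [1.2, 2.1, 3.0, 4.3, 5.1]            # informative but redundant with f0
f2 = [5.0, 1.0, 4.0, 2.0, 3.0]            # mostly uninformative
print(sorted(mrmr([f0, f1, f2], y, 2)))   # [0, 1]
```

The redundancy penalty is what separates mRMR from simply ranking features by relevance: a noisy but novel feature can outrank a near-copy of one already selected.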

  2. Compressive sampling for energy spectrum estimation of turbulent flows

    Adalsteinsson, Gudmundur F


    Recent results from compressive sampling (CS) have demonstrated that accurate reconstruction of sparse signals often requires far fewer samples than suggested by the classical Nyquist-Shannon sampling theorem. Typically, signal reconstruction errors are measured in the $\ell^2$ norm and the signal is assumed to be sparse, compressible or having a prior distribution. Our spectrum estimation by sparse optimization (SpESO) method uses prior information about isotropic homogeneous turbulent flows with power law energy spectra and applies the methods of CS to 1-D and 2-D turbulence signals to estimate their energy spectra with small logarithmic errors. SpESO is distinct from existing energy spectrum estimation methods which are based on sparse support of the signal in Fourier space. SpESO approximates energy spectra with an order of magnitude fewer samples than needed with Shannon sampling. Our results demonstrate that SpESO performs much better than lumped orthogonal matching pursuit (LOMP), and as well or bette...

  3. Control structure selection for vapor compression refrigeration cycle

    Yin, Xiaohong; Li, Shaoyuan [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Shandong Jianzhu Univ., Jinan (China). School of Information and Electrical Engineering; Cai, Wenjian; Ding, Xudong [Nanyang Technological Univ., Singapore (Singapore). School of Electrical and Electronic Engineering


    A control structure selection criterion that can be used to evaluate the control performance of different control structures for the vapor compression refrigeration cycle is proposed in this paper. The calculation results of the proposed criterion, based on the different reduction models, are used to determine the optimized control model structure. The effectiveness of the criterion is verified by the control performance of model predictive control (MPC) controllers designed based on the different model structures. The responses of the different controllers applied to an actual vapor compression refrigeration system indicate that the best model structure is consistent with the one obtained by the proposed selection criterion, which represents a trade-off between computational complexity and control performance.

  4. Digital image compression in dermatology: format comparison.

    Guarneri, F; Vaccaro, M; Guarneri, C


    Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed at comparing the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Expert Group (JPEG), and JPEG2000 on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors and TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), JPEG2000 has a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG, but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and to encode various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.

  5. Noise Robust Joint Sparse Recovery using Compressive Subspace Fitting

    Kim, Jong Min; Ye, Jong Chul


    We study a multiple measurement vector (MMV) problem where multiple signals share a common sparse support set and are sampled by a common sensing matrix. Although we can expect that joint sparsity can improve the recovery performance over a single measurement vector (SMV) problem, compressive sensing (CS) algorithms for MMV exhibit performance saturation as the number of multiple signals increases. Recently, to overcome these drawbacks of CS approaches, hybrid algorithms that optimally combine CS with sensor array signal processing using a generalized MUSIC criterion have been proposed. While these hybrid algorithms are optimal for critically sampled cases, they are not efficient in exploiting the redundant sampling to improve noise robustness. Hence, in this work, we introduce a novel subspace fitting criterion that extends the generalized MUSIC criterion so that it exhibits near-optimal behaviors for various sampling conditions. In addition, the subspace fitting criterion leads to two alternative forms of c...

  6. Fully non-linear hyper-viscoelastic modeling of skeletal muscle in compression.

    Wheatley, Benjamin B; Pietsch, Renée B; Haut Donahue, Tammy L; Williams, Lakiesha N


    Understanding the behavior of skeletal muscle is critical to implementing computational methods to study how the body responds to compressive loading. This work presents a novel approach to studying the fully nonlinear response of skeletal muscle in compression. Porcine muscle was compressed in both the longitudinal and transverse directions under five stress relaxation steps. Each step consisted of 5% engineering strain over 1 s followed by a relaxation period until equilibrium was reached at an observed change of 1 g/min. The resulting data were analyzed to identify the peak and equilibrium stresses as well as relaxation time for all samples. Additionally, a fully nonlinear strain energy density-based Prony series constitutive model was implemented and validated with independent constant-rate compressive data. A nonlinear least squares optimization approach utilizing the Levenberg-Marquardt algorithm was implemented to fit model behavior to experimental data. The results suggested that the time-dependent material response plays a key role in the anisotropy of skeletal muscle, as increasing strain showed differences in peak stress and relaxation time (p < 0.05). The optimization procedure produced a single set of hyper-viscoelastic parameters which characterized compressive muscle behavior under stress relaxation conditions. The constitutive model used was the first orthotropic, fully nonlinear hyper-viscoelastic model of skeletal muscle in compression that maintains agreement with constitutive physical boundaries. The model provided an excellent fit to experimental data and agreed well with the independent validation in the transverse direction.

  7. Colon Targeted Guar Gum Compression Coated Tablets of Flurbiprofen: Formulation, Development, and Pharmacokinetics

    Sateesh Kumar Vemula


    The rationale of the present study is to formulate flurbiprofen colon-targeted compression coated tablets using guar gum, to improve therapeutic efficacy by increasing drug levels in the colon and also to reduce side effects in the upper gastrointestinal tract. The direct compression method was used to prepare flurbiprofen core tablets, and they were compression coated with guar gum. The tablets were then optimized with the support of in vitro dissolution studies, which was further confirmed by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The Cmax of colon-targeted tablets was 11956.15 ng/mL at a Tmax of 10 h, whereas it was 15677.52 ng/mL at 3 h for immediate release tablets. The area under the curve for the immediate release and compression coated tablets was 40385.78 and 78214.50 ng-h/mL and the mean residence time was 3.49 and 10.78 h, respectively. In conclusion, the formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen.

  8. Colon targeted guar gum compression coated tablets of flurbiprofen: formulation, development, and pharmacokinetics.

    Vemula, Sateesh Kumar; Bontha, Vijaya Kumar


    The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in colon, and also to reduce the side effects in upper gastrointestinal tract. Direct compression method was used to prepare flurbiprofen core tablets, and they were compression coated with guar gum. Then the tablets were optimized with the support of in vitro dissolution studies, and further it was proved by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The C(max) of colon targeted tablets was 11956.15 ng/mL at a T(max) of 10 h whereas it was 15677.52 ng/mL at 3 h in case of immediate release tablets. The area under the curve for the immediate release and compression coated tablets was 40385.78 and 78214.50 ng-h/mL and the mean residence time was 3.49 and 10.78 h, respectively. In conclusion, formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen.

  9. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.


    .../PEACE electron measurements and Cluster/CIS ion measurements. Topics to be discussed include: (i) review of the compression algorithm; (ii) data quality; (iii) data formatting/organization; (iv) compression optimization; (v) investigation of pseudo-log precompression; and (vi) analysis of compression effectiveness for burst mode as well as fast survey mode data packets for both electron and ion data. We conclude with a presentation of the current base-lined FPI data compression approach.

  10. Offshore compression system design for low cost and high reliability

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.]


    In offshore oil fields, the oil streams coming from the wells usually have significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, usually a high pressure, to reduce the needed diameter of the pipelines. In the past these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and make use of this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the second being a strong function of the first, because the compressors are the most power-consuming equipment. For this reason, the optimization of the compression system in terms of efficiency and cost is determinant for the plant's profit. The availability of the plant also has a strong influence on profit, especially in gas fields where the products have a relatively low aggregate value compared to oil. Because of this, the third design variable of the compression system becomes reliability: the higher the reliability, the larger the plant production. The main way to improve the reliability of a compression system is the use of multiple compression trains in parallel, in a 2x50% or 3x50% configuration, with one train in stand-by. Such configurations are possible and have advantages and disadvantages, but the main side effect is increased cost. This is common offshore practice, but it does not always significantly improve plant availability, depending on the preceding process system. A series arrangement and a critical evaluation of the overall system can in some cases provide a cheaper system with equal or better performance. This paper shows a case study of the procedure to evaluate a compression system design that improves reliability without an extreme cost increase, balancing the number of equipment items, the series or parallel arrangement, and the driver selection. Two case studies will be...

  11. Algorithmic height compression of unordered trees.

    Ben-Naoum, Farah; Godin, Christophe


    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus partially limiting the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic, as they are strictly nested within each other. We thus introduce a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then propose an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures.
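
The baseline tree-to-DAG step (before the paper's height compression) can be sketched by hash-consing: every structurally distinct subtree gets one id, so identical subtrees share a node. This sketch handles ordered, unlabeled trees encoded as nested tuples; the paper treats the harder unordered, labeled case:

```python
def compress_to_dag(tree, table=None):
    # Hash-consing: each structurally distinct subtree gets one id, so
    # isomorphic (here: identical ordered, unlabeled) subtrees share a node.
    if table is None:
        table = {}
    key = tuple(compress_to_dag(child, table)[0] for child in tree)
    if key not in table:
        table[key] = len(table)
    return table[key], table

# The tree (((),()),((),())) has 7 nodes but only 3 distinct subtree shapes:
root_id, nodes = compress_to_dag((((), ()), ((), ())))
print(len(nodes))  # 3
```

Note how the DAG produced this way keeps the original tree height; collapsing quasi-isomorphic patterns along paths, as the paper proposes, is what additionally reduces height.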

  12. Optimization of structural shapes

    Durelli, A. J.


    The direct design of shapes of two-dimensional structures, loaded in their plane, within specified design constraints and exhibiting optimum distribution of stresses is studied. Photoelasticity and a large-field diffused-light polariscope are used. The optimization process involves the removal of material (with a hand file or router) from the low-stress portions of the hole boundary of the model until an isochromatic fringe coincides with the boundary on both the tensile and compressive segments. Applications are also shown to the design of dovetails and slots in turbine blades and rotors, and to the design of star-shaped solid propellant grains for rockets, both for the case of parallel side rays and enlarged ray tips. The use of other methods, in particular the finite element method, to optimize structural forms is discussed.

  13. XPath Whole Query Optimization

    Maneth, Sebastian


    Previous work reported on SXSI, a fast XPath engine which executes tree automata over compressed XML indexes. Here, we investigate why SXSI is so fast. It is shown that tree automata can be used as a general framework for fine-grained XML query optimization. We define the "relevant nodes" of a query as those nodes that a minimal automaton must touch in order to answer the query. This notion allows many subtrees to be skipped during execution and, with the help of particular tree indexes, even allows internal nodes of the tree to be skipped. We efficiently approximate runs over relevant nodes by means of on-the-fly removal of alternation and non-determinism from (alternating) tree automata. We also introduce many implementation techniques which allow us to efficiently evaluate tree automata, even in the absence of special indexes. Through extensive experiments, we demonstrate the impact of the different optimization techniques.

  14. Instability of ties in compression

    Buch-Hansen, Thomas Cornelius


    Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall in a cavity wall. Large cavity walls are furthermore loaded by differential movements from... exact instability solutions are complex to derive, not to mention the extra complexity of introducing dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie...

  15. Ab initio compressive phase retrieval

    Marchesini, S


    Any object on earth has two fundamental properties: it is finite, and it is made of atoms. Structural information about an object can be obtained from diffraction amplitude measurements that account for either one of these traits. Nyquist-sampling of the Fourier amplitudes is sufficient to image single particles of finite size at any resolution. Atomic resolution data is routinely used to image molecules replicated in a crystal structure. Here we report an algorithm that requires neither kind of information, but uses the fact that an image of a natural object is compressible. Intended applications include tomographic diffractive imaging, crystallography, powder diffraction, small angle x-ray scattering and random Fourier amplitude measurements.

  16. Lossless Compression of Digital Images

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive...

  17. Antiproton compression and radial measurements

    Andresen, G B; Bowe, P D; Bray, C C; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Fajans, J; Fujiwara, M C; Funakoshi, R; Gill, D R; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jenkins, M J; Jorgensen, L V; Kurchaninov, L; Lambo, R; Madsen, N; Nolan, P; Olchanski, K; Olin, A; Page R D; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Seif El Nasr, S; Silveira, D M; Storey, J W; Thompson, R I; Van der Werf, D P; Wurtele, J S; Yamazaki, Y


    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  18. Lossless Compression of Broadcast Video

    Martins, Bo; Eriksen, N.; Faber, E.


    ...complexity, difficult but natural material is compressed up to 20% better than with coding using lossless JPEG-LS. More complex schemes lower the bit rate even further. A real-time implementation of JPEG-LS may be carried out in a DSP environment or an FPGA environment. Conservative analysis supported... with actual measurements on a DSP suggests that a real-time implementation may be carried out using about 5 DSPs. An FPGA-based solution is estimated to demand 4 or 6 FPGAs (each 40,000 gate equivalent)...

  19. Image Quality Meter Using Compression

    Muhammad Ibrar-Ul-Haque


    This paper proposes a new technique to measure blockiness/blurriness of compressed images in the frequency domain through an edge detection method, by applying the Fourier transform. In image processing, boundaries are characterized by edges, and thus edges are of fundamental importance. The edges have to be identified and computed thoroughly in order to retrieve the complete illustration of the image. Our novel edge detection scheme for blockiness and blurriness shows an improvement of 60 and 100 blocks for high-frequency components, respectively, over other detection techniques.

  20. Photovoltaic driven vapor compression cycles

    Anand, D. K.

    Since the vast majority of heat pumps, air conditioning and refrigeration equipment employs the vapor compression cycle (VCC), the use of renewable energy represents a significant opportunity. As discussed in this report, it is clear that the use of photovoltaics (PV) to drive the VCC has more potential than any other active solar cooling approach. This potential exists due to improvements in not only the PV cells but VCC machinery and control algorithms. It is estimated that the combined improvements will result in reducing the PV cell requirements by as much as one half.

  1. Remote sensing image compression for deep space based on region of interest

    王振华; 吴伟仁; 田玉龙; 田金文; 柳健


    A major limitation for deep space communication is the limited bandwidth available. The downlink rate using X-band from an L2 halo orbit is estimated to be only 5.35 GB/d, while the Next Generation Space Telescope (NGST) will produce about 600 GB/d. Clearly the volume of data to downlink must be reduced by a factor of at least 100. One solution is to encode the data using very low bit rate image compression techniques. A very low bit rate image compression method based on region of interest (ROI) has been proposed for deep space images. Conventional image compression algorithms, which encode the original data without any data analysis, maintain very good detail but do not achieve high compression rates, whereas modern image compression schemes with semantic organization can reach compression rates in the hundreds but cannot maintain much detail. Algorithms based on region of interest, inheriting from both of these previous approaches, have good semantic features and high fidelity, and are therefore suitable for applications at a low bit rate. The proposed method extracts the region of interest by texture analysis after a wavelet transform and attains optimal local quality with bit rate control. Results show that our method maintains more detail in the ROI than a general image compression algorithm (SPIHT), at the cost of quality in the uninteresting areas.
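    The required reduction factor follows directly from the two rates quoted in the abstract; a quick back-of-envelope check:

    ```python
    # Downlink budget from the abstract: ~600 GB/day produced vs. ~5.35 GB/day
    # of X-band downlink capacity from an L2 halo orbit.
    produced_gb_per_day = 600.0
    downlink_gb_per_day = 5.35

    required_ratio = produced_gb_per_day / downlink_gb_per_day
    print(f"required compression factor: about {required_ratio:.0f}:1")
    ```

    This is why the abstract asks for "at least a factor of 100": the raw ratio is slightly above 112:1.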

  2. Image Compression based on DCT and BPSO for MRI and Standard Images

    D.J. Ashpin Pabi


    Full Text Available Nowadays, digital image compression has become a crucial factor of modern telecommunication systems. Image compression is the process of reducing the total number of bits required to represent an image by reducing redundancies while preserving the image quality as much as possible. Various applications, including the internet, multimedia, satellite imaging and medical imaging, use image compression in order to store and transmit images in an efficient manner. Selection of a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm utilized for finding an optimal solution from a set of possible values. The dominant factors of BPSO over other optimization techniques are its higher convergence rate, searching ability and overall performance. The proposed technique divides the input image into 8×8 blocks. The Discrete Cosine Transform (DCT) is applied to each block to obtain the coefficients. Then, the threshold values are obtained from BPSO. Based on this threshold, the values of the coefficients are modified. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
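    The per-block pipeline described here (8×8 DCT, thresholding, then quantization/entropy coding) can be sketched as follows. This is an illustration only: the fixed `threshold` stands in for the value BPSO would search for, and all names are ours.

    ```python
    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis matrix (rows = frequencies)."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2 / n)

    def compress_block(block, threshold):
        """DCT an 8x8 block, zero small coefficients, and invert.
        The fixed threshold is a stand-in for the BPSO-selected value."""
        C = dct_matrix(block.shape[0])
        coeffs = C @ block @ C.T                    # 2-D DCT of the block
        coeffs[np.abs(coeffs) < threshold] = 0.0    # thresholding step
        return C.T @ coeffs @ C                     # inverse 2-D DCT

    rng = np.random.default_rng(0)
    block = rng.uniform(0, 255, (8, 8))
    recon = compress_block(block, threshold=10.0)
    ```

    With `threshold=0` the transform pair is lossless (the DCT matrix is orthonormal); raising the threshold trades quality for sparser, more compressible coefficients.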

  3. Pairwise KLT-Based Compression for Multispectral Images

    Nian, Yongjian; Liu, Yu; Ye, Zhen


    This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed on the global multispectral image. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, where the KLT is performed on only two bands at a time. First, the KLT is performed on the first two adjacent bands, yielding two principal components. Next, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new pair. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation of all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on the global KLT. Moreover, the pairwise KLT structure can significantly reduce the complexity compared with a global KLT.
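    One decorrelation step of the pairwise scheme can be sketched with a plain 2×2 eigendecomposition; this is a toy illustration of the idea (names are ours, not the paper's implementation):

    ```python
    import numpy as np

    def pairwise_klt_step(band_a, band_b):
        """One pairwise-KLT step: decorrelate two bands.
        Returns (major PC, minor PC); the major PC (larger eigenvalue)
        is the one carried forward to pair with the next band."""
        x = np.stack([band_a.ravel(), band_b.ravel()])   # 2 x N
        x = x - x.mean(axis=1, keepdims=True)
        cov = x @ x.T / x.shape[1]
        eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
        pcs = eigvecs.T @ x                              # 2 x N principal components
        # eigh orders eigenvalues ascending, so pcs[1] is the major component
        return pcs[1].reshape(band_a.shape), pcs[0].reshape(band_a.shape)
    ```

    Chaining this over the bands (major PC paired with each next band) reproduces the pairwise structure while never forming more than a 2×2 covariance, which is where the complexity saving over a global KLT comes from.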

  4. Lossy compression of floating point high-dynamic range images using JPEG2000

    Springer, Dominic; Kaup, Andre


    In recent years, a new technique called High Dynamic Range (HDR) has gained attention in the image processing field. By representing pixel values with floating point numbers, recorded images can hold significantly more luminance information than ordinary integer images. This paper focuses on the realization of a lossy compression scheme for HDR images. The JPEG2000 standard is used as a basic component and is efficiently integrated into the compression chain. Based on a detailed analysis of the floating point format and the human visual system, a concept for lossy compression is worked out and thoroughly optimized. Our scheme outperforms all other existing lossy HDR compression schemes and shows superior performance both at low and high bitrates.

  5. An improved image compression algorithm using binary space partition scheme and geometric wavelets.

    Chopra, Garima; Pal, A K


    Geometric wavelet is a recent development in the field of multivariate nonlinear piecewise polynomials approximation. The present study improves the geometric wavelet (GW) image coding method by using the slope intercept representation of the straight line in the binary space partition scheme. The performance of the proposed algorithm is compared with the wavelet transform-based compression methods such as the embedded zerotree wavelet (EZW), the set partitioning in hierarchical trees (SPIHT) and the embedded block coding with optimized truncation (EBCOT), and other recently developed "sparse geometric representation" based compression algorithms. The proposed image compression algorithm outperforms the EZW, the Bandelets and the GW algorithm. The presented algorithm reports a gain of 0.22 dB over the GW method at the compression ratio of 64 for the Cameraman test image.

  6. All-optical image processing and compression based on Haar wavelet transform.

    Parca, Giorgia; Teixeira, Pedro; Teixeira, Antonio


    Fast data processing and compression methods based on the wavelet transform are fundamental tools in the area of real-time 2D data/image analysis, enabling high definition applications and redundant data reduction. The need for information processing at high data rates motivates efforts to exploit the speed and parallelism of light for data analysis and compression. Among several schemes for optical wavelet transform implementation, the Haar transform offers simple design and fast computation, and it can easily be implemented by optical planar interferometry. We present an all-optical scheme based on an asymmetric coupler network for achieving fast image processing and compression in the optical domain. The implementation of the Haar wavelet transform through a 3D passive structure is supported by theoretical formulation and simulation results. The design and optimization of the asymmetrical coupler 3D network are reported, and the Haar wavelet transform, including compression, was achieved, demonstrating the feasibility of our approach.
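    The averages-and-differences structure of the Haar transform mentioned above is easy to show in code. This is a software sketch of the mathematics only — the paper implements the transform optically — and the simple keep-the-largest-coefficients compression is our illustration:

    ```python
    import numpy as np

    def haar_step(signal):
        """One level of the 1-D Haar transform: scaled pairwise averages
        and differences (orthonormal normalisation; length must be even)."""
        s = np.asarray(signal, float).reshape(-1, 2)
        approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
        detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
        return approx, detail

    def haar_compress(signal, keep):
        """Transform, keep the `keep` largest-magnitude coefficients, invert."""
        approx, detail = haar_step(signal)
        coeffs = np.concatenate([approx, detail])
        coeffs[np.argsort(np.abs(coeffs))[:-keep]] = 0.0  # drop small coefficients
        a, d = np.split(coeffs, 2)
        out = np.empty(len(signal))
        out[0::2] = (a + d) / np.sqrt(2)                  # inverse Haar step
        out[1::2] = (a - d) / np.sqrt(2)
        return out
    ```

    Keeping all coefficients reconstructs the signal exactly; discarding small detail coefficients is the compression step.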

  7. Soliton compression to ultra-short pulses using cascaded quadratic nonlinearities in silica photonic crystal fibers

    Bache, Morten; Lægsgaard, Jesper; Bang, Ole;


    We investigate the possibility of using poled silica photonic crystal fibers for self-defocusing soliton compression with cascaded quadratic nonlinearities. Such a configuration has promise due to the desirable possibility of reducing the group-velocity mismatch. However, this unfortunately leads … nonlinearity, and we show that compression of nJ pulses to few-cycle duration is possible in such a fiber. A small amount of group-velocity mismatch optimizes the compression.

  8. MR diagnosis of retropatellar chondral lesions under compression. A comparison with histological findings

    Andresen, R. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany); Radmer, S. [Dept. of Radiology and Nuclear Medicine, Behring Municipal Hospital, Academic Teaching Hospital, Free Univ. of Berlin (Germany); Koenig, H. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany); Banzer, D. [Dept. of Radiology and Nuclear Medicine, Behring Municipal Hospital, Academic Teaching Hospital, Free Univ. of Berlin (Germany); Wolf, K.J. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany)


    Purpose: The aim of the study was to improve the diagnosis of chondromalacia patellae (CMP) by MR imaging under defined compression of the retropatellar cartilage, using a specially designed knee compressor. The results were compared with histological findings to obtain an MR classification of CMP. Method: MR imaging was performed in in vitro studies of 25 knees from cadavers to investigate the effects of compression on the retropatellar articular cartilage. The results were verified by subsequent histological evaluations. Results: There was a significant difference in cartilage thickness reduction and signal intensity behaviour under compression according to the stage of CMP. Conclusion: Based on the decrease in cartilage thickness, signal intensity behaviour under compression, and cartilage morphology, the studies permitted an MR classification of CMP into stages I-IV in line with the histological findings. Healthy cartilage was clearly distinguished, a finding which may optimize CMP diagnosis. (orig.)

  9. Economic Modeling of Compressed Air Energy Storage

    Rui Bo


    Full Text Available Due to the variable nature of wind resources, the increasing penetration level of wind power will have a significant impact on the operation and planning of the electric power system. Energy storage systems are considered an effective way to compensate for the variability of wind generation. This paper presents a detailed production cost simulation model to evaluate the economic value of compressed air energy storage (CAES) in systems with large-scale wind power generation. The co-optimization of energy and ancillary services markets is implemented in order to analyze the impacts of CAES, not only on energy supply, but also on system operating reserves. Both hourly and 5-minute simulations are considered to capture the economic performance of CAES in the day-ahead (DA) and real-time (RT) markets. The generalized network flow formulation is used to model the characteristics of CAES in detail. The proposed model is applied to a modified IEEE 24-bus reliability test system. The numerical example shows that besides the economic benefits gained through energy arbitrage in the DA market, CAES can also generate significant profits by providing reserves, compensating for wind forecast errors and intra-hour fluctuation, and participating in the RT market.

  10. Energy Preserved Sampling for Compressed Sensing MRI

    Yudong Zhang


    Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To improve on these, we propose a novel energy preserving sampling (ePRESS) method. We also improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function achieves better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and competitive with FISTA in terms of computation time.
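    A variable-density pattern of the kind contrasted here with simple random sampling can be generated in a few lines. This is an illustrative sketch only — the polynomial decay and its exponent are our assumptions, not the ePRESS method:

    ```python
    import numpy as np

    def vd_mask(n, accel=4, power=3, seed=0):
        """Variable-density random undersampling mask for one k-space axis.
        Sampling probability decays polynomially with distance from the
        centre, so the energy-dense low frequencies are kept preferentially.
        `power` is an illustrative knob, not a parameter from the paper."""
        rng = np.random.default_rng(seed)
        dist = np.abs(np.arange(n) - n // 2) / (n // 2)   # 0 at centre, 1 at edge
        prob = (1.0 - dist) ** power
        prob *= (n / accel) / prob.sum()                  # expect ~n/accel samples
        return rng.random(n) < np.clip(prob, 0.0, 1.0)

    mask = vd_mask(256, accel=4)
    ```

    The mask keeps roughly one line in `accel` overall while sampling the centre of k-space almost fully, which is the property simple uniform-random masks lack.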

  11. Visually weighted reconstruction of compressive sensing MRI.

    Oh, Heeseok; Lee, Sanghoon


    Compressive sensing (CS) enables the reconstruction of a magnetic resonance (MR) image from undersampled data in k-space with relatively little distortion compared to the original image. In addition, CS allows the scan time to be significantly reduced. Along with a reduction in the computational overhead, we investigate an effective way to improve visual quality through the use of a weighted optimization algorithm for reconstruction after variable-density random undersampling in the phase encoding direction over k-space. In contrast to conventional magnetic resonance imaging (MRI) reconstruction methods, the visual weight, in particular in the region of interest (ROI), is investigated here for quality improvement. In addition, we employ a wavelet transform to analyze the reconstructed image in the space domain and fully utilize data sparsity over the spatial and frequency domains. The visual weight is constructed by reflecting the perceptual characteristics of the human visual system (HVS), and then applied to ℓ1 norm minimization, which gives priority to each coefficient during the reconstruction process. Using objective quality assessment metrics, it was found that an image reconstructed using the visual weight has higher local and global quality than those processed by conventional methods.

  12. Adaptive and compressive matched field processing.

    Gemba, Kay L; Hodgkiss, William S; Gerstoft, Peter


    Matched field processing is a generalized beamforming method that matches received array data to a dictionary of replica vectors in order to locate one or more sources. Its solution set is sparse since there are considerably fewer sources than replicas. Using compressive sensing (CS) implemented using basis pursuit, the matched field problem is reformulated as an underdetermined, convex optimization problem. CS estimates the unknown source amplitudes using the replica dictionary to best explain the data, subject to a row-sparsity constraint. This constraint selects the best matching replicas within the dictionary when using multiple observations and/or frequencies. For a single source, theory and simulations show that the performance of CS and the Bartlett processor are equivalent for any number of snapshots. Contrary to most adaptive processors, CS also can accommodate coherent sources. For a single and multiple incoherent sources, simulations indicate that CS offers modest localization performance improvement over the adaptive white noise constraint processor. SWellEx-96 experiment data results show comparable performance for both processors when localizing a weaker source in the presence of a stronger source. Moreover, CS often displays less ambiguity, demonstrating it is robust to data-replica mismatch.
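    The Bartlett processor used as the baseline above is just averaged matched-filter power over the replica dictionary. A toy sketch with synthetic replicas (all names and values are ours; real replicas come from an acoustic propagation model):

    ```python
    import numpy as np

    def bartlett_ambiguity(snapshots, replicas):
        """Bartlett matched-field processor: matched-filter power of each
        replica against the array data, averaged over snapshots.
        snapshots: (n_snapshots, n_sensors) complex array data;
        replicas:  (n_candidates, n_sensors), unit-norm replica vectors."""
        power = np.abs(snapshots.conj() @ replicas.T) ** 2   # (snapshots, candidates)
        return power.mean(axis=0)

    # toy example: 8-sensor array, 50 candidate locations, true source at index 23
    rng = np.random.default_rng(0)
    replicas = rng.standard_normal((50, 8)) + 1j * rng.standard_normal((50, 8))
    replicas /= np.linalg.norm(replicas, axis=1, keepdims=True)
    snapshots = np.outer(np.ones(10), replicas[23]) + 0.05 * rng.standard_normal((10, 8))
    surface = bartlett_ambiguity(snapshots, replicas)
    ```

    The peak of the ambiguity surface falls at the matching replica. The CS formulation in the abstract replaces this per-replica correlation with a joint sparse fit over the whole dictionary, which is what lets it separate multiple (even coherent) sources.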

  13. Assessment of myocardial bridge by cardiac CT: Intracoronary transluminal attenuation gradient derived from diastolic phase predicts systolic compression

    Yu, Meng Meng; Zhang, Yang; Li, Yue Hua; Li, Wen Bin; Li, Ming Hua; Zhang, Jiayin [Institute of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai (China)]


    To study the predictive value of transluminal attenuation gradient (TAG) derived from diastolic phase of coronary computed tomography angiography (CCTA) for identifying systolic compression of myocardial bridge (MB). Consecutive patients diagnosed with MB based on CCTA findings and without obstructive coronary artery disease were retrospectively enrolled. In total, 143 patients with 144 MBs were included in the study. Patients were classified into three groups: without systolic compression, with systolic compression < 50%, and with systolic compression ≥ 50%. TAG was defined as the linear regression coefficient between intraluminal attenuation in Hounsfield units (HU) and length from the vessel ostium. Other indices such as the length and depth of the MB were also recorded. TAG was the lowest in MB patients with systolic compression ≥ 50% (-19.9 ± 8.7 HU/10 mm). Receiver operating characteristic curve analysis was performed to determine the optimal cutoff values for identifying systolic compression ≥ 50%. The result indicated an optimal cutoff value of TAG as -18.8 HU/10 mm (area under curve = 0.778, p < 0.001), which yielded higher sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy (54.1, 80.5, 72.8, and 75.0%, respectively). In addition, the TAG of MB with diastolic compression was significantly lower than the TAG of MB without diastolic compression (-21.4 ± 4.8 HU/10 mm vs. -12.7 ± 8 HU/10 mm, p < 0.001). TAG was a better predictor of MB with systolic compression ≥ 50%, compared to the length or depth of the MB. The TAG of MB with persistent diastolic compression was significantly lower than the TAG without diastolic compression.
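    TAG as defined above is simply a regression slope of attenuation against distance along the vessel, reported per 10 mm. A minimal computation on made-up sample values (not patient data):

    ```python
    import numpy as np

    # TAG = linear-regression slope of intraluminal attenuation (HU) vs.
    # distance from the vessel ostium, reported per 10 mm.
    # The sample values below are invented for illustration.
    length_mm = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
    attenuation_hu = np.array([420, 410, 396, 385, 371, 362, 350], dtype=float)

    slope_per_mm = np.polyfit(length_mm, attenuation_hu, 1)[0]
    tag = slope_per_mm * 10                     # HU per 10 mm
    print(f"TAG = {tag:.1f} HU/10 mm")
    ```

    A steeper (more negative) slope means attenuation falls off faster along the vessel; the study's cutoff of -18.8 HU/10 mm flags such cases as likely systolic compression ≥ 50%.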

  14. Compression of WAVE-type sound files.

    BAKLI, Meriem


    This final-year project presents a comparative study of audio file compression. Compression is the operation used to reduce the physical size of a block of information. Several compression algorithms exist, such as Huffman coding, etc. We compressed an audio file from the uncompressed WAVE format to a compressed MP3 file with different coding formats and different frame sizes, for both mono and stereo files. From …

  15. Direct numerical simulation of compressible isotropic turbulence

    LI Xinliang (李新亮); FU Dexun (傅德薰); MA Yanwen (马延文)


    Direct numerical simulation (DNS) of decaying compressible isotropic turbulence at turbulence Mach numbers of Mt = 0.2-0.7 and Taylor Reynolds numbers of 72 and 153 is performed by using 7th order upwind-biased difference and 8th order center difference schemes. Results show that proper upwind-biased difference schemes can release the limit of the "start-up" problem to Mach numbers. Compressibility effects on the statistics of turbulent flow as well as the mechanics of shocklets in compressible turbulence are also studied, and the conclusion is drawn that a high Mach number leads to more dissipation. Scaling laws in compressible turbulence are also analyzed. Evidence is obtained that scaling laws and extended self-similarity (ESS) hold in compressible turbulent flow in spite of the presence of shocklets, and compressibility has little effect on scaling exponents.

  16. Accelerating Lossless Data Compression with GPUs

    Cloud, R L; Ward, H L; Skjellum, A; Bangalore, P


    Huffman compression is a statistical, lossless data compression algorithm that compresses data by assigning variable-length codes to symbols, with more frequently appearing symbols given shorter codes than less frequent ones. This work is a modification of the Huffman algorithm which permits uncompressed data to be decomposed into independently compressible and decompressible blocks, allowing for concurrent compression and decompression on multiple processors. We create implementations of this modified algorithm on a current NVIDIA GPU using the CUDA API as well as on a current Intel chip, and the performance results are compared, showing favorable GPU performance for nearly all tests. Lastly, we discuss the necessity of high-performance data compression in today's supercomputing ecosystem.
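    The block decomposition described here can be sketched in a few lines: one shared code table, with each block encoded independently so blocks could be handled by parallel workers. This is a plain serial illustration of the idea, not the CUDA implementation:

    ```python
    import heapq
    from collections import Counter

    def huffman_codes(data):
        """Build a symbol -> bitstring table from symbol frequencies."""
        heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
        heapq.heapify(heap)
        tick = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, (f1 + f2, tick, merged))
            tick += 1  # unique tiebreaker so the dicts are never compared
        return heap[0][2]

    def compress_blocks(data, block_size):
        """Encode fixed-size blocks independently against one shared table.
        Each block's bitstring is decodable on its own, which is what
        allows concurrent (de)compression of the blocks."""
        table = huffman_codes(data)
        blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
        return table, ["".join(table[s] for s in block) for block in blocks]

    table, encoded = compress_blocks("abracadabra", 4)
    ```

    Because Huffman codes are prefix-free, every block's bitstring ends exactly on a codeword boundary, so no block needs state from its neighbours.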

  17. Industrial Compressed Air System Energy Efficiency Guidebook.

    United States. Bonneville Power Administration.


    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

  18. LDPC Codes for Compressed Sensing

    Dimakis, Alexandros G; Vontobel, Pascal O


    We present a mathematical connection between channel coding and compressed sensing. In particular, we link, on the one hand, channel coding linear programming decoding (CC-LPD), which is a well-known relaxation of maximum-likelihood channel decoding for binary linear codes, and, on the other hand, compressed sensing linear programming decoding (CS-LPD), also known as basis pursuit, which is a widely used linear programming relaxation for the problem of finding the sparsest solution of an underdetermined system of linear equations. More specifically, we establish a tight connection between CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the binary linear channel code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows the translation of performance guarantees from one setup to the other. The main message of this paper is that parity-check matrices of "good" channel codes can be used as provably "good" measurement …

  19. Hemifacial spasm and neurovascular compression.

    Lu, Alex Y; Yeung, Jacky T; Gerrard, Jason L; Michaelides, Elias M; Sekula, Raymond F; Bulsara, Ketan R


    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve.

  20. Optimal control

    L. I. Rozonoer


    Full Text Available Necessary and sufficient conditions for the existence of optimal control for all initial data are proved for the LQ-optimization problem. If these conditions are fulfilled, necessary and sufficient conditions of optimality are formulated. Based on these results, some general hypotheses on optimal control in terms of Pontryagin's maximum condition and Bellman's equation are proposed.

  1. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu


    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  2. Image compression with a hybrid wavelet-fractal coder.

    Li, J; Kuo, C J


    A hybrid wavelet-fractal coder (WFC) for image compression is proposed. The WFC uses the fractal contractive mapping to predict the wavelet coefficients of the higher resolution from those of the lower resolution and then encode the prediction residue with a bitplane wavelet coder. The fractal prediction is adaptively applied only to regions where the rate saving offered by fractal prediction justifies its overhead. A rate-distortion criterion is derived to evaluate the fractal rate saving and used to select the optimal fractal parameter set for WFC. The superior performance of the WFC is demonstrated with extensive experimental results.

  3. OTHR Spectrum Reconstruction of Maneuvering Target with Compressive Sensing

    Yinghui Quan


    Full Text Available High-frequency (HF) over-the-horizon radar (OTHR) works in a very complicated electromagnetic environment. It usually suffers performance degradation caused by transient interference. In this paper, we study transient interference excision and full spectrum reconstruction of maneuvering targets. The segmental subspace projection (SP) approach is applied to suppress the clutter and locate the transient interference. After interference excision, the spectrum is reconstructed from incomplete measurements via compressive sensing (CS) using a redundant Fourier-chirp dictionary. An improved orthogonal matching pursuit (IOMP) algorithm is developed to solve the sparse decomposition optimization. Experimental results demonstrate the effectiveness of the proposed methods.
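    Plain OMP, which the IOMP of this paper refines, is short enough to sketch. This is our illustration with a generic random dictionary; the paper's improvements and its Fourier-chirp dictionary are not reproduced:

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        """Orthogonal matching pursuit: greedily add the dictionary column
        most correlated with the residual, then re-fit the whole support
        by least squares before computing the new residual."""
        residual, support = y.astype(float), []
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # toy dictionary with unit-norm columns; with enough measurements OMP
    # typically recovers a sparse signal exactly from noiseless data
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 60))
    A /= np.linalg.norm(A, axis=0)
    x_true = np.zeros(60)
    x_true[[5, 21, 44]] = [1.5, -2.0, 1.0]
    x_hat = omp(A, A @ x_true, sparsity=3)
    ```

    The least-squares re-fit at each step is what makes the pursuit "orthogonal": the residual is kept orthogonal to all previously selected columns, so no column is selected twice.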

  4. Compressive Radar with Off-Grid and Extended Targets

    Fannjiang, Albert


    Compressed sensing (CS) schemes are proposed for monostatic as well as synthetic aperture radar (SAR) imaging of sparse targets with chirps. In particular, a simple method is developed to improve performance with off-grid targets. Tomographic formulation of spotlight SAR is analyzed by CS methods with several bases and under various bandwidth constraints. Performance guarantees are established via coherence bound and the restricted isometry property. CS analysis provides a fresh and clear perspective on how to optimize temporal and angular samplings for spotlight SAR.

  5. On Phase Transition of Compressed Sensing in the Complex Domain

    Yang, Zai; Xie, Lihua


    The phase transition is a performance measure of the sparsity-undersampling tradeoff in compressed sensing (CS). This letter reports, for the first time, the existence of an exact phase transition for the $\ell_1$ minimization approach to the complex valued CS problem. This discovery is not only a complementary result to the known phase transition of the real valued CS but also shows considerable superiority of the phase transition of complex valued CS over that of the real valued CS. The results are obtained by extending the recently developed ONE-L1 algorithms to complex valued CS and applying their optimal and iterative solutions to empirically evaluate the phase transition.

  6. Efficiency of Compressed Air Energy Storage

    Elmegaard, Brian; Brix, Wiebke


    The simplest type of a Compressed Air Energy Storage (CAES) facility would be an adiabatic process consisting only of a compressor, a storage and a turbine, compressing air into a container when storing and expanding when producing. This type of CAES would be adiabatic and, if the machines were reversible, would have a storage efficiency of 100%. However, due to the specific capacity of the storage and the construction materials, the air is in practice cooled during and after compression, making…
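    The reason the practical process cannot stay adiabatic is the outlet temperature: for an ideal gas, reversible adiabatic compression gives T2 = T1 · (p2/p1)^((γ−1)/γ). A quick check with illustrative numbers (our values, not from the paper):

    ```python
    # Ideal-gas outlet temperature of reversible adiabatic compression:
    # T2 = T1 * (p2/p1) ** ((gamma - 1) / gamma). Numbers are illustrative.
    gamma = 1.4            # ratio of specific heats for air
    T1 = 288.0             # intake temperature [K]
    pressure_ratio = 70.0  # storage pressure / ambient pressure

    T2 = T1 * pressure_ratio ** ((gamma - 1) / gamma)
    print(f"adiabatic outlet temperature: {T2:.0f} K ({T2 - 273.15:.0f} degC)")
    ```

    At cavern-scale pressure ratios the single-stage adiabatic outlet temperature approaches 1000 K, far beyond what storage walls and materials tolerate, which is why real plants cool the air and thereby give up part of the stored work.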

  7. Compression Techniques for Improved Algorithm Computational Performance

    Zalameda, Joseph N.; Howell, Patricia A.; Winfree, William P.


    Analysis of thermal data requires the processing of large amounts of temporal image data. The processing of the data for quantitative information can be time intensive especially out in the field where large areas are inspected resulting in numerous data sets. By applying a temporal compression technique, improved algorithm performance can be obtained. In this study, analysis techniques are applied to compressed and non-compressed thermal data. A comparison is made based on computational speed and defect signal to noise.

  8. Wavelet transform approach to video compression

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay


    In this research, we propose a video compression scheme that uses the boundary-control vectors to represent the motion field and the embedded zerotree wavelet (EZW) to compress the displacement frame difference. When compared to the DCT-based MPEG, the proposed new scheme achieves a better compression performance in terms of the MSE (mean square error) value and visual perception for the same given bit rate.
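    The MSE comparison used here is conventionally reported as PSNR; a minimal helper for that figure of merit (an illustrative utility, not code from the paper):

    ```python
    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio (dB) from the MSE between two images."""
        ref = np.asarray(reference, dtype=float)
        tst = np.asarray(test, dtype=float)
        mse = np.mean((ref - tst) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    a = np.zeros((8, 8))
    print(f"{psnr(a, a + 1.0):.2f} dB")
    ```

    Higher PSNR means lower MSE at the same bit rate, which is the sense in which the proposed scheme is said to outperform DCT-based MPEG.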

  9. Quantum Data Compression of a Qubit Ensemble

    Rozema, Lee A.; Mahler, Dylan H.; Hayat, Alex; Turner, Peter S.; Steinberg, Aephraim M.


    Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compre...

  10. Image Processing by Compression: An Overview


    This article aims to present the various applications of data compression in image processing. For some time now, several research groups have been developing methods based on different data compression techniques to classify, segment, filter and detect forgery in digital images. It is necessary to analyze the relationships between the different methods and put them into a framework to better understand and better exploit the possibilities that compression offers us with respect…

  11. Less is More: Bigger Data from Compressive Measurements

    Stevens, Andrew; Browning, Nigel D.


    Compressive sensing approaches are beginning to take hold in (scanning) transmission electron microscopy (S/TEM) [1,2,3]. Compressive sensing is a mathematical theory about acquiring signals in a compressed form (measurements) and the probability of recovering the original signal by solving an inverse problem [4]. The inverse problem is underdetermined (more unknowns than measurements), so it is not obvious that recovery is possible. Compression is achieved by taking inner products of the signal with measurement weight vectors. Both Gaussian random weights and Bernoulli (0,1) random weights form a large class of measurement vectors for which recovery is possible. The measurements can also be designed through an optimization process. The key insight for electron microscopists is that compressive sensing can be used to increase acquisition speed and reduce dose. Building on work initially developed for optical cameras, this new paradigm will allow electron microscopists to solve more problems in the engineering and life sciences. We will be collecting orders of magnitude more data than previously possible. We will have more data because we will have increased temporal/spatial/spectral sampling rates, and we will be able to interrogate larger classes of samples that were previously too beam sensitive to survive the experiment. For example, consider an in-situ experiment that takes 1 minute. With traditional sensing, we might collect 5 images per second for a total of 300 images. With compressive sensing, each of those 300 images can be expanded into 10 more images, making the collection rate 50 images per second and the decompressed data a total of 3000 images [3]. But what are the implications, in terms of data, of this new methodology? Acquisition of compressed data will require downstream reconstruction to be useful.
The reconstructed data will be much larger than traditional data, and we will need space to store the reconstructions during
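    The acquisition model described above (inner products of the signal with Gaussian random weight vectors) can be sketched in a few lines; the dimensions, sparsity level, and scaling below are illustrative assumptions, not values from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 1024, 256, 10  # signal length, measurements, sparsity (illustrative)

# A k-sparse signal: only a few nonzero entries
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Gaussian measurement matrix: each row is one measurement weight vector
A = rng.normal(size=(m, n)) / np.sqrt(m)

# Compressed acquisition: m inner products instead of n direct samples
y = A @ x

print(y.shape)  # (256,) -- a 4x reduction in acquired values
```

Recovering x from y is the underdetermined inverse problem the abstract refers to; with k small enough relative to m, sparse-recovery solvers succeed with high probability.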

  12. Comparing biological networks via graph compression

    Hayashida Morihiro


    Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to the comparison of large sequence data and protein structure data. Since it is still difficult to compare the global structures of large biological networks, it is reasonable to try to apply data compression methods to the comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by the compression ratio of the concatenated networks. The proposed methods are applied to the comparison of metabolic networks of several organisms (H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis) and are compared with an existing method. The results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
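    The idea of scoring similarity by how well a concatenation compresses is in the spirit of compression-based distances. As a generic, hypothetical sketch (using zlib on serialized edge lists, not the authors' CompressEdge/CompressVertices methods):

```python
import random
import zlib

def c(data: bytes) -> int:
    """Compressed length of a byte string."""
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when a and b share structure."""
    ca, cb, cab = c(a), c(b), c(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Serialized edge lists of toy "networks" (hypothetical encoding)
net1 = b";".join(b"%d-%d" % (i, i + 1) for i in range(200))
net2 = net1.replace(b"10-11", b"10-99")   # nearly identical network
random.seed(0)
net3 = bytes(random.randrange(256) for _ in range(1400))  # unrelated data

# Similar networks compress well together, so their distance is smaller
assert ncd(net1, net2) < ncd(net1, net3)
```

The compressor acts as a stand-in for the iterative edge-contraction step; the ratio on the concatenation plays the same role as the abstract's compression ratio of concatenated networks.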

  13. Stability of compressible reacting mixing layer

    Shin, D. S.; Ferziger, J. H.


    Linear instability of compressible reacting mixing layers is analyzed with emphasis on the effects of heat release and compressibility. Laminar solutions of the compressible boundary-layer equations are used as the base flows. The parameters of this study are the adiabatic flame temperature, the Mach number of the upper stream, frequency, wavenumber, and the direction of propagation of the disturbance wave. Stability characteristics of the flow are presented. Three groups of unstable modes are found when the Mach number and/or heat release are large. Finally, it is shown that the unstable modes are two-dimensional for large heat release even in highly compressible flow.

  14. The applicability of the wind compression model

    Cariková, Zuzana


    Compression of the stellar winds from rapidly rotating hot stars is described by the wind compression model. However, it was also shown that rapid rotation leads to rotational distortion of the stellar surface, resulting in the appearance of non-radial forces acting against the wind compression. In this note we justify the wind compression model for moderately rotating white dwarfs and slowly rotating giants. The former could be conducive to understanding density/ionization structure of the mass outflow from symbiotic stars and novae, while the latter can represent an effective mass-transfer mode in the wide interacting binaries.

  15. Compression Properties of Polyester Needlepunched Fabric

    Sanjoy Debnath, Ph.D.


    In the present paper, a study of the effects of fabric weight, fiber cross-sectional shape (round, hollow, and trilobal), and the presence of reinforcing material on the compression properties (initial thickness, percentage compression, percentage thickness loss, and percentage compression resilience) of polyester needle-punched industrial nonwoven fabrics is presented. It was found that for fabrics with no reinforcing material, the initial thickness, compression, and thickness loss were higher than for fabrics with reinforcing material, irrespective of fiber cross-section. Compression resilience data showed the reverse trend. Initial thickness was highest for the trilobal cross-section fabric sample, followed by the round and hollow cross-section polyester needle-punched fabrics. The polyester fabric made from hollow cross-section fibers showed the least percentage compression at every level of fabric weight. The trilobal cross-section polyester fabric sample showed higher thickness loss, followed by the round and hollow cross-section polyester fabric samples respectively. The hollow cross-section polyester fabric samples showed maximum compression resilience, followed by the round and trilobal cross-section polyester samples, irrespective of fabric weight. Initial thickness increases, but percentage compression, thickness loss, and compression resilience decrease with increasing fabric weight, irrespective of fiber cross-sectional shape.

  16. Spinal cord compression due to ethmoid adenocarcinoma.

    Johns, D R; Sweriduk, S T


    Adenocarcinoma of the ethmoid sinus is a rare tumor which has been epidemiologically linked to woodworking in the furniture industry. It has a low propensity to metastasize and has not been previously reported to cause spinal cord compression. A symptomatic epidural spinal cord compression was confirmed on magnetic resonance imaging (MRI) scan in a former furniture worker with widely disseminated metastases. The clinical features of ethmoid sinus adenocarcinoma and neoplastic spinal cord compression, and the comparative value of MRI scanning in the neuroradiologic diagnosis of spinal cord compression are reviewed.

  17. Efficient compression of molecular dynamics trajectory files.

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James


    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high-fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps-typically used in fine grained water diffusion experiments-we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
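    The interframe idea (predict each frame from the previously reconstructed one and uniformly quantize the residual) can be sketched as follows; the step size, synthetic trajectory, and closed-loop structure are illustrative assumptions, not the authors' exact codec:

```python
import numpy as np

rng = np.random.default_rng(1)

step = 1e-2  # quantization step in Angstroms (illustrative; bounds max error to step/2)

# Fake trajectory: 3 atoms drifting smoothly over 100 frames
t = np.linspace(0, 1, 100)[:, None, None]
traj = rng.normal(size=(1, 3, 3)) + 0.5 * t + 0.01 * rng.normal(size=(100, 3, 3))

# Encode: quantize the residual against the previously *reconstructed* frame,
# so encoder and decoder stay in sync (closed-loop prediction).
codes, recon = [], []
prev = np.zeros((3, 3))
for frame in traj:
    q = np.round((frame - prev) / step).astype(np.int64)  # small ints entropy-code well
    prev = prev + q * step                                # decoder-side reconstruction
    codes.append(q)
    recon.append(prev)

recon = np.array(recon)
# Per-frame error is bounded by half the quantization step
assert np.abs(recon - traj).max() <= step / 2 + 1e-12
```

The integer residuals are small and strongly peaked around zero, which is why a downstream entropy coder (or even BZip2 on the residual stream) compresses them far better than the raw coordinates.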

  18. Memory hierarchy using row-based compression

    Loh, Gabriel H.; O'Connor, James M.


    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.

  19. Optimal Surgical Therapy in a Porcine (Sus scrofa) Model of Extra-Thoracic Penetrating Trauma Resulting in Hemorrhagic Shock: ED Thoracotomy vs. Immediate Trans-Abdominal Vascular Control. A Porcine Model for Evaluating the Management of Non-Compressible Torso Hemorrhage


    An infrared brain oximetry monitor (Pediatric SomaSensor®, Somanetics Corporation, Troy, MI) was positioned across the frontal scalp of the animal. [...] treat aneurysmal disease. In summary, non-compressible hemorrhage represents a significant area of interest for the military community




    For the compressible two-phase displacement problem, a kind of upwind operator-splitting finite difference scheme is put forward. Operator splitting, the calculus of variations, the multiplicative commutation rule of difference operators, the decomposition of high-order difference operators, and prior estimates are adopted. Optimal-order error estimates in the L2 norm are derived for the approximate solution.

  1. New thermodynamical systems. Alternative of compression-absorption; Nouveaux systemes thermodynamiques. Alternative de la compression-absorption

    Feidt, M.; Brunin, O.; Lottin, O.; Vidal, J.F. [Universite Henri Poincare Nancy, 54 - Vandoeuvre-les-Nancy (France); Hivet, B. [Electricite de France, 77 - Moret sur Loing (France)


    This paper describes a five-year joint research effort carried out by Electricite de France (EdF) and the ESPE group of the LEMTA on compression-absorption heat pumps. It shows how a thermodynamic model of the machinery, completed with precise exchanger-reactor models, makes it possible to simulate, dimension, and eventually optimize the system. A small-power prototype has been tested, and the first results are analyzed with the help of the models. A real-scale experiment at industrial sites is expected in the future. (J.S.) 20 refs.

  2. Novel Concepts for the Compression of Large Volumes of Carbon Dioxide

    J. Jeffrey Moore; Marybeth G. Nored; Ryan S. Gernentz; Klaus Brun


    In the effort to reduce the release of CO2 greenhouse gases to the atmosphere, sequestration of CO2 from Integrated Gasification Combined Cycle (IGCC) and Oxy-Fuel power plants is being pursued. This approach, however, requires significant compression power to boost the pressure to typical pipeline levels. The penalty can be as high as 8% to 12% on a typical IGCC plant. The goal of this research is to reduce this penalty through novel compression concepts and integration with existing IGCC processes. The primary objective of the study of novel CO2 compression concepts is to boost the pressure of CO2 to pipeline pressures with the minimal amount of energy required. Fundamental thermodynamics were studied to explore pressure rise in both liquid and gaseous states. For gaseous compression, the project investigated novel methods to compress CO2 while removing the heat of compression internal to the compressor. The high pressure ratio due to the delivery pressure of the CO2 for enhanced oil recovery results in significant heat of compression. Since less energy is required to boost the pressure of a cooler gas stream, both upstream and interstage cooling is desirable. While isothermal compression has been utilized in some services, it has not been optimized for the IGCC environment. This project determined the optimum compressor configuration and developed technology concepts for internal heat removal. Other compression options using liquefied CO2 and cryogenic pumping were explored as well. Preliminary analysis indicates up to a 35% reduction in power is possible with the new concepts being considered.
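    The benefit of cooling during compression can be seen from ideal-gas estimates. This hypothetical back-of-envelope sketch (ideal-gas CO2, illustrative inlet temperature and pressure ratio, not the project's actual figures) compares reversible isothermal and adiabatic specific work:

```python
import math

R = 8.314        # gas constant, J/(mol*K)
T1 = 310.0       # inlet temperature, K (illustrative)
p_ratio = 100.0  # e.g. ~1.5 bar up to ~150 bar pipeline pressure (illustrative)
gamma = 1.28     # approximate heat-capacity ratio for CO2

# Reversible isothermal compression work per mole: heat removed as it is generated
w_iso = R * T1 * math.log(p_ratio)

# Reversible adiabatic (isentropic) compression work per mole: no heat removal
w_adi = R * T1 * gamma / (gamma - 1) * (p_ratio ** ((gamma - 1) / gamma) - 1)

print(w_iso, w_adi)  # isothermal compression needs substantially less work
```

Real CO2 departs from ideal-gas behavior near its critical point, so the project's actual savings differ, but the ordering (cooled compression cheaper than adiabatic) is what motivates interstage and internal heat removal.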

  3. Edge compression techniques for visualization of dense directed graphs.

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher


    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules' (groups of nodes) such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all have the same goal, to compress the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although the techniques are applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.

  4. Effect of Embedding Watermark on Compression of the Digital Images

    Aggarwal, Deepak


    Image compression plays a very important role in image processing, especially when images are sent over the internet. Threats to information on the internet are increasing, and images are no exception. Generally an image is sent over the internet in compressed form to make optimal use of the network's bandwidth. But while in transit, the image can be changed at any intermediate node, intentionally or unintentionally. To make sure that the correct image is delivered at the other end, we embed a watermark in the image. The watermarked image is then compressed and sent over the network. When the image is decompressed at the other end, we can extract the watermark and confirm that the image is the same one that was sent. Though watermarking increases the size of the uncompressed image, this has to be done to achieve a high degree of robustness, i.e., how well an image sustains attacks on it. The present paper is an attempt to make transmission of the images secure from...

  5. Development of 1D Liner Compression Code for IDL

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony


    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  6. Compressed Sensing and Matrix Completion with Constant Proportion of Corruptions

    Li, Xiaodong


    We improve existing results in the field of compressed sensing and matrix completion when sampled data may be grossly corrupted. We introduce three new theorems. 1) In compressed sensing, we show that if the m × n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable ℓ1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m/(log(n/m) + 1)). 2) In the very general sensing model introduced in "A probabilistic and RIPless theory of compressed sensing" by Candes and Plan, and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m/(log^2 n)) nonzero entries. 3) Finally, we prove that one can recover an n × n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of O(m/(n log^2 n)); again, this holds when there is a positive fraction of corrupted samples.
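    For intuition about sparse recovery in the uncorrupted Gaussian setting: the abstract's guarantees concern ℓ1 minimization, but a greedy solver is easier to sketch, so this hypothetical example uses Orthogonal Matching Pursuit (a different algorithm) with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n, k = 80, 200, 5  # measurements, signal length, sparsity (illustrative)

# k-sparse ground truth and Gaussian sensing matrix (noiseless, uncorrupted case)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then least-squares refit on the chosen support.
S = []
r = y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(rel_err)  # near machine precision when the support is exactly recovered
```

With k much smaller than m, exact support recovery succeeds with overwhelming probability for Gaussian matrices; the corrupted-measurement regime the theorems address requires robust formulations beyond this sketch.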

  7. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    De Queiroz, Ricardo L; Chou, Philip A


    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.

  8. High-resolution three-dimensional imaging with compressive sensing

    Wang, Jingyi; Ke, Jun


    LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition and relax the requirements on the detection device. To use the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source. A photodetector then receives the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM half a pixel each time, so that the position where the modulated light illuminates changes accordingly. We repeat this, moving the SLM in four different directions, and thus obtain four low-resolution depth maps with different details of the same planar scene. Using all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene. A linear minimum-mean-square-error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration technology, we reduce the burden of data analysis and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.

  9. Comparative compressibility of hydrous wadsleyite

    Chang, Y.; Jacobsen, S. D.; Thomas, S.; Bina, C. R.; Smyth, J. R.; Frost, D. J.; Hauri, E. H.; Meng, Y.; Dera, P. K.


    Determining the effects of hydration on the density and elastic properties of wadsleyite, β-Mg2SiO4, is critical to constraining Earth’s global geochemical water cycle. Whereas previous studies of the bulk modulus (KT) have studied either hydrous Mg-wadsleyite, or anhydrous Fe-bearing wadsleyite, the combined effects of hydration and iron are under investigation. Also, whereas KT from compressibility studies is relatively well constrained by equation of state fitting to P-V data, the pressure derivative of the bulk modulus (K’) is usually not well constrained either because of poor data resolution, uncertainty in pressure calibrations, or narrow pressure ranges of previous single-crystal studies. Here we report the comparative compressibility of dry versus hydrous wadsleyite with Fo90 composition containing 1.9(2) wt% H2O, nearly the maximum water storage capacity of this phase. The composition was characterized by EMPA and nanoSIMS. The experiments were carried out using high-pressure, single-crystal diffraction up to 30 GPa at HPCAT, Advanced Photon Source. By loading three crystals each of hydrous and anhydrous wadsleyite together in the same diamond-anvil cell, we achieve good hkl coverage and eliminate the pressure scale as a variable in comparing the relative value of K’ between the dry and hydrous samples. We used MgO as an internal diffraction standard, in addition to recording ruby fluorescence pressures. By using neon as a pressure medium and about 1 GPa pressure steps up to 30 GPa, we obtain high-quality diffraction data for constraining the effect of hydration on the density and K’ of hydrous wadsleyite. Due to hydration, the initial volume of hydrous Fo90 wadsleyite is larger than anhydrous Fo90 wadsleyite, however the higher compressibility of hydrous wadsleyite leads to a volume crossover at 6 GPa. Hydration to 2 wt% H2O reduces the bulk modulus of Fo90 wadsleyite from 170(2) to 157(2) GPa, or about 7.6% reduction. In contrast to previous

  10. Magnetic Flux Compression in Plasmas

    Velikovich, A. L.


    Magnetic flux compression (MFC) as a method for producing ultra-high pulsed magnetic fields originated in the 1950s with Sakharov et al. at Arzamas in the USSR (now VNIIEF, Russia) and Fowler et al. at Los Alamos in the US. The highest magnetic field produced by an explosively driven MFC generator, 28 MG, was reported by Boyko et al. of VNIIEF. The idea of using MFC to increase the magnetic field in a magnetically confined plasma to 3-10 MG, relaxing the strict requirements on the plasma density and Lawson time, gave rise to the research area known as MTF in the US and MAGO in Russia. To make a difference in ICF, a magnetic field of ˜100 MG should be generated via MFC by a plasma liner as part of the capsule compression scenario on a laser or pulsed-power facility. This approach was first suggested in the mid-1980s by Liberman and Velikovich in the USSR and Felber in the US. It was not obvious from the start that it could work at all, given that so many mechanisms exist for anomalously fast penetration of magnetic field through plasma. And yet, many experiments stimulated by this proposal since 1986, mostly using pulsed-power drivers, demonstrated reasonably good flux compression up to ˜42 MG, although diagnostics of magnetic fields of such magnitude in HED plasmas are still problematic. New interest in MFC in plasmas emerged with the advancement of new drivers, diagnostic methods, and simulation tools. Experiments on MFC in a deuterium plasma filling a cylindrical plastic liner imploded by OMEGA laser beams, led by Knauer, Betti et al. at LLE, produced peak fields of 36 MG. The novel MagLIF approach to low-cost, high-efficiency ICF pursued by Herrmann, Slutz, Vesey et al. at Sandia involves pulsed-power-driven MFC to a peak field of ˜130 MG in a DT plasma. A review of the progress, current status, and future prospects of MFC in plasmas is presented.

  11. Decentralized Distribution Update Algorithm Based on Compressibility-controlled Wireless Sensor Network

    Hehua Li


    Sensor networks adopt lossy compression techniques to collect long-term data and to analyze data trends and specific data models of interest. In these applications, sensors are deployed to collect large amounts of continuous data, and access to lossy, untimely data is allowed. In addition, neighboring sensor data are correlated in both time and space. Therefore, the data sensed at intermediate nodes are lossy-compressed to prolong the system's operating lifetime. We study the optimal distribution problem between bit-rate and distortion: how to optimally distribute the compression ratios of all sensors while keeping the data distortion acceptable, so that the highest-quality data are collected at the minimum transmission bit-rate. An optimal solution to this distribution problem is introduced, and a decentralized distribution algorithm is derived from it. Compared with the average distribution strategy, simulation results show that the optimal solution and the decentralized algorithm can substantially reduce the volume of data transmitted over the network.

  12. Optimization and industry new frontiers

    Korotkikh, Victor


    Optimization from Human Genes to Cutting Edge Technologies The challenges faced by industry today are so complex that they can only be solved through the help and participation of optimization experts. For example, many industries in e-commerce, finance, medicine, and engineering face several computational challenges due to the massive data sets that arise in their applications. Some of the challenges include extended memory algorithms and data structures, new programming environments, software systems, cryptographic protocols, storage devices, data compression, mathematical and statistical methods for knowledge mining, and information visualization. With advances in computer and information systems technologies, and many interdisciplinary efforts, many of the "data avalanche challenges" are beginning to be addressed. Optimization is the most crucial component in these efforts. Nowadays, the main task of optimization is to investigate the cutting edge frontiers of these technologies and systems ...

  13. A New Approach for Fingerprint Image Compression

    Mazieres, Bertrand


    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  14. Spectral compression of single photons

    Lavoie, Jonathan; Wright, Logan G; Fedrizzi, Alessandro; Resch, Kevin J


    Photons are critical to quantum technologies since they can be used for virtually all quantum information tasks: in quantum metrology, as the information carrier in photonic quantum computation, as a mediator in hybrid systems, and to establish long distance networks. The physical characteristics of photons in these applications differ drastically; spectral bandwidths span 12 orders of magnitude, from 50 THz for quantum-optical coherence tomography to 50 Hz for certain quantum memories. Combining these technologies requires coherent interfaces that reversibly map centre frequencies and bandwidths of photons to avoid excessive loss. Here we demonstrate bandwidth compression of single photons by a factor of 40, and tunability over a range 70 times that bandwidth, via sum-frequency generation with chirped laser pulses. This constitutes a time-to-frequency interface for light capable of converting time-bin to colour entanglement and enables ultrafast timing measurements. It is a step toward arbitrary waveform generatio...

  15. Genetic disorders producing compressive radiculopathy.

    Corey, Joseph M


    Back pain is a frequent complaint seen in neurological practice. In evaluating back pain, neurologists are asked to evaluate patients for radiculopathy, determine whether they may benefit from surgery, and help guide management. Although disc herniation is the most common etiology of compressive radiculopathy, there are many other causes, including genetic disorders. This article is a discussion of genetic disorders that cause or contribute to radiculopathies. These genetic disorders include neurofibromatosis, Paget's disease of bone, and ankylosing spondylitis. Numerous genetic disorders can also lead to deformities of the spine, including spinal muscular atrophy, Friedreich's ataxia, Charcot-Marie-Tooth disease, familial dysautonomia, idiopathic torsional dystonia, Marfan's syndrome, and Ehlers-Danlos syndrome. However, the extent of radiculopathy caused by spine deformities is essentially absent from the literature. Finally, recent investigation into the heritability of disc degeneration and lumbar disc herniation suggests a significant genetic component in the etiology of lumbar disc disease.

  16. Photon counting compressive depth mapping

    Howland, Gregory A; Ware, Matthew R; Howell, John C


    We demonstrate a compressed-sensing, photon-counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 x 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 x 32 pixel real-time video for three-dimensional object tracking at 14 frames per second.
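    The single-pixel measurement model underlying this record can be sketched generically: a scene x with few significant pixels is observed through M << N incoherent linear projections y = A x, then reconstructed from y alone. The paper's actual solver is not specified in this abstract; the sketch below uses orthogonal matching pursuit as a stand-in sparse-recovery algorithm, with made-up dimensions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 256, 100, 8                # pixels, measurements, sparsity

    # Synthetic K-sparse scene and random projection patterns (illustrative).
    x_true = np.zeros(N)
    x_true[rng.choice(N, K, replace=False)] = rng.uniform(1.0, 2.0, K)
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    y = A @ x_true                        # under-sampled measurements

    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then least-squares fit on the chosen support.
    support, residual = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef

    x_hat = np.zeros(N)
    x_hat[support] = coef
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {err:.2e}")
    ```

    The point of the single-pixel architecture is that only one detector (here, a photon counter) is needed: the spatial information lives in the projection patterns, and reconstruction happens offline.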

  17. Compressed Encoding for Rank Modulation

    Gad, Eyal En; Jiang; Bruck, Jehoshua


    Rank modulation has been recently proposed as a scheme for storing information in flash memories. While rank modulation has advantages in improving write speed and endurance, the current encoding approach is based on the "push to the top" operation, which is not efficient in the general case. We propose a new encoding procedure in which a cell's level is raised above only a minimal necessary subset, instead of all, of the other cell levels. This new procedure leads to a significantly more compressed (lower charge levels) encoding. We derive an upper bound for a family of codes that utilize the proposed encoding procedure, and consider code constructions that achieve that bound for several special cases.
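    The saving can be seen in a toy model (this is not the paper's code construction; the rewrite policies and parameters below are illustrative only). Cells hold charge levels that can only increase, and information is stored in the relative order of the levels. A "push to the top" rewrite raises each cell above the current global maximum, so every rewrite of n cells raises the maximum by n; a minimal-push rewrite raises a cell only when it is not already above the cells that must rank beneath it:

    ```python
    import random

    def push_to_top(levels, target):
        """Realize `target` (cell ids, lowest rank first) by pushing
        each cell above all current levels."""
        for cell in target:
            levels[cell] = max(levels.values()) + 1

    def minimal_push(levels, target):
        """Raise a cell only if it is not already above the cells ranked
        beneath it (charge levels can only increase)."""
        floor = 0
        for cell in target:
            if levels[cell] <= floor:
                levels[cell] = floor + 1
            floor = levels[cell]

    random.seed(1)
    n = 6
    ptt = {i: i + 1 for i in range(n)}    # initial distinct levels 1..n
    mp = dict(ptt)
    for _ in range(5):                    # five successive random rewrites
        target = random.sample(range(n), n)
        push_to_top(ptt, target)
        minimal_push(mp, target)

    print("max level, push-to-top: ", max(ptt.values()))
    print("max level, minimal push:", max(mp.values()))
    ```

    Lower maximum charge levels mean more rewrites before an expensive block erase, which is the motivation for the compressed encoding.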

  18. Construction and compression of Dwarf

    XIANG Long-gang; FENG Yu-cai; GUI Hao


    There exists an inherent difficulty in the original algorithm for the construction of Dwarf, which prevents it from constructing true Dwarfs. We explain when and why it introduces suffix redundancies into the Dwarf structure. To solve this problem, we propose a completely new algorithm called PID. It computes partitions of a fact table bottom-up and inserts them into the Dwarf structure: if a partition is an MSV partition, its sub-Dwarf is coalesced; otherwise the necessary nodes and cells are created. Our performance study shows that PID is efficient. To condense Dwarf further, we propose Condensed Dwarf, a more compressed structure combining the strengths of Dwarf and Condensed Cube. By eliminating unnecessary stores of "ALL" cells from the Dwarf structure, Condensed Dwarf can effectively reduce the size of Dwarf, especially for Dwarfs of the real world, as illustrated by our experiments. Its query processing remains simple, and only two minor modifications to PID are required for the construction of Condensed Dwarf.

  19. Fragment separator momentum compression schemes

    Bandura, Laura, E-mail: [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)


    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.
