WorldWideScience

Sample records for proposed compression scheme

  1. A stable penalty method for the compressible Navier-Stokes equations: II: One-dimensional domain decomposition schemes

    DEFF Research Database (Denmark)

    Hesthaven, Jan

    1997-01-01

    This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given in conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...

  2. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    With the goal of addressing the issue of image compression in wireless multimedia sensor networks (WMSNs) with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is designed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The compressed images are received from the ordinary nodes by the cluster head node, which transmits them to the station; the station performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme achieves higher quality of recovered images and lower total node energy consumption. It reduces the energy burden and prolongs the life of the whole network system, which is of great significance for practical applications of WMSNs.
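
    The core of such a scheme is factoring a matrix of vectorized image blocks V into low-rank non-negative factors W and H, so that only the factors travel over the radio. A minimal sketch follows, using scikit-learn's NMF; the block size and rank are illustrative choices, not values from the paper.

```python
# Sketch of NMF-based block compression, assuming an 8-bit grayscale image
# in a NumPy array; block size and rank r are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

def compress_blocks(img, block=16, rank=4):
    h, w = img.shape
    # Stack non-overlapping blocks as columns of a non-negative matrix V.
    cols = [img[i:i + block, j:j + block].reshape(-1)
            for i in range(0, h - h % block, block)
            for j in range(0, w - w % block, block)]
    V = np.stack(cols, axis=1).astype(float)  # shape (block*block, n_blocks)
    model = NMF(n_components=rank, init='nndsvda', max_iter=400)
    W = model.fit_transform(V)                # shared basis
    H = model.components_                     # per-block coefficients
    return W, H                               # transmit W and H instead of V

def decompress(W, H):
    return W @ H                              # approximate block matrix V'
```

    The payload shrinks roughly by the ratio (W.size + H.size) / V.size, which is the knob a cluster node would tune against recovered image quality.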

  3. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cyclic shift operation controlled by the hyper-chaotic system. The cyclic shift operation changes the pixel values efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution simultaneously. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
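
    The measurement step in such schemes is typically a two-sided matrix product, with a chaos-driven cyclic shift applied afterwards. The sketch below is a rough illustration rather than the paper's algorithm: a logistic map stands in for the hyper-chaotic system, and the random seed stands in for the key.

```python
# Minimal sketch of 2D compressive measurement plus chaotic cyclic shift.
import numpy as np

def measure_2d(X, m):
    n = X.shape[0]
    rng = np.random.default_rng(0)              # key-derived in practice
    Phi1 = rng.standard_normal((m, n)) / np.sqrt(m)
    Phi2 = rng.standard_normal((m, n)) / np.sqrt(m)
    return Phi1 @ X @ Phi2.T                    # compress + encrypt in one step

def cyclic_shift_encrypt(Y, key=0.37):
    x, out = key, Y.copy()
    for i in range(out.shape[0]):
        x = 3.99 * x * (1.0 - x)                # logistic map as chaos stand-in
        out[i] = np.roll(out[i], int(x * out.shape[1]))
    return out
```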

  4. A test data compression scheme based on irrational numbers stored coding.

    Science.gov (United States)

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    The testing problem has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  5. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    Directory of Open Access Journals (Sweden)

    Hai-feng Wu

    2014-01-01

    The testing problem has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  6. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

    When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and limited energy supply impose strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard using MATLAB and C from JasPer. JPEG2000 provides a practical set of features, not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT), and embedded block coding with optimized truncation (EBCOT). Performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  7. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    Science.gov (United States)

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) signals, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT-BIH database. The power reduction is demonstrated using a Bluetooth transceiver; transmission power is reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
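
    The idea generalizes beyond ECG: transmit a coarse lossy layer, and keep an entropy-coded residual that restores the original exactly when needed. A minimal sketch, with uniform quantization standing in for the paper's lossy ECG compressor and zlib standing in for its entropy coder:

```python
# Hybrid lossy + residual scheme: the lossy layer alone gives a high CR;
# lossy layer + compressed residual restores the signal exactly.
import numpy as np, zlib

def hybrid_compress(x, step=8):
    lossy = (x // step) * step                 # coarse, high-CR layer
    residual = (x - lossy).astype(np.int8)     # small-amplitude residual
    return lossy, zlib.compress(residual.tobytes())

def lossless_restore(lossy, packed):
    residual = np.frombuffer(zlib.decompress(packed), dtype=np.int8)
    return lossy + residual

x = np.random.randint(0, 256, 1000).astype(np.int16)
lossy, packed = hybrid_compress(x)
assert np.array_equal(lossless_restore(lossy, packed), x)
```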

  8. Self-adjusting entropy-stable scheme for compressible Euler equations

    Institute of Scientific and Technical Information of China (English)

    Cheng Xiao-Han; Nie Yu-Feng; Feng Jian-Hu; Luo Xiao-Yu; Cai Li

    2015-01-01

    In this work, a self-adjusting entropy-stable scheme is proposed for solving compressible Euler equations. The entropy-stable scheme is constructed by combining the entropy-conservative flux with a suitable diffusion operator. The entropy should be preserved in smooth solutions and dissipated at shocks. To achieve this, a switch function based on entropy variables is employed so that the numerical diffusion term is added automatically around discontinuities. The resulting scheme is still entropy-stable. A number of numerical experiments illustrating the robustness and accuracy of the scheme are presented. From these numerical results, we observe a remarkable gain in accuracy.

  9. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    Science.gov (United States)

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

    One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. This scheme adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values of less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, such as limited payload size and low power consumption.
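
    A rough sketch of the DWT-plus-RLE part of such a pipeline is shown below, assuming PyWavelets; plain quantization stands in for the bit-field preserving step, and the run-length coder only collapses zero runs.

```python
# DWT decomposition, quantization, and zero-run RLE for an ECG segment.
import numpy as np
import pywt

def compress_segment(ecg, wavelet='db4', level=4, q=0.01):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    qc = np.round(flat / q).astype(np.int32)     # many coefficients become 0
    return rle_encode(qc), slices                # slices needed to invert

def rle_encode(a):
    out, i = [], 0
    while i < len(a):
        if a[i] == 0:                            # encode zero runs compactly
            j = i
            while j < len(a) and a[j] == 0:
                j += 1
            out.append(('Z', j - i)); i = j
        else:
            out.append(('V', int(a[i]))); i += 1
    return out
```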

  10. Self-adjusting entropy-stable scheme for compressible Euler equations

    International Nuclear Information System (INIS)

    Cheng Xiao-Han; Nie Yu-Feng; Cai Li; Feng Jian-Hu; Luo Xiao-Yu

    2015-01-01

    In this work, a self-adjusting entropy-stable scheme is proposed for solving compressible Euler equations. The entropy-stable scheme is constructed by combining the entropy-conservative flux with a suitable diffusion operator. The entropy should be preserved in smooth solutions and dissipated at shocks. To achieve this, a switch function based on entropy variables is employed so that the numerical diffusion term is automatically added around discontinuities. The resulting scheme is still entropy-stable. A number of numerical experiments illustrating the robustness and accuracy of the scheme are presented. From these numerical results, we observe a remarkable gain in accuracy. (paper)

  11. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  12. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH-compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH-compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These results indicate that WAH-compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH-compressed indices is much faster than with BBC-compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time...
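
    For orientation, the WAH idea is to group the bitmap into 31-bit chunks and emit either a literal word or a fill word that counts a run of identical chunks. A toy encoder, simplified from the published layout (Python ints stand in for machine words, and no compact output packing is done):

```python
# Toy word-aligned hybrid (WAH) encoder for a list of 0/1 bits.
# Word layout: literal word = flag bit 0 + 31 payload bits;
# fill word = flag bit 1 + fill value bit + run length in 31-bit groups.
def wah_encode(bits, w=32):
    g = w - 1                                   # payload bits per word
    groups = [bits[i:i + g] for i in range(0, len(bits), g)]
    words, i = [], 0
    while i < len(groups):
        grp = groups[i]
        if len(grp) == g and all(b == grp[0] for b in grp):
            j = i                               # count run of identical groups
            while (j < len(groups) and len(groups[j]) == g
                   and all(b == grp[0] for b in groups[j])):
                j += 1
            words.append((1 << (w - 1)) | (grp[0] << (w - 2)) | (j - i))
            i = j
        else:
            lit = 0
            for b in grp:                       # literal word, flag bit 0
                lit = (lit << 1) | b
            words.append(lit)
            i += 1
    return words
```

    Because both operands of an AND/OR stay word-aligned, logical operations can run directly on the fill and literal words without decompression, which is where the CPU-time advantage over BBC comes from.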

  13. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  14. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  15. An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard

    Science.gov (United States)

    Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi

    H.264/AVC is the newest video coding standard. It has many new features that can easily be used for video encryption. In this paper, we propose a new scheme for video encryption within the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strengths) to different parts of the compressed video data. The USE scheme comprises two parts: video data classification and unequal secure video data encryption. First, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition has a small size with high security protection, while the unimportant data partition has a large size with low security protection. Second, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and can ensure high security. LEX is a newer stream cipher based on AES whose computational cost is much lower than that of AES. In this way, our scheme achieves both high security and low computational cost. Besides the USE scheme, we propose a low-cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost of the hybrid AES/LEX module is 4678 gates and the AES encryption throughput is about 50 Mbps.
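
    The structure of such unequal protection is easy to sketch. Since LEX is not available in standard crypto libraries, the sketch below substitutes ChaCha20 as the cheap stream cipher, and the partition contents are placeholders; this illustrates the split, not the authors' implementation.

```python
# Unequal secure encryption sketch: a strong cipher for the small
# "important" partition, a cheaper stream cipher for the bulk.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def use_encrypt(important, unimportant, k_aes, k_stream):
    n1, n2 = os.urandom(16), os.urandom(16)
    strong = Cipher(algorithms.AES(k_aes), modes.CTR(n1)).encryptor()
    fast = Cipher(algorithms.ChaCha20(k_stream, n2), mode=None).encryptor()
    return (n1, strong.update(important) + strong.finalize(),
            n2, fast.update(unimportant) + fast.finalize())

# Placeholder partitions; in H.264/AVC these would come from the bitstream.
ct = use_encrypt(b'headers+motion-vectors', b'residual coefficients...',
                 os.urandom(16), os.urandom(32))
```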

  16. Lightweight SIP/SDP compression scheme (LSSCS)

    Science.gov (United States)

    Wu, Jian J.; Demetrescu, Cristian

    2001-10-01

    In UMTS, new IP-based services with tight delay constraints, such as IP multimedia and interactive services, will be deployed over the W-CDMA air interface. To integrate wireline and wireless IP services, the 3GPP standards forum adopted the Session Initiation Protocol (SIP) as the call control protocol for UMTS Release 5, which will implement next-generation, all-IP networks for real-time QoS services. In its current form, the SIP protocol is not suitable for wireless transmission due to its large message size, which would require either a large radio pipe for transmission or far longer transmission times than the current GSM Call Control (CC) message sequence. In this paper we present a novel compression algorithm called the Lightweight SIP/SDP Compression Scheme (LSSCS), which acts at the SIP application layer and therefore removes information redundancy before the message is sent to the network and transport layers. A binary octet-aligned header is added to the compressed SIP/SDP message before sending it to the network layer. The receiver uses this binary header, together with pre-cached information, to regenerate the original SIP/SDP message. The key features of the LSSCS compression scheme are presented in this paper along with implementation examples. It is shown that this compression algorithm makes SIP transmission efficient over the radio interface without losing SIP generality and flexibility.
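
    The effect of pre-cached information can be approximated with an off-the-shelf codec: if both endpoints share a template of the static SIP/SDP fields, a preset-dictionary deflate already removes most of the redundancy. A sketch with a made-up template; the real LSSCS header and cache layout are not reproduced here.

```python
# Preset-dictionary compression as a stand-in for SIP/SDP pre-caching.
import zlib

TEMPLATE = (b"INVITE sip: SIP/2.0\r\nVia: SIP/2.0/UDP \r\nFrom: \r\nTo: \r\n"
            b"Call-ID: \r\nCSeq: INVITE\r\nContent-Type: application/sdp\r\n")

def compress_sip(msg: bytes) -> bytes:
    c = zlib.compressobj(zdict=TEMPLATE)        # both ends cache the template
    return c.compress(msg) + c.flush()

def decompress_sip(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=TEMPLATE)      # receiver regenerates message
    return d.decompress(blob) + d.flush()
```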

  17. New pellet compression schemes by indirect irradiation of REB and related preliminary experiments

    International Nuclear Information System (INIS)

    Sato, M.; Tazima, T.; Yonezu, H.

    1986-01-01

    Preliminary experiments on a proposed scheme for pellet compression are carried out with a Point Pinch Diode. A high ion beam current density is observed, corresponding to 13.5 kA/cm² from the anode to the cathode. (author)

  18. A parallelizable compression scheme for Monte Carlo scatter system matrices in PET image reconstruction

    International Nuclear Information System (INIS)

    Rehfeld, Niklas; Alber, Markus

    2007-01-01

    Scatter correction techniques in iterative positron emission tomography (PET) reconstruction increasingly utilize Monte Carlo (MC) simulations, which are very well suited to modelling scatter in the inhomogeneous patient. Due to memory constraints, the results of these simulations are not stored in the system matrix, but added or subtracted as a constant term or recalculated in the projector at each iteration. This implies that scatter is not considered in the back-projector. The presented scheme provides a method to store the simulated Monte Carlo scatter in a compressed scatter system matrix. The compression is based on parametrization and B-spline approximation and allows the formation of the scatter matrix from low-statistics simulations. The compression as well as the retrieval of the matrix elements are parallelizable. It is shown that the proposed compression scheme provides sufficient compression so that the storage in memory of a scatter system matrix for a 3D scanner is feasible. Scatter matrices of two different 2D scanner geometries were compressed and used for reconstruction as a proof of concept. Compression ratios of 0.1% could be achieved, and scatter-induced artifacts in the images were successfully reduced by using the compressed matrices in the reconstruction algorithm.

  19. On the modelling of compressible inviscid flow problems using AUSM schemes

    Directory of Open Access Journals (Sweden)

    Hajžman M.

    2007-11-01

    During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first-order accurate, AUSM (Advection Upstream Splitting Method) schemes have proved to be well suited for the modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version, denoted AUSM+, proposed by Liou for the solution of the Euler equations. Mach number splitting functions operating on values from adjacent cells are used to determine the numerical convective fluxes, and pressure splitting is used for the evaluation of the numerical pressure fluxes. Both versions of the AUSM scheme are applied to test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first-order accuracy (Lax-Friedrichs) and of second-order accuracy (MacCormack).
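
    For orientation, the textbook split Mach functions behind AUSM-type schemes take the following form (standard expressions from the literature, not transcribed from this paper); the convective flux is upwinded by the sign of the interface Mach number, and the pressure flux uses analogous polynomial splittings.

```latex
% Standard first- and second-degree split Mach functions for AUSM-type
% schemes, and the resulting interface Mach number and pressure:
\mathcal{M}^{\pm}_{(1)}(M) = \tfrac{1}{2}\,(M \pm |M|), \qquad
\mathcal{M}^{\pm}_{(2)}(M) = \pm\tfrac{1}{4}\,(M \pm 1)^{2} \quad (|M| \le 1),
\\
M_{1/2} = \mathcal{M}^{+}(M_L) + \mathcal{M}^{-}(M_R), \qquad
p_{1/2} = \mathcal{P}^{+}(M_L)\, p_L + \mathcal{P}^{-}(M_R)\, p_R .
```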

  20. Adaptive nonseparable vector lifting scheme for digital holographic data compression.

    Science.gov (United States)

    Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric

    2015-01-01

    Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction.

  1. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    Science.gov (United States)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
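
    A minimal sketch of the scrambling-plus-XOR stage is given below, under loose assumptions: PyWavelets' dwt2 stands in for the lifting wavelet transform, a keyed permutation provides the scrambling, and the ghost-imaging measurement itself is omitted.

```python
# Scramble sparse (wavelet) images with a keyed permutation, then chain
# XOR them before the compressive ghost-imaging measurement stage.
import numpy as np
import pywt

def scramble_xor(images, seed=42):
    rng = np.random.default_rng(seed)           # stands in for the key-group
    sparse = [pywt.dwt2(im, 'haar')[0] for im in images]  # LL subbands as proxies
    perm = rng.permutation(sparse[0].size)
    scrambled = [s.reshape(-1)[perm].reshape(s.shape) for s in sparse]
    q = [np.round(s).astype(np.int32) for s in scrambled]  # quantize for XOR
    out = [q[0]]
    for a in q[1:]:
        out.append(out[-1] ^ a)                 # chained XOR across images
    return out
```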

  2. A Hybrid Data Compression Scheme for Improved VNC

    Directory of Open Access Journals (Sweden)

    Xiaozheng (Jane) Zhang

    2007-04-01

    Virtual Network Computing (VNC) has emerged as a promising technology in distributed computing environments since its invention in the late nineties. Successful application of VNC requires rapid data transfer from one machine to another over a TCP/IP network connection. However, transfer of screen data consumes much network bandwidth, and current data encoding schemes for VNC are far from ideal. This paper seeks to improve screen data compression techniques to enable VNC over slow connections while maintaining reasonable speed and image quality. A hybrid technique is proposed for improving coding efficiency. The algorithm first divides a screen image into pre-defined regions and applies encoding schemes to each area according to the region characteristics. Second, the correlation of screen data in consecutive frames is exploited, whereby multiple occurrences of similar image contents are detected. The improved results are demonstrated in a dynamic environment with various screen image types and desktop manipulations.

  3. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    Science.gov (United States)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted to ensure numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks.

  4. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    Science.gov (United States)

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimal joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor.
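
    The decision logic can be sketched compactly: pick the reference channel that minimizes a residual cost, then entropy-code the differences of the remaining channels against it. zlib stands in for the MPEG-4 ALS entropy coder here, and the absolute-sum cost is an illustrative assumption.

```python
# Reference-channel selection and joint (differential) coding sketch.
import numpy as np, zlib

def choose_reference(channels):
    # pick the channel whose residuals against all others are smallest
    costs = [sum(np.abs(c - r).sum() for c in channels) for r in channels]
    return int(np.argmin(costs))

def joint_encode(channels):
    ref = choose_reference(channels)
    payload = [zlib.compress(channels[ref].tobytes())]   # reference channel
    for i, c in enumerate(channels):
        if i != ref:                                      # residual channels
            payload.append(zlib.compress((c - channels[ref]).tobytes()))
    return ref, payload
```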

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    Science.gov (United States)

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch, thus ignoring the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme where the sink node adaptively queries the interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature measurements and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme.

  6. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which is originally designed for the compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including the Ransom's water faucet problem and the air-water shock tube problem. Finally, several two dimensional problems will show the capability to capture enormous details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between water shock wave and air bubble, between air shock wave and water column(s), and underwater explosion.

  7. On some Approximation Schemes for Steady Compressible Viscous Flow

    Science.gov (United States)

    Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.

    This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes-like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special "effective pressure" in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.

  8. An efficient shock-capturing scheme for simulating compressible homogeneous mixture flow

    Energy Technology Data Exchange (ETDEWEB)

    Dang, Son Tung; Ha, Cong Tu; Park, Warn Gyu [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of); Jung, Chul Min [Advanced Naval Technology Center, NSRDI, ADD, Changwon (Korea, Republic of)

    2016-09-15

    This work is devoted to the development of a procedure for the numerical solution of Navier-Stokes equations for cavitating flows with and without ventilation based on a compressible, multiphase, homogeneous mixture model. The governing equations are discretized on a general structured grid using a high-resolution shock-capturing scheme in conjunction with appropriate limiters to prevent the generation of spurious solutions near shock waves or discontinuities. Two well-known limiters are examined, and a new limiter is proposed to enhance the accuracy and stability of the numerical scheme. A sensitivity analysis is first conducted to determine the relative influences of various model parameters on the solution. These parameters are adopted for the computation of water flows over a hemispherical body, conical body and a divergent/convergent nozzle. Finally, numerical calculations of ventilated supercavitating flows over a hemispherical cylinder body with a hot propulsive jet are conducted to verify the capabilities of the numerical scheme.

  9. An efficient shock-capturing scheme for simulating compressible homogeneous mixture flow

    International Nuclear Information System (INIS)

    Dang, Son Tung; Ha, Cong Tu; Park, Warn Gyu; Jung, Chul Min

    2016-01-01

    This work is devoted to the development of a procedure for the numerical solution of Navier-Stokes equations for cavitating flows with and without ventilation based on a compressible, multiphase, homogeneous mixture model. The governing equations are discretized on a general structured grid using a high-resolution shock-capturing scheme in conjunction with appropriate limiters to prevent the generation of spurious solutions near shock waves or discontinuities. Two well-known limiters are examined, and a new limiter is proposed to enhance the accuracy and stability of the numerical scheme. A sensitivity analysis is first conducted to determine the relative influences of various model parameters on the solution. These parameters are adopted for the computation of water flows over a hemispherical body, conical body and a divergent/convergent nozzle. Finally, numerical calculations of ventilated supercavitating flows over a hemispherical cylinder body with a hot propulsive jet are conducted to verify the capabilities of the numerical scheme

  10. Feedback Compression Schemes for Downlink Carrier Aggregation in LTE-Advanced

    DEFF Research Database (Denmark)

    Nguyen, Hung Tuan; Kovac, Istvan; Wang, Yuanye

    2011-01-01

    With full channel state information (CSI) available, it has been shown that carrier aggregation (CA) in the downlink can significantly improve the data rate experienced at the user equipment (UE) [1], [2], [3], [4]. However, full CSI feedback in all component carriers (CCs) requires a large portion of the uplink bandwidth, and the feedback information increases linearly with the number of CCs. Therefore, the performance gain brought by deploying CA could easily be hindered if the amount of CSI feedback is not thoroughly controlled. In this paper we analyze several feedback overhead compression schemes in CA systems. To avoid a major re-design of the feedback schemes, only CSI compression schemes closely related to the ones specified in LTE Release 8 and LTE Release 9 are considered. Extensive simulations at system level were carried out to evaluate the performance of these feedback compression schemes...

  11. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.

  12. Positivity-preserving CE/SE schemes for solving the compressible Euler and Navier–Stokes equations on hybrid unstructured meshes

    KAUST Repository

    Shen, Hua

    2018-05-28

    We construct positivity-preserving space–time conservation element and solution element (CE/SE) schemes for solving the compressible Euler and Navier–Stokes equations on hybrid unstructured meshes consisting of triangular and rectangular elements. The schemes use an a posteriori limiter to prevent negative densities and pressures based on the premise of preserving optimal accuracy. The limiter enforces a constraint for spatial derivatives and does not change the conservative property of CE/SE schemes. Several numerical examples suggest that the proposed schemes preserve accuracy for smooth flows and strictly preserve positivity of densities and pressures for the problems involving near vacuum and very strong discontinuities.

  13. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    Science.gov (United States)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture, where the event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate the reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.

  14. Preliminary Study on Two Possible Bunch Compression Schemes at NLCTA

    International Nuclear Information System (INIS)

    Sun, Yipeng

    2011-01-01

    For a free-electron laser (FEL) to lase coherently in an undulator, one needs a very bright beam in all three dimensions; in other words, one needs an electron beam with very short bunch length (high intensity), very small transverse emittance and very small energy spread. Most FELs currently being operated, commissioned, constructed or proposed are based on RF acceleration in a frequency range from L-band (~1 GHz) to C-band (~6 GHz). As the RF frequency goes higher, wake field effects tend to be much stronger and jitter tolerances are tighter. To demonstrate that X-band acceleration structures can be applied in constructing an FEL, one could perform bunch compression experiments at NLCTA as a first step, and investigate tolerances on timing jitter, misalignments, etc. Another important point is to evaluate the transverse emittance growth in this bunch compression process. In the following sections, two possible bunch compression schemes are proposed to be tested at NLCTA. Elegant [4] 3-D simulation is performed to evaluate these two schemes, with wake fields, space charge and coherent synchrotron radiation (CSR) effects included. One million macro-particles are adopted in the numerical simulations. The simulation starts with an electron beam of 20 pC at a beam energy of 5 MeV. The initial RMS bunch length is taken as 0.5 ps at such a low bunch charge, and the RMS energy spread is 5 × 10⁻³. The normalized transverse emittance is 1 mm·mrad.

  15. A finite-volume HLLC-based scheme for compressible interfacial flows with surface tension

    Energy Technology Data Exchange (ETDEWEB)

    Garrick, Daniel P. [Department of Aerospace Engineering, Iowa State University, Ames, IA (United States); Owkes, Mark [Department of Mechanical and Industrial Engineering, Montana State University, Bozeman, MT (United States); Regele, Jonathan D., E-mail: jregele@iastate.edu [Department of Aerospace Engineering, Iowa State University, Ames, IA (United States)

    2017-06-15

    Shock waves are often used in experiments to create a shear flow across liquid droplets to study secondary atomization. Similar behavior occurs inside of supersonic combustors (scramjets) under startup conditions, but it is challenging to study these conditions experimentally. In order to investigate this phenomenon further, a numerical approach is developed to simulate compressible multiphase flows under the effects of surface tension forces. The flow field is solved via the compressible multicomponent Euler equations (i.e., the five equation model) discretized with the finite volume method on a uniform Cartesian grid. The solver utilizes a total variation diminishing (TVD) third-order Runge–Kutta method for time-marching and second order TVD spatial reconstruction. Surface tension is incorporated using the Continuum Surface Force (CSF) model. Fluxes are upwinded with a modified Harten–Lax–van Leer Contact (HLLC) approximate Riemann solver. An interface compression scheme is employed to counter numerical diffusion of the interface. The present work includes modifications to both the HLLC solver and the interface compression scheme to account for capillary force terms and the associated pressure jump across the gas–liquid interface. A simple method for numerically computing the interface curvature is developed and an acoustic scaling of the surface tension coefficient is proposed for the non-dimensionalization of the model. The model captures the surface tension induced pressure jump exactly if the exact curvature is known and is further verified with an oscillating elliptical droplet and Mach 1.47 and 3 shock-droplet interaction problems. The general characteristics of secondary atomization at a range of Weber numbers are also captured in a series of simulations.

  16. Fragment separator momentum compression schemes

    Energy Technology Data Exchange (ETDEWEB)

    Bandura, Laura, E-mail: bandura@anl.gov [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)

    2011-07-21

    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.

  17. Fragment separator momentum compression schemes

    International Nuclear Information System (INIS)

    Bandura, Laura; Erdelyi, Bela; Hausmann, Marc; Kubo, Toshiyuki; Nolen, Jerry; Portillo, Mauricio; Sherrill, Bradley M.

    2011-01-01

    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.

  18. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.

    Science.gov (United States)

    Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-11-08

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying-based or multiple-random-walk-based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed by the proposed scheme for spatial-temporal compressible signals can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.
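
    The KCS property rests on a standard Kronecker identity: measuring the two dimensions separately is equivalent to one large Kronecker-structured sensing matrix acting on the vectorized signal. A quick numerical check (row-major vectorization; the sizes are arbitrary):

```python
# vec(Phi_s @ X @ Phi_t.T) == kron(Phi_s, Phi_t) @ vec(X) for row-major vec.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 8))                  # space x time block
Phi_s, Phi_t = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))

Y = Phi_s @ X @ Phi_t.T                          # per-dimension measurements
y = np.kron(Phi_s, Phi_t) @ X.reshape(-1)        # equivalent KCS sensing matrix
assert np.allclose(Y.reshape(-1), y)
```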

  19. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  20. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Thus, combining the processes of compression and encryption must be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies which are inherently a part of regular text or speech. The aim of this research is to combine the process of compression (using an existing scheme) with a new encryption scheme which is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique, and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
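
    The compress-then-encrypt ordering is easy to demonstrate: a good cipher's output looks random, so a compressor finds nothing to squeeze afterwards. A sketch using AES-CTR from the `cryptography` package as a generic cipher (not the authors' TR-One scheme):

```python
# Compressing after encryption gains nothing; compress first, then encrypt.
import os, zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data = b"to be or not to be, that is the question. " * 100
key, nonce = os.urandom(16), os.urandom(16)

enc1 = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
good = enc1.update(zlib.compress(data)) + enc1.finalize()  # compress, then encrypt

enc2 = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
bad = zlib.compress(enc2.update(data) + enc2.finalize())   # encrypt, then compress

print(len(data), len(good), len(bad))  # 'bad' is no smaller than the plaintext
```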

  1. Pressure correction schemes for compressible flows

    International Nuclear Information System (INIS)

    Kheriji, W.

    2011-01-01

    This thesis is concerned with the development of semi-implicit fractional step schemes for the compressible Navier-Stokes equations; these schemes are part of the class of pressure correction methods. The chosen spatial discretization is staggered: non-conforming mixed finite elements (Crouzeix-Raviart or Rannacher-Turek) or the classical MAC scheme. An upwind finite volume discretization of the mass balance guarantees the positivity of the density. The positivity of the internal energy is obtained by discretizing the internal energy balance with an upwind finite volume scheme and by coupling the discrete internal energy balance with the pressure correction step. A special finite volume discretization on dual cells is performed for the convection term in the momentum balance equation, and a renormalization step for the pressure is added to the algorithm; this ensures control in time of the integral of the total energy over the domain. All these a priori estimates imply the existence of a discrete solution by a topological degree argument. The application of this scheme to the Euler equations raises an additional difficulty. Indeed, obtaining correct shocks requires the scheme to be consistent with the total energy balance, a property which we obtain as follows. First of all, a local discrete kinetic energy balance is established; it contains source terms which we compensate in the internal energy balance. The kinetic and internal energy equations are associated with the dual and primal meshes respectively, and thus cannot be added to obtain a total energy balance; its continuous counterpart is however recovered at the limit: if we suppose that a sequence of discrete solutions converges when the space and time steps tend to 0, we show, in 1D at least, that the limit satisfies a weak form of the equation. These theoretical results are supported by numerical tests. Similar results are obtained for the barotropic Navier-Stokes equations. (author)

  2. Central upwind scheme for a compressible two-phase flow model.

    Science.gov (United States)

    Ahmed, Munshoor; Saleem, M Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme.
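
    For a 1D scalar conservation law, the central upwind flux needs only one-sided local speed estimates rather than a Riemann solver at each interface. A minimal sketch for Burgers' equation (used here as a stand-in for the five-equation model actually treated in the paper):

        import numpy as np

        def f(u):                                     # Burgers flux f(u) = u^2/2
            return 0.5 * u * u

        def central_upwind_flux(uL, uR):
            """Kurganov-type central upwind flux for interface states uL, uR."""
            ap = np.maximum(np.maximum(uL, uR), 0.0)  # rightmost speed, a+ >= 0
            am = np.minimum(np.minimum(uL, uR), 0.0)  # leftmost speed,  a- <= 0
            denom = np.where(ap - am > 0.0, ap - am, 1.0)
            flux = (ap * f(uL) - am * f(uR)) / denom + ap * am / denom * (uR - uL)
            return np.where(ap - am > 0.0, flux, 0.0)  # both speeds zero: no flux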

  3. Central upwind scheme for a compressible two-phase flow model.

    Directory of Open Access Journals (Sweden)

    Munshoor Ahmed

    Full Text Available In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme.

  4. A Hybrid Data Compression Scheme for Improved VNC

    OpenAIRE

    Xiaozheng (Jane) Zhang; Hirofumi Takahashi

    2007-01-01

    Virtual Network Computing (VNC) has emerged as a promising technology in distributed computing environments since its invention in the late nineties. Successful application of VNC requires rapid data transfer from one machine to another over a TCP/IP network connection. However, transfer of screen data consumes much network bandwidth, and current data encoding schemes for VNC are far from ideal. This paper seeks to improve screen data compression techniques to enable VNC over slow connections.

  5. Layered compression for high-precision depth data.

    Science.gov (United States)

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to high-precision depth data with more than 8-b precision has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the data format of the LSBs layer remains 8 b after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes because of the error control algorithm.
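
    The layer partition itself is a plain bit split; a NumPy sketch for 16-b depth maps (the error-controllable MSB codec and the 8-b video codec are beyond this sketch):

        import numpy as np

        def split_layers(depth16: np.ndarray):
            """Split a 16-b depth map into 8-b MSBs and LSBs layers."""
            msb = (depth16 >> 8).astype(np.uint8)    # rough depth distribution
            lsb = (depth16 & 0xFF).astype(np.uint8)  # fine depth variation
            return msb, lsb

        def merge_layers(msb: np.ndarray, lsb: np.ndarray) -> np.ndarray:
            return (msb.astype(np.uint16) << 8) | lsb

        depth = np.random.default_rng(0).integers(0, 1 << 16, (4, 4), dtype=np.uint16)
        m, l = split_layers(depth)
        assert np.array_equal(merge_layers(m, l), depth)  # lossless round trip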

  6. Development of a discrete gas-kinetic scheme for simulation of two-dimensional viscous incompressible and compressible flows.

    Science.gov (United States)

    Yang, L M; Shu, C; Wang, Y

    2016-03-01

    In this work, a discrete gas-kinetic scheme (DGKS) is presented for simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. For the circular function-based GKS, the integrals for conservation forms of moments over the infinite domain in the Maxwellian function-based GKS are simplified to integrals along a circle. As a result, explicit formulations of the conservative variables and fluxes are derived. However, these explicit formulations of the circular function-based GKS for viscous flows are still complicated, and may not be easy for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments for the circular function-based GKS be accurately satisfied by a weighted summation of distribution functions at the discrete points. In this work, it is shown that integral quadrature by four discrete points on the circle, which forms the D2Q4 discrete velocity model, exactly matches the integrals. Numerical results show that the present scheme provides accurate results for incompressible and compressible viscous flows at roughly the same computational cost as that of the Roe scheme.
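
    The core requirement, that weighted sums over the discrete points reproduce the circle integrals of the needed moments, can be checked numerically. A sketch for the second-order moments with four equally weighted points (the radius value is illustrative):

        import numpy as np

        c = 1.3                                                  # circle radius
        pts = c * np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])   # D2Q4 points
        w = np.full(4, 0.25)                                     # equal weights

        theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
        circ = c * np.stack([np.cos(theta), np.sin(theta)], axis=1)

        for i, j in [(0, 0), (1, 1), (0, 1)]:
            exact = np.mean(circ[:, i] * circ[:, j])     # circle average
            dq4 = np.sum(w * pts[:, i] * pts[:, j])      # weighted summation
            print((i, j), round(exact, 6), round(dq4, 6))  # c^2/2, c^2/2, 0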

  7. An Electron Bunch Compression Scheme for a Superconducting Radio Frequency Linear Accelerator Driven Light Source

    Energy Technology Data Exchange (ETDEWEB)

    C. Tennant, S.V. Benson, D. Douglas, P. Evtushenko, R.A. Legg

    2011-09-01

    We describe an electron bunch compression scheme suitable for use in a light source driven by a superconducting radio frequency (SRF) linac. The key feature is the use of a recirculating linac to perform the initial bunch compression. Phasing of the second-pass beam through the linac is chosen to de-chirp the electron bunch prior to acceleration to the final energy in an SRF linac ('afterburner'). The final bunch compression is then done at maximum energy. This scheme has the potential to circumvent some of the most technically challenging aspects of current longitudinal matches, namely transporting a fully compressed, high peak current electron bunch through an extended SRF environment, the need for an RF harmonic linearizer and the need for a laser heater. Additional benefits include substantial savings in capital and operational costs by efficiently using the available SRF gradient.

  8. Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows

    Science.gov (United States)

    Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.

    2016-08-01

    The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover the 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments be exactly satisfied by a weighted summation of distribution functions at the discrete points. It was found that integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, exactly matches the integrals. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resulting from the integrals can be replaced by a simple solution process. Several numerical examples, including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around the DPW-W1 wing and supersonic flow around the NACA0012 airfoil, are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.

  9. Harmonic analysis in integrated energy system based on compressed sensing

    International Nuclear Information System (INIS)

    Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia

    2016-01-01

    Highlights: • We propose a harmonic/inter-harmonic analysis scheme with compressed sensing theory. • The sparseness of harmonic signals in electrical power systems is proved. • The ratio formula of fundamental and harmonic components sparsity is presented. • The Spectral Projected Gradient-Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of Integrated Energy Systems has enabled various distributed energy resources to access the system through different power electronic devices, which has made the harmonic environment more complex. Low-complexity, high-precision harmonic detection and analysis methods are needed to improve power quality. To overcome the large data storage requirements and the high compression complexity of sampling under the Nyquist framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs the functions of compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the basis of the Discrete Fourier Transform (DFT) is numerically calculated first. This is followed by a proof that the necessary conditions for compressed sensing are satisfied. Binary sparse measurement is then leveraged to reduce the storage space of the sampling unit. In the recovery process, a novel reconstruction algorithm called the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good
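
    The sparsity premise, namely that a power signal made of a fundamental plus a few harmonics has only a handful of significant DFT coefficients, is simple to verify; the frequencies and amplitudes below are illustrative:

        import numpy as np

        fs, n = 3200.0, 512                          # sampling rate, window length
        t = np.arange(n) / fs
        x = (np.sin(2 * np.pi * 50 * t)              # 50 Hz fundamental
             + 0.20 * np.sin(2 * np.pi * 150 * t)    # 3rd harmonic
             + 0.05 * np.sin(2 * np.pi * 350 * t))   # 7th harmonic

        X = np.abs(np.fft.rfft(x)) / n
        k = np.flatnonzero(X > 0.01)                 # significant DFT bins
        print(k, "->", k.size, "of", X.size)         # [ 8 24 56] -> 3 of 257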

  10. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    Science.gov (United States)

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which the APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
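
    The quantity driving both the classification and the sorting is the absolute Pearson correlation between two blocks; a minimal NumPy sketch:

        import numpy as np

        def apcc(block_a: np.ndarray, block_b: np.ndarray) -> float:
            """Absolute Pearson correlation coefficient between two blocks."""
            a = block_a.ravel() - block_a.mean()
            b = block_b.ravel() - block_b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return 0.0 if denom == 0.0 else float(abs(a @ b) / denom)

    APCC equals 1 exactly when one block is an affine transform of the other, which is why it captures the affine similarity used in fractal matching.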

  11. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on datasets in common use achieve space savings of about 10% over existing methods.
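
    The benefit of a BFS relabeling is that neighbors of a node receive numerically close identifiers, so gap (difference) coding of adjacency lists yields small integers that compress well. A sketch of the two ingredients (the paper's actual codes for the gaps are not reproduced):

        from collections import deque

        def bfs_relabel(adj, root=0):
            """Map each node to its BFS discovery index."""
            order, seen, q = [], {root}, deque([root])
            while q:
                u = q.popleft()
                order.append(u)
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        q.append(v)
            return {node: i for i, node in enumerate(order)}

        def gap_encode(neighbors):
            """Sorted adjacency list -> first id plus small positive gaps."""
            s = sorted(neighbors)
            return s[:1] + [b - a for a, b in zip(s, s[1:])]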

  12. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

    Full Text Available In this paper a simple and fast image compression scheme is proposed. It is based on using the wavelet transform to decompose the image signal and then polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is hybrid: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. As the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcome of the previous stages. The test results indicate that the proposed system can produce promising compression performance while preserving the image quality level.

  13. Subset-sum phase transitions and data compression

    Science.gov (United States)

    Merhav, Neri

    2011-09-01

    We propose a rigorous analysis approach for the subset-sum problem in the context of lossless data compression, where the phase transition of the subset-sum problem is directly related to the passage between ambiguous and non-ambiguous decompression, for a compression scheme that is based on specifying the sequence composition. The proposed analysis lends itself to straightforward extensions in several directions of interest, including non-binary alphabets, incorporation of side information at the decoder (Slepian-Wolf coding), and coding schemes based on multiple subset sums. It is also demonstrated that the proposed technique can be used to analyze the critical behavior in a more involved situation where the sequence composition is not specified by the encoder.

  14. Decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio.

    Science.gov (United States)

    Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan

    2016-10-01

    A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function, including the rotational velocity of the particle, is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity and that of the rotational velocity. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, one related to the translational velocity and the other to the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity, and the two evolution equations evolve separately. Since the lattice Boltzmann models used in the proposed scheme are constructed via the Hermite expansion, it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.

  15. Cellular automata codebooks applied to compact image compression

    Directory of Open Access Journals (Sweden)

    Radu DOGARU

    2006-12-01

    Full Text Available Emergent computation in semi-totalistic cellular automata (CA) is used to generate a set of basis vectors (a codebook). Such codebooks are convenient for simple and circuit-efficient compression schemes based on binary vector quantization, applied to the bitplanes of any monochrome or color image. Encryption is also naturally included using these codebooks. Natural images require less than 0.5 bits per pixel (bpp) while the quality of the reconstructed images is comparable with traditional compression schemes. The proposed scheme is attractive for low-power, sensor-integrated applications.

  16. Proposal of Wireless Traffic Control Schemes for Wireless LANs

    Science.gov (United States)

    Hiraguri, Takefumi; Ichikawa, Takeo; Iizuka, Masataka; Kubota, Shuji

    This paper proposes two traffic control schemes to support the communication quality of multimedia streaming services such as VoIP and audio/video over IEEE 802.11 wireless LAN systems. The main features of the proposed schemes are bandwidth control for each flow of the multimedia streaming service and load balancing between access points (APs) of the wireless LAN, using information from the data link, network and transport layers. The proposed schemes are implemented on a Linux machine called the wireless traffic controller (WTC). The WTC connects a high-capacity backbone network and an access network to which the APs are attached. We evaluated the performance of the proposed WTC and confirmed that the communication quality of multimedia streaming is greatly improved by this technique.

  17. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required for the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of the block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
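
    On the encoder side each block costs one matrix-vector product, with the number of measurements set by the estimated sparse degree; on the decoder side reconstruction is a single linear projection. A sketch (the paper's MMSE-learned projection is replaced by its standard closed form under a known, assumed signal covariance):

        import numpy as np

        rng = np.random.default_rng(0)

        def measure_block(block: np.ndarray, rate: float):
            """CS measurement; `rate` would come from the block gradient field."""
            n = block.size
            m = max(1, int(round(rate * n)))         # adaptive measurement count
            phi = rng.standard_normal((m, n)) / np.sqrt(m)
            return phi, phi @ block.ravel().astype(float)

        def linear_decoder(phi: np.ndarray, rx: np.ndarray) -> np.ndarray:
            """Linear MMSE reconstruction: P = Rx Phi^T (Phi Rx Phi^T)^-1."""
            return rx @ phi.T @ np.linalg.inv(phi @ rx @ phi.T)

        # x_hat = linear_decoder(phi, rx) @ y  recovers the block in one product.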

  18. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Full Text Available Aimed at the low energy consumption required for the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of the block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.

  19. A Novel 1D Hybrid Chaotic Map-Based Image Compression and Encryption Using Compressed Sensing and Fibonacci-Lucas Transform

    Directory of Open Access Journals (Sweden)

    Tongfeng Zhang

    2016-01-01

    Full Text Available A one-dimensional (1D) hybrid chaotic system is constructed from three different 1D chaotic maps in a parallel-then-cascade fashion. The proposed chaotic map has a larger key space and exhibits a better uniform distribution property in some parametric range compared with existing 1D chaotic maps. Meanwhile, with the combination of compressive sensing (CS) and the Fibonacci-Lucas transform (FLT), a novel image compression and encryption scheme is proposed with the advantages of the 1D hybrid chaotic map. The whole encryption procedure includes compression by compressed sensing (CS), scrambling with the FLT, and diffusion after linear scaling. The Bernoulli measurement matrix in CS is generated by the proposed 1D hybrid chaotic map due to its excellent uniform distribution. To enhance the security and complexity, the transform kernel of the FLT varies in each permutation round according to the generated chaotic sequences. Further, the key streams used in the diffusion process depend on the chaotic map as well as the plain image, which helps resist chosen-plaintext attack (CPA). Experimental results and security analyses demonstrate the validity of our scheme in terms of high security and robustness against noise attack and cropping attack.
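
    A chaotic map with a near-uniform orbit can drive the Bernoulli measurement matrix by thresholding its iterates. The sketch below uses the plain logistic map as a stand-in for the paper's 1D hybrid map:

        import numpy as np

        def logistic_orbit(x0: float, n: int, r: float = 3.99) -> np.ndarray:
            xs, x = np.empty(n), x0
            for i in range(n):
                x = r * x * (1.0 - x)                # chaotic iteration
                xs[i] = x
            return xs

        def bernoulli_matrix(m: int, n: int, key: float = 0.37) -> np.ndarray:
            """Keyed +/-1 Bernoulli measurement matrix for compressive sensing."""
            seq = logistic_orbit(key, m * n)
            return np.where(seq > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)

    The initial condition acts as a secret key: a receiver holding the same key regenerates the identical matrix without it ever being transmitted.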

  20. Encryption of Stereo Images after Compression by Advanced Encryption Standard (AES

    Directory of Open Access Journals (Sweden)

    Marwah k Hussien

    2018-04-01

    Full Text Available New partial encryption schemes are proposed, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied after application of the image compression algorithm. Only 0.0244%-25% of the original data is encrypted for two pairs of different grayscale images with the size (256 × 256) pixels. As a result, we see a significant reduction of time in the stages of encryption and decryption. In the compression step, the Orthogonal Search Algorithm (OSA) for motion estimation (the difference between stereo images) is used. The resulting disparity vector and the remaining image were compressed by Discrete Cosine Transform (DCT), quantization and arithmetic encoding. The compressed image was encrypted by the Advanced Encryption Standard (AES). The images were then decoded and compared with the original images. Experimental results showed good results in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR) and processing time. The proposed partial encryption schemes are fast, secure and do not reduce the compression performance of the underlying compression methods.
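
    The economics of partial encryption show up clearly in a sketch: only the leading fraction of the compressed stream passes through AES. Here zlib stands in for the paper's OSA/DCT/arithmetic-coding pipeline, and AES-CTR from the third-party cryptography package for its AES stage (both substitutions are assumptions):

        import os
        import zlib
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def partial_encrypt(image_bytes: bytes, key: bytes, fraction: float = 0.05):
            """Compress, then encrypt only the head of the stream (key: 16/24/32 B)."""
            compressed = zlib.compress(image_bytes)         # compression first
            cut = max(16, int(len(compressed) * fraction))  # small encrypted head
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            head = enc.update(compressed[:cut]) + enc.finalize()
            return nonce, head, compressed[cut:]            # tail stays plaintext

    Because an entropy-coded stream cannot be decoded without its leading state, encrypting a few percent of the data can render the rest unusable to an attacker at a fraction of the cost of full encryption.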

  1. OTDM-WDM Conversion Based on Time-Domain Optical Fourier Transformation with Spectral Compression

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Palushani, Evarist; Galili, Michael

    2011-01-01

    We propose a scheme enabling direct serial-to-parallel conversion of OTDM data tributaries onto a WDM grid, based on optical Fourier transformation with spectral compression. Demonstrations on 320 Gbit/s and 640 Gbit/s OTDM data are shown.

  2. A high capacity text steganography scheme based on LZW compression and color coding

    Directory of Open Access Journals (Sweden)

    Aruna Malik

    2017-02-01

    Full Text Available In this paper, capacity and security issues of text steganography have been considered by employing the LZW compression technique and a color coding based approach. The proposed technique uses the forward mail platform to hide the secret data. The algorithm first compresses the secret data and then hides the compressed secret data in the email addresses and in the cover message of the email. The secret data bits are embedded in the message (or cover text) by coloring it using a color coding table. Experimental results show that the proposed method not only produces a high embedding capacity but also reduces computational complexity. Moreover, the security of the proposed method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparison with recently developed existing techniques.
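
    The compression stage is classic LZW; a compact dictionary-building encoder (the color-coding embedding itself is not sketched):

        def lzw_compress(data: bytes) -> list:
            """LZW: emit dictionary indices while growing the phrase dictionary."""
            dictionary = {bytes([i]): i for i in range(256)}
            w, out = b"", []
            for byte in data:
                wc = w + bytes([byte])
                if wc in dictionary:
                    w = wc                        # extend the current phrase
                else:
                    out.append(dictionary[w])     # emit code for longest match
                    dictionary[wc] = len(dictionary)
                    w = bytes([byte])
            if w:
                out.append(dictionary[w])
            return out

        print(len(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")))  # 24 bytes -> 16 codes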

  3. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Directory of Open Access Journals (Sweden)

    Yin Zhanping

    2005-01-01

    Full Text Available This paper presents a novel proxy architecture for the wireless application protocol (WAP) 2.0 employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delays (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic data and thus significantly reduces the wireless access time. By using content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem in WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme reduces the wireless access time of WAP 2.0 by over 45% in CDMA2000 1XRTT channels and, in low-speed IS-95 channels, substantially reduces access time to give performance comparable to WAP 1.x. The performance enhancement is mainly contributed by the reply content compression, with ROHC offering further enhancements.

  4. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and the guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
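
    Projected Landweber is a gradient step on the data-fit term followed by projection onto the known signal range; a minimal sketch (the guided-filter denoising stage is omitted):

        import numpy as np

        def projected_landweber(A, y, iters=200, lo=0.0, hi=1.0):
            """Recover x from y = A x by Landweber iteration with box projection."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensures convergence
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = x + step * (A.T @ (y - A @ x))   # gradient step on ||Ax - y||^2/2
                x = np.clip(x, lo, hi)               # project onto [lo, hi]
            return x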

  5. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.

  6. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Directory of Open Access Journals (Sweden)

    Yin Zhanping

    2005-01-01

    Full Text Available This paper presents a novel proxy architecture for the wireless application protocol (WAP) 2.0 employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delays (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic data and thus significantly reduces the wireless access time. By using content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem in WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme reduces the wireless access time of WAP 2.0 by over 45% in CDMA2000 1XRTT channels and, in low-speed IS-95 channels, substantially reduces access time to give performance comparable to WAP 1.x. The performance enhancement is mainly contributed by the reply content compression, with ROHC offering further enhancements.

  7. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping more details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct neutron radiation images from random measurements. The experimental results demonstrate that the scheme not only improves the quality of the reconstructed image obviously but also retains more details of the original image.

  8. Flux Limiter Lattice Boltzmann Scheme Approach to Compressible Flows with Flexible Specific-Heat Ratio and Prandtl Number

    International Nuclear Information System (INIS)

    Gan Yanbiao; Li Yingjun; Xu Aiguo; Zhang Guangcai

    2011-01-01

    We further develop the lattice Boltzmann (LB) model [Physica A 382 (2007) 502] for compressible flows in two aspects. Firstly, we modify the Bhatnagar-Gross-Krook (BGK) collision term in the LB equation, which makes the model suitable for simulating flows with different Prandtl numbers. Secondly, the flux limiter finite difference (FLFD) scheme is employed to calculate the convection term of the LB equation, which effectively suppresses the unphysical oscillations at discontinuities and significantly diminishes the numerical dissipation. The proposed model is validated by recovering results of some well-known benchmarks, including (i) the thermal Couette flow; (ii) one- and two-dimensional Riemann problems. Good agreement is obtained between the LB results and the exact ones or previously reported solutions. The flexibility, together with the high accuracy, of the new model endows it with considerable potential for tracking some long-standing problems and for investigating nonlinear nonequilibrium complex systems. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  9. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  10. JPEG XS call for proposals subjective evaluations

    Science.gov (United States)

    McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit

    2017-09-01

    In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup, the evaluation process and summarizes the obtained results which were achieved in the context of the JPEG XS standardization process.

  11. Blind compressive sensing dynamic MRI

    Science.gov (United States)

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions, compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding.

  12. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the codebook information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
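
    The adaptivity amounts to re-estimating symbol probabilities from the counts seen so far in each block; a sketch of such a model (the arithmetic coder driving it is standard and omitted):

        import numpy as np

        class AdaptiveModel:
            """Per-block adaptive probability model for an arithmetic coder."""

            def __init__(self, n_symbols: int = 256):
                self.counts = np.ones(n_symbols)     # Laplace-smoothed counts

            def prob(self, symbol: int) -> float:
                return self.counts[symbol] / self.counts.sum()

            def update(self, symbol: int) -> None:
                self.counts[symbol] += 1             # adapt as the block is coded

    An arithmetic coder spends roughly -log2(prob(s)) bits on each symbol s, so the frequent small residuals left by the predictive stage become very cheap.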

  13. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    Science.gov (United States)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  14. Lossless compression for 3D PET

    International Nuclear Information System (INIS)

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.

    1994-01-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). In contrast to Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks, a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimation of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners, and it asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an ASIC implementing a similar UVLC scheme is available today, it should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines.
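
    The pipeline is differential coding followed by a variable-length code on the residuals; the sketch below uses simple delta prediction and Elias gamma codes as stand-ins for the paper's ADPCM/UVLC pair:

        def zigzag(d: int) -> int:
            """Map signed residual to a positive integer: 0,-1,1,-2,2 -> 1,2,3,4,5."""
            return 2 * d + 1 if d >= 0 else -2 * d

        def elias_gamma(n: int) -> str:
            """Elias gamma code for n >= 1: (len-1) zeros, then binary of n."""
            b = bin(n)[2:]
            return "0" * (len(b) - 1) + b

        def encode_block(samples):
            bits, prev = [], 0
            for s in samples:
                bits.append(elias_gamma(zigzag(s - prev)))  # code prediction error
                prev = s
            return "".join(bits)

        print(encode_block([100, 101, 101, 99, 100]))  # small deltas -> short codes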

  15. A combined finite volume-nonconforming finite element scheme for compressible two phase flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed; Saad, Mazen Naufal B M

    2014-01-01

    We propose and analyze a combined finite volume-nonconforming finite element scheme on general meshes to simulate compressible two-phase flow in porous media. The diffusion term, which can be anisotropic and heterogeneous, is discretized by piecewise linear nonconforming triangular finite elements. The other terms are discretized by means of a cell-centered finite volume scheme on a dual mesh, where the dual volumes are constructed around the sides of the original mesh. The relative permeability of each phase is decentred according to the sign of the velocity at the dual interface. This technique also ensures the validity of the discrete maximum principle for the saturation under a non-restrictive shape regularity of the space mesh and the positiveness of all transmissibilities. Next, a priori estimates on the pressures and on a function of the saturation that denotes capillary terms are established. These stability results lead to compactness arguments based on the use of the Kolmogorov compactness theorem, and allow us to derive the convergence of a subsequence of the sequence of approximate solutions to a weak solution of the continuous equations, provided the mesh size tends to zero. The proof is given for the complete system when the density of each phase depends on its own pressure. © 2014 Springer-Verlag Berlin Heidelberg.

  16. A combined finite volume-nonconforming finite element scheme for compressible two phase flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2014-06-28

    We propose and analyze a combined finite volume-nonconforming finite element scheme on general meshes to simulate compressible two-phase flow in porous media. The diffusion term, which can be anisotropic and heterogeneous, is discretized by piecewise linear nonconforming triangular finite elements. The other terms are discretized by means of a cell-centered finite volume scheme on a dual mesh, where the dual volumes are constructed around the sides of the original mesh. The relative permeability of each phase is decentred according to the sign of the velocity at the dual interface. This technique also ensures the validity of the discrete maximum principle for the saturation under a non-restrictive shape regularity of the space mesh and the positiveness of all transmissibilities. Next, a priori estimates on the pressures and on a function of the saturation that denotes capillary terms are established. These stability results lead to compactness arguments based on the use of the Kolmogorov compactness theorem, and allow us to derive the convergence of a subsequence of the sequence of approximate solutions to a weak solution of the continuous equations, provided the mesh size tends to zero. The proof is given for the complete system when the density of each phase depends on its own pressure. © 2014 Springer-Verlag Berlin Heidelberg.

  17. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of our proposed method, and the results prove its superiority to its counterparts.

  18. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  19. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  20. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    Science.gov (United States)

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms are a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world Electroencephalography (EEG) and Electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, so the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the relative negative effect on compression latency as opposed to the increased compression rate. The proposed schemes managed to offer a considerable advantage, especially in achieving an optimal tradeoff between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive compression rate with minimum latency, thus exhibiting real-time capability. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the

  1. Compressing bitmap indexes for faster search operations

    International Nuclear Information System (INIS)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-01-01

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to a similar improvement in query processing speed
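
    The word alignment is what keeps logical operations CPU-friendly: bits are grouped into 31-bit chunks, runs of identical chunks collapse into a single fill word, and mixed chunks are stored verbatim as literal words, so AND/OR can proceed word by word. A toy encoder sketch (32-bit words; details simplified relative to the actual WAH format):

        def wah_encode(bits, word=32):
            """Toy WAH: literal words (MSB 0) and fill words (MSB 1 + bit + run)."""
            g = word - 1                                # payload bits per group
            bits = list(bits) + [0] * (-len(bits) % g)  # pad to whole groups
            words, fill_bit, run = [], None, 0

            def flush():
                nonlocal fill_bit, run
                if run:
                    words.append((1 << (word - 1)) | (fill_bit << (word - 2)) | run)
                    fill_bit, run = None, 0

            for i in range(0, len(bits), g):
                grp = bits[i:i + g]
                if min(grp) == max(grp):                # uniform group: extend fill
                    if fill_bit is not None and fill_bit != grp[0]:
                        flush()
                    fill_bit, run = grp[0], run + 1
                else:                                   # mixed group: literal word
                    flush()
                    lit = 0
                    for b in grp:
                        lit = (lit << 1) | b
                    words.append(lit)
            flush()
            return words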

  2. Compressing bitmap indexes for faster search operations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to a similar improvement in query processing speed.

  3. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    Science.gov (United States)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, it is the first report on embedding a color watermark into a grayscale host image, which an attacker would not expect. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of noise and occlusion robustness.

  4. Compression and decompression of digital seismic waveform data for storage and communication

    International Nuclear Information System (INIS)

    Bhadauria, Y.S.; Kumar, Vijai

    1991-01-01

    Two different classes of data compression schemes, namely physical data compression schemes and logical data compression schemes, are examined for their use in the storage and communication of digital seismic waveform data. In physical data compression schemes, the physical size of the waveform is reduced. One therefore gets only a broad picture of the original waveform when the data are retrieved and the waveform is reconstituted. Correlation between the original and decompressed waveforms varies inversely with the data compression ratio. In logical data compression schemes, the data are stored in a logically encoded form, and storage of unnecessary characters like blank space is avoided. On decompression, the original data are retrieved and the compression error is nil. Three algorithms of the logical data compression class have been developed and studied: 1) an optimum formatting scheme, 2) a differential bit reduction scheme, and 3) a six-bit compression scheme. Results of these three logical compression algorithms are compared with those of physical compression schemes reported in the literature. It is found that for all types of data, the six-bit compression scheme gives the highest data compression ratio. (author). 6 refs., 8 figs., 1 appendix, 2 tabs

  5. Evolution system study of a generalized scheme of relativistic magnetohydrodynamic

    International Nuclear Information System (INIS)

    Mahjoub, Bechir.

    1977-01-01

    A generalized scheme of relativistic magnetohydrodynamics is studied with a thermodynamical differential relation proposed by Fokker; this scheme takes account of the interaction between the fluid and the magnetic field. Taking account of an integrability condition of this relation, the evolution system corresponding to this scheme is identical to the one corresponding to the usual scheme; it has the same characteristics; it is non-strictly hyperbolic under the same hypothesis of compressibility, and it has, with respect to the Cauchy problem, a unique solution in a Gevrey class of index α=3/2 [fr]

  6. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. Recently, however, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at the pixel level nor at the perceptual level, but at the semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate savings compared to using PSNR or SSIM. Moreover, we perform a subjective test on text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction towards achieving higher compression ratios for specific semantic analysis tasks.

  7. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
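
    The core referential idea, stripped of all of FRESCO's engineering, can be sketched as a greedy matcher that emits (position, length) copies from the reference plus occasional literals. The quadratic search below is for illustration only, not the paper's algorithm:

        def ref_compress(target, reference, min_match=4):
            """Toy greedy referential compressor: emit (pos, len) copies from the
            reference, falling back to literals (O(n*m) search, illustration only)."""
            ops, i = [], 0
            while i < len(target):
                best_len, best_pos = 0, -1
                for pos in range(len(reference)):
                    l = 0
                    while (i + l < len(target) and pos + l < len(reference)
                           and reference[pos + l] == target[i + l]):
                        l += 1
                    if l > best_len:
                        best_len, best_pos = l, pos
                if best_len >= min_match:
                    ops.append(("copy", best_pos, best_len))
                    i += best_len
                else:
                    ops.append(("lit", target[i]))
                    i += 1
            return ops

        def ref_decompress(ops, reference):
            out = []
            for op in ops:
                if op[0] == "copy":
                    _, pos, length = op
                    out.append(reference[pos:pos + length])
                else:
                    out.append(op[1])
            return "".join(out)

        ref = "ACGTACGTTTGACCA"
        tgt = "ACGTACGATTGACCA"          # one substitution vs. the reference
        assert ref_decompress(ref_compress(tgt, ref), ref) == tgt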

  8. Electricity storage using a thermal storage scheme

    Energy Technology Data Exchange (ETDEWEB)

    White, Alexander, E-mail: ajw36@cam.ac.uk [Hopkinson Laboratory, Cambridge University Engineering Department, Trumpington Street, Cambridge. CB2 1PZ (United Kingdom)

    2015-01-22

    The increasing use of renewable energy technologies for electricity generation, many of which have an unpredictably intermittent nature, will inevitably lead to a greater demand for large-scale electricity storage schemes. For example, the expanding fraction of electricity produced by wind turbines will require either backup or storage capacity to cover extended periods of wind lull. This paper describes a recently proposed storage scheme, referred to here as Pumped Thermal Storage (PTS), which is based on “sensible heat” storage in large thermal reservoirs. During the charging phase, the system effectively operates as a high temperature-ratio heat pump, extracting heat from a cold reservoir and delivering heat to a hot one. In the discharge phase the processes are reversed and it operates as a heat engine. The round-trip efficiency is limited only by process irreversibilities (as opposed to Second Law limitations on the coefficient of performance and the thermal efficiency of the heat pump and heat engine respectively). PTS is currently being developed in both France and England. In both cases, the schemes operate on the Joule-Brayton (gas turbine) cycle, using argon as the working fluid. However, the French scheme proposes the use of turbomachinery for compression and expansion, whereas for that being developed in England reciprocating devices are proposed. The current paper focuses on the impact of the various process irreversibilities on the thermodynamic round-trip efficiency of the scheme. Consideration is given to compression and expansion losses and pressure losses (in pipe-work, valves and thermal reservoirs); heat transfer related irreversibility in the thermal reservoirs is discussed but not included in the analysis. Results are presented demonstrating how the various loss parameters and operating conditions influence the overall performance.

  9. Lossless compression for 3D PET

    International Nuclear Information System (INIS)

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.; Baker, K.; Jones, B.

    1994-01-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). Contrasting with Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimation of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners and reaches asymptotically the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an application specific integrated circuit (ASIC) implementing a similar UVLC scheme is available today, a similar one should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines.
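
    A rough Python sketch of the two-stage idea, first-order differencing followed by a per-block Rice code whose parameter is estimated from the block statistics, is given below. This is an illustration of the approach, not the paper's exact ADPCM/UVLC design:

        def encode_block(samples):
            """DPCM differences, zigzag-mapped, then Rice-coded with a per-block
            parameter estimated on-line from the block statistics (sketch only)."""
            diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
            u = [(d << 1) if d >= 0 else (-d << 1) - 1 for d in diffs]  # zigzag map
            mean = max(1, sum(u) // len(u))
            k = mean.bit_length() - 1            # Rice parameter ~ log2(mean)
            out = []
            for v in u:
                q, r = v >> k, v & ((1 << k) - 1)
                code = "1" * q + "0"             # unary quotient
                if k:
                    code += format(r, "0{}b".format(k))
                out.append(code)
            return k, "".join(out)

        k, bits = encode_block([100, 101, 103, 102, 102, 104])
        print(k, len(bits))    # 42 coded bits here, versus 6 raw 16-bit samples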

  10. A New Energy-Efficient Data Transmission Scheme Based on DSC and Virtual MIMO for Wireless Sensor Network

    OpenAIRE

    Li, Na; Zhang, Liwen; Li, Bing

    2015-01-01

    Energy efficiency in wireless sensor networks (WSNs) is one of the primary performance parameters. For improving the energy efficiency of WSNs, we introduce distributed source coding (DSC) and virtual multiple-input multiple-output (MIMO) into the wireless sensor network and then propose a new data transmission scheme called DSC-MIMO. DSC-MIMO compresses the source data using distributed source coding before transmitting, which is different from the existing communication schemes. Data compression c...

  11. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    This paper presents effective quality-of-service renegotiating schemes for streaming video. A conventional network supporting quality of service generally allows a negotiation only at call setup. However, this is not efficient for video applications since compressed video traffic is statistically nonstationary. Thus, we consider a network supporting quality-of-service renegotiations during data transmission and study effective quality-of-service renegotiating schemes for streaming video. The token bucket model, whose parameters are the token filling rate and the token bucket size, is adopted for the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of the compressed video traffic. In this paper, two renegotiating approaches, that is, the fixed renegotiating interval case and the variable renegotiating interval case, are examined. Finally, experimental results are provided to show the performance of the proposed schemes.
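
    The token bucket model referred to above is simple to state in code; the sketch below (illustrative, not from the paper) checks whether a packet conforms to a contract given a token filling rate and a bucket size:

        class TokenBucket:
            """Token bucket traffic model: 'rate' tokens/s fill a bucket of
            capacity 'size'; a packet conforms if enough tokens are available."""
            def __init__(self, rate, size):
                self.rate, self.size = rate, size
                self.tokens, self.last = size, 0.0

            def conforms(self, now, packet_bits):
                self.tokens = min(self.size,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_bits <= self.tokens:
                    self.tokens -= packet_bits
                    return True
                return False   # exceeds the contract: delay, drop, or renegotiate

        tb = TokenBucket(rate=2e6, size=5e5)                # 2 Mbit/s, 0.5 Mbit bucket
        print(tb.conforms(0.1, 4e5), tb.conforms(0.2, 4e5))  # True False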

  12. Improvement of the instability of compressible lattice Boltzmann model by shock-detecting sensor

    International Nuclear Information System (INIS)

    Esfahanian, Vahid; Ghadyani, Mohsen

    2015-01-01

    Recently, the lattice Boltzmann method (LBM) has drawn attention as an alternative and promising numerical technique for simulating fluid flows. The stability of LBM is a challenging problem in the simulation of compressible flows with different types of embedded discontinuities. This study proposes a complementary scheme for simulating inviscid flows by a compressible lattice Boltzmann model in order to remedy the instability using a shock-detecting procedure. The advantages and disadvantages of using a numerical hybrid filter on the primitive or conservative variables, in addition to macroscopic or mesoscopic variables, are investigated. The study demonstrates that the robustness of the utilized LB model is improved for inviscid compressible flows by implementation of the complementary scheme on mesoscopic variables. The validity of the procedure to capture shocks and resolve contact discontinuities and rarefaction waves in well-known benchmark problems is investigated. The numerical results show that the scheme is capable of generating more robust solutions in the simulation of compressible flows and prevents the formation of oscillations. Good agreement is obtained for all test cases.

  13. Improvement of the instability of compressible lattice Boltzmann model by shock-detecting sensor

    Energy Technology Data Exchange (ETDEWEB)

    Esfahanian, Vahid [University of Tehran, Tehran (Iran, Islamic Republic of); Ghadyani, Mohsen [Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2015-05-15

    Recently, the lattice Boltzmann method (LBM) has drawn attention as an alternative and promising numerical technique for simulating fluid flows. The stability of LBM is a challenging problem in the simulation of compressible flows with different types of embedded discontinuities. This study proposes a complementary scheme for simulating inviscid flows by a compressible lattice Boltzmann model in order to remedy the instability using a shock-detecting procedure. The advantages and disadvantages of using a numerical hybrid filter on the primitive or conservative variables, in addition to macroscopic or mesoscopic variables, are investigated. The study demonstrates that the robustness of the utilized LB model is improved for inviscid compressible flows by implementation of the complementary scheme on mesoscopic variables. The validity of the procedure to capture shocks and resolve contact discontinuities and rarefaction waves in well-known benchmark problems is investigated. The numerical results show that the scheme is capable of generating more robust solutions in the simulation of compressible flows and prevents the formation of oscillations. Good agreement is obtained for all test cases.

  14. Fast Watermarking of MPEG-1/2 Streams Using Compressed-Domain Perceptual Embedding and a Generalized Correlator Detector

    Directory of Open Access Journals (Sweden)

    Briassouli Alexia

    2004-01-01

    A novel technique is proposed for watermarking of MPEG-1 and MPEG-2 compressed video streams. The proposed scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degradation of the video quality. The watermark is detected without the use of the original video sequence. A modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme is able to withstand several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.

  15. EP-based wavelet coefficient quantization for linear distortion ECG data compression.

    Science.gov (United States)

    Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan

    2014-07-01

    Reconstruction quality maintenance is of the essence for ECG data compression due to the desire for diagnostic use. Quantization schemes with non-linear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of multi-resolution levels. The stationary property implies that multi-level quantization scales can be controlled with a single variable. This hypothesis can lead to a simple design of linear distortion control with 3-D curve fitting technology. In addition, a competitive strategy is applied for alleviating the data dependency effect. Using the ECG signals saved in the MIT and PTB databases, many experiments were undertaken for the evaluation of compression performance, quality control efficiency, and data dependency influence. The experimental results show that the new EP-based quantization scheme can obtain high compression performance and keep linear distortion behavior efficiently. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the wide use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, a medical image must be recorded and transmitted losslessly before it reaches the users to avoid misdiagnosis due to image data loss. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretic bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder is constituted of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is compressed with the lossy wavelet coder first, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
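
    The lossy-plus-residual construction guarantees losslessness because the two stages sum back to the original pixel values. A minimal sketch, with a crude quantizer standing in for the embedded wavelet coder of the paper:

        import numpy as np

        def run_length(seq):
            """Simple run-length coder for the (mostly small) residual values."""
            out, prev, count = [], seq[0], 1
            for v in seq[1:]:
                if v == prev:
                    count += 1
                else:
                    out.append((prev, count))
                    prev, count = v, 1
            out.append((prev, count))
            return out

        def hybrid_compress(img, step=8):
            """Two-stage idea: a lossy stage (a crude quantizer standing in for
            the wavelet coder) plus a lossless residual stage."""
            lossy = (img // step) * step          # stage 1: lossy approximation
            residual = img - lossy                # stage 2: exact residual
            return lossy, run_length(residual.ravel().tolist())

        img = np.array([[100, 101, 102], [103, 104, 105]], dtype=np.int32)
        lossy, rle = hybrid_compress(img)
        # Lossless overall: lossy + decoded residual reproduces img exactly.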

  17. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state of the art video compression standard.

  18. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, until now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  19. Nonoscillatory shock capturing scheme using flux limited dissipation

    International Nuclear Information System (INIS)

    Jameson, A.

    1985-01-01

    A method for modifying the third order dissipative terms by the introduction of flux limiters is proposed. The first order dissipative terms can then be eliminated entirely, and in the case of a scalar conservation law the scheme is converted into a total variation diminishing scheme provided that an appropriate value is chosen for the dissipative coefficient. Particular attention is given to: (1) the treatment of the scalar conservation law; (2) the treatment of the Euler equations for inviscid compressible flow; (3) the boundary conditions; and (4) multistage time stepping and multigrid schemes. Numerical results for transonic flows suggest that a central difference scheme augmented by flux limited dissipative terms can lead to an effective nonoscillatory shock capturing method. 20 references
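
    One textbook form of such a flux-limited scheme, for a scalar conservation law u_t + f(u)_x = 0, can be written out explicitly; this is a sketch consistent in spirit with, but not necessarily identical to, the paper's construction:

        F_{j+1/2} = \tfrac{1}{2}\,(f_j + f_{j+1})
                    - \tfrac{1}{2}\,\lvert a_{j+1/2}\rvert\,
                      \bigl(1 - \varphi(r_{j+1/2})\bigr)\,(u_{j+1} - u_j),
        \qquad r_{j+1/2} = \frac{u_j - u_{j-1}}{u_{j+1} - u_j}

    where a_{j+1/2} is a local wave speed and \varphi is the flux limiter: \varphi \equiv 0 recovers pure first-order upwind dissipation, \varphi \equiv 1 removes the added dissipation entirely, and any choice satisfying 0 \le \varphi(r) \le \min(2r, 2) (with \varphi(r) = 0 for r \le 0) keeps the scheme total variation diminishing.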

  20. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node

    Directory of Open Access Journals (Sweden)

    Kan Luo

    2018-01-01

    Energy efficiency is still the obstacle for long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with the challenge in wireless ECG application. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, microcontroller, Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using the ECG signals sampled by the proposed node under the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression can reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.

  1. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.

    Science.gov (United States)

    Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing

    2018-01-01

    Energy efficiency is still the obstacle for long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with the challenge in wireless ECG application. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, microcontroller, Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using the ECG signals sampled by the proposed node under the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression can reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
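
    On the node side, the CS compression amounts to one sparse matrix-vector product, which is why it is so cheap on a microcontroller. A minimal numpy sketch (signal and matrix sizes are illustrative assumptions, and the signal is a stand-in for an ECG frame):

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, d = 512, 128, 4          # frame length, measurements, ones per column

        # Sparse binary sensing matrix: d ones per column, the rest zeros.
        Phi = np.zeros((m, n))
        for j in range(n):
            Phi[rng.choice(m, size=d, replace=False), j] = 1.0

        x = np.sin(2 * np.pi * 1.3 * np.arange(n) / n)   # stand-in for an ECG frame
        y = Phi @ x                    # node transmits 128 values instead of 512
        # Recovery (BSBL against a DCT dictionary in the paper) runs server-side.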

  2. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard that utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  3. Space-Efficient Re-Pair Compression

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Prezza, Nicola

    2017-01-01

    Re-Pair [5] is an effective grammar-based compression scheme achieving strong compression rates in practice. Let n, σ, and d be the text length, alphabet size, and dictionary size of the final grammar, respectively. In their original paper, the authors show how to compute the Re-Pair grammar in expected linear time and 5n + 4σ² + 4d + √n words of working space on top of the text. In this work, we propose two algorithms improving on the space of their original solution. Our model assumes a memory word of ⌈log₂ n⌉ bits and a re-writable input text composed of n such words. Our first algorithm runs
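
    For reference, the textbook Re-Pair loop, replace the most frequent adjacent pair with a fresh nonterminal until no pair repeats, can be sketched as follows. This is a quadratic toy version, not the space-efficient algorithms proposed in the paper:

        from collections import Counter

        def re_pair(data):
            """Textbook Re-Pair: repeatedly replace the most frequent adjacent
            pair with a new nonterminal symbol (quadratic toy version)."""
            seq, rules, next_sym = list(data), {}, 256   # nonterminals above bytes
            while True:
                pairs = Counter(zip(seq, seq[1:]))
                if not pairs:
                    break
                (a, b), freq = pairs.most_common(1)[0]
                if freq < 2:
                    break
                rules[next_sym] = (a, b)
                out, i = [], 0
                while i < len(seq):
                    if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                        out.append(next_sym)
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                seq, next_sym = out, next_sym + 1
            return seq, rules

        seq, rules = re_pair(b"abracadabra abracadabra")
        print(len(seq), len(rules))   # compressed sequence plus dictionary size d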

  4. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still keeping edges as well as the TV model does. In the proposed scheme, a neural network is employed to increase the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 Intra-coding algorithms. The results show that the proposed algorithm works well.

  5. Real-time and encryption efficiency improvements of simultaneous fusion, compression and encryption method based on chaotic generators

    Science.gov (United States)

    Jridi, Maher; Alfalou, Ayman

    2018-03-01

    In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We use an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and favorably compared to those of the existing crypto-compression system. The proposed method has been found to be digital/optical implementation-friendly, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
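
    A minimal sketch of the confusion step, using the Henon map's orbit as a sort key for a row permutation (the paper derives the initial condition from the plain image; fixed values are used here for illustration):

        import numpy as np

        def henon_keys(n, x0, y0, a=1.4, b=0.3):
            """Iterate the Henon map; the x-orbit serves as a chaotic key stream."""
            keys, x, y = np.empty(n), x0, y0
            for i in range(n):
                x, y = 1.0 - a * x * x + y, b * x
                keys[i] = x
            return keys

        def permute_rows(img, x0, y0):
            """Confusion step: reorder rows by the sort order of the chaotic keys
            (columns are handled the same way in the full scheme)."""
            order = np.argsort(henon_keys(img.shape[0], x0, y0))
            return img[order, :], order

        img = np.arange(16).reshape(4, 4)
        scrambled, order = permute_rows(img, 0.1, 0.2)
        restored = scrambled[np.argsort(order), :]      # inverse permutation
        assert (restored == img).all()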

  6. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, corrosion and expansion operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in the GI-based encryption, and the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  7. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.

  8. An image compression method for space multispectral time delay and integration charge coupled device camera

    International Nuclear Information System (INIS)

    Li Jin; Jin Long-Xu; Zhang Ran-Feng

    2013-01-01

    Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually completed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences in the original image and its wavelet transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band

  9. Password Authentication Based on Fractal Coding Scheme

    Directory of Open Access Journals (Sweden)

    Nadia M. G. Al-Saidi

    2012-01-01

    Password authentication is a mechanism used to authenticate user identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding to provide an authorized user secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted to be captured together with the user identity using text-to-image mechanisms. The compression capability of fractal image coding is used to securely send the compressed image data through a nonsecured communication channel to the server. The verification of client information against the database system is achieved in the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition. The authentication process is performed after a successful verification of the client identity by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison is performed to show that the proposed scheme provides the essential security requirements, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.

  10. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    Science.gov (United States)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior image quality to conventional filtered-backprojection (FBP)-based methods. However, they require enormous computational cost in the iterative process, which has still been an obstacle to putting them to practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method is effective for reducing computational cost considerably in iterative DTS reconstruction, while keeping the image quality inside the ROI only slightly degraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the no-binning case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).

  11. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, repeated use of the technique permits the suppression of multiple different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme

  12. Lossless compression of multispectral images using spectral information

    Science.gov (United States)

    Ma, Long; Shi, Zelin; Tang, Xusheng

    2009-10-01

    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous. Thus transmission and storage of these volumes of data require huge time and memory resources. That is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band based on the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique implies prediction of each image band by involving a number of bands along the image spectrum. Each pixel is predicted using information provided by pixels in the previous bands in the same spatial position. As done in JPEG-LS, the proposed coder also represents the mapped residuals using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of the three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images have demonstrated that the proposed compression scheme is suitable for multispectral images.
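
    The prediction step is no more than an inter-band difference followed by the signed-to-unsigned mapping used in JPEG-LS. A minimal numpy sketch, without the three-gradient context modeling of the full coder:

        import numpy as np

        def spectral_residuals(cube):
            """Predict each band from the co-located pixel in the previous band
            and keep the residuals; 'cube' has shape (bands, rows, cols)."""
            c = cube.astype(np.int32)
            res = np.empty_like(c)
            res[0] = c[0]                  # first band stored as-is
            res[1:] = c[1:] - c[:-1]       # inter-band differences
            return res

        def map_nonnegative(e):
            """JPEG-LS style mapping of signed residuals to non-negative
            integers, ready for an adaptive Golomb-Rice coder."""
            return np.where(e >= 0, 2 * e, -2 * e - 1)

        cube = np.random.default_rng(1).integers(0, 4096, (8, 64, 64))
        mapped = map_nonnegative(spectral_residuals(cube))
        # The real coder then estimates per-context Golomb-Rice parameters.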

  13. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
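
    For orientation, the plain 2 bits/base packing that DNABIT Compress improves upon with its repeat bit codes looks as follows (baseline sketch only; the repeat-dictionary stage of the paper is omitted):

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

        def pack_2bit(seq):
            """Baseline 2 bits/base packing: four bases per byte. Any tail
            shorter than 4 bases would be stored separately."""
            out = bytearray()
            for i in range(0, len(seq) - len(seq) % 4, 4):
                byte = 0
                for base in seq[i:i + 4]:
                    byte = (byte << 2) | CODE[base]
                out.append(byte)
            return bytes(out)

        print(len(pack_2bit("ACGTACGTACGTACGT")))   # 4 bytes for 16 bases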

  14. A Numerical Scheme Based on an Immersed Boundary Method for Compressible Turbulent Flows with Shocks: Application to Two-Dimensional Flows around Cylinders

    Directory of Open Access Journals (Sweden)

    Shun Takahashi

    2014-01-01

    A computational code adopting immersed boundary methods for compressible gas-particle multiphase turbulent flows is developed and validated through two-dimensional numerical experiments. The turbulent flow region is modeled by a second-order pseudo skew-symmetric form with minimum dissipation, while the monotone upstream-centered scheme for conservation laws (MUSCL) is employed in the shock region. The present scheme is applied to the flow around a two-dimensional cylinder under various freestream Mach numbers. Compared with the original MUSCL scheme, the minimum dissipation enabled by the pseudo skew-symmetric form significantly improves the resolution of the vortex generated in the wake while retaining the shock-capturing ability. In addition, the resulting aerodynamic force is significantly improved. Also, the present scheme is successfully applied to moving two-cylinder problems.

  15. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    Science.gov (United States)

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme built on H.264 intraframe coding. In the scheme, two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map. Every block selects its coding mode from the two new modes and the existing H.264 intramodes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves the coding efficiency by more than 10 dB at most bit rates for compound images and keeps a performance comparable to H.264 for natural images.
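
    The BCIM mode is essentially a per-block palette quantizer. A minimal sketch with a Lloyd-style refinement; the codec's actual RDO mode decision and entropy coding are omitted:

        import numpy as np

        def bcim_encode(block, n_colors=4, iters=8):
            """Base Colors and Index Map (BCIM) sketch: represent a block by a
            few base colors plus a per-pixel index map."""
            px = block.reshape(-1, 3).astype(float)
            base = px[np.linspace(0, len(px) - 1, n_colors).astype(int)].copy()
            for _ in range(iters):                      # Lloyd-style refinement
                dist = ((px[:, None, :] - base[None, :, :]) ** 2).sum(-1)
                idx = dist.argmin(1)
                for c in range(n_colors):
                    if (idx == c).any():
                        base[c] = px[idx == c].mean(0)
            return (base.astype(np.uint8),
                    idx.reshape(block.shape[:2]).astype(np.uint8))

        block = np.random.default_rng(2).integers(0, 256, (16, 16, 3), dtype=np.uint8)
        base, index_map = bcim_encode(block)   # 4 colors + 2-bit index per pixel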

  16. Irreversible data compression concepts with polynomial fitting in time-order of particle trajectory for visualization of huge particle system

    International Nuclear Information System (INIS)

    Ohtani, H; Ito, A M; Hagita, K; Kato, T; Saitoh, T; Takeda, T

    2013-01-01

    We propose in this paper a data compression scheme for large-scale particle simulations, which has favorable prospects for scientific visualization of particle systems. Our data compression concepts deal directly with the data of particle orbits obtained by simulation and have the following features: (i) Through control over the compression scheme, the difference between the simulation variables and the reconstructed values for the visualization from the compressed data becomes smaller than a given constant. (ii) The particles in the simulation are regarded as independent particles and the time-series data for each particle are compressed with an independent time-step for the particle. (iii) A particle trajectory is approximated by a polynomial function based on the characteristic motion of the particle. It is reconstructed as a continuous curve through interpolation from the values of the function for intermediate values of the sample data. We name this concept "TOKI (Time-Order Kinetic Irreversible compression)". In this paper, we present an example of an implementation of a data-compression scheme with the above features. Several application results are shown for plasma and galaxy formation simulation data
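
    Feature (iii) reduces, per particle and coordinate, to finding the lowest-degree polynomial whose maximum reconstruction error stays below the bound of feature (i). A minimal numpy sketch of that idea (not the paper's implementation):

        import numpy as np

        def compress_track(t, x, eps, max_deg=5):
            """Fit one coordinate of a particle trajectory with the lowest-degree
            polynomial whose maximum reconstruction error stays below eps."""
            for deg in range(1, max_deg + 1):
                coeff = np.polyfit(t, x, deg)
                if np.max(np.abs(np.polyval(coeff, t) - x)) <= eps:
                    return coeff       # deg+1 floats replace len(t) samples
            return None                # e.g., split the time interval and retry

        t = np.linspace(0.0, 1.0, 200)
        x = 0.5 * t ** 2 + 0.1 * t               # smooth, nearly ballistic motion
        print(compress_track(t, x, eps=1e-6))    # 3 coefficients instead of 200 samples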

  17. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are simultaneously completed without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to the existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)

  18. Energy-Efficient Data Gathering Scheme Based on Broadcast Transmissions in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Soobin Lee

    2013-01-01

    Previous works have proposed ideas that reduce the energy consumption of the network by exploiting the spatial correlation between sensed information. In this paper, we propose a distributed data compression framework that exploits the broadcasting characteristic of the wireless medium to improve energy efficiency. We analyze the performance of the proposed framework numerically and compare it with the performance of previous works using simulation. The proposed scheme performs better when the sensing information is correlated.

  19. Pressure correction schemes for compressible flows: application to barotropic Navier-Stokes equations and to drift-flux model

    International Nuclear Information System (INIS)

    Gastaldo, L.

    2007-11-01

    We develop in this PhD thesis a simulation tool for bubbly flows encountered in some late phases of a core-melt accident in pressurized water reactors, when the flow of molten core and vessel structures comes to chemically interact with the concrete of the containment floor. The physical modelling is based on the so-called drift-flux model, consisting of mass balance and momentum balance equations for the mixture (Navier-Stokes equations) and a mass balance equation for the gaseous phase. First, we propose a pressure correction scheme for the compressible Navier-Stokes equations based on mixed non-conforming finite elements. An ad hoc discretization of the advection operator, by a finite volume technique based on a dual mesh, ensures the stability of the velocity prediction step. A priori estimates for the velocity and the pressure yield the existence of the solution. We prove that this scheme is stable, in the sense that the discrete entropy is decreasing. For the conservation equation of the gaseous phase, we build a finite volume discretization which satisfies a discrete maximum principle. From this last property, we deduce the existence and the uniqueness of the discrete solution. Finally, on the basis of these works, a conservative and monotone scheme, which is stable in the low Mach number limit, is built for the drift-flux model. This scheme enjoys, moreover, the following property: the algorithm preserves a constant pressure and velocity through moving interfaces between phases (i.e. contact discontinuities of the underlying hyperbolic system). In order to satisfy this property at the discrete level, we build an original pressure correction step which couples the mass balance equation with the transport terms of the gas mass balance equation, the remaining terms of the gas mass balance being taken into account with a splitting method. We prove the existence of a discrete solution for the pressure correction step. Numerical results are presented; they

  20. Compressive System Identification in the Linear Time-Invariant framework

    KAUST Repository

    Toth, Roland

    2011-12-01

    Selection of an efficient model parametrization (model order, delay, etc.) has crucial importance in parametric system identification. It navigates a trade-off between representation capabilities of the model (structural bias) and effects of over-parametrization (variance increase of the estimates). There exist many approaches to this widely studied problem in terms of statistical regularization methods and information criteria. In this paper, an alternative ℓ1-regularization scheme is proposed for estimation of sparse linear-regression models based on recent results in compressive sensing. It is shown that the proposed scheme provides consistent estimation of sparse models in terms of the so-called oracle property, it is computationally attractive for large-scale over-parameterized models and it is applicable in case of small data sets, i.e., underdetermined estimation problems. The performance of the approach w.r.t. other regularization schemes is demonstrated in an extensive Monte Carlo study. © 2011 IEEE.
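
    The proposed estimator is an ℓ1-regularized least-squares problem; a minimal sketch of a generic proximal-gradient (ISTA) solver for it follows (a standard solver, not the paper's implementation; sizes and data are illustrative):

        import numpy as np

        def ista(Phi, y, lam, n_iter=1000):
            """Proximal-gradient (ISTA) solver for
            0.5*||Phi w - y||^2 + lam*||w||_1, yielding a sparse model."""
            L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the gradient
            w = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                g = w - Phi.T @ (Phi @ w - y) / L          # gradient step
                w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return w

        rng = np.random.default_rng(3)
        Phi = rng.standard_normal((100, 40))    # over-parameterized regressors
        w_true = np.zeros(40)
        w_true[[3, 17]] = [1.0, -2.0]           # true sparse model
        y = Phi @ w_true + 0.01 * rng.standard_normal(100)
        w_hat = ista(Phi, y, lam=5.0)
        print(np.nonzero(np.abs(w_hat) > 0.1)[0])   # recovered support, here {3, 17}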

  1. Subband directional vector quantization in radiological image compression

    Science.gov (United States)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  2. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes

  3. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    Science.gov (United States)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in transmitted packets, in which two received copies are XORed to obtain the bit locations of erroneous bits. Thereafter, the packet is corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC does not correct double-bit errors if they occur in the same bit location of the erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC to the Gilbert two-state model has been studied. The simulation results show that the proposed technique offers better throughput than the conventional APC and a lower packet error rate than the PC scheme.
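
    The PC step, and its blind spot, are easy to see in code: the XOR of two received copies flags the suspect bit positions, and correction tries bit inversions until an integrity check passes. The check is abstracted below (a CRC in practice); errors hitting the same bit in both copies never appear among the suspects, which is exactly the PC limitation the hybrid addresses:

        from itertools import combinations

        def suspect_bits(copy1, copy2):
            """XOR of two received copies: positions where they disagree are
            the candidate error locations."""
            return [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]

        def combine_and_correct(copy1, suspects, check):
            """Invert every subset of suspect bits until the integrity check
            passes -- brute force over 2^k candidates."""
            for r in range(len(suspects) + 1):
                for subset in combinations(suspects, r):
                    trial = list(copy1)
                    for i in subset:
                        trial[i] ^= 1
                    if check(trial):
                        return trial
            return None

        sent = [1, 0, 1, 1, 0, 0, 1, 0]
        rx1  = [1, 0, 0, 1, 0, 0, 1, 0]      # bit 2 flipped in copy 1
        rx2  = [1, 0, 1, 1, 0, 1, 1, 0]      # bit 5 flipped in copy 2
        fixed = combine_and_correct(rx1, suspect_bits(rx1, rx2),
                                    lambda p: p == sent)   # stand-in for a CRC
        assert fixed == sent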

  4. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base. PMID:21383923

  5. Correspondence normalized ghost imaging on compressive sensing

    International Nuclear Information System (INIS)

    Zhao Sheng-Mei; Zhuang Peng

    2014-01-01

    Ghost imaging (GI) offers great potential with respect to conventional imaging techniques. It is an open problem in GI systems that a long acquisition time is required for reconstructing images with good visibility and signal-to-noise ratios (SNRs). In this paper, we propose a new scheme to get good performance with a shorter construction time. We call it correspondence normalized ghost imaging based on compressive sensing (CCNGI). In the scheme, we enhance the signal-to-noise performance by normalizing the reference beam intensity to eliminate the noise caused by laser power fluctuations, and reduce the reconstruction time by using both compressive sensing (CS) and time-correspondence imaging (CI) techniques. It is shown that the quality of the images is improved and the reconstruction time is reduced using the CCNGI scheme. For the two-grayscale "double-slit" image, the mean square errors (MSEs) obtained by the GI and normalized GI (NGI) schemes with 5000 measurements are 0.237 and 0.164, respectively, whereas the CCNGI scheme reaches 0.021 with 2500 measurements. For the eight-grayscale "lena" object, the peak signal-to-noise ratios (PSNRs) are 10.506 and 13.098, respectively, using the GI and NGI schemes, while the value rises to 16.198 using the CCNGI scheme. The results also show that a high-fidelity GI reconstruction has been achieved using only 44% of the number of measurements corresponding to the Nyquist limit for the two-grayscale "double-slit" object. The quality of the reconstructed images using CCNGI is almost the same as that from GI via sparsity constraints (GISC), with a shorter reconstruction time. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  6. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI) which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.

  7. Cooperative Media Streaming Using Adaptive Network Compression

    DEFF Research Database (Denmark)

    Møller, Janus Heide; Sørensen, Jesper Hemming; Krigslund, Rasmus

    2008-01-01

    as an adaptive hybrid between LC and MDC. In order to facilitate the use of MDC-CC, a new overlay network approach is proposed, using tree of meshes. A control system for managing description distribution and compression in a small mesh is implemented in the discrete event simulator NS-2. The two traditional...... approaches, MDC and LC, are used as references for the performance evaluation of the proposed scheme. The system is simulated in a heterogeneous network environment, where packet errors are introduced. Moreover, a test is performed at different network loads. Performance gain is shown over both LC and MDC....

  8. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  9. A hybrid-drive nonisobaric-ignition scheme for inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    He, X. T., E-mail: xthe@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, P. O. Box 8009, Beijing 100094 (China); Center for Applied Physics and Technology, HEDPS, Peking University, Beijing 100871 (China); IFSA Collaborative Innovation Center of MoE, Shanghai Jiao-Tong University, Shanghai 200240 (China); Institute of Fusion Theory and Simulation, Zhejiang University, Hangzhou 310027 (China); Li, J. W.; Wang, L. F.; Liu, J.; Lan, K.; Ye, W. H. [Institute of Applied Physics and Computational Mathematics, P. O. Box 8009, Beijing 100094 (China); Center for Applied Physics and Technology, HEDPS, Peking University, Beijing 100871 (China); IFSA Collaborative Innovation Center of MoE, Shanghai Jiao-Tong University, Shanghai 200240 (China); Fan, Z. F.; Wu, J. F. [Institute of Applied Physics and Computational Mathematics, P. O. Box 8009, Beijing 100094 (China)

    2016-08-15

    A new hybrid-drive (HD) nonisobaric ignition scheme of inertial confinement fusion (ICF) is proposed, in which a HD pressure to drive implosion dynamics increases via increasing density rather than temperature as in the conventional indirect drive (ID) and direct drive (DD) approaches. In this HD (combination of ID and DD) scheme, an assembled target of a spherical hohlraum with a layered deuterium-tritium capsule inside is used. The ID lasers first drive the shock to perform a spherically symmetric implosion and produce a large-scale corona plasma. Then, the DD lasers, whose critical surface in the ID corona plasma is far from the radiation ablation front, drive a supersonic electron thermal wave, which slows down to a high-pressure electron compression wave that, like a snowplow, piles up the corona plasma into high density and forms a HD pressurized plateau with a large width. The HD pressure is several times the conventional ID and DD ablation pressure and launches an enhanced precursor shock and a continuous compression wave, which drive the HD capsule implosion dynamics at a large implosion velocity. The hydrodynamic instabilities at the imploding capsule interfaces are suppressed, and the continuous HD compression wave provides pdV work to the hotspot large enough to result in HD nonisobaric ignition. The ignition condition and target design based on this scheme are given theoretically and by numerical simulations. The results show that the novel scheme can significantly suppress the implosion asymmetry and hydrodynamic instabilities of current isobaric hotspot ignition designs, and that high-gain ICF is promising.

  10. Efficient Hybrid Watermarking Scheme for Security and Transmission Bit Rate Enhancement of 3D Color-Plus-Depth Video Communication

    Science.gov (United States)

    El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.

    2018-03-01

    Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need to fulfill efficient compression to transmit and store the 3DV + D content in compressed form to attain future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing the 3DV + D transmission, which is the homomorphic transform based Singular Value Decomposition (SVD) in Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently it enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results; the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality with appreciated PSNR values and saving in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.
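
    For orientation, the sketch below shows the generic DWT-plus-SVD embedding step that this family of watermarking schemes builds on, using PyWavelets; it is a simplified stand-in, not the paper's full homomorphic pipeline, and the wavelet choice and strength alpha are illustrative:

      import numpy as np
      import pywt

      def embed_svd_watermark(cover, watermark, alpha=0.05):
          """Embed a watermark in the singular values of the cover's DWT
          approximation band (generic DWT+SVD embedding sketch)."""
          LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
          U, S, Vt = np.linalg.svd(LL, full_matrices=False)
          w = np.resize(watermark.astype(float).ravel(), S.shape)
          S_marked = S + alpha * w            # additive modulation
          LL_marked = (U * S_marked) @ Vt
          return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')

      # Stand-ins for a color frame and the data to hide:
      cover = np.random.rand(256, 256) * 255
      mark = np.random.rand(128)
      marked = embed_svd_watermark(cover, mark)
      print("embedding RMSE:", np.sqrt(np.mean((marked - cover) ** 2)))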

  11. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
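
    A minimal sketch of the encoder-side measurement step just described, assuming an illustrative 3x3 binary kernel and a down-sampling factor of 2 (both hypothetical choices, not taken from the paper):

      import numpy as np
      from scipy.signal import convolve2d

      def random_binary_measurements(img, factor=2, ksize=3, seed=0):
          """Replace the usual anti-aliasing low-pass filter with a local
          random binary convolution kernel, then polyphase down-sample.
          The output is still an ordinary image, ready for any codec."""
          rng = np.random.default_rng(seed)
          kernel = rng.integers(0, 2, size=(ksize, ksize)).astype(float)
          kernel /= max(kernel.sum(), 1.0)     # keep the dynamic range
          filtered = convolve2d(img, kernel, mode='same', boundary='symm')
          return filtered[::factor, ::factor]  # polyphase down-sampling

      img = np.random.rand(64, 64)
      meas = random_binary_measurements(img)
      print(meas.shape)  # (32, 32): each pixel is a local random measurement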

  12. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    Science.gov (United States)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber lengths in the architecture are numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment for the case N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  13. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Science.gov (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
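
    The on-board side of such a scheme reduces to one matrix multiply per vectorized image block, as in the following sketch (block and measurement sizes are illustrative):

      import numpy as np

      rng = np.random.default_rng(42)
      n, m = 1024, 256                  # 32x32 block -> 4x fewer samples
      Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix

      block = rng.random(n)             # stand-in for one image block
      y = Phi @ block                   # all the satellite computes/sends

      # The ground station recovers the block with a sparse solver (e.g.
      # OMP or an l1 solver) using the same Phi; see the OMP sketch that
      # appears later in this document.
      print(y.shape)                    # (256,)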

  14. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flows on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme achieves second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), the mass conservation equation, and the total energy equation. An operator splitting process is performed along the explicit and implicit operators of the semi-implicit governing equations to place the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme is cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase, with the elliptic pressure Poisson solution coupling the predictor-corrector phases.

  15. An asymptotic preserving multidimensional ALE method for a system of two compressible flows coupled with friction

    Science.gov (United States)

    Del Pino, S.; Labourasse, E.; Morel, G.

    2018-06-01

    We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.

  16. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming the buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
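
    As a toy illustration of how arithmetic coding exploits a skewed bit probability (a generic exact-arithmetic demonstration, not the fixed-length NASA scheme described above):

      import math
      from fractions import Fraction

      def ac_encode(bits, p0):
          # Shrink [low, low+width) once per bit: '0' keeps the lower
          # p0-fraction of the interval, '1' the upper (1-p0)-fraction.
          low, width = Fraction(0), Fraction(1)
          for b in bits:
              if b == 0:
                  width *= p0
              else:
                  low += width * p0
                  width *= 1 - p0
          # Emit the shortest dyadic fraction n/2^k inside the interval.
          k = 0
          while True:
              k += 1
              n = math.ceil(low * 2 ** k)
              if Fraction(n, 2 ** k) < low + width:
                  break
          return [(n >> (k - 1 - i)) & 1 for i in range(k)]

      def ac_decode(code, p0, nbits):
          x = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(code))
          low, width, out = Fraction(0), Fraction(1), []
          for _ in range(nbits):          # mirror the encoder's splits
              if x < low + width * p0:
                  out.append(0)
                  width *= p0
              else:
                  out.append(1)
                  low += width * p0
                  width *= 1 - p0
          return out

      msg = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
      code = ac_encode(msg, Fraction(7, 8))   # highly skewed bit source
      assert ac_decode(code, Fraction(7, 8), len(msg)) == msg
      print(len(msg), "source bits ->", len(code), "code bits")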

  17. Flux Limiter Lattice Boltzmann for Compressible Flows

    International Nuclear Information System (INIS)

    Chen Feng; Li Yingjun; Xu Aiguo; Zhang Guangcai

    2011-01-01

    In this paper, a new flux limiter scheme with a splitting technique is successfully incorporated into a multiple-relaxation-time lattice Boltzmann (LB) model for shocked compressible flows. The proposed flux limiter scheme is efficient in decreasing the artificial oscillations and numerical diffusion around interfaces. Due to its kinetic nature, some interface problems that are difficult to handle at the macroscopic level can be modeled more naturally through the LB method. Numerical simulations of the Richtmyer-Meshkov instability show that with the new model the computed interfaces are smoother and more consistent with physical analysis. The growth rates of bubble and spike are in satisfying agreement with theoretical predictions and other numerical simulations.
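
    The basic mechanism by which a flux limiter suppresses spurious oscillations can be shown with a textbook minmod-limited reconstruction (a generic finite-volume illustration, not the paper's lattice Boltzmann formulation):

      import numpy as np

      def minmod(a, b):
          """Classic minmod limiter: zero at extrema, else smaller slope."""
          return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def limited_left_states(u):
          """MUSCL-type reconstruction of the left state at each i+1/2
          interface with minmod limiting (interior cells only)."""
          du_minus = u[1:-1] - u[:-2]      # backward differences
          du_plus = u[2:] - u[1:-1]        # forward differences
          slope = minmod(du_minus, du_plus)
          return u[1:-1] + 0.5 * slope

      u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # a sharp interface
      print(limited_left_states(u))   # no over/undershoot near the jump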

  18. A Finite-Volume approach for compressible single- and two-phase flows in flexible pipelines with fluid-structure interaction

    Science.gov (United States)

    Daude, F.; Galon, P.

    2018-06-01

    A Finite-Volume scheme for the numerical computations of compressible single- and two-phase flows in flexible pipelines is proposed based on an approximate Godunov-type approach. The spatial discretization is here obtained using the HLLC scheme. In addition, the numerical treatment of abrupt changes in area and network including several pipelines connected at junctions is also considered. The proposed approach is based on the integral form of the governing equations making it possible to tackle general equations of state. A coupled approach for the resolution of fluid-structure interaction of compressible fluid flowing in flexible pipes is considered. The structural problem is solved using Euler-Bernoulli beam finite elements. The present Finite-Volume method is applied to ideal gas and two-phase steam-water based on the Homogeneous Equilibrium Model (HEM) in conjunction with a tabulated equation of state in order to demonstrate its ability to tackle general equations of state. The extensive application of the scheme for both shock tube and other transient flow problems demonstrates its capability to resolve such problems accurately and robustly. Finally, the proposed 1-D fluid-structure interaction model appears to be computationally efficient.
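
    For reference, the HLLC flux used for the spatial discretization can be sketched for the 1-D Euler equations with an ideal-gas equation of state (the paper generalizes this to tabulated equations of state and adds the fluid-structure coupling, which this sketch omits):

      import numpy as np

      def hllc_flux(UL, UR, gamma=1.4):
          """Standard HLLC flux, U = (rho, rho*u, E), ideal-gas EOS."""
          def prim(U):
              rho, mom, E = U
              u = mom / rho
              p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
              return rho, u, p, E

          def flux(U):
              rho, u, p, E = prim(U)
              return np.array([rho * u, rho * u * u + p, (E + p) * u])

          rhoL, uL, pL, EL = prim(UL)
          rhoR, uR, pR, ER = prim(UR)
          aL, aR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)

          # Davis wave-speed estimates and the contact speed S*.
          SL, SR = min(uL - aL, uR - aR), max(uL + aL, uR + aR)
          Sstar = (pR - pL + rhoL * uL * (SL - uL) - rhoR * uR * (SR - uR)) \
                  / (rhoL * (SL - uL) - rhoR * (SR - uR))

          def star(U, S):
              rho, u, p, E = prim(U)
              c = rho * (S - u) / (S - Sstar)
              return c * np.array([1.0, Sstar,
                                   E / rho + (Sstar - u)
                                   * (Sstar + p / (rho * (S - u)))])

          if SL >= 0:
              return flux(UL)
          if SR <= 0:
              return flux(UR)
          if Sstar >= 0:
              return flux(UL) + SL * (star(UL, SL) - UL)
          return flux(UR) + SR * (star(UR, SR) - UR)

      # Sod shock-tube interface states (rho, rho*u, E):
      UL = np.array([1.0, 0.0, 1.0 / 0.4])      # p = 1
      UR = np.array([0.125, 0.0, 0.1 / 0.4])    # p = 0.1
      print(hllc_flux(UL, UR))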

  19. Principles of non-Liouvillean pulse compression by photoionization for heavy ion fusion drivers

    International Nuclear Information System (INIS)

    Hofmann, I.

    1990-05-01

    Photoionization of singly charged heavy ions has been proposed recently by Rubbia as a non-Liouvillean injection scheme from the linac into the storage rings of a driver accelerator for inertial confinement fusion (ICF). The main idea of this scheme is the accumulation of high currents of heavy ions without the usually inevitable increase of phase space. Here we suggest using the photoionization idea in an alternative scheme: if it is applied at the final stage of pulse compression (replacing the conventional bunch compression by an rf voltage, which always increases the momentum spread), there is a significant advantage in the performance of the accelerator. We show, in particular, that this new compression scheme has the potential to relax the tough stability limitations that were identified in the heavy ion fusion reactor study HIBALL. Moreover, it is promising for achieving the higher beam power suitable for indirectly driven fusion targets (10^16 Watts/gram, in contrast with 10^14 for the directly driven targets in HIBALL). The idea of non-Liouvillean bunch compression is to stack a large number of bunches (typically 50-100) in the same phase space volume during a change of charge state of the ion. A particular feature of this scheme with regard to beam dynamics is its transient nature, since the time required is one revolution per bunch. After the stacking, the intense bunch is ejected and directly guided to the target. The present study is a first step to explore the possibly limiting effect of space charge under the parameter conditions of a full-size driver accelerator. Preliminary results indicate that there is a limit to the effective stacking number (the non-Liouvillean 'compression factor'), which is, however, not prohibitive. Requirements on the power of the photon beam from a free electron laser are also discussed. It is seen that resonant cross sections of the order of 10^-15 cm^2 lead to photon beam powers of a few megawatts. (orig.)

  20. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has sided until now with lossless compression, most applications suffer from compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on the 3D (3-dimensional) Wavelet Transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  1. Compression and channel-coding algorithms for high-definition television signals

    Science.gov (United States)

    Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.

    1990-09-01

    In this paper, results of investigations of the effects of channel errors on the transmission of images compressed by means of techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more severely so for VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission: Hamming codes (7,4), (15,11), and (31,26) have been used for DCT-compressed images, and more powerful codes, such as the Golay (23,12) code, for VQ-compressed images. Performances attainable with soft-decoding techniques are also evaluated; better-quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out to simulate the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
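
    As a reminder of how the shortest member of the Hamming family corrects a single bit error per code word, here is a sketch of a systematic (7,4) encoder/decoder (the generic textbook construction, not the authors' simulation code):

      import numpy as np

      G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix [I | P]
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix [P^T | I]
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])

      def encode(data4):
          return (np.array(data4) @ G) % 2

      def decode(recv7):
          r = np.array(recv7)
          syndrome = (H @ r) % 2
          if syndrome.any():              # the syndrome equals the column
              idx = np.where((H.T == syndrome).all(axis=1))[0][0]
              r[idx] ^= 1                 # of H at the erroneous position
          return r[:4]                    # systematic: data bits come first

      word = encode([1, 0, 1, 1])
      word[5] ^= 1                        # inject one channel error
      assert decode(word).tolist() == [1, 0, 1, 1]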

  2. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    Science.gov (United States)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that can operate over real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port on an Android smartphone. It should be noted that this is the first time chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance among a sufficient security level, real-time processing of massive video data, and utilization of the available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile facilities in the future.

  3. Proposed criteria for the evaluation of an address assignment scheme in Botswana

    CSIR Research Space (South Africa)

    Ditsela, J

    2011-06-01

    Full Text Available We propose criteria for an address assignment scheme in Botswana: a single set of place or area names; different address types for urban, rural and farm areas; principles for address numbering assignment; integration of different referencing systems; and a...

  4. Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations

    NARCIS (Netherlands)

    B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)

    2017-01-01

    In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly

  5. An unstaggered central scheme on nonuniform grids for the simulation of a compressible two-phase flow model

    Energy Technology Data Exchange (ETDEWEB)

    Touma, Rony [Department of Computer Science & Mathematics, Lebanese American University, Beirut (Lebanon); Zeidan, Dia [School of Basic Sciences and Humanities, German Jordanian University, Amman (Jordan)

    2016-06-08

    In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid and uses dual cells intermediately while updating the numerical solution, to avoid the resolution of the Riemann problems arising at the cell interfaces. We then apply the numerical scheme to a classical drift-flux problem. The obtained results are in good agreement with corresponding ones appearing in the recent literature, thus confirming the potential of the proposed scheme.

  6. Study of suprathermal electron transport in solid or compressed matter for the fast-ignitor scheme

    International Nuclear Information System (INIS)

    Perez, F.

    2010-01-01

    The inertial confinement fusion (ICF) concept is widely studied nowadays. It consists in quickly compressing and heating a small spherical capsule filled with fuel, using extremely energetic lasers. Approximately 15 years ago, the fast-ignition (FI) technique was proposed to facilitate the fuel heating by adding a particle beam (electrons generated by an ultra-intense laser) at the exact moment when the capsule compression is at its maximum. This thesis constitutes an experimental study of these electron beams generated by picosecond-scale lasers. We present new results on the characteristics of these electrons after they are accelerated by the laser (energy, divergence, etc.) as well as on their interaction with the matter they pass through. The experimental results are explained and reveal different aspects of these laser-accelerated fast electrons. Their analysis allowed for significant progress in understanding several mechanisms: how the electrons are injected into solid matter, how to measure their divergence, and how they can be automatically collimated inside compressed matter. (author)

  7. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase

  9. Proposal of a critical test of the Navier-Stokes-Fourier paradigm for compressible fluid continua.

    Science.gov (United States)

    Brenner, Howard

    2013-01-01

    A critical, albeit simple experimental and/or molecular-dynamic (MD) simulation test is proposed whose outcome would, in principle, establish the viability of the Navier-Stokes-Fourier (NSF) equations for compressible fluid continua. The latter equation set, despite its longevity as constituting the fundamental paradigm of continuum fluid mechanics, has recently been criticized on the basis of its failure to properly incorporate volume transport phenomena, as embodied in the proposed bivelocity paradigm [H. Brenner, Int. J. Eng. Sci. 54, 67 (2012)], into its formulation. Were the experimental or simulation results found to accord, even only qualitatively, with bivelocity predictions, the temperature distribution in a gas-filled, thermodynamically and mechanically isolated circular cylinder undergoing steady rigid-body rotation in an inertial reference frame would not be uniform; rather, the temperature would be higher at the cylinder wall than along the axis of rotation. This radial temperature nonuniformity contrasts with the uniformity of the temperature predicted by the NSF paradigm for these same circumstances. Easily attainable rates of rotation in centrifuges and readily available tools for measuring the expected temperature differences render experimental execution of the proposed scheme straightforward in principle. As such, measurement, via experiment or MD simulation, of, say, the temperature difference ΔT between the gas at the wall and along the axis of rotation would provide quantitative tests of both the NSF and bivelocity hydrodynamic models, whose respective solutions for the stated set of circumstances are derived in this paper. Independently of the correctness of the bivelocity model, any temperature difference observed during the proposed experiment or simulation, irrespective of magnitude, would preclude the possibility of the NSF paradigm being correct for fluid continua, except for incompressible flows.

  10. A robust H.264/AVC video watermarking scheme with drift compensation.

    Science.gov (United States)

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the most extent. Besides, the discrete cosine transform (DCT) with its energy compaction property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  11. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    Science.gov (United States)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

    The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. The new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with satisfaction of the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.

  12. Numerical analysis of a non equilibrium two-component two-compressible flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2013-09-01

    We propose and analyze a finite volume scheme to simulate a non-equilibrium two-component (water and hydrogen), two-phase (liquid and gas) flow model. In this model, local mass non-equilibrium is assumed, and thus the velocity of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is taken to be finite. The proposed finite volume scheme is fully implicit in time, together with a phase-by-phase upwind approach in space, and it discretizes the equations in their general form with gravity and capillary terms. We show that the proposed scheme satisfies the maximum principle for the saturation and the concentration of dissolved hydrogen. We establish stability results on the velocity of each phase and on the discrete gradient of the concentration. We show the convergence of a subsequence to a weak solution of the continuous equations as the size of the discretization tends to zero. To our knowledge, this is the first convergence result for a finite volume scheme in the case of two-component, two-phase compressible flow in several space dimensions.

  14. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed Sensing (CS) can use the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressive sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation minimization (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.

  15. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    Gaudeau, Y.

    2006-12-01

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has sided until now with lossless compression, most applications suffer from compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on the 3D (3-dimensional) Wavelet Transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  16. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    Science.gov (United States)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which the two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has the features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitize the encrypted information, compressive sensing can significantly reduce the data volume and, moreover, the final encrypted image is a real-valued function thanks to phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts the security and has high robustness against noise and occlusion attacks.

  17. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
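
    The pipeline described above can be shown for one 8x8 block in JPEG-like form (a single scalar quantization step is used here for brevity; JPEG itself applies a per-frequency quantization table):

      import numpy as np
      from scipy.fft import dctn, idctn

      def compress_block(block, q=20):
          """One 8x8 block: level shift, 2-D DCT, uniform quantization
          (the lossy step that lowers entropy for the entropy coder)."""
          coeffs = dctn(block - 128.0, norm='ortho')
          return np.round(coeffs / q).astype(int)

      def decompress_block(quantized, q=20):
          return idctn(quantized * q, norm='ortho') + 128.0

      block = np.random.randint(0, 256, (8, 8)).astype(float)
      quantized = compress_block(block)
      rec = decompress_block(quantized)
      print("nonzero coefficients:", np.count_nonzero(quantized))
      print("max reconstruction error:", np.abs(rec - block).max())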

  18. Research on a New Signature Scheme on Blockchain

    Directory of Open Access Journals (Sweden)

    Chao Yuan

    2017-01-01

    Full Text Available With the rise of Bitcoin, blockchain, which is the core technology of Bitcoin, has received increasing attention. Privacy preservation and performance on the blockchain are two research focuses in academia and business, but there are still some unresolved issues in both respects. An aggregate signature scheme is a digital signature scheme that supports aggregating signatures on many different messages generated by many different users. Using aggregate signatures, the overall size can be shortened by compressing multiple signatures into a single signature. In this paper, a new signature scheme for transactions on the blockchain based on aggregate signatures is proposed. It is worth noting that the elliptic curve discrete logarithm problem and bilinear maps play major roles in our signature scheme, and the security properties of the scheme are proved. In our signature scheme, the amount is hidden, especially in transactions that contain multiple inputs and outputs. Additionally, the size of the signature on a transaction is constant regardless of the number of inputs and outputs the transaction contains, which can improve the performance of signing. Finally, we give an application scenario for our signature scheme which aims to support big-data transactions on the blockchain.

  19. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian

    2017-09-03

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.

  20. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    Directory of Open Access Journals (Sweden)

    Xinghao Jiang

    2014-01-01

    Full Text Available A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the most extent. Besides, the discrete cosine transform (DCT) with its energy compaction property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  1. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black......, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...

  2. Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Xin Meng

    2014-02-01

    Full Text Available Compressed sensing is a novel signal sampling theory under the condition that the signal is sparse or compressible. Existing recovery algorithms based on gradient projection either need prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, referred to as Quasi Gradient Projection. The algorithm uses a quasi-gradient direction and two step-size schemes along this direction. The algorithm does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm can recover the signal more accurately than GPSR, which also needs no prior knowledge. Meanwhile, the algorithm has a lower computational complexity.
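
    The abstract does not spell out the quasi-gradient update, so as a reference point the sketch below implements ISTA, a standard gradient-plus-shrinkage solver for the same sparse recovery problem (a baseline from the same family, not the proposed algorithm; problem sizes are invented):

      import numpy as np

      def ista(A, y, lam=0.05, iters=500):
          """Iterative soft-thresholding for
          min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - A.T @ (A @ x - y) / L    # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256)) / 8.0   # ~unit-norm columns
      x_true = np.zeros(256)
      x_true[[10, 100, 200]] = [1.0, -2.0, 1.5]
      x_hat = ista(A, A @ x_true)
      print("support found:", np.nonzero(np.abs(x_hat) > 0.1)[0])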

  3. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

    Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here a compression system, which incorporates sensitivities of the HVS coded with SPIHT quantization, is discussed. Weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and their results compared. To evaluate the performance, the Eskicioglu chart metric is considered. Experiments were done on monochrome and color DICOM images of MRI, CT, OT, and CR, as well as natural and benchmark images. Images reconstructed through our technique showed improvement in visual quality and in the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance, perceptually and quantitatively, when compared to the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. The results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio and coding/decoding time when compared with JPEG2000 (Kakadu).

  4. Sliding-MOMP Based Channel Estimation Scheme for ISDB-T Systems

    Directory of Open Access Journals (Sweden)

    Ziji Ma

    2016-01-01

    Full Text Available Compressive sensing based channel estimation has shown its advantage of accurate reconstruction of sparse signals with fewer pilots for OFDM systems. However, the high computational cost of CS methods, due to linear programming, significantly restricts their implementation in practical applications. In this paper, we propose a reduced-complexity channel estimation scheme of modified orthogonal matching pursuit with sliding windows for the ISDB-T (Integrated Services Digital Broadcasting - Terrestrial) system. The proposed scheme can reduce the computational cost by limiting the search region as well as making effective use of the last estimation result. In addition, an adaptive tracking strategy with a sliding sampling window can improve the robustness of CS based methods and guarantee the accuracy of channel matrix reconstruction, even for fast time-variant channels. Computer simulation demonstrates the improvements in bit error rate and computational complexity for the ISDB-T system.
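
    A minimal orthogonal matching pursuit, the algorithm that the sliding-window scheme modifies, is sketched below; the comment notes where the window restriction would act (the toy channel and pilot sizes are invented for the example):

      import numpy as np

      def omp(A, y, k):
          """Greedy OMP: pick the column most correlated with the residual,
          then re-fit by least squares. The paper's variant restricts the
          correlation search to a sliding window around the taps estimated
          in the previous OFDM symbol."""
          residual, support = y.copy(), []
          for _ in range(k):
              corr = np.abs(A.conj().T @ residual)
              corr[support] = 0                 # do not re-pick an atom
              support.append(int(np.argmax(corr)))
              sub = A[:, support]
              coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
              residual = y - sub @ coef
          x = np.zeros(A.shape[1], dtype=complex)
          x[support] = coef
          return x

      # Toy sparse channel: 3 taps out of 128, observed via 32 pilots.
      rng = np.random.default_rng(1)
      A = rng.standard_normal((32, 128)) + 1j * rng.standard_normal((32, 128))
      h = np.zeros(128, dtype=complex)
      h[[3, 40, 90]] = [1.0, 0.5j, -0.3]
      print(np.nonzero(omp(A, A @ h, 3))[0])    # expected: [ 3 40 90]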

  5. Comparative study of construction schemes for proposed LINAC tunnel for ADSS

    International Nuclear Information System (INIS)

    Parchani, G.; Suresh, N.

    2003-01-01

    Radiation shielded structures involve architectural, structural and radiation shielding design. In order to attenuate radiation levels to the permissible limits, concrete has been recognized as a most versatile radiation shielding material and is being used extensively. Concrete, in addition to its radiation shielding properties, possesses very good mechanical properties, which enable its use as a structural member. The high-energy linac laboratory, which will generate radiation, needs a very large thickness of concrete for shielding. The length of the tunnel (1.00 km) is one of the most important factors in finalizing the construction scheme. In view of this, it becomes essential to explore alternative construction schemes for such structures to optimize the cost of construction. In this paper, various alternatives for the construction of the proposed linac tunnel have been studied.

  6. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.

  7. Bandwidth compression of the digitized HDTV images for transmission via satellites

    Science.gov (United States)

    Al-Asmari, A. KH.; Kwatra, S. C.

    1992-01-01

    This paper investigates a subband coding scheme to reduce the transmission bandwidth of digitized HDTV images. The HDTV signals are decomposed into seven bands, and each band is then independently encoded. The baseband is DPCM encoded, and the high bands are encoded using nonuniform Laplacian quantizers with a dead zone. By selecting the dead zone on the basis of the energy in the high bands, an acceptable image quality is achieved at an average rate of 45 Mbits/sec (Mbps). This rate is comparable to some very hardware-intensive schemes of transform compression or vector quantization proposed in the literature, whereas the subband coding scheme used in this study is considered to be of medium complexity. The 45 Mbps rate is suitable for transmission of HDTV signals via satellites.
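
    The dead-zone quantization of the high bands can be sketched in a few lines (step and dead-zone width are illustrative; the function maps each coefficient directly to its reconstruction level):

      import numpy as np

      def dead_zone_quantize(x, step=8.0, dz=12.0):
          """Coefficients with magnitude below `dz` are zeroed (most
          high-band samples in smooth regions); the rest are coarsely
          quantized with step `step` and reconstructed at mid-points."""
          mag = np.abs(x)
          level = (np.floor((mag - dz) / step) + 0.5) * step + dz
          return np.where(mag < dz, 0.0, np.sign(x) * level)

      band = np.array([-30.0, -10.0, -2.0, 0.5, 3.0, 15.0, 40.0])
      print(dead_zone_quantize(band))   # -> [-32. 0. 0. 0. 0. 16. 40.]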

  8. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC) multimedia bit streams, such as H.264. It is based on code word error diffusion and variable size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is transparent to the end users and can be deployed at any node in the network. It provides different levels of security, with the encrypted data volume fluctuating between 5.5% and 17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plaintext attacks.

  9. The τ-compression effect in γ-soft nuclei

    International Nuclear Information System (INIS)

    Pan, X.W.; Feng, D.H.

    1993-01-01

    It is shown that a particular deviation from the SO(5) level structure persists in most well-known SO(6)-like nuclei, namely, a τ-dependent compression of levels in the gamma-soft nuclei. This compression can be regarded as an extension of the Mottelson-Valatin effect in the SO(5) scheme, i.e., the reduction of the pairing interaction with increasing τ-number will cause a compression of the energy levels. We find that this phenomenon can be effectively explained in terms of the SO(6)-plus-pairing model. Using the pairing term to perturb the SO(6) Hamiltonian, a new level regularity for the SO(5) scheme is obtained.

  10. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms.

  11. Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF

    Directory of Open Access Journals (Sweden)

    Yu Ding

    2018-01-01

    Full Text Available Playing an important role in electromechanical systems, the hydraulic servo system is crucial to mechanical systems such as engineering machinery, metallurgical machinery, ships, and other equipment. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for the hydraulic servo system using the compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain. Meanwhile, CRSR substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method is able to enhance the robustness of the feature information against interference while selecting the feature combination with balanced information expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model is constructed using HyPneu and Simulink, and three fault modes are injected to generate the validation data.

  12. A symmetrical image encryption scheme in wavelet and time domain

    Science.gov (United States)

    Luo, Yuling; Du, Minghui; Liu, Junxiu

    2015-02-01

    There has been increasing concern for effective storage and secure transmission of multimedia information over the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches diffuse the data only in the spatial domain, which reduces storage efficiency. A lightweight image encryption strategy based on chaos is proposed in this paper, with the encryption process designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); the approximation coefficients, as the more important component of the image, are then diffused by secret keys generated from a spatiotemporal chaotic system, followed by the inverse IWT to construct the diffused image; finally, a plain permutation driven by the logistic map is performed on the diffused image to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that also realizes effective coding compression to satisfy storage requirements.
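
    The diffuse-in-the-transform-domain idea can be sketched briefly (assumptions: a 1-D integer Haar lifting step stands in for the paper's 2-D IWT and a logistic map for its spatiotemporal chaotic system; the final permutation stage is omitted):

      import numpy as np

      def haar_fwd(x):                     # integer lifting: predict, then update
          a, d = x[0::2].astype(int), x[1::2].astype(int)
          d = d - a                        # detail: difference from prediction
          a = a + d // 2                   # approximation: running average
          return a, d

      def haar_inv(a, d):
          a = a - d // 2
          d = d + a
          out = np.empty(2 * len(a), dtype=int)
          out[0::2], out[1::2] = a, d
          return out

      def keystream(n, x=0.3789, mu=3.99): # logistic-map byte stream
          ks = []
          for _ in range(n):
              x = mu * x * (1 - x)
              ks.append(int(x * 256) % 256)
          return np.array(ks)

      pixels = np.array([52, 55, 61, 59, 79, 61, 76, 61])
      a, d = haar_fwd(pixels)
      a_enc = a ^ keystream(len(a))        # diffuse the approximation band only
      a_dec = a_enc ^ keystream(len(a))    # receiver regenerates the same stream
      assert np.array_equal(haar_inv(a_dec, d), pixels)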

  13. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    … matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets and produces a hierarchical clustering of the point sets, which, combined with the first result, leads to a compressed representation …

  14. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by the authors: 4th-order Runge-Kutta time stepping with a 4th-order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes; the problems of category 2 using the first (TVD3) and second (UNO3-ACM) schemes; and the problem of category 5 using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for categories 1 and 2, respectively.

  15. A Data-Gathering Scheme with Joint Routing and Compressive Sensing Based on Modified Diffusion Wavelets in Wireless Sensor Networks.

    Science.gov (United States)

    Gu, Xiangping; Zhou, Xiaofeng; Sun, Yanjing

    2018-02-28

    Compressive sensing (CS)-based data gathering is a promising method to reduce energy consumption in wireless sensor networks (WSNs). Traditional CS-based data-gathering approaches require a large number of sensor nodes to participate in each CS measurement task, resulting in high energy consumption, and do not guarantee load balance. In this paper, we propose a sparsifying analysis based on modified diffusion wavelets, which exploits the spatial correlation of sensor readings in WSNs. In particular, a novel data-gathering scheme with joint routing and CS is presented. A modified ant colony algorithm is adopted, in which next-hop node selection takes a node's residual energy and path length into consideration simultaneously. Moreover, in order to speed up convergence and avoid local optima of the algorithm, an improved pheromone impact factor is put forward. More importantly, a theoretical proof is given that the resulting equivalent sensing matrix satisfies the restricted isometry property (RIP). The simulation results demonstrate that the modified diffusion wavelets sparsify the sensor signal more effectively and yield better reconstruction performance than the DFT. Furthermore, our data gathering with joint routing and CS can dramatically reduce the energy consumption of WSNs, balance the load, and prolong the network lifetime in comparison to state-of-the-art CS-based methods.
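
    The measure-then-reconstruct pipeline at the heart of such schemes can be sketched compactly (assumptions: a dense Gaussian measurement matrix and textbook orthogonal matching pursuit stand in for the paper's routing-aware measurements and modified-diffusion-wavelet basis):

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 100, 30, 4                  # sensors, measurements, sparsity

      x = np.zeros(n)                       # sparse representation of the field
      x[rng.choice(n, k, replace=False)] = rng.normal(0, 5, k)
      Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))
      y = Phi @ x                           # what the sink actually receives

      def omp(Phi, y, k):
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(Phi.T @ residual))))
              sub = Phi[:, support]
              coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
              residual = y - sub @ coef
          x_hat = np.zeros(Phi.shape[1])
          x_hat[support] = coef
          return x_hat

      print("error:", np.linalg.norm(x - omp(Phi, y, k)))   # near zero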

  16. Compressive Sensing for Feedback Reduction in Wireless Multiuser Networks

    KAUST Repository

    Elkhalil, Khalil

    2015-05-01

    User/relay selection is a simple technique that achieves spatial diversity in multiuser networks. However, for user/relay selection algorithms to make a selection decision, channel state information (CSI) from all cooperating users/relays is usually required at a central node. This requirement poses two important challenges. First, CSI acquisition generates a great deal of feedback overhead (air-time) that could result in significant transmission delays. Second, the fed-back channel information is usually corrupted by additive noise, which could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback. Motivated by these challenges, we propose a limited-feedback user/relay selection scheme based on the theory of compressed sensing. We first introduce a limited-feedback relay selection algorithm for a multicast relay network, which exploits the theory of compressive sensing to obtain the identity of the “strong” relays with limited feedback air-time; the CSI of the selected relays is then estimated using minimum mean square error estimation without any additional feedback. To minimize the effect of noise on the fed-back CSI, we introduce a back-off strategy that optimally backs off on the noisy received CSI. In the second part of the thesis, we propose a feedback reduction scheme for full-duplex relay-aided multiuser networks that permits the base station (BS) to obtain CSI from a subset of strong users under substantially reduced feedback overhead. More specifically, we cast the problem of user identification and CSI estimation as a block-sparse signal recovery problem in compressive sensing (CS). Using existing CS block recovery algorithms, we first obtain the identity of the strong users and then estimate their CSI using the best linear unbiased estimator (BLUE). Moreover, we derive the …

  17. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and toward effectively addressing the logistical difficulties of data acquisition. Traditionally, these challenges have hindered high-resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking …

  18. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error-bound guarantee to compress the data while retaining the desired variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
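
    The PCA stage can be sketched as follows (assumption: a retained-variance threshold stands in for the paper's exact error-bound rule, and the clustering and cluster-head selection logic is omitted):

      import numpy as np

      def pca_compress(data, var_keep=0.99):
          mean = data.mean(axis=0)
          centered = data - mean
          U, s, Vt = np.linalg.svd(centered, full_matrices=False)
          energy = np.cumsum(s**2) / np.sum(s**2)
          k = int(np.searchsorted(energy, var_keep)) + 1   # smallest k meeting bound
          return centered @ Vt[:k].T, Vt[:k], mean         # transmit scores + basis

      rng = np.random.default_rng(0)
      base = rng.normal(size=(200, 1))                     # shared latent signal
      readings = base @ rng.normal(size=(1, 10)) + 0.01 * rng.normal(size=(200, 10))
      scores, basis, mean = pca_compress(readings)
      recon = scores @ basis + mean
      print(basis.shape[0], "components, MSE:", np.mean((recon - readings) ** 2))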

  20. Linear perturbation of spherically symmetric flows: a first-order upwind scheme for the gas dynamics equations in Lagrangian coordinates

    International Nuclear Information System (INIS)

    Clarisse, J.M.

    2007-01-01

    A numerical scheme for computing linear Lagrangian perturbations of spherically symmetric flows of gas dynamics is proposed. This explicit first-order scheme uses the Roe method in Lagrangian coordinates for computing the radial spherically symmetric mean flow, and its linearized version for treating the three-dimensional linear perturbations. Fulfillment of the discrete formulations of the geometric conservation law is ensured for both the mean flow and its perturbation. The scheme's capabilities are illustrated by the computation of free-surface mode evolutions at the boundaries of a spherical hollow shell undergoing a homogeneous cumulative compression, showing excellent agreement with reference results. (author)

  1. Well-balanced schemes for the Euler equations with gravitation: Conservative formulation using global fluxes

    Science.gov (United States)

    Chertock, Alina; Cui, Shumo; Kurganov, Alexander; Özcan, Şeyma Nur; Tadmor, Eitan

    2018-04-01

    We develop a second-order well-balanced central-upwind scheme for the compressible Euler equations with a gravitational source term. Here, we advocate a new paradigm based on a purely conservative reformulation of the equations using global fluxes. The proposed scheme is capable of exactly preserving steady-state solutions expressed in terms of a nonlocal equilibrium variable. A crucial step in the construction of the second-order scheme is a well-balanced piecewise linear reconstruction of equilibrium variables combined with a well-balanced central-upwind evolution in time, which is adapted to reduce the amount of numerical viscosity when the flow is at or near a steady-state regime. We show the performance of our newly developed central-upwind scheme and demonstrate the importance of the exact balance between the fluxes and gravitational forces in a series of one- and two-dimensional examples.

  2. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr [Argonne National Lab., IL (United States); Kenyon, R.V. [Illinois Univ., Chicago, IL (United States)

    1996-08-01

    In this paper a method for compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, whereby the subbands convey the necessary frequency-domain information.

  3. A hybrid video compression based on zerotree wavelet structure

    International Nuclear Information System (INIS)

    Kilic, Ilker; Yilmaz, Reyat

    2009-01-01

    A video compression algorithm comparable to the standard techniques at low bit rates is presented in this paper. Overlapping block motion compensation (OBMC) is combined with a discrete wavelet transform, which is followed by Lloyd-Max quantization and a zerotree wavelet (ZTW) structure. The novel feature of this coding scheme is the combination of hierarchical finite state vector quantization (HFSVQ) with the ZTW to encode the quantized wavelet coefficients. It is seen that the proposed video encoder (ZTW-HFSVQ) performs better than MPEG-4 and Zerotree Entropy Coding (ZTE). (author)

  4. Design of a Biorthogonal Wavelet Transform Based R-Peak Detection and Data Compression Scheme for Implantable Cardiac Pacemaker Systems.

    Science.gov (United States)

    Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama

    2018-04-19

    Bradycardia can be modulated using a cardiac pacemaker, an implantable medical device that monitors and regulates the patient's cardiac health. The device is widely used to detect and monitor the patient's heart rate; the data it collects carry the highest authenticity assurance and are convenient for subsequent electrical stimulation. The ECG detector is one of the most important elements of the pacemaker, and its modern digital form is more efficient and accurate while consuming less power. In this work, a joint algorithm based on the biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and for compressing the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture, consisting of three lowpass filters in series with a highpass filter. The lowpass and highpass filters are realized using a linear-phase structure, which reduces the hardware cost of the proposed design by approximately 50%. The location of the R-peak is then found by comparing the denoised ECG signal with a threshold value. The proposed R-peak detector achieves a sensitivity and positive predictivity of 99.75 and 99.98, respectively, on the MIT-BIH arrhythmia database, together with a comparatively low data error rate (DER) of 0.002. Using RLE to compress the detected ECG data achieves a high compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results are compared with existing methods such as Huffman coding/simple predictor, Huffman coding/adaptive predictor, and slope predictor/fixed-length packaging.
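
    The RLE stage is simple enough to sketch directly (assumption: RLE over an already quantized sample stream; the filter bank, denoising and R-peak logic are not reproduced):

      def rle_encode(samples):
          runs, prev, count = [], samples[0], 1
          for s in samples[1:]:
              if s == prev:
                  count += 1
              else:
                  runs.append((prev, count))
                  prev, count = s, 1
          runs.append((prev, count))
          return runs

      def rle_decode(runs):
          return [v for v, n in runs for _ in range(n)]

      quantized = [0, 0, 0, 1, 1, 5, 5, 5, 5, 0, 0]   # e.g. a quantized ECG segment
      runs = rle_encode(quantized)
      assert rle_decode(runs) == quantized
      print(runs)   # [(0, 3), (1, 2), (5, 4), (0, 2)]: fewer symbols to store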

  5. Innovative ICF scheme-impact fast ignition

    International Nuclear Information System (INIS)

    Murakami, M.; Nagatomo, H.; Sakaiya, T.; Karasik, M.; Gardner, J.; Bates, J.

    2007-01-01

    A totally new ignition scheme for ICF, impact fast ignition (IFI), is proposed [1], in which the compressed DT main fuel is ignited by the impact collision of another fraction of separately imploded DT fuel, which is accelerated in a hollow conical target. Two-dimensional hydrodynamic simulation results in full geometry are presented, demonstrating some key physical parameters of the impact shell dynamics: an implosion velocity of 10^8 cm/s, a compressed density of 200-300 g/cm^3, and a converted temperature beyond 5 keV. As the first step toward the proof-of-principle of IFI, we have conducted preliminary experiments on the GEKKO XII/HYPER laser system to achieve a hyper-velocity of the order of 10^8 cm/s. As a result we have observed the highest velocity ever achieved, 6.5 x 10^7 cm/s. Furthermore, we have also performed the first integrated experiments using the target and observed a substantial neutron yield. Reference: [1] M. Murakami and H. Nagatomo, Nucl. Instrum. Meth. Phys. Res. A 544 (2005) 67

  6. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by electronically coupling the renewable energy source. The proposed adaptive protection scheme provides suitable protection to the microgrid under various fault conditions irrespective of its operating mode, namely grid-connected mode or islanded mode. The outstanding aspect of the scheme is that it monitors the microgrid and instantly updates the relay fault current according to variations that occur in the system. The scheme also employs auto-reclosers, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time-domain simulations carried out in the PSCAD/EMTDC software environment.

  7. Optical image encryption scheme with multiple light paths based on compressive ghost imaging

    Science.gov (United States)

    Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan

    2018-02-01

    An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of the logistic map algorithm and uploaded to a spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the beams illuminate the secret images, which have beforehand been converted into sparse images by the discrete wavelet transform. The secret images are thus simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary, and in the decryption process they serve as the main keys together with the original POM and the logistic map coefficient. The proposed method significantly decreases the required storage space and improves the security of the system. The feasibility, security and robustness of the method are further analysed through computer simulations.
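
    The ghost-imaging core can be sketched with a toy example (assumptions: binary random patterns and the basic correlation estimate stand in for the paper's multi-beam optics, DWT sparsification and compressive solver):

      import numpy as np

      rng = np.random.default_rng(0)
      obj = np.zeros((16, 16)); obj[4:12, 6:10] = 1.0     # toy secret image
      M = 4000                                            # number of patterns
      patterns = rng.integers(0, 2, (M, 16, 16)).astype(float)
      bucket = (patterns * obj).sum(axis=(1, 2))          # single-pixel values

      # correlation recovery: patterns weighted by centred bucket measurements
      recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
      print(np.corrcoef(recon.ravel(), obj.ravel())[0, 1])  # approaches 1 as M grows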

  8. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    Science.gov (United States)

    Xiao, Xudong

    Three LES/RANS hybrid schemes are proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure is designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and optimized for the IBM-SP3 parallel machine. The scheme was first validated on a flat plate, where it was shown that the blending function must be monotonic to prevent RANS regions from appearing inside the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of a grid-independent blending function and yielded further improved agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme with the grid-independent blending function predicted the flow field well in both the separation and recovery regions. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.

  9. A socio-economic assessment of proposed road user charging schemes in Copenhagen

    DEFF Research Database (Denmark)

    Rich, Jeppe; Nielsen, Otto Anker

    2007-01-01

    Road pricing, congestion charging, toll systems and other road charging instruments are intensively discussed in many countries. Although many partial analyses of the consequences have been published, few overall socio-economic analyses have been carried out. The article presents such a socio-economic analysis of four different proposed road pricing schemes for the Copenhagen area. The purpose was to assess all benefits and costs involved, including impacts on traffic and environment, maintenance and financing costs as well as tax distortion effects. It was concluded that the socio-economic surplus of the projects depends crucially on the congestion level. With the current traffic level, road pricing would not yet be socially expedient in Copenhagen. However, if the opening year is postponed to 2015, the two most favourable schemes will turn positive. The analyses also showed that the magnitude of demand …

  10. The evolution of emissions trading in the EU. Tensions between national trading schemes and the proposed EU directive

    International Nuclear Information System (INIS)

    Boemare, Catherine; Quirion, Philippe; Sorrell, Steve

    2003-12-01

    The EU is pioneering the development of greenhouse gas emissions trading, but there is a tension between the 'top-down' and 'bottom-up' evolution of trading schemes. While the Commission is introducing a European emissions trading scheme (EU ETS) in 2005, several member states have already introduced negotiated agreements that include trading arrangements. Typically, these national schemes have a wider scope than the proposed EU directive and allow firms to use relative rather than absolute targets. The coexistence of 'top-down' and 'bottom-up' trading schemes may create some complex problems of policy interaction. This paper explores the potential interactions between the EU ETS and the negotiated agreements in France and UK and uses these to illustrate some important generic issues. The paper first describes the proposed EU directive, outlines the UK and French policies and compares their main features to the EU ETS. It then discusses how the national and European policies may interact in practice. Four issues are highlighted, namely, double regulation, double counting of emission reductions, equivalence of effort and linking trading schemes. The paper concludes with some recommendations for the future development of UK and French climate policy

  11. Compressed natural gas transportation by utilizing FRP composite pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, S.C. [Trans Ocean Gas Inc., St. John's, NF (Canada)

    2004-07-01

    This paper discusses the Trans Ocean Gas (TOG) method for transporting compressed natural gas (CNG). As demand for natural gas increases, and with half of the world's reserves considered stranded, a method to transport natural gas by ship is needed, and CNG transportation is widely viewed as a viable option. Transported as CNG, stranded gas reserves can be delivered to existing markets or can create new natural gas markets not accessible to liquefied natural gas (LNG). In contrast to LNG, compressed gas requires no processing to offload. TOG proposes that CNG be transported in fiber-reinforced plastic (FRP) pressure vessels, which overcome the deficiencies of proposed steel-based systems. FRP pressure vessels have been proven safe and reliable in critical applications in the national defense, aerospace, and natural gas vehicle industries. They are lightweight, highly reliable, have very safe failure modes, are corrosion resistant, and have excellent low-temperature characteristics. Under TOG's scheme, natural gas can be stored at two thirds the density of LNG without costly processing. TOG's proposed design and testing of a CNG system are reviewed in detail. 1 fig.

  12. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    Science.gov (United States)

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

    Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, so the purpose of compression is not fully served. In this paper a rapid 4-D lossy compression method for functional magnetic resonance imaging (fMRI) images is proposed, built from data rearrangement, wavelet-based contourlet transformation and a modified binary array technique. In the proposed method, the image slices of fMRI data are first rearranged so that redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT), in which the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. Since WBCT has more directions, the relationships between coefficients change, and the resulting differences in parent-child relationships are handled by a repositioning algorithm. The repositioned coefficients are then quantized, and the quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method has been tested on fMRI images; the results indicate that its processing time is lower than that of the existing wavelet-based set partitioning in hierarchical trees and set partitioning embedded block coder (SPECK) compression schemes [1], while also yielding better compression performance than the wavelet-based SPECK coder. The objective results show that the proposed method attains a good compression ratio while maintaining a peak signal-to-noise ratio (PSNR) above 70 for all the tested sequences; the SSIM value is equal to 1 and the CC value is greater than 0.9 for all …
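
    The "most frequently occurring value coded only once" idea can be sketched as follows (assumption: the hypothetical encoder below, storing the dominant value once together with a position bitmap and an exception list, is only an illustration; the paper's modified binary array technique differs in detail):

      from collections import Counter

      def mfv_encode(seq):
          mfv, _ = Counter(seq).most_common(1)[0]
          bitmap = [1 if v == mfv else 0 for v in seq]    # packs to n/8 bytes
          exceptions = [v for v in seq if v != mfv]
          return mfv, bitmap, exceptions

      def mfv_decode(mfv, bitmap, exceptions):
          it = iter(exceptions)
          return [mfv if b else next(it) for b in bitmap]

      seq = [7, 7, 7, 3, 7, 7, 9, 7, 7, 7]                # e.g. quantized coefficients
      assert mfv_decode(*mfv_encode(seq)) == seq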

  13. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics

  14. Research on lossless compression of true color RGB image with low time and space complexity

    Science.gov (United States)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    Correlated redundancy in space and energy is eliminated using a DWT lifting scheme, and the complexity of the image is reduced using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed in this paper, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm encodes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression. Compared with lossless JPEG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc.), SPIHT and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV (2.20 GHz CPU, 256 MB RAM), the coding speed of the proposed coder is about 21 times that of SPIHT, an efficiency gain of about 166%, and its decoding speed is about 17 times that of SPIHT, an efficiency gain of about 128%.
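
    As one concrete example of an "algebraic transform among the RGB components", the reversible colour transform of JPEG2000 decorrelates the channels using integer arithmetic only (assumption: this is a representative transform of that kind, not necessarily the one used in the paper):

      def rct_forward(r, g, b):
          y = (r + 2 * g + b) // 4       # luma-like component
          u = b - g                      # chroma differences carry little energy
          v = r - g
          return y, u, v

      def rct_inverse(y, u, v):
          g = y - (u + v) // 4           # exact, thanks to floor division
          return v + g, g, u + g         # r, g, b

      for rgb in [(12, 200, 43), (255, 0, 128)]:
          assert rct_inverse(*rct_forward(*rgb)) == rgb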

  15. Multidimensional flux-limited advection schemes

    International Nuclear Information System (INIS)

    Thuburn, J.

    1996-01-01

    A general method for building multidimensional shape preserving advection schemes using flux limiters is presented. The method works for advected passive scalars in either compressible or incompressible flow and on arbitrary grids. With a minor modification it can be applied to the equation for fluid density. Schemes using the simplest form of the flux limiter can cause distortion of the advected profile, particularly sideways spreading, depending on the orientation of the flow relative to the grid. This is partly because the simple limiter is too restrictive. However, some straightforward refinements lead to a shape-preserving scheme that gives satisfactory results, with negligible grid-flow angle-dependent distortion
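
    The one-dimensional building block of such schemes can be sketched as follows (assumptions: constant advection velocity, periodic boundaries and a van Leer limiter; the multidimensional construction and the refinements described above are not reproduced):

      import numpy as np

      def limiter(r):                     # van Leer: smooth, shape preserving
          return (r + np.abs(r)) / (1 + np.abs(r))

      def step(q, c):                     # one update with CFL number 0 < c <= 1
          dq = np.roll(q, -1) - q
          dq_up = q - np.roll(q, 1)
          r = np.divide(dq_up, dq, out=np.ones_like(q), where=np.abs(dq) > 1e-12)
          # low-order upwind flux plus limited anti-diffusive correction
          flux = c * q + 0.5 * c * (1 - c) * limiter(r) * dq
          return q - (flux - np.roll(flux, 1))

      q = np.where(np.abs(np.linspace(0, 1, 100) - 0.3) < 0.1, 1.0, 0.0)
      for _ in range(200):
          q = step(q, 0.5)
      print(q.min(), q.max())   # stays within [0, 1]: no new extrema appear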

  16. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  17. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain, so that the computational burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors extracted from the code stream of an image. We modify the compression-based texture merging (CTM) algorithm to alleviate the over-merging problem by making use of rate-distortion information. Experimental results show that MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure, and has the advantage of saving computational cost, as the segmentation results are satisfactory even at low rates of bits per pixel (bpp).

  18. Modified Aggressive Packet Combining Scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2010-06-01

    In this letter, a few schemes are presented to improve the performance of the aggressive packet combining scheme (APC). To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested in recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining (PRPC, proposed by Bhunia) scheme. These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed the idea of APC for error control in wireless networks, with the basic objective of error control in the uplink wireless data network. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)

  19. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  20. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding, but the process then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at some chosen iteration). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal

  1. Lifting scheme-based method for joint coding of 3D stereo digital cinema with luminance correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the lifting scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvement in performance and complexity compared to recently proposed methods.
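
    The predict/update structure that the hybrid step plugs into can be sketched as follows (assumption: the integer LeGall 5/3 lifting pair with periodic boundaries shown here is a generic stand-in; in the paper the predict step is replaced by disparity compensation with luminance correction):

      import numpy as np

      def ls_forward(x):                       # x: even-length 1-D signal
          even, odd = x[0::2].astype(int), x[1::2].astype(int)
          pred = (even + np.roll(even, -1)) // 2   # predict odds from even pair
          detail = odd - pred
          approx = even + (np.roll(detail, 1) + detail + 2) // 4   # update
          return approx, detail

      def ls_inverse(approx, detail):
          even = approx - (np.roll(detail, 1) + detail + 2) // 4
          odd = detail + (even + np.roll(even, -1)) // 2
          out = np.empty(2 * len(even), dtype=int)
          out[0::2], out[1::2] = even, odd
          return out

      x = np.array([3, 7, 1, 8, 2, 9, 4, 6])
      assert np.array_equal(ls_inverse(*ls_forward(x)), x)   # perfectly invertible

    Because each lifting step only adds or subtracts a computable prediction, any predictor, including a disparity-compensated one, keeps the transform exactly invertible.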

  2. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.

    2013-12-01

    Linearization of user-equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power-efficient operation of an orthogonal frequency division multiple access cognitive radio system and to the realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Overdriven amplifiers result in in-band distortions and out-of-band interference. The out-of-band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  4. Application of kinetic flux vector splitting scheme for solving multi-dimensional hydrodynamical models of semiconductor devices

    Science.gov (United States)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid, and they play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.

  5. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems: modeling of video traffic in packet networks, low-rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing. The lossy + lossless scheme is an extension of work previously done under this grant and provides a simple technique for incorporating browsing capability. The low-rate coding scheme is a simple variation on the standard discrete cosine transform (DCT) coding approach; in spite of its simplicity, it provides surprisingly high-quality reconstructions. The modeling approach is borrowed from the speech recognition literature and seems promising in that it provides a simple way of characterizing the second-order behavior of a particular coding scheme. Details of each are presented.

  6. An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics

    International Nuclear Information System (INIS)

    Kühnlein, Christian; Smolarkiewicz, Piotr K.

    2017-01-01

    An advancement of the unstructured-mesh finite-volume MPDATA (Multidimensional Positive Definite Advection Transport Algorithm) is presented that formulates the error-compensative pseudo-velocity of the scheme to rely only on face-normal advective fluxes to the dual cells, in contrast to the full vector employed in previous implementations. This is essentially achieved by expressing the temporal truncation error underlying the pseudo-velocity in a form consistent with the flux-divergence of the governing conservation law. The development is especially important for integrating fluid dynamics equations on non-rectilinear meshes whenever face-normal advective mass fluxes are employed for transport compatible with mass continuity—the latter being essential for flux-form schemes. In particular, the proposed formulation enables large-time-step semi-implicit finite-volume integration of the compressible Euler equations using MPDATA on arbitrary hybrid computational meshes. Furthermore, it facilitates multiple error-compensative iterations of the finite-volume MPDATA and improved overall accuracy. The advancement combines straightforwardly with earlier developments, such as the nonoscillatory option, the infinite-gauge variant, and moving curvilinear meshes. A comprehensive description of the scheme is provided for a hybrid horizontally-unstructured vertically-structured computational mesh for efficient global atmospheric flow modelling. The proposed finite-volume MPDATA is verified using selected 3D global atmospheric benchmark simulations, representative of hydrostatic and non-hydrostatic flow regimes. Besides the added capabilities, the scheme retains fully the efficacy of established finite-volume MPDATA formulations.

  8. Pressure correction schemes for compressible flows: application to barotropic Navier-Stokes equations and to the drift-flux model

    Energy Technology Data Exchange (ETDEWEB)

    Gastaldo, L

    2007-11-15

    We develop in this PhD thesis a simulation tool for bubbly flows encountered in some late phases of a core-melt accident in pressurized water reactors, when the flow of molten core and vessel structures comes to interact chemically with the concrete of the containment floor. The physical modelling is based on the so-called drift-flux model, consisting of mass and momentum balance equations for the mixture (Navier-Stokes equations) and a mass balance equation for the gaseous phase. First, we propose a pressure correction scheme for the compressible Navier-Stokes equations based on mixed non-conforming finite elements. An ad hoc discretization of the advection operator, by a finite volume technique based on a dual mesh, ensures the stability of the velocity prediction step. A priori estimates for the velocity and the pressure yield the existence of the solution. We prove that this scheme is stable, in the sense that the discrete entropy is decreasing. For the conservation equation of the gaseous phase, we build a finite volume discretization that satisfies a discrete maximum principle, from which we deduce the existence and uniqueness of the discrete solution. Finally, on the basis of these works, a conservative and monotone scheme that is stable in the low Mach number limit is built for the drift-flux model. This scheme moreover enjoys the following property: the algorithm preserves constant pressure and velocity across moving interfaces between phases (i.e. contact discontinuities of the underlying hyperbolic system). To satisfy this property at the discrete level, we build an original pressure correction step that couples the mixture mass balance equation with the transport terms of the gas mass balance equation, the remaining terms of the gas mass balance being taken into account with a splitting method. We prove the existence of a discrete solution for the pressure correction step. Numerical results are presented; they …

  9. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to the comparison of large sequence data and protein structure data. Since it is still difficult to compare the global structures of large biological networks, it is reasonable to try to apply data compression methods to the comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed, because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. The similarity of two networks is then measured by the compression ratio of the concatenated networks. The proposed methods are applied to the comparison of metabolic networks of several organisms (H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis) and compared with an existing method. The results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
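
    A minimal sketch of compression-based similarity (assumption: the simple rule below, repeatedly contracting the most frequent labelled edge, is a stand-in for the CompressEdge/CompressVertices procedures):

      from collections import Counter

      def compressed_size(edges):
          edges, rules = list(edges), 0
          while edges:
              (a, b), n = Counter(edges).most_common(1)[0]
              if n < 2 or a == b:
                  break
              merged, rules = a + "|" + b, rules + 1   # record one contraction
              new = []
              for u, v in edges:
                  if (u, v) == (a, b):
                      continue                         # contracted edges vanish
                  new.append((merged if u in (a, b) else u,
                              merged if v in (a, b) else v))
              edges = new
          return rules + len(edges)                    # rules + residual edges

      def similarity(e1, e2):
          # lower ratio = concatenation compresses better = more similar networks
          return compressed_size(e1 + e2) / (compressed_size(e1) + compressed_size(e2))

      net_a = [("A", "B"), ("A", "B"), ("B", "C")]
      net_b = [("A", "B"), ("A", "B"), ("B", "C")]
      net_c = [("X", "Y"), ("Y", "Z"), ("Z", "X")]
      print(similarity(net_a, net_b), "<", similarity(net_a, net_c))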

  10. Understanding the effect of an emissions trading scheme on electricity generator investment and retirement behaviour: the proposed carbon pollution reduction scheme

    Energy Technology Data Exchange (ETDEWEB)

    Lambie, N.R. [Australian National University, Canberra, ACT (Australia). Crawford School of Economics & Government

    2010-04-15

    The objective of a greenhouse gas (GHG) emissions trading scheme (ETS) is to reduce emissions by transitioning the economy away from the production and consumption of goods and services that are GHG intensive. A GHG ETS has been a public policy issue in Australia for over a decade. The latest policy initiative on an ETS is the proposed Carbon Pollution Reduction Scheme (CPRS). A substantial share of Australia's total GHG reduction under the CPRS is expected to come from the electricity generation sector. This paper surveys the literature on investment behaviour under an ETS. It specifically focuses on the relationship between the design of an ETS and a generator's decisions to invest in low emissions plant and retire high emissions plant. The proposed CPRS provides the context for presenting key findings along with the implications for the electricity generation sector's transition to lower emissions plant. The literature shows that design features such as the method of allocating permits, the stringency of the emissions cap along with permit price uncertainty, provisions for banking, borrowing and internationally trading permits, and the credibility of emissions caps and policy uncertainty may all significantly impact on the investment and retirement behaviour of generators.

  11. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to some 2000 terabytes of information. Without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient, while compressing these images with the JPEG standard introduces artefacts even at low compression rates. The FBI therefore chose in 1993 a compression scheme based on a wavelet transform followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform produces 64 subbands, many bands receive only a few bits even at the archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that is more consistent with the theory. We then discuss some implementation aspects, particularly the new entropy coder and the features that enable applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
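
    The high-rate bit-allocation assumption mentioned above has a closed form worth sketching (assumption: the textbook rule b_k = R + 0.5*log2(var_k/GM), with GM the geometric mean of the subband variances and negative budgets clipped to zero; the encoder's actual procedure adds quantizer and codebook details):

      import numpy as np

      def allocate_bits(variances, rate):
          # distribute an average of `rate` bits/sample across the subbands
          v = np.asarray(variances, dtype=float)
          gm = np.exp(np.mean(np.log(v)))           # geometric mean of variances
          bits = rate + 0.5 * np.log2(v / gm)       # high-rate optimum
          return np.clip(bits, 0, None)             # no negative bit budgets

      subband_var = [900.0, 400.0, 100.0, 25.0, 4.0, 1.0]
      print(allocate_bits(subband_var, rate=0.75))  # busy subbands get more bits

    After clipping, a practical allocator re-normalizes so that the average still meets the target rate.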

  12. Multi-criteria decision-making on assessment of proposed tidal barrage schemes in terms of environmental impacts.

    Science.gov (United States)

    Wu, Yunna; Xu, Chuanbo; Ke, Yiming; Chen, Kaifeng; Xu, Hu

    2017-12-15

    For tidal range power plants to be sustainable, the environmental impacts caused by the implementation of the various tidal barrage schemes must be assessed before construction. However, several problems exist in current research: first, the evaluation criteria for the environmental impact assessment (EIA) of tidal barrage schemes are not adequate; second, the uncertainty of criteria information is not processed properly; third, the correlation among criteria is unreasonably measured. Hence the contributions of this paper are as follows: first, an evaluation criteria system is established covering the three dimensions of hydrodynamic, biological and morphological impacts; second, a cloud model is applied to describe the uncertainty of criteria information; third, the Choquet integral with respect to a λ-fuzzy measure is introduced to measure the correlation among criteria. On these bases, a multi-criteria decision-making framework for tidal barrage scheme EIA is established to select the optimal scheme. Finally, a case study demonstrates the effectiveness of the proposed framework. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Joint compression and encryption using chaotically mutated Huffman trees

    Science.gov (United States)

    Hermassi, Houcemeddine; Rhouma, Rhouma; Belghith, Safya

    2010-10-01

    This paper introduces a new scheme for joint compression and encryption using the Huffman codec. A basic tree is first generated for a given message, and then, based on a keystream generated from a chaotic map and depending on the input message, the basic tree is mutated without changing the statistical model. Hence a symbol can be coded by more than one codeword of the same length. The security of the scheme is tested against the known-plaintext attack and the brute-force attack. A performance analysis, including encryption/decryption speed, additional computational complexity and compression ratio, is given.
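
    A hedged sketch of the mutation idea described above: a logistic-map keystream decides, at each internal node of the Huffman tree, whether the 0/1 branch labels are swapped. Codeword lengths, and hence the statistical model and compression ratio, are unchanged, while the actual codewords depend on the key. The paper's exact mutation rule and key schedule are not reproduced; the keystream construction and function names here are illustrative.

```python
# Illustrative sketch: Huffman tree whose 0/1 branch labels are permuted by a
# chaotic (logistic map) keystream. Depths are untouched, so codeword lengths
# and compression are identical to the unmutated code.
import heapq
from collections import Counter
from itertools import count

def logistic_bits(x0, r=3.99):
    x = x0
    while True:
        x = r * x * (1.0 - x)          # chaotic logistic map iteration
        yield 1 if x > 0.5 else 0

def mutated_huffman_code(message, x0):
    freq = Counter(message)
    tick = count()                     # tie-breaker so heapq never compares trees
    heap = [(f, next(tick), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    root = heap[0][2]
    ks, code = logistic_bits(x0), {}
    def walk(node, prefix):
        if not isinstance(node, tuple):        # leaf: a symbol
            code[node] = prefix or "0"
            return
        left, right = node
        if next(ks):                           # keystream bit: swap branch labels
            left, right = right, left
        walk(left, prefix + "0")
        walk(right, prefix + "1")
    walk(root, "")
    return code

print(mutated_huffman_code("abracadabra", x0=0.3141))
print(mutated_huffman_code("abracadabra", x0=0.2718))  # same lengths, typically new codewords
```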

  14. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptually based encoding scheme, traditionally used to compress audio signals. The ubiquity of MP3 compression (e.g., in portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
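
    The percent-residual-difference metric mentioned above is simple to state; the sketch below computes it. The MP3 encode/decode round trip itself is omitted, and the signals are synthetic stand-ins.

```python
# Sketch of the fidelity metric used above: percent residual difference (PRD)
# between an original signal x and its decoded version y, plus the compression
# ratio. In practice y would come from an MP3 encode/decode round trip.
import numpy as np

def prd(x, y):
    """Percent residual difference: 100 * ||x - y|| / ||x||."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return 100.0 * np.linalg.norm(x - y) / np.linalg.norm(x)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)              # stand-in for a myoelectric record
y = x + 0.05 * rng.standard_normal(x.size)   # stand-in for codec error
print(f"PRD = {prd(x, y):.2f}%")
print("CR  =", compression_ratio(10_000 * 2, 2_500))   # made-up byte counts
```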

  15. A Computational model for compressed sensing RNAi cellular screening

    Directory of Open Access Journals (Sweden)

    Tan Hua

    2012-12-01

    Full Text Available Abstract Background RNA interference (RNAi) has become an increasingly important and effective genetic tool to study the function of target genes by suppressing specific genes of interest. This systems approach helps identify signaling pathways and cellular phase types by tracking intensity and/or morphological changes of cells. The traditional RNAi screening scheme, in which one siRNA is designed to knock down one specific mRNA target, needs a large library of siRNAs and turns out to be time-consuming and expensive. Results In this paper, we propose a conceptual model, called compressed sensing RNAi (csRNAi), which employs a unique combination of a group of small interfering RNAs (siRNAs) to knock down a much larger set of genes. This strategy is based on the fact that one gene can be partially bound by several siRNAs and, conversely, one siRNA can bind to a few genes with distinct binding affinities. This model constructs a multi-to-multi correspondence between siRNAs and their targets, with far fewer siRNAs than mRNA targets, compared with the conventional scheme. Mathematically this problem involves an underdetermined system of equations (linear or nonlinear), which is ill-posed in general. However, the recently developed compressed sensing (CS) theory can solve this problem. We present a mathematical model to describe the csRNAi system based on both CS theory and biological concerns. To build this model, we first search nucleotide motifs in a target gene set. Then we propose a machine learning based method to find effective siRNAs with novel features, such as image features and speech features, to describe an siRNA sequence. Numerical simulations show that we can reduce the siRNA library to one third of that in the conventional scheme. In addition, the features used to describe siRNAs substantially outperform the existing ones. Conclusions This csRNAi system is very promising in saving both time and cost for large-scale RNAi
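
    The csRNAi recovery step amounts to solving an underdetermined system y = Ax for a sparse x. The sketch below uses orthogonal matching pursuit (OMP), a standard CS solver, as an illustration; it is not the paper's own pipeline, and the pool sizes are invented to mirror the threefold library reduction quoted above.

```python
# Illustrative CS recovery via orthogonal matching pursuit (OMP): pick the
# column most correlated with the residual, re-fit by least squares, repeat.
import numpy as np

def omp(A, y, sparsity):
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n_genes, n_pools, k = 90, 30, 3                 # one third as many pools as genes
A = rng.standard_normal((n_pools, n_genes)) / np.sqrt(n_pools)
x_true = np.zeros(n_genes)                      # sparse "gene activity" vector
x_true[rng.choice(n_genes, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, sparsity=k)
print("max recovery error:", np.abs(x_hat - x_true).max())
```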

  16. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  17. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present Daubechies (9, 7) filter bank. This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the observation that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
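
    For readers unfamiliar with lifting, a minimal sketch of a 1-D lifting step follows. It uses the reversible LeGall 5/3 wavelet for brevity (the architecture above targets the Daubechies 9/7 bank, which adds two more lifting steps and a scaling stage), and periodic extension via np.roll stands in for the symmetric extension used in practice.

```python
# One 1-D level of the integer 5/3 lifting scheme: split into even/odd samples,
# then a predict step (details) and an update step (smooth approximation).
import numpy as np

def lift_53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd sample minus prediction from even neighbours.
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: smooth = even sample plus correction from neighbouring details.
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def lift_53_inverse(s, d):
    even = s - ((np.roll(d, 1) + d + 2) >> 2)    # undo update
    odd = d + ((even + np.roll(even, -1)) >> 1)  # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.arange(16) ** 2 % 37                      # arbitrary integer test signal
s, d = lift_53_forward(x)
assert np.array_equal(lift_53_inverse(s, d), x)  # perfectly reversible
print("smooth:", s, "\ndetail:", d)
```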

  18. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  19. Comment on "Proposal of a critical test of the Navier-Stokes-Fourier paradigm for compressible fluid continua".

    Science.gov (United States)

    Felderhof, B U

    2013-08-01

    Recently, a critical test of the Navier-Stokes-Fourier equations for compressible fluid continua was proposed [H. Brenner, Phys. Rev. E 87, 013014 (2013)]. It was shown that the equations of bivelocity hydrodynamics imply that a compressible fluid in an isolated rotating circular cylinder attains a nonequilibrium steady state with a nonuniform temperature increasing radially with distance from the axis. We demonstrate that statistical mechanical arguments, involving Hamiltonian dynamics and ergodicity due to irregularity of the wall, lead instead to a thermal equilibrium state with uniform temperature. This is the situation to be expected in experiment.

  20. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
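
    A minimal Bloom filter sketch illustrating the encode/decode idea above: reads are hashed into a bit array, and membership is later tested with read-length windows of the reference. BARCODE's cascade of filters and its false-positive handling are not reproduced; sizes and hash construction are illustrative.

```python
# Toy Bloom filter: k hash functions derived from SHA-256 set/test bits in a
# fixed-size bit array. Decoding queries the filter with reference windows.
import hashlib

class BloomFilter:
    def __init__(self, n_bits, n_hashes):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

reference = "ACGTTGCATGCAACGTACGTAGCTAGCTGATC"
reads = [reference[i:i + 8] for i in (0, 5, 17)]      # toy 8-bp "reads"
bf = BloomFilter(n_bits=1 << 16, n_hashes=4)
for r in reads:
    bf.add(r)                                          # encode: hash reads in
# Decode: slide a read-length window over the reference and query the filter.
recovered = {reference[i:i + 8] for i in range(len(reference) - 7)
             if reference[i:i + 8] in bf}
print(recovered >= set(reads))     # True (up to Bloom false positives)
```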

  1. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of a longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require high-density heavy-ion beams and short pulsed bunches, which can be produced by non-adiabatic compression of the bunch, implemented by a fast 90-degree rotation in the longitudinal phase space. The phase-space rotation in fast compression is initiated by a fast jump of the RF voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ ions at 250 MeV/u is compressed from 200 ns to 50 ns. The construction and RF properties of the cavity are also simulated and calculated with the MAFIA code. The operating frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  2. Proposal for a new LEIR slow extraction scheme dedicated to biomedical research

    CERN Document Server

    Garonna, A; Abler, D

    2014-01-01

    A proposal is here presented for a new slow extraction scheme for the Low Energy Ion Ring (LEIR) in the context of the feasibility study for a future biomedical research facility at CERN. The new slow extraction system is based on the third-integer resonance. Two resonance driving mechanisms have been studied: the quadrupole-driven method and the RF-knockout technique. Both were made compatible with the tight constraints imposed by parallel operation of LEIR as heavy ion accumulator and care was taken to maximize the use of the available hardware.

  3. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
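
    The hardware appeal of sparse binary measurement matrices is easy to see in code: with d ones per column, each incoming sample contributes to only d accumulators and no multiplier is needed. The sketch below uses a random d-sparse binary variant with made-up sizes; the QCAC construction is not reproduced.

```python
# Sparse binary CS encoding: y = A x where A has exactly d ones per column, so
# the encoder needs only d additions per input sample and no multipliers.
import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    """m x n binary matrix with exactly d ones per column."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.int8)
    for col in range(n):
        A[rng.choice(m, size=d, replace=False), col] = 1
    return A

m, n, d = 64, 256, 3                   # 4x compression, 3 ones per column
A = sparse_binary_matrix(m, n, d)
x = np.random.default_rng(1).integers(-512, 512, size=n)  # raw neural samples
y = A @ x                              # each sample lands in only d sums
print(y.shape, "adds per sample:", d)
```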

  4. Reducing test-data volume and test-power simultaneously in LFSR reseeding-based compression environment

    Energy Technology Data Exchange (ETDEWEB)

    Wang Weizheng; Kuang Jishun; You Zhiqiang; Liu Peng, E-mail: jshkuang@163.com [College of Information Science and Engineering, Hunan University, Changsha 410082 (China)

    2011-07-15

    This paper presents a new test scheme based on scan-block encoding in a linear feedback shift register (LFSR) reseeding-based compression environment. The paper also introduces a novel algorithm for scan-block clustering. The main contribution is a flexible test-application framework that achieves significant reductions in switching activity during scan shift and in the number of specified bits that need to be generated via LFSR reseeding. Thus, it can significantly reduce both the test power and the test data volume. Experimental results using the Mintest test set on the larger ISCAS'89 benchmarks show that the proposed method reduces switching activity by 72%-94% and provides a best possible test compression of 74%-94% with little hardware overhead. (semiconductor integrated circuits)
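
    For context, LFSR reseeding stores a short seed on-chip and expands it into a long pseudo-random scan fill, as in the minimal sketch below. The feedback polynomial and sizes are illustrative, not those of the paper.

```python
# Fibonacci LFSR: a short seed expands into a long pseudo-random bit stream,
# which is what makes storing only seeds (rather than full test vectors)
# attractive for test-data compression.
def lfsr_bits(seed, taps, n_bits):
    """'seed' is a list of 0/1 state bits; 'taps' are the state positions
    XORed together to form the feedback bit."""
    state = list(seed)
    out = []
    for _ in range(n_bits):
        out.append(state[-1])              # shift out the last bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]          # shift in the feedback bit
    return out

seed = [1, 0, 0, 1, 0, 1, 1, 0]            # 8-bit seed stored on-chip
fill = lfsr_bits(seed, taps=(7, 5, 4, 3), n_bits=64)   # expanded scan fill
print("".join(map(str, fill)))
```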

  5. Transmission usage cost allocation schemes

    International Nuclear Information System (INIS)

    Abou El Ela, A.A.; El-Sehiemy, R.A.

    2009-01-01

    This paper presents different suggested transmission usage cost allocation (TCA) schemes for allocating costs to system individuals. Different independent system operator (ISO) visions are presented using pro rata and flow-based TCA methods. Two flow-based TCA schemes (FTCA) are proposed. The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concept to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from actual measurements of power flows in transmission lines and power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)

  6. Compressibility-aware media retargeting with structure preserving.

    Science.gov (United States)

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-03-01

    A number of algorithms have been proposed for intelligent image/video retargeting that retain image content as much as possible. However, they usually suffer from artifacts in the results, such as ridges or structure twist. In this paper, we present a structure-preserving media retargeting technique that preserves the content and image structure as well as possible. Different from previous pixel- or grid-based methods, we estimate the image content saliency from the structure of the content. A block structure energy is introduced with a top-down strategy to constrain the image structure inside a block to deform uniformly in either the x or the y direction. However, the flexibility for retargeting differs considerably between images. To cope with this problem, we propose a compressibility assessment scheme for media retargeting that combines the entropies of the image gradient magnitude and orientation distributions. The resized media is thus produced to preserve the image content and structure as well as possible. Our experiments demonstrate that the proposed method provides resized images/videos with better preservation of content and structure than the previous methods.
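
    The compressibility assessment described above can be sketched directly: compute the entropies of the gradient-magnitude and gradient-orientation histograms and combine them. The bin counts and the additive combination rule below are illustrative choices, not the paper's exact formulation.

```python
# Compressibility proxy: low joint entropy of gradient magnitude/orientation
# suggests the image has more slack for retargeting.
import numpy as np

def hist_entropy(values, bins):
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def compressibility(img):
    gy, gx = np.gradient(img.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return (hist_entropy(magnitude, bins=64)
            + hist_entropy(orientation, bins=36))

rng = np.random.default_rng(0)
flat = np.full((128, 128), 100.0)          # uniform image: highly resizable
busy = rng.uniform(0, 255, (128, 128))     # noise-like image: little slack
print(compressibility(flat), "<", compressibility(busy))
```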

  7. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.

    Science.gov (United States)

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted much attention since its appearance. Improving the embedding capacity and eliminating the underflow and overflow situations, which are awkward and difficult to deal with, have become hot topics for researchers. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been dealt with by different methods, the embedding capacities of these methods are all reduced to some extent. Motivated by these concerns, we propose a novel scheme in which differential coding, Huffman coding, and data conversion are used to compress the secret image before embedding it, to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image to avoid the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity further and eliminates the underflow and overflow situations at the same time.

  8. Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes

    Science.gov (United States)

    Capuano, M.; Bogey, C.; Spelt, P. D. M.

    2018-05-01

    A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists in solving the Navier-Stokes equations associated with a convection equation governing the motion of the interface between two gases using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated, and remain sharp. Subsequently, two-dimensional flows are considered including viscosity and thermal conductivity. In the Richtmyer-Meshkov instability, generated on an air-SF6 interface, the influence of the mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow-photographs. The mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of pressure waves and bubble interface are similar to those obtained experimentally.

  9. A Novel Image Authentication with Tamper Localization and Self-Recovery in Encrypted Domain Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2018-01-01

    Full Text Available This paper proposes a novel tamper detection, localization, and recovery scheme for encrypted images based on the Discrete Wavelet Transform (DWT) and Compressive Sensing (CS). The original image is first transformed into the DWT domain and divided into an important part, the low-frequency part, and an unimportant part, the high-frequency part. Since the low-frequency part contains the main information of the image, traditional chaotic encryption is employed for it. The high-frequency part is then encrypted with CS to vacate space for the watermark. The scheme takes the processed original image content as the watermark, from which the characteristic digest values are generated. Compared with existing image authentication algorithms, the proposed scheme can realize not only tamper detection and localization but also tamper recovery. Moreover, tamper recovery is based on block division, and the recovery accuracy varies with the content that is possibly tampered. If either the watermark or the low-frequency part is tampered with, the recovery accuracy is 100%. The experimental results show that the scheme can not only distinguish the type of tamper and find the tampered blocks but also recover the main information of the original image. With great robustness and security, the scheme can adequately meet the need for secure image transmission under unreliable conditions.

  10. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  11. Enhanced recovery of subsurface geological structures using compressed sensing and the Ensemble Kalman filter

    KAUST Repository

    Sana, Furrukh

    2015-07-26

    Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to low permeability rock formations in their surroundings, enabling formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.

  12. Enhanced recovery of subsurface geological structures using compressed sensing and the Ensemble Kalman filter

    KAUST Repository

    Sana, Furrukh; Katterbauer, Klemens; Al-Naffouri, Tareq Y.; Hoteit, Ibrahim

    2015-01-01

    Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to low permeability rock formations in their surroundings, enabling formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.

  13. International proposal for an acoustic classification scheme for dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2014-01-01

    Acoustic classification schemes specify different quality levels for acoustic conditions. Regulations and classification schemes for dwellings typically include criteria for airborne and impact sound insulation, façade sound insulation and service equipment noise. However, although important for quality of life, information about acoustic conditions is rarely available, neither for new nor for existing housing. Regulatory acoustic requirements will, if enforced, ensure a corresponding quality for new dwellings, but satisfactory conditions for occupants are not guaranteed. Consequently, several countries have introduced classification schemes with different quality classes, implying also trade barriers. Thus, a harmonized classification scheme would be useful, and the European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", running 2009-2013 with members from 32 countries, including three overseas...

  14. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well-defined, simple error-correction scheme that exploits erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols alone. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that corrects errors even when they occur at the same bit locations of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, offers higher throughput. (author)
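
    The baseline packet-combining idea is easy to sketch: XORing two erroneous copies exposes the bit positions where they disagree, and flipping candidate subsets until a CRC check passes recovers the packet when the errors do not overlap. The proposed extension for same-position errors is not reproduced here; this is only the baseline scheme, with illustrative packet sizes.

```python
# Baseline packet combining: locate disagreeing bit positions between two
# received copies, then search flip-combinations of copy1 until the CRC passes.
from itertools import chain, combinations
import zlib

def crc_ok(payload, crc):
    return zlib.crc32(bytes(payload)) == crc

def combine(copy1, copy2, crc):
    diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    # Try flipping every subset of the disagreeing positions in copy1.
    for subset in chain.from_iterable(
            combinations(diff, k) for k in range(len(diff) + 1)):
        trial = list(copy1)
        for i in subset:
            trial[i] ^= 1
        if crc_ok(trial, crc):
            return trial
    return None                           # overlapping errors: combining fails

packet = [1, 0, 1, 1, 0, 0, 1, 0]
crc = zlib.crc32(bytes(packet))
copy1 = list(packet); copy1[2] ^= 1       # error in bit 2
copy2 = list(packet); copy2[6] ^= 1       # error in bit 6
print(combine(copy1, copy2, crc) == packet)   # True
```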

  15. Image compression for the silicon drift detectors in the ALICE experiment

    International Nuclear Information System (INIS)

    Werbrouck, A.; Tosello, F.; Rivetti, A.; Mazza, G.; De Remigis, P.; Cavagnino, D.; Alberici, G.

    2001-01-01

    We describe an algorithm for zero suppression and data compression for the Silicon Drift Detectors (SDDs) in the ALICE experiment. The algorithm operates on 10-bit linear data streams from the SDDs by applying a 10-bit to 8-bit non-linear compression followed by a data reduction based on a two-threshold discrimination and a two-dimensional analysis along both the drift time and the anodes. The proposed scheme allows for a better understanding of the neighborhoods of the SDD signal clusters, thus improving their reconstructability, and also provides statistical monitoring of the background characteristics for each SDD anode. The entire algorithm is purely combinatorial and can thus be executed in a pipeline, without additional clock cycles, during the SDD readout. The hardware coding, together with the methods for expansion to the original 10-bit values in the offline analysis and for background monitoring, is presented.
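
    A toy version of the 10-bit to 8-bit non-linear compression step follows: fine resolution is kept for small, signal-relevant amplitudes and coarser steps are accepted for large ones. A square-root law stands in for the actual ALICE mapping, and the two-threshold 2-D zero suppression is omitted.

```python
# Toy 10-bit -> 8-bit companding table plus its offline expansion back to
# approximate 10-bit values. The square-root law is an illustrative choice.
import numpy as np

adc10 = np.arange(1024)                               # 10-bit input codes
code8 = np.round(255 * np.sqrt(adc10 / 1023.0)).astype(np.uint8)

# Offline expansion: invert the law to recover approximate 10-bit values.
expand10 = np.round(1023 * (np.arange(256) / 255.0) ** 2).astype(np.int16)

x = 37                                                # small amplitude
print(adc10[x], "->", code8[x], "->", expand10[code8[x]])  # ~1-count error
x = 900                                               # large amplitude
print(adc10[x], "->", code8[x], "->", expand10[code8[x]])  # coarser step
```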

  16. 3D Video Compression and Transmission

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper we provide a brief introduction to 3D and multi-view video technologies - like three-dimensional television and free-viewpoint video - focusing on the aspects related to data compression and transmission. Geometric information represented by depth maps is introduced as well, and a novel coding scheme for multi-view data able to exploit geometric information in order to improve compression performance is briefly described and compared against the classical solution based on multi-view motion estimation. Future research directions close the paper.

  17. SVD compression for magnetic resonance fingerprinting in the time domain.

    Science.gov (United States)

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method to obtain the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition, which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
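
    The compression step can be sketched in a few lines: project the dictionary and the measured signals onto the first k left singular vectors and do the inner-product matching in the rank-k space. All sizes below are invented for illustration, and a low-rank synthetic dictionary stands in for the Bloch-simulated one.

```python
# SVD dictionary compression for fingerprint matching: match in a rank-k
# subspace instead of the full time domain.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_atoms, rank, k = 1000, 5000, 50, 60
D = rng.standard_normal((n_time, rank)) @ rng.standard_normal((rank, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm simulated signal evolutions

U, S, Vt = np.linalg.svd(D, full_matrices=False)
Uk = U[:, :k]                             # rank-k time-domain basis
Dk = Uk.T @ D                             # compressed dictionary, k x n_atoms

signal = D[:, 1234] + 0.01 * rng.standard_normal(n_time)    # noisy observation
match_full = int(np.argmax(np.abs(D.T @ signal)))            # original matching
match_fast = int(np.argmax(np.abs(Dk.T @ (Uk.T @ signal))))  # compressed matching
print(match_full, match_fast)             # both recover atom 1234 here
```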

  18. A Novel, Automatic Quality Control Scheme for Real Time Image Transmission

    Directory of Open Access Journals (Sweden)

    S. Ramachandran

    2002-01-01

    Full Text Available A novel scheme to compute energy on-the-fly and thereby control the quality of the image frames dynamically is presented along with its FPGA implementation. This scheme is suitable for incorporation in image compression systems such as video encoders. In this new scheme, processing is automatically stopped when the desired quality is achieved for the image being processed by using a concept called pruning. Pruning also increases the processing speed by a factor of more than two when compared to the conventional method of processing without pruning. An MPEG-2 encoder implemented using this scheme is capable of processing good quality monochrome and color images of sizes up to 1024 × 768 pixels at the rate of 42 and 28 frames per second, respectively, with a compression ratio of over 17:1. The encoder is also capable of working in the fixed pruning level mode with user programmable features.

  19. A novel chaotic encryption scheme based on arithmetic coding

    International Nuclear Information System (INIS)

    Mi Bo; Liao Xiaofeng; Chen Yong

    2008-01-01

    In this paper, under the combination of arithmetic coding and logistic map, a novel chaotic encryption scheme is presented. The plaintexts are encrypted and compressed by using an arithmetic coder whose mapping intervals are changed irregularly according to a keystream derived from chaotic map and plaintext. Performance and security of the scheme are also studied experimentally and theoretically in detail

  20. A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition

    Directory of Open Access Journals (Sweden)

    L. Laur

    2015-12-01

    Full Text Available The Internet has affected our everyday lives drastically. Vast volumes of information are constantly exchanged over the Internet, which causes numerous security concerns. Issues like content identification, document and image security, audience measurement, ownership, copyright and others can be settled by using digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, resulting in further robustness of the proposed technique to attacks. The given method uses algorithms such as entropy, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular decomposition and singular value decomposition in order to embed the watermark in a color image. Many experiments are performed using well-known signal processing attacks such as histogram equalization, noise addition and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.

  1. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
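
    The block-level optimization can be written out explicitly. Assuming, for illustration, fitted coefficients α < 0, β, γ for the quadratic model mentioned above:

```latex
\mathrm{PSNR}(b) = \alpha b^{2} + \beta b + \gamma,
\qquad
\frac{\mathrm{d}\,\mathrm{PSNR}}{\mathrm{d}b} = 2\alpha b + \beta = 0
\;\Longrightarrow\;
b^{\ast} = -\frac{\beta}{2\alpha}.
```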

  2. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the problems of low compression ratio and high communication energy consumption in wireless microseismic monitoring networks, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressed sensing (CS) theory, applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; reducing the number of combinations of nonzero elements within a segment improves the accuracy of signal reconstruction, while the properties of compressive sensing theory are exploited to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, for signals with a sparsity above 40 and a compression ratio of more than 0.4, the mean square error is less than 0.01, prolonging the network lifetime by a factor of 2.

  3. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets...
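
    The static version of relative compression is easy to sketch: encode S as a sequence of (position, length) references to R via a greedy longest-match parse, as below. The paper's actual contribution, dynamic maintenance under edits with fast random access, goes far beyond this naive O(|R||S|) illustration.

```python
# Greedy relative compression of S against a static reference R: each phrase is
# either a (position, length) reference into R or a literal character fallback.
def relative_compress(R, S):
    refs, i = [], 0
    while i < len(S):
        best_pos, best_len = 0, 0
        for j in range(len(R)):                  # naive longest-match search
            l = 0
            while j + l < len(R) and i + l < len(S) and R[j + l] == S[i + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:
            refs.append(("lit", S[i])); i += 1   # character absent from R
        else:
            refs.append((best_pos, best_len)); i += best_len
    return refs

def relative_decompress(R, refs):
    out = []
    for ref in refs:
        if ref[0] == "lit":
            out.append(ref[1])
        else:
            pos, length = ref
            out.append(R[pos:pos + length])
    return "".join(out)

R = "the quick brown fox jumps over the lazy dog"
S = "the lazy fox jumps over the quick brown dog!"
refs = relative_compress(R, S)
assert relative_decompress(R, refs) == S
print(refs)
```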

  4. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets...

  5. Efficient transmission of compressed data for remote volume visualization.

    Science.gov (United States)

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is capable of reordering the same compressed data file on the fly to serve data packets prioritized per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  6. Analysis of a discrete element method and coupling with a compressible fluid flow method

    International Nuclear Information System (INIS)

    Monasse, L.

    2011-01-01

    This work aims at the numerical simulation of compressible fluid/deformable structure interactions. In particular, we have developed a partitioned coupling algorithm between a Finite Volume method for the compressible fluid and a Discrete Element method capable of taking into account fractures in the solid. A survey of existing fictitious domain methods and partitioned algorithms led to the choice of an Embedded Boundary method and an explicit coupling scheme. We first showed that the Discrete Element method used for the solid yielded the correct macroscopic behaviour and that the symplectic time-integration scheme ensured the preservation of energy. We then developed an explicit coupling algorithm between a compressible inviscid fluid and a non-deformable solid. Mass, momentum and energy conservation and consistency properties were proved for the coupling scheme. The algorithm was then extended to the coupling with a deformable solid, in the form of a semi-implicit scheme. Finally, we applied this method to unsteady inviscid flows around moving structures: comparisons with existing numerical and experimental results demonstrate the excellent accuracy of our method. (author) [fr

  7. REMOTELY SENSEDC IMAGE COMPRESSION BASED ON WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    Heung K. Lee

    1996-06-01

    Full Text Available In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with a KITSAT-1 image as well as LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by peak signal-to-noise ratio (PSNR) and classification capability.

  8. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced, complex video-processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be applied more easily to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow-bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images

  9. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  10. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-02-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  11. Low power design of wireless endoscopy compression/communication architecture

    Directory of Open Access Journals (Sweden)

    Zitouni Abdelkrim

    2018-05-01

    Full Text Available A wireless endoscopy capsule is an efficient device for the examination of digestive diseases. Many performance criteria (silicon area, dissipated power, image quality, computational time, etc.) need to be studied in depth. In this paper, our interest is the optimization of these criteria. The proposed methodology is based on exploiting the advantages of the DCT and DWT transforms by combining them into a single architecture. For arithmetic operations, the MCLA technique is used. The architecture also integrates a CABAC entropy coder that supports all binarization schemes. An AMBA/I2C architecture is developed to ensure optimized communication. Comparisons of the proposed architecture with the most popular methods described in related works show efficient results in terms of dissipated power, hardware cost, and computation speed. Keywords: Wireless endoscopy capsule, DCT/DWT image compression, CABAC entropy coder, AMBA/I2C multi-bus architecture

  12. On Compression of a Heavy Compressible Layer of an Elastoplastic or Elastoviscoplastic Medium

    Science.gov (United States)

    Kovtanyuk, L. V.; Panchenko, G. L.

    2017-11-01

    The problem of deformation of a horizontal plane layer of a compressible material is solved in the framework of the theory of small strains. The upper boundary of the layer is under the action of shear and compressive loads, and the no-slip condition is satisfied on the lower boundary of the layer. The loads increase in absolute value with time, then become constant, and then decrease to zero. Various plasticity conditions are considered with regard to the material compressibility, namely, the Coulomb-Mohr plasticity condition, the von Mises-Schleicher plasticity condition, and the same conditions with the viscous properties of the material taken into account. To solve the system of partial differential equations for the components of irreversible strains, a finite-difference scheme is developed for a spatial domain increasing with time. The laws of motion of elastoplastic boundaries are presented; the stresses, strains, strain rates, and displacements are calculated; and the residual stresses and strains are found.

  13. Health financing reform in Uganda: How equitable is the proposed National Health Insurance scheme?

    Directory of Open Access Journals (Sweden)

    Orem Juliet

    2010-10-01

    Full Text Available Abstract Background Uganda is proposing the introduction of the National Health Insurance Scheme (NHIS) in a phased manner with a view to obtaining additional funding for the health sector and promoting financial risk protection. In this paper, we have assessed the proposed NHIS from an equity perspective, exploring the extent to which the NHIS would address existing disparities in the health sector. Methods We reviewed the proposed design and other relevant documents that enhanced our understanding of contextual issues. We used the Kutzin and fair financing frameworks to critically assess the impact of the NHIS on overall equity in financing in Uganda. Results The introduction of the NHIS is being proposed against a backdrop of inequalities in the distribution of health system inputs between rural and urban areas, different levels of care and geographic areas. In this assessment, we find that gradual implementation of the NHIS will result in low coverage initially, which might pose a challenge for effective management of the scheme. The process for accreditation of service providers during the first phase is not explicit on how it will ensure that a two-tier service provision arrangement does not emerge to cater for different types of patients. If the proposed fee-for-service mechanism of reimbursing providers is pursued, utilisation patterns will determine how resources are allocated. This implies that equity in resource allocation will be determined by the distribution of accredited providers and the checks put in place to prohibit frivolous use. The current design does not explicitly mention how these two issues will be tackled. Lastly, there is no clarity on how the NHIS will fit into, and integrate within, existing financing mechanisms. Conclusion Under the current NHIS design, the initial low coverage in the first years will inhibit optimal achievement of the important equity characteristics of pooling, cross-subsidisation and financial protection. Depending

  14. High-energy few-cycle pulse compression through self-channeling in gases

    International Nuclear Information System (INIS)

    Hauri, C.; Merano, M.; Trisorio, A.; Canova, F.; Canova, L.; Lopez-Martens, R.; Ruchon, T.; Engquist, A.; Varju, K.; Gustafsson, E.

    2006-01-01

    Complete text of publication follows. Nonlinear spectral broadening of femtosecond optical pulses by intense propagation in a Kerr medium followed by temporal compression constitutes the Holy Grail of ultrafast science, since it allows the generation of intense few-cycle optical transients from longer pulses provided by now commercially available femtosecond lasers. Tremendous progress in high-field and attosecond physics achieved in recent years has triggered the need for efficient pulse compression schemes producing few-cycle pulses beyond the mJ level. We studied a novel pulse compression scheme based on self-channeling in gases, which promises to overcome the energy constraints of hollow-core fiber compression techniques. Fundamentally, self-channeling at high laser powers in gases occurs when the self-focusing effect in the gas is balanced by the dispersion induced by the inhomogeneous refractive index resulting from optically induced ionization. The high nonlinearity of the ionization process poses great technical challenges when trying to scale this pulse compression scheme to higher input energies. Light channels are known to be unstable under small fluctuations of the trapped field, which can lead to temporal and spatial beam breakup, usually resulting in the generation of spectrally broad but uncompressible pulses. Here we present experimental results on high-energy pulse compression of self-channeled 40-fs pulses in pressurized gas cells. In the first experiment, performed at the Lund Laser Center in Sweden, we identified a particular self-channeling regime at lower pulse energies (0.8 mJ), in which the ultrashort pulses are generated with negative group delay dispersion (GDD) such that they can be readily compressed down to near 10 fs through simple material dispersion. Pulse compression is efficient (70%) and exhibits exceptional spatial and temporal beam stability. In a second experiment, performed at the LOA-Palaiseau in France, we

  15. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.

  16. Survey of numerical methods for compressible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Sod, G A

    1977-06-01

    The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method, known as Glimm's method. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.
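
    As a concrete taste of the simplest methods in this family, the sketch below runs the Lax-Friedrichs scheme on the 1-D Euler equations with Sod's classic shock-tube initial data. Grid, time step and end time are arbitrary illustrative choices, and Lax-Friedrichs itself stands in for the more sophisticated schemes compared in the survey.

```python
# Lax-Friedrichs for the 1-D Euler equations on Sod's shock-tube problem.
import numpy as np

gamma = 1.4
nx, dx, dt = 400, 1.0 / 400, 0.0002

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
    return np.array([mom, mom * u + p, (E + p) * u])

# Sod initial data: (rho, u, p) = (1, 0, 1) on the left, (0.125, 0, 0.1) right.
rho = np.where(np.arange(nx) < nx // 2, 1.0, 0.125)
p = np.where(np.arange(nx) < nx // 2, 1.0, 0.1)
U = np.array([rho, np.zeros(nx), p / (gamma - 1.0)])   # gas initially at rest

for _ in range(int(round(0.15 / dt))):                 # advance to t = 0.15
    F = flux(U)
    # U_j^{n+1} = (U_{j-1} + U_{j+1})/2 - dt/(2 dx) (F_{j+1} - F_{j-1})
    U_new = 0.5 * (np.roll(U, 1, axis=1) + np.roll(U, -1, axis=1)) \
            - dt / (2 * dx) * (np.roll(F, -1, axis=1) - np.roll(F, 1, axis=1))
    U_new[:, 0], U_new[:, -1] = U[:, 0], U[:, -1]      # hold boundary cells fixed
    U = U_new

print("density range:", U[0].min(), U[0].max())        # smeared shock, as expected
```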

  17. Fast hybrid fractal image compression using an image feature and neural network

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2008-01-01

    Since fractal image compression can maintain high-resolution reconstructed images at very high compression ratios, it has great potential to improve the efficiency of image storage and image transmission. On the other hand, fractal image encoding is time-consuming because of the best-matching search between range blocks and domain blocks, which greatly limits the algorithm's practical application. In order to solve this problem, two strategies are adopted to improve the fractal image encoding algorithm in this paper. Firstly, based on the definition of an image feature, a necessary condition of the best-matching search and the FFC algorithm are proposed, which reduce the search space considerably and exclude most inappropriate domain blocks for each range block before the best-matching search. Secondly, on the basis of the FFC algorithm, in order to reduce the mapping error during the best-matching search, a special neural network is constructed to modify the mapping scheme for the subblocks in which the pixel values fluctuate greatly (the FNFC algorithm). Experimental results show that the proposed algorithms obtain good quality of the reconstructed images and need much less time than the baseline encoding algorithm

  18. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ is not dependent on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by a radiation power of 10^16 W from 6 for a square pulse with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with absorbed energy of 0.4 MJ, assuming perfect coupling efficiency. (author)
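
    The quoted Γ = 27 is consistent with uniform spherical compression, where mass conservation ties the density ratio to the cube of the radius compression ratio; the identification K = 3 below is an illustrative inference, not stated in the record.

```latex
\rho_f R_f^3 = \rho_0 R_0^3
\;\Longrightarrow\;
\Gamma \equiv \frac{\rho_f}{\rho_0}
      = \left(\frac{R_0}{R_f}\right)^{3} = K^3,
\qquad K = 3 \;\Rightarrow\; \Gamma = 3^3 = 27 .
```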

  19. Full-frame compression of discrete wavelet and cosine transforms

    Science.gov (United States)

    Lo, Shih-Chung B.; Li, Huai; Krasner, Brian; Freedman, Matthew T.; Mun, Seong K.

    1995-04-01

    At the foreground of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on full-frame discrete wavelet (FFDWT) and full-frame discrete cosine transforms (FFDCT) for medical image compression. Prior to the coding, it is important to evaluate the global entropy in the decomposed space, because it is at the minimum entropy that maximum compression efficiency can be achieved. In this study, each image was split into the top three most-significant-bit (3MSB) image and the remaining remapped least-significant-bit (RLSB) image. The 3MSB image was compressed by an error-free contour coding and achieved an average of 0.1 bit/pixel. The RLSB image was transformed to either a multi-channel wavelet or the cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and were used for the study. Our results indicated that the coding scheme in the FFDCT domain performed better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that decomposition efficiency in the DCT domain for relatively smooth images is higher than that in the DWT domain. However, both schemes worked just as well for low-resolution digital images. We also found that the image characteristics of the `Lena' image commonly used in the compression literature are very different from those of radiological images, so the compression outcome for radiological images cannot be extrapolated from compression results based on `Lena.'

  20. Cost-based droop scheme for DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    DC microgrids are gaining interest due to higher efficiencies of DC distribution compared with AC. The benefits of DC systems have been widely researched for data centers, IT facilities and residential applications. The research focus, however, has been more on system architecture and optimal voltage level, less on optimized operation and control of generation sources. The latter theme is pursued in this paper, where a cost-based droop scheme is proposed for distributed generators (DGs) in DC microgrids. Unlike the traditional proportional power-sharing based droop scheme, the proposed scheme … -connected operation. Most importantly, the proposed scheme can reduce the overall total generation cost in DC microgrids without a centralized controller and communication links. The performance of the proposed scheme has been verified under different load conditions.

  1. An efficient finite differences method for the computation of compressible, subsonic, unsteady flows past airfoils and panels

    Science.gov (United States)

    Colera, Manuel; Pérez-Saborid, Miguel

    2017-09-01

    A finite differences scheme is proposed in this work to compute in the time domain the compressible, subsonic, unsteady flow past an aerodynamic airfoil using the linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics, such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil, and has proved to be more accurate and efficient than other finite differences and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori, as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. Apparently, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well known.

  2. Bitshuffle: Filter for improving compression of typed binary data

    Science.gov (United States)

    Masui, Kiyoshi

    2017-12-01

    Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except that it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8 kB long; it does not in itself compress the data, but rearranges it for more efficient compression, so a compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
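
    A minimal numpy sketch of the bit-transpose idea follows; the real filter operates on ~8 kB blocks with vectorized SIMD kernels and is followed by an actual compressor such as LZ4, none of which is shown here.

```python
import numpy as np

def bitshuffle_block(block: np.ndarray) -> bytes:
    """Bit-level transpose of a 1-D array of fixed-width integers.

    Rows of the bit matrix are elements, columns are bit positions;
    transposing groups bit i of every element together, which is what
    makes the output stream compressible.
    """
    width = block.dtype.itemsize
    bits = np.unpackbits(block.view(np.uint8).reshape(len(block), width),
                         axis=1)
    return np.packbits(bits.T).tobytes()

shuffled = bitshuffle_block(np.arange(1024, dtype=np.uint32))
```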

  3. Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow

    KAUST Repository

    Kou, Jisheng

    2017-12-06

    In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.
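
    For readers unfamiliar with the SAV device, the generic construction is sketched below for a free-energy contribution E1(φ) bounded from below; the notation here is generic rather than the paper's, whose component-wise variant applies the same idea to each molar density separately.

```latex
r(t) = \sqrt{E_1(\phi) + C_0},
\qquad
r_t = \frac{1}{2\sqrt{E_1(\phi) + C_0}}
      \int \frac{\delta E_1}{\delta \phi}\,\phi_t \,\mathrm{d}\mathbf{x} .
```

    Replacing the nonlinear term δE1/δφ by (r/√(E1 + C0)) δE1/δφ, with the square root evaluated at the previous time level, is what renders each time step linear while a discrete analogue of the energy law is retained.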

  4. Nonlinear secret image sharing scheme.

    Science.gov (United States)

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear-combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear-combination polynomial arithmetic in order to counter this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 (dB) and 1.74t⌈log2 m⌉ bit-per-pixel (bpp), respectively.
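
    For reference, the Shamir baseline these schemes build on is sketched below for a single 8-bit value; the prime 251 is a common convention for pixel data (values above 250 need special handling), and the code is an illustrative toy, not the paper's nonlinear construction.

```python
import random

P = 251  # prime close to 255; pixel values >= 251 need extra care

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123, t=3, n=5)
assert reconstruct(shares[:3]) == 123
```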

  5. Comparison of the generalized Riemann solver and the gas-kinetic scheme for inviscid compressible flow simulations

    International Nuclear Information System (INIS)

    Li Jiequan; Li Qibing; Xu Kun

    2011-01-01

    The generalized Riemann problem (GRP) scheme for the Euler equations and the gas-kinetic scheme (GKS) for the Boltzmann equation are two high-resolution shock-capturing schemes for fluid simulations. The difference is that one is based on the characteristics of the inviscid Euler equations and their wave interactions, and the other is based on particle transport and collisions. The similarity between them is that both methods can use identical MUSCL-type initial reconstructions around a cell interface, and the spatial slopes on both sides of a cell interface are involved in the gas evolution process and the construction of a time-dependent flux function. Although both methods have been applied successfully to inviscid compressible flow computations, their performances have never been compared. Since both methods use the same initial reconstruction, any difference comes solely from the different underlying mechanisms in their flux evaluation. Therefore, such a comparison is important to help us understand the correspondence between physical modeling and numerical performance. Since the GRP solves the inviscid Euler equations so faithfully, the comparison can also be used to show the validity of solving the Euler equations themselves. The numerical comparison shows that the GRP exhibits a slightly better computational efficiency and has accuracy comparable with the GKS for the Euler solutions in the 1D case, but the GKS is more robust than the GRP. For the 2D high Mach number flow simulations, the GKS is free from the shock instability and converges to the steady-state solutions faster than the GRP. The GRP suffers from carbuncle phenomena, which hang like a cloud over exact Riemann solvers. The GRP and GKS use different physical processes to describe the flow motion starting from a discontinuity. One is based on the assumption of an equilibrium state with an infinite number of particle collisions, and the other starts from the non-equilibrium free transport process to evolve into an …

  6. Innovative hyperchaotic encryption algorithm for compressed video

    Science.gov (United States)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    It is accepted that stream cryptosystems can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a Z(2^32 - 1) field linear congruential algorithm to strengthen the security of mono-chaotic cryptography while maintaining the real-time performance and flexibility of chaotic sequence cryptography. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication on control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the proposed hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The scheme proves robust security by cryptanalysis and shows good real-time performance and flexible implementation capability through arithmetic evaluation and tests.
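
    The mono-chaotic building block (not the full hyperchaotic construction) can be sketched in a few lines; the parameter r = 3.99, the byte quantization and the XOR stream cipher below are illustrative assumptions.

```python
def logistic_keystream(x0, r=3.99, nbytes=16):
    """Byte keystream from the logistic map x <- r*x*(1-x).

    On its own this generator is vulnerable to nonlinear dynamic
    forecasting, which is why the paper couples it with a linear
    congruential stage.
    """
    x, out = x0, bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_encrypt(data, key_x0):
    ks = logistic_keystream(key_x0, nbytes=len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

cipher = xor_encrypt(b"compressed video payload", 0.3141592653)
assert xor_encrypt(cipher, 0.3141592653) == b"compressed video payload"
```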

  7. A scheme for reducing deceleration-phase Rayleigh-Taylor growth in inertial confinement fusion implosions

    Science.gov (United States)

    Wang, L. F.; Ye, W. H.; Wu, J. F.; Liu, Jie; Zhang, W. Y.; He, X. T.

    2016-05-01

    It is demonstrated that the growth of acceleration-phase instabilities in inertial confinement fusion implosions can be controlled, especially in the high-foot implosions [O. A. Hurricane et al., Phys. Plasmas 21, 056314 (2014)] on the National Ignition Facility. However, the excessive growth of the deceleration-phase instabilities can still destroy the hot spot ignition. A scheme is proposed to retard the deceleration-phase Rayleigh-Taylor instability growth by shock collision near the waist of the inner shell surface. Two-dimensional radiation hydrodynamic simulations confirm the improved deceleration-phase hot spot stability properties without sacrificing the fuel compression.

  8. A scheme for reducing deceleration-phase Rayleigh–Taylor growth in inertial confinement fusion implosions

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L. F., E-mail: wang-lifeng@iapcm.ac.cn; Ye, W. H.; Liu, Jie [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); HEDPS, Center for Applied Physics and Technology, Peking University, Beijing 100871 (China); Center for Fusion Energy Science and Technology, Chinese Academy of Engineering Physics, Beijing 100088 (China); Wu, J. F. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); Center for Fusion Energy Science and Technology, Chinese Academy of Engineering Physics, Beijing 100088 (China); Zhang, W. Y. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); He, X. T. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); HEDPS, Center for Applied Physics and Technology, Peking University, Beijing 100871 (China)

    2016-05-15

    It is demonstrated that the growth of acceleration-phase instabilities in inertial confinement fusion implosions can be controlled, especially in the high-foot implosions [O. A. Hurricane et al., Phys. Plasmas 21, 056314 (2014)] on the National Ignition Facility. However, the excessive growth of the deceleration-phase instabilities can still destroy the hot spot ignition. A scheme is proposed to retard the deceleration-phase Rayleigh–Taylor instability growth by shock collision near the waist of the inner shell surface. Two-dimensional radiation hydrodynamic simulations confirm the improved deceleration-phase hot spot stability properties without sacrificing the fuel compression.

  9. An improved biometrics-based remote user authentication scheme with user anonymity.

    Science.gov (United States)

    Khan, Muhammad Khurram; Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. Then a detailed security analysis of the proposed scheme is presented, followed by its efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  10. Proposal for a new LEIR Slow Extraction Scheme dedicated to Biomedical Research

    CERN Document Server

    Garonna, A; Carli, C

    2014-01-01

    This report presents a proposal for a new slow extraction scheme for the Low Energy Ion Ring (LEIR) in the context of the feasibility study for a biomedical research facility at CERN. LEIR has to be maintained as a heavy ion accumulator ring for LHC and for fixed-target experiments with the SPS. In parallel to this on-going operation for physics experiments, an additional secondary use of LEIR for a biomedical research facility was proposed [Dosanjh2013, Holzscheiter2012, PHE2010]. This facility would complement the existing research beam-time available at other laboratories for studies related to ion beam therapy. The new slow extraction [Abler2013] is based on the third-integer resonance. The reference beam is composed of fully stripped carbon ions with extraction energies of 20-440 MeV/u, transverse physical emittances of 5-25 µm and momentum spreads of ±(2-9)×10^-4. Two resonance driving mechanisms have been studied: the quadrupole-driven method and the RF-knockout technique. Both were made compatible...

  11. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    Science.gov (United States)

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  12. MIMO transmit scheme based on morphological perceptron with competitive learning.

    Science.gov (United States)

    Valente, Raul Ambrozio; Abrão, Taufik

    2016-08-01

    This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as a decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time-slot the receiver decodes two symbols at a time instead of one as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its polynomial complexity in the modulation order, which becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to the traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. Also, the proposed scheme is evaluated in a scenario with variable channel information along the frame. Numerical results have shown that the diversity gain under the space-time-coded Alamouti scheme is partially lost, which slightly degrades the bit-error rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  14. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.

  15. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms. It is a challenging problem to design a robust image watermarking scheme against local geometric distortions. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, the robust feature points, which can survive various common image processing operations and global affine transforms, are extracted by using a multi-scale SIFT (scale invariant feature transform) detector. Then, the affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark with the affine covariant LFRs, the watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise addition and JPEG compression, but also robust against global affine transforms and local geometric distortions.

  16. Homogeneous Charge Compression Ignition Combustion: Challenges and Proposed Solutions

    Directory of Open Access Journals (Sweden)

    Mohammad Izadi Najafabadi

    2013-01-01

    Full Text Available Engine and car manufacturers are facing demands for fuel efficiency and low emissions from both consumers and governments. Homogeneous charge compression ignition (HCCI) is an alternative combustion technology that is cleaner and more efficient than the other types of combustion. Although the thermal efficiency and NOx emission of an HCCI engine are better in comparison with traditional engines, HCCI combustion has several main difficulties such as control of ignition timing, limited power output, and weak cold-start capability. In this study a literature review on the HCCI engine has been performed, and HCCI challenges and proposed solutions have been investigated from the point of view of ignition timing, which is the main problem of this engine. HCCI challenges have been investigated by many IC engine researchers during the last decade, but practical solutions have not been presented for a fully HCCI engine. Some of the solutions suffer from slow response times and some of them are technically difficult to implement. So it seems that the fully HCCI engine needs more investigation before mass production, and future research and application should be considered as part of an effort to achieve low-temperature combustion over a wide range of operating conditions in an IC engine.

  17. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  18. An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity

    Directory of Open Access Journals (Sweden)

    Muhammad Khurram Khan

    2013-01-01

    Full Text Available The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental for its security. Therefore the authors propose an improved scheme eradicating the flaws of An’s scheme. Then a detailed security analysis of the proposed scheme is presented followed by its efficiency comparison. The proposed scheme not only withstands security problems found in An’s scheme but also provides some extra features with mere addition of only two hash operations. The proposed scheme allows user to freely change his password and also provides user anonymity with untraceability.

  19. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
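
    The core recovery step can be sketched as a standard sparse reconstruction: pseudorandom binary patterns give measurements y = Ax of a sparse range profile x, which is recovered by iterative soft thresholding. All sizes and parameters below are illustrative assumptions, not the paper's experimental values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 3              # range bins, measurements, targets

A = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)  # binary patterns
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.uniform(0.5, 1.0, K)
y = A @ x_true                     # low-bandwidth detector outputs

# ISTA for min ||y - Ax||^2 + lam * ||x||_1
x, lam = np.zeros(N), 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    g = x + step * (A.T @ (y - A @ x))                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
```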

  20. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available The storage and transmission of imagery have become a more challenging task in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, as it reduces the requirements on storage media and transmission bandwidth. Compression techniques must not only perform well but also converge quickly in order to be applied in real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and the use of existing works in developing novel techniques that face the challenging task of image storage and transmission in multimedia applications is highlighted.

  1. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
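
    The quantity being maximized by the shifting optimization is easy to state in code: the length of the run of identical leading bits in the XOR of two consecutive IEEE-754 values. The helper below is an illustrative sketch, not the paper's implementation.

```python
import struct

def xor_leading_zero_length(a: float, b: float) -> int:
    """Leading-zero length of the XOR of two doubles; longer runs
    mean the pair compresses better after leading-zero encoding."""
    ua, = struct.unpack('<Q', struct.pack('<d', a))
    ub, = struct.unpack('<Q', struct.pack('<d', b))
    x = ua ^ ub
    return 64 if x == 0 else 64 - x.bit_length()

print(xor_leading_zero_length(1.0000001, 1.0000002))  # close values, long run
```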

  2. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly … be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction, and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter … as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation …

  3. An authentication scheme for secure access to healthcare services.

    Science.gov (United States)

    Khan, Muhammad Khurram; Kumari, Saru

    2013-08-01

    The last few decades have witnessed a boom in the development of information and communication technologies. The health sector has also benefited from this advancement. To ensure secure access to healthcare services, some user authentication mechanisms have been proposed. In 2012, Wei et al. proposed a user authentication scheme for the telecare medical information system (TMIS). Recently, Zhu pointed out an offline password guessing attack on Wei et al.'s scheme and proposed an improved scheme. In this article, we analyze both of these schemes for their effectiveness in TMIS. We show that Wei et al.'s scheme and its improvement proposed by Zhu fail to achieve some important characteristics necessary for secure user authentication. We find that the security problems of Wei et al.'s scheme persist in Zhu's scheme: an undetectable online password guessing attack, inefficacy of the password change phase, traceability of a user's stolen/lost smart card and a denial-of-service threat. We also identify that Wei et al.'s scheme lacks forward secrecy and Zhu's scheme lacks a session key between user and healthcare server. We therefore propose an authentication scheme for TMIS with forward secrecy, which preserves the confidentiality of air messages even if the master secret key of the healthcare server is compromised. Our scheme retains the advantages of Wei et al.'s scheme and Zhu's scheme, and offers additional security. The security analysis and comparison results show the enhanced suitability of our scheme for TMIS.

  4. A universal encoding scheme for MIMO transmission using a single active element for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama; Papadias, C.B.; Kalis, A.

    2009-01-01

    A universal scheme for encoding multiple symbol streams using a single driven element (and consequently a single radio frequency (RF) frontend) surrounded by parasitic elements (PE) loaded with variable reactive loads, is proposed in this paper. The proposed scheme is based on creating a MIMO sys...

  5. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    Science.gov (United States)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  6. Lagrange-Flux Schemes: Reformulating Second-Order Accurate Lagrange-Remap Schemes for Better Node-Based HPC Performance

    Directory of Open Access Journals (Sweden)

    De Vuyst Florian

    2016-11-01

    Full Text Available In a recent paper [Poncet R., Peybernes M., Gasc T., De Vuyst F. (2016) Performance modeling of a compressible hydrodynamics solver on multicore CPUs, in "Parallel Computing: On the Road to Exascale"], we carried out the performance analysis of staggered Lagrange-remap schemes, a class of solvers widely used for hydrodynamics applications. This paper is devoted to the rethinking and redesign of the Lagrange-remap process for achieving better performance using today's computing architectures. As an unintended outcome, the analysis has led us to the discovery of a new family of solvers, the so-called Lagrange-flux schemes, which appear to be promising for the CFD community.

  7. Dynamic Symmetric Key Mobile Commerce Scheme Based on Self-Verified Mechanism

    Directory of Open Access Journals (Sweden)

    Jiachen Yang

    2014-01-01

    Full Text Available Considering the security and efficiency of mobile e-commerce, the authors summarized the advantages and disadvantages of several related schemes, especially the self-verified mobile payment scheme based on the elliptic curve cryptosystem (ECC), and then proposed a new type of dynamic symmetric key mobile commerce scheme based on a self-verified mechanism. The authors analyzed the basic algorithm based on self-verified mechanisms and detailed the complete transaction process of the proposed scheme. The authors then analyzed the payment scheme with respect to security and efficiency. The analysis shows that the proposed scheme not only meets the efficiency requirements of mobile electronic payment but also takes security into account. The user confirmation mechanism at the end of the proposed scheme further strengthens its security. In brief, the proposed scheme is more efficient and practical than most of the existing schemes.

  8. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  9. Cancelable remote quantum fingerprint templates protection scheme

    International Nuclear Information System (INIS)

    Liao Qin; Guo Ying; Huang Duan

    2017-01-01

    With the increasing popularity of fingerprint identification technology, its security and privacy have received much attention. Only if the security and privacy of biometric information are ensured can the technology be better accepted and used by the public. In this paper, we propose a novel quantum-bit (qbit)-based scheme to solve the security and privacy problems of the traditional fingerprint identification system. By exploiting the properties of quantum mechanics, our proposed cancelable remote quantum fingerprint templates protection scheme can achieve the unconditional security guaranteed in an information-theoretical sense. Moreover, this novel quantum scheme can invalidate most of the attacks aimed at fingerprint identification systems. In addition, the proposed scheme is applicable to remote communication, with no need to worry about security and privacy during transmission, which is an absolute advantage when compared with traditional methods. Security analysis shows that the proposed scheme can effectively ensure the communication security and the privacy of users’ information in fingerprint identification. (paper)

  10. Smoothly Clipped Absolute Deviation (SCAD) regularization for compressed sensing MRI Using an augmented Lagrangian scheme

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Rad, Hamidreza Saligheh; Rahmim, Arman; Ay, Mohammad Reza; Zaidi, Habib

    2013-01-01

    Purpose: Compressed sensing (CS) provides a promising framework for MR image reconstruction from highly undersampled data, thus reducing data acquisition time. In this context, sparsity-promoting regularization techniques exploit the prior knowledge that MR images are sparse or compressible in a

  11. A diversity compression and combining technique based on channel shortening for cooperative networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2012-02-01

    The cooperative relaying process with multiple relays needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems where the nodes are equipped with very basic communication hardware. We consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination captures the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening principles. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. © 2012 IEEE.

  12. Time-space trade-offs for lempel-ziv compressed indexing

    DEFF Research Database (Denmark)

    Bille, Philip; Ettienne, Mikko Berggren; Gørtz, Inge Li

    2017-01-01

    Given a string S, the compressed indexing problem is to preprocess S into a compressed representation that supports fast substring queries. The goal is to use little space relative to the compressed size of S while supporting fast queries. We present a compressed index based on the Lempel-Ziv 1977 compression scheme. Let n and z denote the size of the input string and of the compressed LZ77 string, respectively. We obtain the following time-space trade-offs. Given a pattern string P of length m, we can solve the problem in (i) O(m + occ lg lg n) time using O(z lg(n/z) lg lg z) space, or (ii) O(m(1 … best space bound, but has a leading term in the query time of O(m(1 + lg^ε z / lg(n/z))). However, for any polynomial compression ratio, i.e., z = O(n^(1-δ)), for constant δ > 0, this becomes O(m). Our index also supports extraction of any substring of length ℓ in O(ℓ + lg(n/z)) time. Technically, our …
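
    The parameter z above counts the phrases of the LZ77 parse. A naive quadratic-time sketch of the greedy factorization is shown below for concreteness; it is not the index itself, and real parsers are far more efficient.

```python
def lz77_factorize(s):
    """Greedy LZ77 parse; len(result) is the z in the bounds above."""
    phrases, i = [], 0
    while i < len(s):
        best_len, best_pos = 0, -1
        for j in range(i):                    # candidate source positions
            l = 0
            while i + l < len(s) and s[j + l] == s[i + l]:
                l += 1                        # overlap (j+l >= i) is allowed
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            phrases.append((None, s[i]))      # fresh literal
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

z = len(lz77_factorize("abababababab"))       # repetitive input -> small z
```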

  13. An Arbitrated Quantum Signature Scheme without Entanglement*

    International Nuclear Information System (INIS)

    Li Hui-Ran; Luo Ming-Xing; Peng Dai-Yuan; Wang Xiao-Jun

    2017-01-01

    Several quantum signature schemes have recently been proposed to realize secure signatures of quantum or classical messages. Arbitrated quantum signature, as one nontrivial scheme, has attracted great interest because of its usefulness and efficiency. Unfortunately, previous schemes cannot resist Trojan horse and DoS attacks and lack unforgeability and non-repudiation. In this paper, we propose an improved arbitrated quantum signature to address these security issues with an honest arbitrator. Our scheme makes use of qubit states, not entanglement. More importantly, the qubit scheme can achieve unforgeability and non-repudiation. Our scheme is also secure against other known quantum attacks. (paper)

  14. Confinement requirements for OHMIC-compressive ignition of a Spheromak plasma

    International Nuclear Information System (INIS)

    Olson, R.; Gilligan, J.; Miley, G.

    1980-01-01

    The Moving Plasmoid Reactor (MPR) is an attractive alternative magnetic fusion scheme in which Spheromak plasmoids are envisioned to be formed, compressed, burned, and expanded as the plasmoids translate through a series of linear reactor modules. Although auxiliary heating of the plasmoids may be possible, the MPR scenario would be especially interesting if ohmic decay and compression alone were sufficient to heat the plasmoids to an ignition temperature. In the present work, we will study the transport conditions under which a Spheromak plasmoid could be expected to reach ignition via a combination of ohmic and compression heating

  15. Confinement requirements for ohmic-compressive ignition of a Spheromak plasma

    International Nuclear Information System (INIS)

    Olson, R.E.; Miley, G.H.

    1981-01-01

    The Moving Plasmoid Reactor (MPR) is an attractive alternative magnetic fusion scheme in which Spheromak plasmoids are envisioned to be formed, compressed, burned, and expanded as the plasmoids translate through a series of linear reactor modules. Although auxiliary heating of the plasmoids may be possible, the MPR scenario would be especially interesting if ohmic decay and compression alone is sufficient to heat the plasmoids to an ignition temperature. In the present work, we examine the transport conditions under which a Spheromak plasmoid can be expected to reach ignition via a combination of ohmic and compression heating

  16. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  17. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  18. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms are mostly used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results tables …
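
    The LBG family trains a codebook by alternating nearest-codeword assignment with centroid updates; the toy below shows that core Lloyd iteration (LBG proper adds codebook splitting, and KPE/FCG differ in their update rules). All names and sizes are illustrative assumptions.

```python
import numpy as np

def train_codebook(frames, size=16, iters=20):
    """k-means style codebook training for VQ speech compression."""
    rng = np.random.default_rng(0)
    codebook = frames[rng.choice(len(frames), size, replace=False)].copy()
    for _ in range(iters):
        # Assign each frame to its nearest codeword (squared distance).
        d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Move each codeword to the centroid of its cell.
        for k in range(size):
            if np.any(idx == k):
                codebook[k] = frames[idx == k].mean(0)
    return codebook

frames = np.random.default_rng(1).normal(size=(1000, 8))
cb = train_codebook(frames)
# Compression stores only the index of the nearest codeword per frame.
indices = ((frames[:, None, :] - cb[None, :, :]) ** 2).sum(-1).argmin(1)
```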

  19. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  20. Adaptive transmission schemes for MISO spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2013-06-01

    We propose three adaptive transmission techniques aiming to maximize the capacity of a multiple-input-single-output (MISO) secondary system under the scenario of an underlay cognitive radio network. In the first scheme, namely the best antenna selection (BAS) scheme, the antenna maximizing the capacity of the secondary link is used for transmission. We then propose an orthogonal space-time block code (OSTBC) transmission scheme using the Alamouti scheme with transmit antenna selection (TAS), namely the TAS/STBC scheme. The performance improvement offered by this scheme comes at the expense of increased complexity and delay when compared to the BAS scheme. As a compromise between these schemes, we propose a hybrid scheme using BAS when only one antenna satisfies the interference condition and TAS/STBC when two or more antennas are eligible for communication. We first derive closed-form expressions for the statistics of the received signal-to-interference-and-noise ratio (SINR) at the secondary receiver (SR). These results are then used to analyze the performance of the proposed techniques in terms of the average spectral efficiency, the average number of transmit antennas, and the average bit error rate (BER). This performance is then illustrated via selected numerical examples. © 2013 IEEE.
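
    The BAS rule can be stated compactly: cap each antenna's power by the interference constraint, then pick the antenna with the best resulting secondary-link metric. The sketch below is an illustrative reading of that rule, with hypothetical variable names, not the paper's exact formulation.

```python
import numpy as np

def best_antenna(h_sec, g_int, I_max, P_max):
    """Best antenna selection under an underlay interference constraint.

    h_sec: channel gains antenna -> secondary receiver
    g_int: channel gains antenna -> primary receiver
    """
    # Largest power each antenna may use without exceeding I_max.
    p = np.minimum(P_max, I_max / np.abs(g_int) ** 2)
    metric = p * np.abs(h_sec) ** 2          # proportional to received SNR
    k = int(np.argmax(metric))
    return k, p[k]

k, power = best_antenna(np.array([0.9, 0.4, 1.2]),
                        np.array([0.8, 0.1, 1.5]), I_max=0.5, P_max=1.0)
```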

  1. Statistical mechanics approach to 1-bit compressed sensing

    International Nuclear Information System (INIS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2013-01-01

    Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. Here we analyze the performance of an l_1-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good consistency with experimental results of an approximate recovery algorithm developed earlier. This suggests that the l_1-based recovery problem typically has many local optima of a similar recovery accuracy, which can be achieved by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large, the new algorithm offers better performance than the abovementioned scheme and does so with a lower computational cost. (paper)

  2. Time-division-multiplex control scheme for voltage multiplier rectifiers

    Directory of Open Access Journals (Sweden)

    Bin-Han Liu

    2017-03-01

    Full Text Available A voltage multiplier rectifier with a novel time-division-multiplexing (TDM control scheme for high step-up converters is proposed in this study. In the proposed TDM control scheme, two full-wave voltage doubler rectifiers can be combined to realise a voltage quadrupler rectifier. The proposed voltage quadrupler rectifier can reduce transformer turn ratio and transformer size for high step-up converters and also reduce voltage stress for the output capacitors and rectifier diodes. An N-times voltage rectifier can be straightforwardly produced by extending the concepts from the proposed TDM control scheme. A phase-shift full-bridge (PSFB converter is adopted in the primary side of the proposed voltage quadrupler rectifier to construct a PSFB quadrupler converter. Experimental results for the PSFB quadrupler converter demonstrate the performance of the proposed TDM control scheme for voltage quadrupler rectifiers. An 8-times voltage rectifier is simulated to determine the validity of extending the proposed TDM control scheme to realise an N-times voltage rectifier. Experimental and simulation results show that the proposed TDM control scheme has great potential to be used in high step-up converters.

  3. A hybrid Lagrangian Voronoi-SPH scheme

    Science.gov (United States)

    Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.

    2017-11-01

    A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary conditions implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used. This zone is formed by a set of particles where fields are interpolated taking into account SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.

  4. Efficient Depth Map Compression Exploiting Segmented Color Data

    DEFF Research Database (Denmark)

    Milani, Simone; Zanuttigh, Pietro; Zamarin, Marco

    2011-01-01

    … performances is still an open research issue. This paper presents a novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map. Then each segment is approximated with a parameterized plane. In case the approximation is sufficiently …

  5. A Modified Computational Scheme for the Stochastic Perturbation Finite Element Method

    Directory of Open Access Journals (Sweden)

    Feng Wu

    Full Text Available A modified computational scheme of the stochastic perturbation finite element method (SPFEM) is developed for structures with low-level uncertainties. The proposed scheme can provide second-order estimates of the mean and variance without differentiating the system matrices with respect to the random variables. When the proposed scheme is used, only a finite number of analyses of deterministic systems are involved. In the case of one random variable with a symmetric probability density function, the proposed computational scheme can even provide a result with fifth-order accuracy. Compared with the traditional computational scheme of SPFEM, the proposed scheme is more convenient for numerical implementation. Four numerical examples demonstrate that the proposed scheme can be used for linear or nonlinear structures with correlated or uncorrelated random variables.

  6. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    Directory of Open Access Journals (Sweden)

    Yu Zheng

    2017-06-01

    Full Text Available In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, range compression is first carried out within each azimuth bin by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed result to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results demonstrate that the proposed range compression algorithm achieves a significant range resolution improvement in GNSS-SAR images compared to the conventional range compression algorithm.
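
    A minimal sketch of the two steps just described, under stated assumptions: range compression as FFT-based cross-correlation of the reflected signal with the direct reference, and spectrum equalization interpreted here as a simple inverse (whitening) filter. The paper's exact equalizer is not given here, so the whitening form and all parameter values are assumptions.

```python
import numpy as np

def range_compress(reflected, direct, eps=1e-3):
    """Correlate reflected GNSS signal with the direct reference (via FFT),
    then flatten the reference spectrum to suppress side lobes."""
    n = len(reflected)
    R = np.fft.fft(reflected, n)
    D = np.fft.fft(direct, n)
    compressed = R * np.conj(D)                       # cross-correlation in frequency
    equalized = compressed / (np.abs(D) ** 2 + eps)   # assumed whitening step
    return np.fft.ifft(equalized)

# Toy usage: a delayed, attenuated echo of a random reference code
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1024)
echo = 0.5 * np.roll(code, 100)
profile = np.abs(range_compress(echo, code))
print(int(np.argmax(profile)))                 # peak near the 100-sample delay
```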

  7. Optical identity authentication technique based on compressive ghost imaging with QR code

    Science.gov (United States)

    Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang

    2018-04-01

    With the rapid development of computer technology, information security has attracted more and more attention. It is related not only to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. The scheme can complete authentication with a small number of samples, so the response time of the algorithm is short. At the same time, the algorithm can resist a certain level of noise attack, so it offers good security.

  8. PRESS: A Novel Framework of Trajectory Compression in Road Networks

    OpenAIRE

    Song, Renchu; Sun, Weiwei; Zheng, Baihua; Zheng, Yu

    2014-01-01

    Location data becomes more and more important. In this paper, we focus on the trajectory data, and propose a new framework, namely PRESS (Paralleled Road-Network-Based Trajectory Compression), to effectively compress trajectory data under road network constraints. Different from existing work, PRESS proposes a novel representation for trajectories to separate the spatial representation of a trajectory from the temporal representation, and proposes a Hybrid Spatial Compression (HSC) algorithm ...

  9. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next-generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression, contributing to state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
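
    To make two of the stream encoders concrete, here is a hedged sketch of incremental (delta) encoding for the highly repetitive metadata lines and run-length encoding for quality strings. Function names and the on-disk format are illustrative, not LW-FQZip's actual interface.

```python
# Illustrative encoders for two of the three FASTQ streams.

def delta_encode(prev: str, cur: str) -> str:
    """Store only the suffix of `cur` that differs from `prev`,
    prefixed by the length of the shared prefix."""
    k = 0
    while k < min(len(prev), len(cur)) and prev[k] == cur[k]:
        k += 1
    return f"{k}:{cur[k:]}"

def run_length_encode(qual: str):
    """Encode runs of identical quality symbols as (symbol, count) pairs."""
    runs, i = [], 0
    while i < len(qual):
        j = i
        while j < len(qual) and qual[j] == qual[i]:
            j += 1
        runs.append((qual[i], j - i))
        i = j
    return runs

print(delta_encode("@SRR001.1 len=36", "@SRR001.2 len=36"))  # "8:2 len=36"
print(run_length_encode("IIIIHHH#"))  # [('I', 4), ('H', 3), ('#', 1)]
```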

  10. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
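
    A minimal sketch of tuple difference coding, the core idea named above: the first tuple is stored in full and each subsequent (sorted) tuple is stored as per-field differences, which are small and cheap to encode. This is a generic illustration, not the paper's exact block and record format.

```python
def diff_encode(tuples):
    """Store the first tuple verbatim, then per-field deltas."""
    if not tuples:
        return []
    out = [tuple(tuples[0])]
    for prev, cur in zip(tuples, tuples[1:]):
        out.append(tuple(c - p for p, c in zip(prev, cur)))
    return out

def diff_decode(encoded):
    """Rebuild tuples by accumulating the deltas."""
    if not encoded:
        return []
    out = [tuple(encoded[0])]
    for delta in encoded[1:]:
        out.append(tuple(p + d for p, d in zip(out[-1], delta)))
    return out

cells = [(1, 2, 10), (1, 3, 12), (2, 3, 15)]   # sorted cube cells
assert diff_decode(diff_encode(cells)) == cells
```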

  11. Secure and Efficient Anonymous Authentication Scheme in Global Mobility Networks

    Directory of Open Access Journals (Sweden)

    Jun-Sub Kim

    2013-01-01

    Full Text Available In 2012, Mun et al. pointed out that Wu et al.'s scheme failed to achieve user anonymity and perfect forward secrecy and disclosed the passwords of legitimate users, and they proposed a new enhanced anonymous authentication scheme. However, their proposed scheme is susceptible to replay and man-in-the-middle attacks, and it also incurs a high overhead in the database. In this paper, we examine the vulnerabilities in the existing schemes and the computational overhead incurred in the database. We then propose a secure and efficient anonymous authentication scheme for roaming services in global mobility networks. Our proposed scheme is secure against various attacks, provides mutual authentication and session key establishment, and incurs less computational overhead in the database than Mun et al.'s scheme.

  12. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

    Full Text Available The incoherence between the measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.

  13. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    Science.gov (United States)

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  14. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    Directory of Open Access Journals (Sweden)

    Tarik Idbeaa

    Full Text Available This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  15. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Full Text Available Many problems in signal processing and statistical inference involve finding a sparse solution to some underdetermined linear system of equations. This is also the application condition of compressive sensing (CS), which can find the sparse solution from measurements far fewer than the original signal. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing the simple conjugate gradient algorithm, the new formulation provides an effective framework to deduce the solution of the original sparse signal reconstruction problem with the l0-norm regularization term. Secondly, an upper limit on the reconstruction error is presented for the proposed framework, and it is shown that a smaller reconstruction error than that of l1-norm relaxation approaches can be realized by using the proposed scheme in most cases. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
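
    The objective above can be written as min_x 0.5*||y - Ax||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2. The paper solves it with a conjugate gradient formulation; the sketch below substitutes a simpler proximal-gradient (ISTA-style) solver for the same objective, so it illustrates the joint regularization rather than the authors' exact algorithm, and all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_l2_reconstruct(A, y, lam1=0.05, lam2=0.01, iters=500):
    L = np.linalg.norm(A, 2) ** 2 + lam2            # Lipschitz constant of smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam2 * x         # gradient of the smooth terms
        x = soft_threshold(x - grad / L, lam1 / L)  # prox step for the l1 term
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -0.8, 0.6]
x_hat = l1_l2_reconstruct(A, A @ x_true)            # noiseless measurements
```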

  16. Bunch Compression Stability Dependence on RF Parameters

    CERN Document Server

    Limberg, T

    2005-01-01

    In present designs for FELs with high electron peak currents and short bunch lengths, higher-harmonic RF systems are often used to optimize the final longitudinal charge distributions. This opens degrees of freedom in the choice of RF phases and amplitudes to achieve the necessary peak current with a reasonable longitudinal bunch shape. It has been found empirically that different working points result in different tolerances for phases and amplitudes. We give an analytical expression for the sensitivity of the compression factor to phase and amplitude jitter for a bunch compression scheme involving two RF systems and two magnetic chicanes, as well as numerical results for the case of the European XFEL.

  17. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We also propose a novel proportional fair multiuser switched-based scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  18. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad

    2012-09-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We also propose a novel proportional fair multiuser switched-based scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  19. Sparse Channel Estimation for MIMO-OFDM Two-Way Relay Network with Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Aihua Zhang

    2013-01-01

    Full Text Available An accurate channel impulse response (CIR) is required for equalization and can help improve communication service quality in next-generation wireless communication systems. An example of an advanced system is the amplify-and-forward multiple-input multiple-output two-way relay network modulated by orthogonal frequency-division multiplexing. Linear channel estimation methods, for example, least squares and expectation conditional maximization, have been proposed previously for this system. However, these methods do not take advantage of channel sparsity, which degrades their estimation performance. We propose a sparse channel estimation scheme, different from the linear methods, at the end users of the relay channel, enabling us to exploit this sparsity. First, we formulate the sparse channel estimation problem as a compressed sensing problem by using sparse decomposition theory. Second, the CIR is reconstructed by the CoSaMP and OMP algorithms. Finally, computer simulations are conducted to confirm the superiority of the proposed methods over traditional linear channel estimation methods.
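
    A hedged sketch of Orthogonal Matching Pursuit (OMP), one of the two greedy recovery algorithms named above, applied to a generic sparse-channel model y = A h; the pilot matrix, dimensions, and sparsity level are illustrative, not the paper's setup.

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedy OMP recovery of a sparse vector h from y = A h."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS re-fit
        residual = y - A[:, support] @ coef
    h = np.zeros(A.shape[1])
    h[support] = coef
    return h

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256)) / 8.0        # illustrative measurement matrix
h_true = np.zeros(256)
h_true[[3, 40, 100, 200]] = rng.standard_normal(4)
h_hat = omp(A, A @ h_true, sparsity=4)
print(np.max(np.abs(h_hat - h_true)))           # recovery error
```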

  20. A new access scheme in OFDMA systems

    Institute of Scientific and Technical Information of China (English)

    GU Xue-lin; YAN Wei; TIAN Hui; ZHANG Ping

    2006-01-01

    This article presents a dynamic random access scheme for orthogonal frequency division multiple access (OFDMA) systems. The key features of the proposed scheme are: it is a combination of both the distributed and the centralized schemes; it can accommodate several delay sensitivity classes; and it can adjust the number of random access channels in a media access control (MAC) frame, and the access probability, according to the outcome of mobile terminals' access attempts in previous MAC frames. For packet-based networks with a floating population, the proposed scheme can lead to high average user satisfaction.

  1. Time-lens based optical packet pulse compression and retiming

    DEFF Research Database (Denmark)

    Laguardia Areal, Janaina; Hu, Hao; Palushani, Evarist

    2010-01-01

    recovery, resulting in a potentially very efficient solution. The scheme uses a time-lens, implemented through a sinusoidally driven optical phase modulation, combined with a linear dispersion element. As time-lenses are also used for pulse compression, we design the circuit also to perform pulse...

  2. A new deflection technique applied to an existing scheme of electrostatic accelerator for high energy neutral beam injection in fusion reactor devices

    Science.gov (United States)

    Pilan, N.; Antoni, V.; De Lorenzi, A.; Chitarin, G.; Veltri, P.; Sartori, E.

    2016-02-01

    A scheme for a neutral beam injector (NBI), based on electrostatic acceleration and magneto-static deflection of negative ions, is proposed and analyzed in terms of feasibility and performance. The scheme is based on the deflection of a high energy (2 MeV) and high current (some tens of amperes) negative ion beam by a large magnetic deflector placed between the Beam Source (BS) and the neutralizer. This scheme has the potential to solve two key issues which at present limit the applicability of an NBI to a fusion reactor: the maximum achievable acceleration voltage and the direct exposure of the BS to the flux of neutrons and radiation coming from the fusion reactor. To address these two issues, a magnetic deflector is proposed to screen the BS from direct exposure to radiation and neutrons; the voltage insulation between the electrostatic accelerator and the grounded vessel can then be enhanced by using compressed SF6 instead of vacuum, so that the negative ions can be accelerated to energies higher than 1 MeV. By solving the beam transport for different magnetic deflector properties, an optimum scheme has been found which is shown to be effective in guaranteeing both the steering and the aiming of the beam.

  3. A new deflection technique applied to an existing scheme of electrostatic accelerator for high energy neutral beam injection in fusion reactor devices

    Energy Technology Data Exchange (ETDEWEB)

    Pilan, N., E-mail: nicola.pilan@igi.cnr.it; Antoni, V.; De Lorenzi, A.; Chitarin, G.; Veltri, P.; Sartori, E. [Consorzio RFX—Associazione EURATOM-ENEA per la Fusione, Corso Stati Uniti 4, 35127 Padova (Italy)

    2016-02-15

    A scheme for a neutral beam injector (NBI), based on electrostatic acceleration and magneto-static deflection of negative ions, is proposed and analyzed in terms of feasibility and performance. The scheme is based on the deflection of a high energy (2 MeV) and high current (some tens of amperes) negative ion beam by a large magnetic deflector placed between the Beam Source (BS) and the neutralizer. This scheme has the potential to solve two key issues which at present limit the applicability of an NBI to a fusion reactor: the maximum achievable acceleration voltage and the direct exposure of the BS to the flux of neutrons and radiation coming from the fusion reactor. To address these two issues, a magnetic deflector is proposed to screen the BS from direct exposure to radiation and neutrons; the voltage insulation between the electrostatic accelerator and the grounded vessel can then be enhanced by using compressed SF6 instead of vacuum, so that the negative ions can be accelerated to energies higher than 1 MeV. By solving the beam transport for different magnetic deflector properties, an optimum scheme has been found which is shown to be effective in guaranteeing both the steering and the aiming of the beam.

  4. A proposed radiographic classification scheme for congenital thoracic vertebral malformations in brachycephalic "screw-tailed" dog breeds.

    Science.gov (United States)

    Gutierrez-Quintana, Rodrigo; Guevar, Julien; Stalin, Catherine; Faller, Kiterie; Yeamans, Carmen; Penderis, Jacques

    2014-01-01

    Congenital vertebral malformations are common in brachycephalic "screw-tailed" dog breeds such as French bulldogs, English bulldogs, Boston terriers, and pugs. The aim of this retrospective study was to determine whether a radiographic classification scheme developed for use in humans would be feasible for use in these dog breeds. Inclusion criteria were hospital admission between September 2009 and April 2013, neurologic examination findings available, diagnostic quality lateral and ventro-dorsal digital radiographs of the thoracic vertebral column, and at least one congenital vertebral malformation. Radiographs were retrieved and interpreted by two observers who were unaware of neurologic status. Vertebral malformations were classified based on a classification scheme modified from a previous human study and a consensus of both observers. Twenty-eight dogs met inclusion criteria (12 with neurologic deficits, 16 with no neurologic deficits). Congenital vertebral malformations affected 85/362 (23.5%) of thoracic vertebrae. Vertebral body formation defects were the most common (butterfly vertebrae 6.6%, ventral wedge-shaped vertebrae 5.5%, dorsal hemivertebrae 0.8%, and dorso-lateral hemivertebrae 0.5%). No lateral hemivertebrae or lateral wedge-shaped vertebrae were identified. The T7 vertebra was the most commonly affected (11/28 dogs), followed by T8 (8/28 dogs) and T12 (8/28 dogs). The number and type of vertebral malformations differed between groups (P = 0.01). Based on MRI, dorsal, and dorso-lateral hemivertebrae were the cause of spinal cord compression in 5/12 (41.6%) of dogs with neurologic deficits. Findings indicated that a modified human radiographic classification system of vertebral malformations is feasible for use in future studies of brachycephalic "screw-tailed" dogs. © 2014 American College of Veterinary Radiology.

  5. Compact torus compression of microwaves

    International Nuclear Information System (INIS)

    Hewett, D.W.; Langdon, A.B.

    1985-01-01

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code

  6. Simulation of moving boundaries interacting with compressible reacting flows using a second-order adaptive Cartesian cut-cell method

    Science.gov (United States)

    Muralidharan, Balaji; Menon, Suresh

    2018-03-01

    A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for the simulation of compressible viscous flow over static embedded boundaries, is now extended to reacting flow simulations over moving interfaces. The main difficulty in simulating moving boundary problems with immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas-phase reactions near solid boundaries can produce huge source terms in the governing equations, which, if not properly treated for moving boundaries, can result in inaccurate numerical predictions. The small-cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries while enforcing strict conservation. In addition, the cell clustering algorithm also preserves the smoothness of the solution near moving surfaces. A second-order Runge-Kutta scheme in which the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one-dimensional reacting and non-reacting studies of a moving piston are first performed in order to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and the interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere when they are launched into a detonable mixture. The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen

  7. Performance evaluation of breast image compression techniques

    International Nuclear Information System (INIS)

    Anastassopoulos, G.; Lymberopoulos, D.; Panayiotakis, G.; Bezerianos, A.

    1994-01-01

    Novel diagnosis-oriented teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, while maximizing the speed of data transmission. In this paper an estimation is made of all the factors of the process that affect the presentation of breast images, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors)

  8. A Multiserver Biometric Authentication Scheme for TMIS using Elliptic Curve Cryptography.

    Science.gov (United States)

    Chaudhry, Shehzad Ashraf; Khan, Muhammad Tawab; Khan, Muhammad Khurram; Shon, Taeshik

    2016-11-01

    Recently, several authentication schemes have been proposed for telecare medicine information systems (TMIS). Many such schemes have been proved to have weaknesses against known attacks. Furthermore, numerous such schemes cannot be used in real-time scenarios because they assume a single server for authentication across the globe. Very recently, Amin et al. (J. Med. Syst. 39(11):180, 2015) designed an authentication scheme for secure communication between a patient and a medical practitioner using a trusted central medical server. They claimed that their scheme meets all security requirements and emphasized its efficiency. However, the analysis in this article proves that the scheme designed by Amin et al. is vulnerable to stolen smart card and stolen verifier attacks. Furthermore, their scheme has scalability issues along with inefficient password change and password recovery phases. We then propose an improved scheme. The proposed scheme is more practical, secure and lightweight than Amin et al.'s scheme. The security of the proposed scheme is proved using the popular automated tool ProVerif.

  9. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictors, linear predictors, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over state-of-the-art compression methods.

  10. Fluffy dust forms icy planetesimals by static compression

    Science.gov (United States)

    Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji

    2013-09-01

    Context. Several barriers have been proposed in planetesimal formation theory: bouncing, fragmentation, and radial drift problems. Understanding the structure evolution of dust aggregates is a key in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they are fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals. Aims: We aim to reveal the pathway of dust structure evolution from dust grains to compact planetesimals. Methods: Using the compressive strength formula, we analytically investigate how fluffy dust aggregates are compressed by static compression due to ram pressure of the disk gas and self-gravity of the aggregates in protoplanetary disks. Results: We reveal the pathway of the porosity evolution from dust grains via fluffy aggregates to form planetesimals, circumventing the barriers in planetesimal formation. The aggregates are compressed by the disk gas to a density of 10-3 g/cm3 in coagulation, which is more compact than is the case with collisional compression. Then, they are compressed more by self-gravity to 10-1 g/cm3 when the radius is 10 km. Although the gas compression decelerates the growth, the aggregates grow rapidly enough to avoid the radial drift barrier when the orbital radius is ≲6 AU in a typical disk. Conclusions: We propose a fluffy dust growth scenario from grains to planetesimals. It enables icy planetesimal formation in a wide range beyond the snowline in protoplanetary disks. This result proposes a concrete initial condition of planetesimals for the later stages of the planet formation.

  11. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    Science.gov (United States)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems with massive numbers of WSN nodes require efficient data transmission due to restricted communication capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on upper floors of a structure are excited at the natural frequency, resulting in induced shaking within a specific narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We implement a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform in the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data is transmitted through computer networks, the data is restored by the inverse Fourier transform at the receiving node. This paper discusses the evaluation of the compressed sensing for seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32 of its original size. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
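
    A minimal sketch of the band-limited compression described above, assuming the simplest FFT-based realization: keep only the FFT bins inside a pass band at the sensing node, transmit those, and restore by zero-filling and inverse FFT at the receiver. The sampling rate, pass band, and test signal below are illustrative.

```python
import numpy as np

def compress(signal, fs, f_lo, f_hi):
    """Keep only the FFT bins inside the pass band; transmit those."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[keep], keep, len(signal)

def restore(kept_spec, keep_mask, n):
    """Receiver side: zero-fill the discarded bins and invert the FFT."""
    spec = np.zeros(keep_mask.size, dtype=complex)
    spec[keep_mask] = kept_spec
    return np.fft.irfft(spec, n)

fs, n = 200.0, 2048
t = np.arange(n) / fs
rng = np.random.default_rng(2)
accel = np.sin(2 * np.pi * 2.5 * t) + 0.1 * rng.standard_normal(n)
kept, mask, length = compress(accel, fs, 0.5, 20.0)
accel_hat = restore(kept, mask, length)
avg_err = np.mean(np.abs(accel - accel_hat))   # cf. the paper's average error
print(len(kept) / (n // 2 + 1), avg_err)       # compression fraction, error
```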

  12. Security and efficiency data sharing scheme for cloud storage

    International Nuclear Information System (INIS)

    Han, Ke; Li, Qingbo; Deng, Zhongliang

    2016-01-01

    With the adoption and diffusion of the data sharing paradigm in cloud storage, there have been increasing demands and concerns for shared data security. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is becoming a promising cryptographic solution to the security problem of shared data in cloud storage. However, due to key escrow, backward security and inefficiency problems, existing CP-ABE schemes cannot be directly applied to cloud storage systems. In this paper, an effective and secure access control scheme for shared data is proposed to solve those problems. The proposed scheme refines the security of existing CP-ABE based schemes. Specifically, the key escrow and collusion problems are addressed by dividing the key generation center into several distributed semi-trusted parts. Moreover, a revocation algorithm is proposed to address not only backward secrecy but also the efficiency problems in existing CP-ABE based schemes. Furthermore, security and performance analyses indicate that the proposed scheme is both secure and efficient for cloud storage.

  13. Extremely short relativistic-electron-bunch generation in the laser wakefield via novel bunch injection scheme

    Directory of Open Access Journals (Sweden)

    A. G. Khachatryan

    2004-12-01

    Full Text Available Recently a new electron-bunch injection scheme for the laser wakefield accelerator has been proposed [JETP Lett. 74, 371 (2001); Phys. Rev. E 65, 046504 (2002)]. In this scheme, a low-energy electron bunch, sent into a plasma channel just before a high-intensity laser pulse, is trapped in the laser wakefield, considerably compressed, and accelerated to an ultrarelativistic energy. In this paper we show the possibility of generating an extremely short (on the order of 1 μm long, or a few femtoseconds in duration) relativistic electron bunch by this mechanism. The initial electron bunch, which can be generated, for example, by a laser-driven photocathode RF gun, should have an energy of a few hundred keV to a few MeV, a duration in the picosecond range or less, and a relatively low concentration. The trapping conditions and the parameters of the accelerated bunch are investigated. The laser pulse dynamics as well as a possible experimental setup for the demonstration of the injection scheme are also considered.

  14. A Memory Efficient Network Encryption Scheme

    Science.gov (United States)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

    In this paper, we study the two encryption schemes widely used in network applications. Shortcomings have been found in both schemes, as these schemes consume either more memory to gain high throughput or low memory with low throughput. The need has arisen for a scheme that has low memory requirements and at the same time possesses high speed, as the number of internet users increases each day. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.

  15. Exact analysis of Packet Reversed Packet Combining Scheme and Modified Packet Combining Scheme; and a combined scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-07-01

    The packet combining scheme is a well-defined, simple error correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput when compared to other basic ARQ protocols, the packet combining (PC) scheme fails to correct errors when errors occur in the same bit locations of the copies. In a previous work, a scheme known as the Packet Reversed Packet Combining (PRPC) Scheme, which corrects errors that occur at the same bit location of erroneous copies, was studied; however, PRPC does not handle the situation where a packet has more than one error bit. The Modified Packet Combining (MPC) Scheme, which can correct double or higher bit errors, was studied elsewhere. Previous studies suggested that both PRPC and MPC offer higher throughput; however, neither adequate investigation nor exact analysis was done to substantiate this claim. In this work, an exact analysis of both PRPC and MPC is carried out and the results are reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet size. (author)

  16. Robust and Efficient Authentication Scheme for Session Initiation Protocol

    Directory of Open Access Journals (Sweden)

    Yanrong Lu

    2015-01-01

    Full Text Available The session initiation protocol (SIP is a powerful application-layer protocol which is used as a signaling one for establishing, modifying, and terminating sessions among participants. Authentication is becoming an increasingly crucial issue when a user asks to access SIP services. Hitherto, many authentication schemes have been proposed to enhance the security of SIP. In 2014, Arshad and Nikooghadam proposed an enhanced authentication and key agreement scheme for SIP and claimed that their scheme could withstand various attacks. However, in this paper, we show that Arshad and Nikooghadam’s authentication scheme is still susceptible to key-compromise impersonation and trace attacks and does not provide proper mutual authentication. To conquer the flaws, we propose a secure and efficient ECC-based authentication scheme for SIP. Through the informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks including the attacks found in Arshad et al.’s scheme. In addition, the performance analysis shows that our scheme has similar or better efficiency in comparison with other existing ECC-based authentication schemes for SIP.

  17. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
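
    A hedged sketch of the two on-board degradation steps just described: spatial downsampling to obtain the low-resolution hyperspectral cube, and spectral binning to obtain the high-resolution multispectral image. Block-averaging, the band grouping, and the factors are assumed choices for illustration, not the paper's specific degradation operators.

```python
import numpy as np

def spatial_degrade(cube, factor=4):
    """Average non-overlapping factor x factor spatial blocks (rows, cols, bands)."""
    r, c, b = cube.shape
    return cube[: r - r % factor, : c - c % factor, :].reshape(
        r // factor, factor, c // factor, factor, b
    ).mean(axis=(1, 3))

def spectral_degrade(cube, n_bands=8):
    """Average groups of adjacent spectral bands into n_bands channels."""
    r, c, b = cube.shape
    return cube[:, :, : b - b % n_bands].reshape(
        r, c, n_bands, b // n_bands
    ).mean(axis=3)

hsi = np.random.default_rng(3).random((128, 128, 64))
lr_hsi = spatial_degrade(hsi)      # (32, 32, 64) low-res hyperspectral
hr_msi = spectral_degrade(hsi)     # (128, 128, 8) high-res multispectral
```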

  18. A hybrid pi control scheme for airship hovering

    International Nuclear Information System (INIS)

    Ashraf, Z.; Choudhry, M.A.; Hanif, A.

    2012-01-01

    Airships provide many attractive applications in the aerospace industry, including transportation of heavy payloads, tourism, emergency management, communication, and hover- and vision-based applications. Hovering control of airships has many uses in different engineering fields. However, it is a difficult problem to sustain the hover condition while maintaining controllability. So far, different solutions have been proposed in the literature, but most of them are difficult to analyze and implement. In this paper, we present a simple and efficient multi-input multi-output hybrid PI control scheme for airships. It can maintain the stability of the plant by rejecting disturbance inputs to ensure robustness. A control scheme based on feedback theory is proposed that uses principles of optimality with integral action for hovering applications. Simulations are carried out in MATLAB to examine the proposed control scheme for hovering in different wind conditions. A comparison of the technique with an existing scheme is performed, demonstrating the effectiveness of the control scheme. (author)

  19. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Field (CRF) modeling and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  20. Quality-on-Demand Compression of EEG Signals for Telemedicine Applications Using Neural Network Predictors

    Directory of Open Access Journals (Sweden)

    N. Sriraam

    2011-01-01

    Full Text Available A telemedicine system that uses communication and information technology to deliver medical signals such as ECG and EEG for long-distance medical services has become a reality. In either urgent treatment or ordinary healthcare, it is necessary to compress these signals for the efficient use of bandwidth. This paper discusses quality-on-demand compression of EEG signals using neural network predictors for telemedicine applications. The objective is to obtain greater compression gains at a low bit rate while preserving the clinical information content. A two-stage compression scheme with a predictor and an entropy encoder is used. The residue signal obtained after prediction is first thresholded using various levels of thresholds, then quantized and encoded using an arithmetic encoder. Three neural network models, single-layer and multi-layer perceptrons and the Elman network, are used, and the results are compared with linear predictors such as FIR filters and AR modeling. The fidelity of the reconstructed EEG signal is assessed quantitatively using parameters such as PRD, SNR, cross correlation and power spectral density. The results show that the quality of the reconstructed signal is preserved at a low PRD, thereby yielding better compression results compared to those obtained using a lossless scheme.
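
    A hedged sketch of the predict-threshold-quantize pipeline described above. The trained neural predictor is stubbed by a fixed linear predictor, and the threshold and quantizer step (the quality-on-demand knobs) are illustrative values; for brevity the predictor runs on the original samples, whereas a real codec would predict from reconstructed values.

```python
import numpy as np

def compress_residues(x, coeffs=(1.8, -0.8), thresh=0.01, step=0.005):
    """Predict each sample, zero small residues, quantize the rest."""
    p = len(coeffs)
    residues = np.zeros_like(x)
    for n in range(p, len(x)):
        pred = sum(c * x[n - 1 - i] for i, c in enumerate(coeffs))
        r = x[n] - pred
        residues[n] = 0.0 if abs(r) < thresh else r   # quality-on-demand knob
    return np.round(residues / step).astype(int)      # uniform quantization

eeg = np.cumsum(np.random.default_rng(4).standard_normal(1000)) * 0.01
q = compress_residues(eeg)
# `q` is mostly zeros and small integers, which an arithmetic coder compresses well
```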

  1. Advancement of compressible multiphase flows and sodium-water reaction analysis program SERAPHIM. Validation of a numerical method for the simulation of highly underexpanded jets

    International Nuclear Information System (INIS)

    Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira

    2010-01-01

    SERAPHIM is a computer program for the simulation of compressible multiphase flow involving the sodium-water chemical reaction under a tube failure accident in a steam generator of sodium-cooled fast reactors. In this study, numerical analysis of highly underexpanded air jets into air or into water was performed as part of the validation of the SERAPHIM program. The multi-fluid model, the second-order TVD scheme and the HSMAC method considering compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate multiphase flows that include supersonic gaseous jets. In the case of the air jet into air, the calculated pressure, the shape of the jet and the location of the Mach disk agreed with existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into water was also reproduced successfully by the proposed numerical method. (author)

  2. Wireless Broadband Access and Accounting Schemes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, we propose two wireless broadband access and accounting schemes. In both schemes, the accounting system adopts RADIUS protocol, but the access system adopts SSH and SSL protocols respectively.

  3. Symmetric weak ternary quantum homomorphic encryption schemes

    Science.gov (United States)

    Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao

    2016-03-01

    Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes are proposed. First, for a one-qutrit rotation gate, a QHE scheme is constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme is proposed. Third, based on the one-qutrit scheme, a two-qutrit QHE scheme for the generalized controlled-X (GCX(m,n)) gate is constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes is analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability p_k = 1/3^{3n}, so the schemes can better protect the privacy of users' data. Moreover, these schemes can be well integrated into a future quantum remote server architecture, and thus the computational security of users' private quantum information can be well protected in a distributed computing environment.

  4. ECF2: A pulsed power generator based on magnetic flux compression for K-shell radiation production

    International Nuclear Information System (INIS)

    L'Eplattenier, P.; Lassalle, F.; Mangeant, C.; Hamann, F.; Bavay, M.; Bayol, F.; Huet, D.; Morell, A.; Monjaux, P.; Avrillaud, G.; Lalle, B.

    2002-01-01

    The ECF2 generator, with 3 MJ of stored energy, is being developed at the Centre d'Etudes de Gramat, France, for K-shell radiation production. This generator is based on microsecond LTD stages as primary generators, and on the magnetic flux compression scheme for power amplification from the microsecond to the 100 ns regime. This paper presents a general overview of the ECF2 generator. The flux compression stage, a key component, is studied in detail, and its advantages and drawbacks are presented. We then present the first experimental and numerical results, which show the improvements that have already been made on this scheme

  5. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
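
    A minimal sketch of the "lossy plus residual coding" principle named above: a lossy layer (here a crude low-rank SVD stand-in for the paper's matrix/tensor decomposition) plus a quantized residual layer whose step size is chosen so the reconstruction error never exceeds a user-specified bound, which is the near-lossless guarantee. The rank, bound, and test data are illustrative.

```python
import numpy as np

def near_lossless(X, rank=4, max_err=0.5):
    """Lossy layer (low-rank approximation) plus a quantized residual layer
    that bounds the maximum absolute reconstruction error by `max_err`."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # lossy layer
    step = 2.0 * max_err                                # quantizer step size
    resid_q = np.round((X - lossy) / step).astype(int)  # residual layer
    recon = lossy + resid_q * step
    assert np.max(np.abs(X - recon)) <= max_err + 1e-9  # guaranteed bound
    return lossy, resid_q, recon

X = np.random.default_rng(5).standard_normal((16, 512)).cumsum(axis=1)
_, resid_q, recon = near_lossless(X)
# `resid_q` is a small-integer array well suited to arithmetic coding
```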

  6. Arbitrated quantum signature scheme with message recovery

    International Nuclear Information System (INIS)

    Lee, Hwayean; Hong, Changho; Kim, Hyunsang; Lim, Jongin; Yang, Hyung Jin

    2004-01-01

    Two quantum signature schemes with message recovery relying on the availability of an arbitrator are proposed. One scheme uses a public board and the other does not. However both schemes provide confidentiality of the message and a higher efficiency in transmission

  7. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel...... compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques....
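
    To illustrate the general idea of set compression (not the specific algorithm proposed in the record above), here is a classic construction for sets of fixed-length bitstrings: sort the elements as integers and encode successive gaps, which are small and skewed and hence cheap to entropy-code; the set's lack of order is exactly what makes sorting free.

```python
def gap_encode(bitstrings):
    """Sort set elements as integers and store the first value plus gaps."""
    vals = sorted(int(b, 2) for b in bitstrings)   # set order is irrelevant
    return [vals[0]] + [b - a for a, b in zip(vals, vals[1:])]

def gap_decode(gaps, width):
    """Accumulate the gaps and re-render fixed-width bitstrings."""
    vals, acc = set(), 0
    for g in gaps:
        acc += g
        vals.add(format(acc, f"0{width}b"))
    return vals

s = {"0001", "0110", "0100", "1011"}
assert gap_decode(gap_encode(s), 4) == s
```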

  8. A robust trust establishment scheme for wireless sensor networks.

    Science.gov (United States)

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2015-03-23

    Security techniques like cryptography and authentication can fail to protect a network once a node is compromised. Hence, trust establishment continuously monitors and evaluates node behavior to detect malicious and compromised nodes. However, just like other security schemes, trust establishment is also vulnerable to attack. Moreover, malicious nodes might misbehave intelligently to trick trust establishment schemes. Unfortunately, attack-resistance and robustness issues with trust establishment schemes have not received much attention from the research community. Considering the vulnerability of trust establishment to different attacks and the unique features of sensor nodes in wireless sensor networks, we propose a lightweight and robust trust establishment scheme. The proposed trust scheme is lightweight thanks to a simple trust estimation method. The comprehensiveness and flexibility of the proposed trust estimation scheme make it robust against different types of attack and misbehavior. Performance evaluation under different types of misbehavior and on-off attacks shows that the detection rate of the proposed trust mechanism is higher and more stable compared to other trust mechanisms.

  9. A Robust Trust Establishment Scheme for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Farruh Ishmanov

    2015-03-01

    Full Text Available Security techniques like cryptography and authentication can fail to protect a network once a node is compromised. Hence, trust establishment continuously monitors and evaluates node behavior to detect malicious and compromised nodes. However, just like other security schemes, trust establishment is also vulnerable to attack. Moreover, malicious nodes might misbehave intelligently to trick trust establishment schemes. Unfortunately, attack-resistance and robustness issues with trust establishment schemes have not received much attention from the research community. Considering the vulnerability of trust establishment to different attacks and the unique features of sensor nodes in wireless sensor networks, we propose a lightweight and robust trust establishment scheme. The proposed trust scheme is lightweight thanks to a simple trust estimation method. The comprehensiveness and flexibility of the proposed trust estimation scheme make it robust against different types of attack and misbehavior. Performance evaluation under different types of misbehavior and on-off attacks shows that the detection rate of the proposed trust mechanism is higher and more stable compared to other trust mechanisms.
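
    The record above does not spell out its estimator, so as a hedged illustration of the same design space, here is a classic lightweight beta-reputation trust update: trust is the expected cooperation rate computed from counts of good and bad interactions, with exponential forgetting so on-off attackers cannot bank old good behavior. All names and values are hypothetical.

```python
def update_trust(alpha, beta, cooperated: bool, forget=0.95):
    """One beta-reputation update from a single observed interaction."""
    alpha, beta = forget * alpha, forget * beta    # age old evidence
    if cooperated:
        alpha += 1.0
    else:
        beta += 1.0
    trust = alpha / (alpha + beta)                 # expected cooperation rate
    return alpha, beta, trust

a, b = 1.0, 1.0   # uninformative prior
for behaved in [True] * 20 + [False] * 5:          # on-off style behavior
    a, b, t = update_trust(a, b, behaved)
print(round(t, 3))   # trust drops quickly once misbehavior begins
```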

  10. A simple angular transmit diversity scheme using a single RF frontend for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama Nafeth Saleem; Papadias, Constantinos B.; Kalis, Antonis

    2009-01-01

    array (SPA) with a single transceiver, and an array area of 0.0625 square wavelengths. The scheme which requires no channel state information (CSI) at the transmitter, provides mainly a diversity gain to combat against multipath fading. The performance/capacity of the proposed diversity scheme...

  11. Autonomous droop scheme with reduced generation cost

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    Droop scheme has been widely applied to the control of Distributed Generators (DGs) in microgrids for proportional power sharing based on their ratings. For a standalone microgrid, where a centralized management system is not viable, the proportional power sharing based droop might not suit well since...... DGs are usually of different types unlike synchronous generators. This paper presents an autonomous droop scheme that takes into consideration the operating cost, efficiency and emission penalty of each DG since all these factors directly or indirectly contribute to the Total Generation Cost (TGC......) of the overall microgrid. Comparing it with the traditional scheme, the proposed scheme has retained its simplicity, which certainly is a feature preferred by the industry. The overall performance of the proposed scheme has been verified through simulation and experiment....

  12. Peeling Decoding of LDPC Codes with Applications in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijun Zeng

    2016-01-01

    Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis in the context of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: if the density factor is below this threshold, the recovery algorithm succeeds; otherwise it fails. Simulation results are also provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding, which focuses on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance with different parameters of measurement matrices and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
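
    The record contains no pseudocode, so the sketch below is our own minimal rendition of a peeling-style recovery loop, under the added assumptions of an exactly sparse, nonnegative signal and a binary (LDPC-like) measurement matrix: a zero measurement clears all of its unknowns, and a measurement with a single unknown resolves it directly.

```python
import numpy as np

def peeling_decode(A, y, max_iters=100):
    """Recover a nonnegative, exactly sparse x from y = A @ x by peeling.
    A is a binary measurement matrix (rows = check nodes)."""
    m, n = A.shape
    x = np.full(n, np.nan)              # NaN marks unresolved entries
    y = y.astype(float).copy()
    for _ in range(max_iters):
        progress = False
        for i in range(m):
            unknown = [j for j in np.flatnonzero(A[i]) if np.isnan(x[j])]
            if not unknown:
                continue
            if y[i] == 0:               # zero check: all its unknowns are zero
                for j in unknown:
                    x[j] = 0.0
                progress = True
            elif len(unknown) == 1:     # degree-one check: resolve directly
                j = unknown[0]
                x[j] = y[i]
                y -= x[j] * A[:, j]     # peel the resolved entry away
                progress = True
        if not progress:
            break
    return x

rng = np.random.default_rng(0)
A = (rng.random((40, 80)) < 0.1).astype(int)
x_true = np.zeros(80)
x_true[rng.choice(80, 5, replace=False)] = rng.integers(1, 9, 5)
x_hat = peeling_decode(A, A @ x_true)
print(np.allclose(np.nan_to_num(x_hat), x_true))  # typically succeeds here
```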

  13. Distributed multi-agent scheme for reactive power management with renewable energy

    International Nuclear Information System (INIS)

    Rahman, M.S.; Mahmud, M.A.; Pota, H.R.; Hossain, M.J.

    2014-01-01

    Highlights: • A distributed multi-agent scheme is proposed to enhance the dynamic voltage stability. • A control agent is designed where control actions are performed through PI controller. • Proposed scheme is compared with the conventional approach with DSTATCOM. • Proposed scheme adapts the capability of estimation and control under various operating conditions. - Abstract: This paper presents a new distributed multi-agent scheme for reactive power management in smart coordinated distribution networks with renewable energy sources (RESs) to enhance the dynamic voltage stability, which is mainly based on controlling distributed static synchronous compensators (DSTATCOMs). The proposed control scheme is incorporated in a multi-agent framework where the intelligent agents simultaneously coordinate with each other and represent various physical models to provide information and energy flow among different physical processes. The reactive power is estimated from the topology of distribution networks and, with this information, the necessary control actions are performed through the proposed proportional integral (PI) controller. The performance of the proposed scheme is evaluated on an 8-bus distribution network under various operating conditions, validated through simulation results, and compared to that of the conventional PI-based DSTATCOM control scheme. From the simulation results, it is found that the distributed MAS provides excellent performance for improving voltage profiles by managing reactive power in a smarter way.

  14. A proposal of new nuclear communication scheme based on qualitative research

    International Nuclear Information System (INIS)

    Yagi, Ekou; Takahashi, Makoto; Kitamura, Masaharu

    2007-01-01

    An action research project called dialogue forum has been conducted in this study. The essential constituent of the project is a series of repetitive dialogue sessions carried out by lay citizens, nuclear experts, and a facilitator. One important feature of the project is that the study has been conducted based on the qualitative research methodology. The changes in opinions and attitude of the dialogue participants have been analyzed by an ethno-methodological approach. The observations are summarized as follows. The opinions of the citizen participants showed a significant shift from emotional to practical representations along with the progression of the dialogue sessions. Meanwhile, their attitude showed a marked tendency from problem-statement-oriented to problem-solving-oriented representation. On the other hand, the statements of the expert participants showed a significant shift from expert-based to citizen-based risk recognition and description, and their attitude showed a clear tendency from teaching-oriented to colearning-oriented thinking. These changes of opinions and attitude have been interpreted as a coevolving rather than a single process. It can be stressed that this type of change is most important for the reestablishment of mutual trust between the citizens and the nuclear experts. In this regard, the 'Process Model of Coevolution of Risk Recognition' has been proposed as a guideline for developing a new scheme of public communication concerning nuclear technology. The proposed process model of coevolution of risk recognition is regarded as essential for appropriate relationship management between nuclear technology and society in the near future. (author)

  15. An anti-diffusive Lagrange-Remap scheme for multi-material compressible flows with an arbitrary number of components

    Directory of Open Access Journals (Sweden)

    Kokh Samuel

    2012-04-01

    Full Text Available We propose a method dedicated to the simulation of interface flows involving an arbitrary number m of compressible components. Our task is two-fold: we first introduce an m-component flow model that generalizes the two-material five-equation model of [2,3]. Then, we present a discretization strategy by means of a Lagrange-Remap [8,10] approach following the lines of [5,7,12]. The projection step involves an anti-dissipative mechanism derived from [11,12]. This feature makes it possible to prevent the numerical diffusion of the material interfaces. We present two-dimensional simulation results of a three-material flow.

  16. A novel secret image sharing scheme based on chaotic system

    Science.gov (United States)

    Li, Li; Abd El-Latif, Ahmed A.; Wang, Chuanjun; Li, Qiong; Niu, Xiamu

    2012-04-01

    In this paper, we propose a new secret image sharing scheme based on a chaotic system and Shamir's method. The new scheme protects the shadow images with confidentiality and loss-tolerance simultaneously. In the new scheme, we generate the key sequence based on the chaotic system and then encrypt the original image during the sharing phase. Experimental results and analysis of the proposed scheme demonstrate a better performance than other schemes and confirm a high probability of resisting brute-force attacks.
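
    The record names a chaotic system and Shamir's method without further detail. The sketch below only illustrates the keystream step, using a logistic map to generate key bytes that encrypt the image before sharing; the map, its parameters, and the function names are our assumptions, not the paper's.

```python
def logistic_keystream(x0, r, n):
    """Generate n key bytes from the logistic map x <- r*x*(1-x)."""
    x, key = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        key.append(int(x * 256) & 0xFF)
    return bytes(key)

def xor_encrypt(data, x0=0.3141592, r=3.99):
    """Encrypt by XOR-ing with the chaotic keystream (illustrative only)."""
    key = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, key))

image = bytes(range(16))                 # stand-in for image pixel data
cipher = xor_encrypt(image)
assert xor_encrypt(cipher) == image      # XOR keystream is its own inverse
```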

  17. Performance evaluation of breast image compression techniques

    Energy Technology Data Exchange (ETDEWEB)

    Anastassopoulos, G; Lymberopoulos, D [Wire Communications Laboratory, Electrical Engineering Department, University of Patras, Greece (Greece); Panayiotakis, G; Bezerianos, A [Medical Physics Department, School of Medicine, University of Patras, Greece (Greece)

    1994-12-31

    Novel diagnosis-orienting teleworking systems manipulate, store, and process medical data through real-time communication-conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, and to maximize the speed of data transmission. In this paper an estimation of all the factors of the process that affect the presentation of breast images is made, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors). 12 refs, 4 figs.

  18. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on-board satellites and need to be transferred to the earth's surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have been traditionally preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increment in the data-rate of the new-generation sensors is making the need for higher compression ratios more critical, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed for achieving high compression ratios with a good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compressing hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  19. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications, and we have applied each technique to two major embedded processor architectures.
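
    As a reference point, here is a minimal generic Huffman coder in Python, applied to instruction "patterns" as in the splitting technique; this is the textbook construction, not the authors' hardware-supported variant.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol stream."""
    heap = [[w, i, s, None, None]
            for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:              # merge the two lightest subtrees
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], i, None, lo, hi])
        i += 1
    table = {}
    def walk(node, prefix):
        if node[2] is not None:       # leaf: store its codeword
            table[node[2]] = prefix or "0"
        else:
            walk(node[3], prefix + "0")
            walk(node[4], prefix + "1")
    walk(heap[0], "")
    return table

patterns = ["add", "add", "mov", "mov", "mov", "jmp", "nop"]
code = huffman_code(patterns)
bits = "".join(code[p] for p in patterns)
print(code, len(bits), "bits")        # frequent patterns get short codes
```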

  20. Linear perturbation of spherically symmetric flows: a first-order upwind scheme for the gas dynamics equations in Lagrangian coordinates; Perturbation lineaire d'ecoulements a symetrie spherique: schema decentre d'ordre 1 pour les equations de la dynamique des gaz en variables de Lagrange

    Energy Technology Data Exchange (ETDEWEB)

    Clarisse, J.M

    2007-07-01

    A numerical scheme for computing linear Lagrangian perturbations of spherically symmetric flows of gas dynamics is proposed. This explicit first-order scheme uses the Roe method in Lagrangian coordinates for computing the radial spherically symmetric mean flow, and its linearized version for treating the three-dimensional linear perturbations. Fulfillment of the geometric conservation law discrete formulations for both the mean flow and its perturbation is ensured. The scheme's capabilities are illustrated by the computation of free-surface mode evolutions at the boundaries of a spherical hollow shell undergoing a homogeneous cumulative compression, showing excellent agreement with reference results. (author)

  1. ONU Power Saving Scheme for EPON System

    Science.gov (United States)

    Mukai, Hiroaki; Tano, Fumihiko; Tanaka, Masaki; Kozaki, Seiji; Yamanaka, Hideaki

    PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically by sharing an optical fiber among multiple subscribers. Recently, global climate change has been recognized as a serious near-term problem, and power saving techniques for electronic devices are important. In PON systems, the ONU (Optical Network Unit) power saving scheme has been studied and defined in XG-PON. In this paper, we propose an ONU power saving scheme for EPON. Then, we present an analysis of the power reduction effect and the data transmission delay caused by the ONU power saving scheme. According to this analysis, we propose an efficient provisioning method for the ONU power saving scheme which is applicable to both XG-PON and EPON.

  2. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Science.gov (United States)

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. Log Sigmoid and Bernstein Polynomial, with unknown parameters is used for solving the nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of Genetic Algorithm (GA) with Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared and found in sharp agreement with both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which confirms the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
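
    To make the workflow concrete, here is a small self-contained sketch under stated assumptions: a log-sigmoid trial solution with unknown parameters, a fitness equal to the mean squared ODE residual at collocation points, and a bare-bones genetic algorithm standing in for the paper's GA-IPA hybrid. The test equation y' = -y^2 with y(0) = 1 (exact solution 1/(1+x)) is our choice, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 20)                  # collocation points

def trial(p, x):
    """Trial solution y(x) = 1 + x*N(x; p), with N a small log-sigmoid
    combination; the '1 + x*...' form enforces y(0) = 1 exactly."""
    a, b, c = p.reshape(3, -1)
    net = (a[:, None] / (1.0 + np.exp(-(b[:, None] * x + c[:, None])))).sum(axis=0)
    return 1.0 + x * net

def fitness(p, h=1e-4):
    """Global error: mean squared residual of the test ODE y' = -y^2."""
    y = trial(p, xs)
    dy = (trial(p, xs + h) - trial(p, xs - h)) / (2 * h)  # central difference
    return np.mean((dy + y ** 2) ** 2)

pop = rng.normal(0.0, 1.0, (60, 9))             # 3 neurons -> 9 parameters
for gen in range(300):                          # bare-bones GA: elitism + mutation
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:20]]
    children = elite[rng.integers(0, 20, 40)] + rng.normal(0.0, 0.1, (40, 9))
    pop = np.vstack([elite, children])

best = min(pop, key=fitness)
print("max error vs exact 1/(1+x):",
      np.abs(trial(best, xs) - 1.0 / (1.0 + xs)).max())
```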

  3. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Directory of Open Access Journals (Sweden)

    Azmat Ullah

    Full Text Available In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. Log Sigmoid and Bernstein Polynomial, with unknown parameters is used for solving the nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of Genetic Algorithm (GA) with Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared and found in sharp agreement with both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which confirms the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.

  4. A new numerical scheme for non uniform homogenized problems: Application to the non linear Reynolds compressible equation

    Directory of Open Access Journals (Sweden)

    Buscaglia Gustavo C.

    2001-01-01

    Full Text Available A new numerical approach is proposed to alleviate the computational cost of solving non-linear non-uniform homogenized problems. The article details the application of the proposed approach to lubrication problems with roughness effects. The method is based on a two-parameter Taylor expansion of the implicit dependence of the homogenized coefficients on the average pressure and on the local value of the air gap thickness. A fourth-order Taylor expansion provides an approximation that is accurate enough to be used in the global problem solution instead of the exact dependence, without introducing significant errors. In this way, when solving the global problem, the solution of local problems is simply replaced by the evaluation of a polynomial. Moreover, the method leads naturally to Newton-Raphson nonlinear iterations, which further reduce the cost. The overall efficiency of the numerical methodology makes it feasible to apply rigorous homogenization techniques in the analysis of compressible fluid contact considering roughness effects. Previous work makes use of a heuristic averaging technique. Numerical comparison proves that homogenization-based methods are superior when the roughness is strongly anisotropic and not aligned with the flow direction.

  5. Autonomous Droop Scheme With Reduced Generation Cost

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Wang, Peng

    2014-01-01

    ) of the microgrid. To reduce this TGC without relying on fast communication links, an autonomous droop scheme is proposed here, whose resulting power sharing is decided by the individual DG generation costs. Comparing it with the traditional scheme, the proposed scheme retains its simplicity and it is hence more....... This objective might, however, not suit microgrids well since DGs are usually of different types, unlike synchronous generators. Other factors like cost, efficiency, and emission penalty of each DG at different loading must be considered since they contribute directly to the total generation cost (TGC...

  6. Asynchronous Channel-Hopping Scheme under Jamming Attacks

    Directory of Open Access Journals (Sweden)

    Yongchul Kim

    2018-01-01

    Full Text Available Cognitive radio networks (CRNs) are considered an attractive technology to mitigate inefficiency in the usage of licensed spectrum. CRNs allow the secondary users (SUs) to access the unused licensed spectrum and use a blind rendezvous process to establish communication links between SUs. In particular, quorum-based channel-hopping (CH) schemes have been studied recently to provide guaranteed blind rendezvous in decentralized CRNs without using global time synchronization. However, these schemes remain vulnerable to jamming attacks. In this paper, we first analyze the limitations of quorum-based rendezvous schemes called asynchronous channel hopping (ACH). Then, we introduce a novel sequence sensing jamming attack (SSJA) model in which a sophisticated jammer can dramatically reduce the rendezvous success rates of ACH schemes. In addition, we propose a fast and robust asynchronous rendezvous scheme (FRARS) that can significantly enhance robustness under jamming attacks. Our numerical results demonstrate that the proposed scheme vastly outperforms the ACH scheme when there are security concerns about a sequence sensing jammer.

  7. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Enhanced ID-Based Authentication Scheme Using OTP in Smart Grid AMI Environment

    Directory of Open Access Journals (Sweden)

    Sang-Soo Yeo

    2014-01-01

    Full Text Available This paper presents vulnerability analyses of the KL scheme, an ID-based authentication scheme for the AMI network attached to SCADA in the smart grid, and proposes a security-enhanced authentication scheme which satisfies forward secrecy as well as the security requirements introduced in the KL scheme and other existing schemes. The proposed scheme uses the MDMS, the supervising system located at the electrical utility, as a time-synchronizing server in order to synchronize smart devices at home, and conducts authentication between the smart meter and smart devices using a new secret value generated by an OTP generator every session. The proposed scheme has forward secrecy, so it increases overall security, but its communication and computation overhead reduce its performance slightly compared with the existing schemes. Nonetheless, the hardware specifications and communication bandwidth of smart devices will continue to improve, so the proposed scheme would be a good choice for a secure AMI environment.
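
    The record does not specify the OTP construction. A common choice, sketched here purely as an assumption, is a time-synchronized HMAC-based OTP in the spirit of RFC 6238, with the MDMS acting as the shared time reference.

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    """Time-based OTP: HMAC-SHA1 over the time-step counter, truncated
    to a short decimal code (RFC 6238 style)."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Meter and device derive the same code from a shared session secret,
# provided their clocks agree to within one time step.
shared = b"session-secret-from-mdms"              # hypothetical secret
print(totp(shared, t=1_700_000_000))              # deterministic for a fixed t
```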

  9. Development of a compressive surface capturing formulation for modelling free-surface flow by using the volume-of-fluid approach

    CSIR Research Space (South Africa)

    Heyns, Johan A

    2012-06-01

    Full Text Available combines a blended higher resolution scheme with the addition of an artificial compressive term to the volume-of-fluid equation. This reduces the numerical smearing of the interface associated with explicit higher resolution schemes while limiting...

  10. smallWig: parallel compression of RNA-seq WIG files.

    Science.gov (United States)

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order-of-magnitude improvements compared with bigWig and ensures compression rates that are only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. To substantiate
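
    To illustrate the block-based random-access idea described in the abstract (not smallWig's actual file format), the sketch below compresses fixed-size blocks independently with zlib and keeps an offset index so that a single block can be decompressed to answer a query.

```python
import zlib

BLOCK = 4096  # uncompressed block size; a tunable trade-off, not smallWig's value

def compress_blocks(data: bytes):
    """Compress data block by block; return the blob plus an offset index."""
    blob, index = bytearray(), []
    for i in range(0, len(data), BLOCK):
        chunk = zlib.compress(data[i:i + BLOCK], 9)
        index.append((len(blob), len(chunk)))     # (offset, length) per block
        blob += chunk
    return bytes(blob), index

def read_range(blob, index, start, end):
    """Random access: decompress only the blocks covering [start, end)."""
    out = bytearray()
    for b in range(start // BLOCK, (end - 1) // BLOCK + 1):
        off, ln = index[b]
        out += zlib.decompress(blob[off:off + ln])
    first = start % BLOCK                         # offset inside first block
    return bytes(out[first:first + (end - start)])

data = bytes(i % 251 for i in range(20_000))
blob, idx = compress_blocks(data)
assert read_range(blob, idx, 5000, 5100) == data[5000:5100]
```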

  11. An efficient and provable secure revocable identity-based encryption scheme.

    Directory of Open Access Journals (Sweden)

    Changji Wang

    Full Text Available Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme and the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameters size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.

  12. A new method for simplification and compression of 3D meshes

    OpenAIRE

    Attene, Marco

    2001-01-01

    We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply-connected regions, called triangloids. We compute a new mesh M'. Each triangle of M' is a close approximation of a pseudo-triangle of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' ...

  13. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    to be transferred over the data network. The method comprises the steps of: a) extracting payload data from the payload part of the package, b) appending the extracted payload data to a stream of data, c) probing the data package header so as to determine the compression scheme that is applied to the payload data...

  14. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by the alternating minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which suppresses noise and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the reconstructed original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.
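
    As a rough illustration of the alternating idea (a simplified sketch under our own assumptions, not the paper's exact algorithm), the code below alternates a soft-thresholded sparse-coding step for the coefficients with a gradient update of the unknown dictionary, given compressive measurements Y = Phi*D*S of many signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, L = 64, 32, 80, 200            # signal dim, measurements, atoms, signals
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))          # known sensing matrix
D_true = rng.normal(0, 1, (n, k)); D_true /= np.linalg.norm(D_true, axis=0)
S_true = np.where(rng.random((k, L)) < 0.05, rng.normal(0, 1, (k, L)), 0.0)
Y = Phi @ D_true @ S_true               # compressive measurements of X = D S

D = rng.normal(0, 1, (n, k)); D /= np.linalg.norm(D, axis=0)  # unknown dictionary
S = np.zeros((k, L))
lam = 0.01
for it in range(100):
    # (1) sparse-coding step: one ISTA pass on S with D fixed
    A = Phi @ D
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    G = S - t * A.T @ (A @ S - Y)
    S = np.sign(G) * np.maximum(np.abs(G) - t * lam, 0.0)    # soft threshold
    # (2) dictionary step: gradient descent on ||Y - Phi D S||_F^2
    R = Phi @ D @ S - Y
    D -= 0.5 * t * Phi.T @ R @ S.T      # heuristic step size; sketch only
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)        # renormalize atoms

print("relative fit error:", np.linalg.norm(Phi @ D @ S - Y) / np.linalg.norm(Y))
```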

  15. High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids

    Science.gov (United States)

    Mazaheri, Alireza; Nishikawa, Hiroaki

    2015-01-01

    In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring the exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, with 10-15 iterations reducing the residuals by 10 orders of magnitude. We also demonstrate that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of the Gauss-Seidel relaxation.

  16. Normalized compression distance of multisets with applications

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free similarity metric based on compression. We propose an NCD of multisets that is also a metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise
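
    The pairwise NCD has a standard compressor-based definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length; the sketch below instantiates it with zlib. The paper's multiset generalization is not reproduced here.

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length, the practical stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 10
b = b"the quick brown fox leaps over the lazy cat" * 10
c = bytes(range(256)) * 2
print(ncd(a, b))   # small: the two texts share most of their structure
print(ncd(a, c))   # close to 1: unrelated data
```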

  17. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    Full Text Available The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in the secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general-purpose compression tools have been proposed, without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the performance of the proposed technique outperforms both general-purpose and XML-specific compressors.
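
    For reference, here is a naive Burrows-Wheeler transform and its inverse in Python. This is the unmodified textbook construction (the record does not describe the authors' modification), and the O(n^2 log n) rotation sort is for illustration only.

```python
def bwt(s: str, end="\0"):
    """Burrows-Wheeler transform via sorted rotations (naive, illustrative)."""
    s += end                                   # unique sentinel fixes row order
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def inverse_bwt(last: str, end="\0"):
    """Invert the BWT by repeatedly prepending the last column and sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(end))
    return row.rstrip(end)

xml = "<a><b>text</b><b>text</b></a>"
assert inverse_bwt(bwt(xml)) == xml
print(bwt(xml))   # clusters similar characters, helping a later entropy coder
```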

  18. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
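
    The compressive step can be made concrete with a very sparse random measurement matrix of the kind the abstract describes. The sketch below is our minimal rendition, not the authors' code; the sparsity parameter s is an assumption.

```python
import numpy as np

def sparse_measurement_matrix(m, n, s=None, seed=0):
    """Very sparse random matrix with entries in {+sqrt(s), 0, -sqrt(s)}:
    each entry is nonzero with probability 1/s, split evenly in sign."""
    s = s or n // 4                       # typical choice s = O(n); assumption
    rng = np.random.default_rng(seed)
    u = rng.random((m, n))
    R = np.zeros((m, n))
    R[u < 1 / (2 * s)] = np.sqrt(s)
    R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R

features = np.random.default_rng(1).random(10_000)  # high-dimensional features
R = sparse_measurement_matrix(50, 10_000)
compressed = R @ features                 # 50-dimensional compressed feature
print(compressed.shape, np.count_nonzero(R) / R.size)  # density ~ 1/s
```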

  19. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses the roughly 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than that of the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source codes of our algorithm are freely available for academic and non-commercial use. They can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
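
    To give the flavor of greedy matching against a hash-indexed reference (a simplified sketch, not HiRGC's pipeline and without its 2-bit packing), the code below indexes reference k-mers in a hash table and encodes the target as (position, length) copies plus literals.

```python
def build_index(ref: str, k: int = 11):
    """Hash table from each k-mer to its positions in the reference."""
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], []).append(i)
    return index

def compress(target: str, ref: str, k: int = 11):
    """Greedy referential compression: longest match at each k-mer hit."""
    index, out, i = build_index(ref, k), [], 0
    while i < len(target):
        best = None
        for p in index.get(target[i:i + k], []):
            l = k
            while (i + l < len(target) and p + l < len(ref)
                   and target[i + l] == ref[p + l]):
                l += 1                           # greedily extend the match
            if best is None or l > best[1]:
                best = (p, l)
        if best:
            out.append(best)                     # (ref position, match length)
            i += best[1]
        else:
            out.append(target[i])                # literal for unmatched base
            i += 1
    return out

ref = "ACGTACGTTTGACCATGACGTACGTGGA" * 4
tgt = ref[3:60] + "T" + ref[60:100]              # near-copy with one mutation
print(compress(tgt, ref)[:5])
```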

  20. A Remote User Authentication Scheme with Anonymity for Mobile Devices

    Directory of Open Access Journals (Sweden)

    Soobok Shin

    2012-04-01

    Full Text Available With the rapid growth of information technologies, mobile devices have been utilized in a variety of services such as e-commerce. When a remote server provides such e-commerce services to a user, it must verify the legitimacy of the user over an insecure communication channel. Therefore, remote user authentication has been widely deployed to verify the legitimacy of remote user login requests using mobile devices like smart cards. In this paper we propose a smart card-based authentication scheme that provides both user anonymity and mutual authentication between a remote server and a user. The proposed authentication scheme is a simple and efficient system applicable to the limited resources and low computing performance of the smart card. The proposed scheme provides not only resilience to potential attacks on smart card-based authentication schemes, but also secure authentication functions. The smart card performs a simple one-way hash function and the operations of exclusive-or and concatenation in the authentication phase of the proposed scheme. The proposed scheme also provides user anonymity using a dynamic identity and key agreement, and secure password change.

  1. An Interference Cancellation Scheme for High Reliability Based on MIMO Systems

    Directory of Open Access Journals (Sweden)

    Jae-Hyun Ro

    2018-03-01

    Full Text Available This article proposes a new interference cancellation scheme in a half-duplex based two-path relay system. In the conventional two-path relay system, inter-relay interference (IRI), which severely degrades the error performance at a destination, occurs because a source and a relay transmit signals simultaneously at a specific time. The proposed scheme removes the IRI at a relay to obtain a higher signal-to-interference plus noise ratio (SINR) and an interference-free signal at the destination, unlike the conventional relay system, which removes the IRI at the destination. To handle the IRI, the proposed scheme uses multiple-input multiple-output (MIMO) signal detection at the relays, which allows low-complexity signal processing at the destination, usually a mobile user. At the relays, the proposed scheme uses the low-complexity QR decomposition-M algorithm (QRD-M) to optimally remove the IRI. Also, to obtain diversity gain, the proposed scheme uses cyclic delay diversity (CDD) to transmit the signals at the source and the relays. Simulation results show that, unlike the conventional scheme, the error performance of the proposed scheme improves when the distance between the two relays is small, because the QRD-M detects the received signals in order of decreasing post signal-to-noise ratio (SNR).

  2. Analysis and improvement for the performance of Baptista's cryptographic scheme

    International Nuclear Information System (INIS)

    Wei Jun; Liao Xiaofeng; Wong, K.W.; Zhou Tsing; Deng Yigui

    2006-01-01

    Based on Baptista's chaotic cryptosystem, we propose a secure and robust chaotic cryptographic scheme after investigating the problems found in this cryptosystem as well as its variants. In this proposed scheme, a subkey array generated from the key and the plaintext is adopted to enhance the security. Some methods are introduced to increase the efficiency. Theoretical analyses and numerical simulations indicate that the proposed scheme is secure and efficient for practical use.
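
    For context, Baptista's original idea can be sketched as follows; this is a minimal rendition of the classic scheme, without the subkey-array hardening this record proposes. The attractor interval is partitioned into sites, one per symbol, and each plaintext byte is encrypted as the number of map iterations needed to land on its site (parameters must keep the map chaotic and every site reachable, which we assume here).

```python
def baptista_encrypt(plaintext, x0=0.43203125, r=3.78, n0=250, sites=256):
    """Ciphertext = iteration counts of the logistic map x <- r*x*(1-x)."""
    lo, hi = 0.2, 0.8                    # attractor interval split into sites
    x, counts = x0, []
    for _ in range(n0):                  # transient iterations (shared secret)
        x = r * x * (1 - x)
    for byte in plaintext:
        c = 0
        while True:
            x = r * x * (1 - x); c += 1
            if lo <= x < hi and int((x - lo) / (hi - lo) * sites) == byte:
                break                    # landed on this byte's site
        counts.append(c)
    return counts

def baptista_decrypt(counts, x0=0.43203125, r=3.78, n0=250, sites=256):
    lo, hi = 0.2, 0.8
    x = x0
    for _ in range(n0):
        x = r * x * (1 - x)
    out = bytearray()
    for c in counts:
        for _ in range(c):               # replay the same trajectory
            x = r * x * (1 - x)
        out.append(int((x - lo) / (hi - lo) * sites))
    return bytes(out)

msg = b"chaos"
assert baptista_decrypt(baptista_encrypt(msg)) == msg
```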

  3. Numerical investigation on compressible flow characteristics in axial compressors using a multi block finite-volume scheme

    International Nuclear Information System (INIS)

    Farhanieh, B.; Amanifard, N.; Ghorbanian, K.

    2002-01-01

    An unsteady two-dimensional numerical investigation was performed on the viscous flow passing through a multi-blade cascade. A Cartesian finite-volume approach was linked to Van Leer's and Roe's flux splitting schemes to evaluate the inviscid flux terms. To prevent oscillatory behavior of the numerical results and to increase the accuracy, the Monotonic Upstream Scheme for Conservation Laws (MUSCL) was added to the flux splitting schemes. The Baldwin-Lomax (BL) turbulence model was implemented to solve the turbulent case studies. An implicit solution was also provided using the Lower-Upper (LU) decomposition technique to compare with the explicit solutions. To validate the numerical procedure, two test cases were prepared, and the flow over a NACA 0012 airfoil was investigated and the pressure coefficients were compared to the reference data. The numerical solver was then applied to study the flow passing over a compressor cascade. The results of various combinations of splitting schemes and the MUSCL limiter were compared with each other to find the suitable methods for cascade problems. Finally, the convergence histories of the implemented schemes were compared to each other to show the behavior of the solver when using the various methods, before implementing them in flow instability studies.

  4. Building Secure Public Key Encryption Scheme from Hidden Field Equations

    Directory of Open Access Journals (Sweden)

    Yuan Ping

    2017-01-01

    Full Text Available Multivariate public key cryptography is a set of cryptographic schemes built from the NP-hardness of solving quadratic equations over finite fields, amongst which the hidden field equations (HFE) family of schemes remains the most famous. However, the original HFE scheme was insecure, and the follow-up modifications were shown to be still vulnerable to attacks. In this paper, we propose a new variant of the HFE scheme by considering the special equation x^2 = x, which holds over the finite field F_3 when x = 0, 1. We observe that the equation can be used to further destroy the special structure of the underlying central map of the HFE scheme. It is shown that the proposed public key encryption scheme is secure against known attacks including the MinRank attack, the algebraic attacks, and the linearization equations attacks. The proposal gains some advantages over the original HFE scheme with respect to the encryption speed and public key size.

  5. Coulomb-Driven Relativistic Electron Beam Compression.

    Science.gov (United States)

    Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhao, Lingrong; Zhu, Pengfei; Xiang, Dao; Zhang, Jie

    2018-01-26

    Coulomb interaction between charged particles is a well-known phenomenon in many areas of research. In general, the Coulomb repulsion force broadens the pulse width of an electron bunch and limits the temporal resolution of many scientific facilities such as ultrafast electron diffraction and x-ray free-electron lasers. Here we demonstrate a scheme that actually makes use of the Coulomb force to compress a relativistic electron beam. Furthermore, we show that the Coulomb-driven bunch compression process does not introduce additional timing jitter, which is in sharp contrast to the conventional radio-frequency buncher technique. Our work not only leads to enhanced temporal resolution in electron-beam-based ultrafast instruments that may provide new opportunities in probing material systems far from equilibrium, but also opens a promising direction for advanced beam manipulation through self-field interactions.

  6. Coulomb-Driven Relativistic Electron Beam Compression

    Science.gov (United States)

    Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhao, Lingrong; Zhu, Pengfei; Xiang, Dao; Zhang, Jie

    2018-01-01

    Coulomb interaction between charged particles is a well-known phenomenon in many areas of research. In general, the Coulomb repulsion force broadens the pulse width of an electron bunch and limits the temporal resolution of many scientific facilities such as ultrafast electron diffraction and x-ray free-electron lasers. Here we demonstrate a scheme that actually makes use of the Coulomb force to compress a relativistic electron beam. Furthermore, we show that the Coulomb-driven bunch compression process does not introduce additional timing jitter, which is in sharp contrast to the conventional radio-frequency buncher technique. Our work not only leads to enhanced temporal resolution in electron-beam-based ultrafast instruments that may provide new opportunities in probing material systems far from equilibrium, but also opens a promising direction for advanced beam manipulation through self-field interactions.

  7. Quantum Secure Communication Scheme with W State

    International Nuclear Information System (INIS)

    Wang Jian; Zhang Quan; Tang Chaojng

    2007-01-01

    We present a quantum secure communication scheme using three-qubit W state. It is unnecessary for the present scheme to use alternative measurement or Bell basis measurement. Compared with the quantum secure direct communication scheme proposed by Cao et al. [H.J. Cao and H.S. Song, Chin. Phys. Lett. 23 (2006) 290], in our scheme, the detection probability for an eavesdropper's attack increases from 8.3% to 25%. We also show that our scheme is secure for a noise quantum channel.

  8. Study of the shock ignition scheme in inertial confinement fusion

    International Nuclear Information System (INIS)

    Lafon, M.

    2011-01-01

    The Shock Ignition (SI) scheme is an alternative to classical ignition schemes in Inertial Confinement Fusion. Its distinctive feature lies in the relaxation of constraints during the compression phase and the fulfilment of ignition conditions by launching a short and intense laser pulse (∼500 ps, ∼300 TW) on the pre-assembled fuel at the end of the implosion. In this thesis, it has been established that the SI process leads to a non-isobaric fuel configuration at the ignition time, thus modifying the ignition criteria of Deuterium-Tritium (DT) with respect to the conventional schemes. A gain model has been developed, and gain curves have been inferred and numerically validated. This hydrodynamic modeling has demonstrated that the SI process allows higher gain and a lower ignition energy threshold than conventional ignition, due to the high hot-spot pressure at ignition time resulting from the ignitor shock propagation. The radiative hydrodynamics code CHIC, developed at the CELIA laboratory, has been used to determine parametric dependences describing the optimal conditions for target design leading to ignition. These numerical studies have highlighted the potential of SI with regard to saving laser energy and obtaining high gains, but also with regard to safety margins and ignition robustness. Finally, the results of the first SI experiments performed in spherical geometry on the OMEGA laser facility (NY, USA) are presented. An interpretation of the experimental data is proposed from one- and two-dimensional hydrodynamic simulations. Then, several leads are explored to account for the differences observed between experimental and numerical data, and alternative solutions to improve performance are suggested. (author) [fr]

  9. Solution of Euler unsteady equations using a second order numerical scheme

    International Nuclear Information System (INIS)

    Devos, J.P.

    1992-08-01

    In thermal power plants, the steam circuits experience incidents due to the noise and vibration induced by transonic flow. In these configurations, the compressible fluid can be considered an ideal gas; the Euler equations therefore constitute a good model. However, the treatment of the discontinuities induced by the shockwaves is a particular problem. We give a bibliographical synthesis of the work done on this subject. The research by Roe and Harten leads to TVD (Total Variation Diminishing) type schemes. These second-order schemes generate no oscillation and converge towards physically acceptable weak solutions. (author). 12 refs.

  10. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  11. A Cell-Centered Multiphase ALE Scheme With Structural Coupling

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Timothy Alan [Univ. of California, Davis, CA (United States)

    2012-04-16

    A novel computational scheme has been developed for simulating compressible multiphase flows interacting with solid structures. The multiphase fluid is computed using a Godunov-type finite-volume method. This has been extended to allow computations on moving meshes using a direct arbitrary Lagrangian-Eulerian (ALE) scheme. The method has been implemented within a Lagrangian hydrocode, which allows modeling the interaction with Lagrangian structural regions. Although the above scheme is general enough for use in many applications, the ultimate goal of the research is the simulation of heterogeneous energetic materials, such as explosives or propellants. The method is powerful enough for application to all stages of the problem, including the initial burning of the material, the propagation of blast waves, and the interaction with surrounding structures. The method has been tested on a number of canonical multiphase tests as well as fluid-structure interaction problems.

  12. A New Privacy-Preserving Handover Authentication Scheme for Wireless Networks.

    Science.gov (United States)

    Wang, Changji; Yuan, Yuan; Wu, Jiayuan

    2017-06-20

    Handover authentication is a critical issue in wireless networks; it is used to ensure that mobile nodes roam across multiple access points securely and seamlessly. A variety of handover authentication schemes for wireless networks have been proposed in the literature. Unfortunately, existing handover authentication schemes are vulnerable to a few security attacks, or incur high communication and computation costs. Recently, He et al. proposed a handover authentication scheme, PairHand, and claimed it can resist various attacks, without rigorous security proofs. In this paper, we show that PairHand does not meet forward secrecy and strong anonymity. More seriously, it is vulnerable to a key compromise attack, where an adversary can recover the private key of any mobile node. Then, we propose a new efficient and provably secure handover authentication scheme for wireless networks based on elliptic curve cryptography. Compared with existing schemes, our proposed scheme can resist the key compromise attack, and achieves forward secrecy and strong anonymity. Moreover, it is more efficient in terms of computation and communication.

  13. An adaptive secret key-directed cryptographic scheme for secure transmission in wireless sensor networks

    International Nuclear Information System (INIS)

    Muhammad, K.; Jan, Z.; Khan, Z

    2015-01-01

    Wireless Sensor Networks (WSNs) are memory- and bandwidth-limited networks whose main goals are to maximize the network lifetime and minimize the energy consumption and transmission cost. To achieve these goals, different techniques of compression and clustering have been used. However, security is an open and major issue in WSNs, for which different approaches are used in both centralized and distributed WSN environments. This paper presents an adaptive cryptographic scheme for the secure transmission of various sensitive parameters, sensed by wireless sensors, to the fusion center for further processing in WSNs such as military networks. The proposed method encrypts the sensitive captured data of sensor nodes using various encryption procedures (bitwise XOR, bit shuffling, and secret-key-based encryption) and then sends it to the fusion center. At the fusion center, the received encrypted data is decrypted for taking further necessary actions. The experimental results, along with a complexity analysis, validate the effectiveness and feasibility of the proposed method in terms of security in WSNs. (author)
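
    A minimal sketch of the two lightweight operations named in the record, a key-derived XOR stream and a key-directed shuffle; the SHA-256 key derivation and the byte-level permutation are our assumptions, not the paper's exact procedures.

```python
import hashlib, random

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key by hashing a counter."""
    out, c = bytearray(), 0
    while len(out) < n:
        out += hashlib.sha256(key + c.to_bytes(4, "big")).digest()
        c += 1
    return bytes(out[:n])

def encrypt(data: bytes, key: bytes) -> bytes:
    xored = bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))
    perm = list(range(len(data)))
    random.Random(key).shuffle(perm)          # key-directed shuffle
    return bytes(xored[p] for p in perm)

def decrypt(cipher: bytes, key: bytes) -> bytes:
    perm = list(range(len(cipher)))
    random.Random(key).shuffle(perm)
    xored = bytearray(len(cipher))
    for i, p in enumerate(perm):              # invert the permutation
        xored[p] = cipher[i]
    return bytes(b ^ k for b, k in zip(xored, keystream(key, len(cipher))))

reading = b"temperature=23.5;node=07"         # hypothetical sensed parameter
key = b"shared-sensor-key"
assert decrypt(encrypt(reading, key), key) == reading
```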

  14. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provided technical support for real-time feature extraction. However, all existing compressive trackers were based on the compressed Haar-like feature, and how to compress many more excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature was proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, a normalized block difference feature extends the two pixels in the original formula of NPD to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.

  15. Efficient two-dimensional compressive sensing in MIMO radar

    Science.gov (United States)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals, and then we propose a new measurement matrix design for our 2D compressive sensing model, based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using the gradient descent algorithm (2D-MMDGD) has much lower computational complexity than one-dimensional (1D) methods while performing better than conventional methods such as a Gaussian random measurement matrix.
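
    A sketch of the design objective only: the mutual coherence of the sensing matrix that the paper minimizes by gradient descent. The dimensions and the identity sparsifying basis below are placeholders, not the paper's radar model.

```python
# Mutual coherence of a sensing matrix A = Phi @ Psi; the quantity a
# gradient-descent design would drive down. Sizes are placeholders.
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
Phi = rng.standard_normal((32, 128))    # measurement matrix (to be optimized)
Psi = np.eye(128)                       # sparsifying basis (placeholder)
print(mutual_coherence(Phi @ Psi))      # objective to minimize
```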

  16. A numerical scheme for the generalized Burgers–Huxley equation

    Directory of Open Access Journals (Sweden)

    Brajesh K. Singh

    2016-10-01

    Full Text Available In this article, a numerical solution of the generalized Burgers–Huxley (gBH) equation is approximated by using a new scheme: the modified cubic B-spline differential quadrature method (MCB-DQM). The scheme is based on the differential quadrature method, in which the weighting coefficients are obtained by using modified cubic B-splines as a set of basis functions. This scheme reduces the equation to a system of first-order ordinary differential equations (ODEs), which is solved by adopting the SSP-RK43 scheme. Further, it is shown that the proposed scheme is stable. The efficiency of the proposed method is illustrated by four numerical experiments, which confirm that the obtained results are in good agreement with earlier studies. This scheme is an easy, economical and efficient technique for finding numerical solutions for various kinds of (nonlinear) physical models as compared to earlier schemes.
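
    For reference, a commonly used form of the generalized Burgers–Huxley equation (the article's parameter symbols may differ):

```latex
% Generalized Burgers-Huxley equation in a common notation.
u_t + \alpha\, u^{\delta} u_x - u_{xx}
  = \beta\, u \left(1 - u^{\delta}\right)\left(u^{\delta} - \gamma\right),
\qquad 0 \le x \le 1,\; t \ge 0 .
```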

  17. Study on Exploding Wire Compression for Evaluating Electrical Conductivity in Warm-Dense Diamond-Like-Carbon

    International Nuclear Information System (INIS)

    Sasaki, Toru; Takahashi, Kazumasa; Kudo, Takahiro; Kikuchi, Takashi; Aso, Tsukasa; Harada, Nob.; Fujioka, Shinsuke; Horioka, Kazuhiko

    2016-01-01

    To improve the coupling efficiency of the fast-ignition scheme of inertial confinement fusion, fast electron behavior as a function of electrical conductivity must be known. To evaluate the electrical conductivity of low-Z materials such as diamond-like carbon (DLC), we have proposed a concept for investigating the properties of warm dense matter (WDM) by using pulsed-power discharges. The concept for evaluating DLC as WDM is shock compression driven by an exploding wire discharge confined by a rigid capillary. Evaluating the electrical conductivity of WDM DLC requires the exploding wire itself to have a small electrical conductivity. To analyze the electrical conductivity of the exploding wire, we have demonstrated an exploding gold wire discharge in water. The results indicate that the electrical conductivity of WDM gold at a temperature of 5000 K is in an insulator-like regime. This means that shock compression driven by an exploding wire discharge confined by a rigid capillary can be applied to the evaluation of the electrical conductivity of WDM DLC. (paper)

  18. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper, in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time at which the data were sampled or collected is very critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only some of the messages of certain types sampled within some time bound, instead of the entire subset. Hence, in order to cater for such situations, in this paper we propose a time- and identity-based proxy reencryption scheme that takes into account the time at which the data were collected, in addition to their type, as a factor when categorizing data. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE to IBE). We prove that our scheme is semantically secure in the standard model.

  19. Time Reversal UWB Communication System: A Novel Modulation Scheme with Experimental Validation

    Directory of Open Access Journals (Sweden)

    Khaleghi A

    2010-01-01

    Full Text Available A new modulation scheme is proposed for a time reversal (TR) ultra-wideband (UWB) communication system. The new modulation scheme uses binary pulse amplitude modulation (BPAM) and adds a new level of modulation to increase the data rate of a TR UWB communication system. Multiple data bits can be transmitted simultaneously at the cost of a small amount of added interference. The bit error rate (BER) performance and the maximum achievable data rate of the new modulation scheme are theoretically analyzed. Two separate measurement campaigns are carried out to analyze the proposed modulation scheme. In the first campaign, the frequency responses of a typical indoor channel are measured and the performance is studied by simulations using the measured frequency responses. The theoretical and simulated performances are in strong agreement with each other. Furthermore, the BER performance of the proposed modulation scheme is compared with the performance of existing modulation schemes. It is shown that the proposed modulation scheme outperforms QAM and PAM in an AWGN channel. In the second campaign, an experimental validation of the proposed modulation scheme is carried out. It is shown that the performances in the two measurement campaigns are in good agreement.

  20. Certificateless short sequential and broadcast multisignature schemes using elliptic curve bilinear pairings

    Directory of Open Access Journals (Sweden)

    SK Hafizul Islam

    2014-01-01

    Full Text Available Several certificateless short signature and multisignature schemes based on traditional public key infrastructure (PKI) or identity-based cryptosystems (IBC) have been proposed in the literature; however, no certificateless short sequential (or serial) multisignature (CL-SSMS) or short broadcast (or parallel) multisignature (CL-SBMS) schemes have been proposed. In this paper, we propose two such new CL-SSMS and CL-SBMS schemes based on elliptic curve bilinear pairing. Like any certificateless public key cryptosystem (CL-PKC), the proposed schemes are free from the public key certificate management burden and the private key escrow problem found in PKI- and IBC-based cryptosystems, respectively. In addition, the requirements of the expected security level and a fixed-length signature with constant verification time are achieved in our schemes. The schemes are communication efficient, as the length of the multisignature is equivalent to a single elliptic curve point, making them the shortest possible multisignature schemes. The proposed schemes are therefore suitable for communication systems with resource-constrained devices, such as PDAs, mobile phones, RFID chips, and sensors, where communication bandwidth, battery life, computing power and storage space are limited.

  1. A threshold-based fixed predictor for JPEG-LS image compression

    Science.gov (United States)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors near diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme simplifies to other existing schemes, so it can also be regarded as an integration of these schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
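
    For context, the standard JPEG-LS MED predictor is shown below, together with a hedged illustration of a threshold-based variant; the paper's exact thresholding rule is not reproduced, and the threshold `T` and the near-planar test are hypothetical.

```python
# Standard MED predictor plus an illustrative (not the paper's) threshold rule.
def med_predict(a, b, c):
    """JPEG-LS MED: a = left, b = above, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)          # vertical/horizontal edge case
    if c <= min(a, b):
        return max(a, b)
    return a + b - c              # smooth-region (planar) prediction

def thresholded_predict(a, b, c, T=8):
    """Illustration only: treat near-planar contexts (|a + b - 2c| < T) as
    smooth, which lets diagonal edges fall through to the planar guess."""
    if abs(a + b - 2 * c) < T:
        return a + b - c
    return med_predict(a, b, c)
```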

  2. PROMISE: parallel-imaging and compressed-sensing reconstruction of multicontrast imaging using SharablE information.

    Science.gov (United States)

    Gong, Enhao; Huang, Feng; Ying, Kui; Wu, Wenchuan; Wang, Shi; Yuan, Chun

    2015-02-01

    A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. The multicontrast scheme requires a long scanning time. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans. However, there are several unsolved problems in existing methods. The target of this work is to improve existing CS-PPI methods for multicontrast imaging, especially for two-dimensional imaging. If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. It is proposed in this study to use manifold sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, extracted from previously reconstructed images, were applied to enhance the following reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting manifold sharable information. Moreover, the advantage of PROMISE persists in the presence of interscan motion. Using the sharable information among multicontrast images can enhance CS-PPI with tolerance to motion. © 2014 Wiley Periodicals, Inc.

  3. Generalization of binary tensor product schemes depends upon four parameters

    International Nuclear Information System (INIS)

    Bashir, R.; Bari, M.; Mustafa, G.

    2018-01-01

    This article deals with general formulae for parametric and non-parametric bivariate subdivision schemes with four parameters. By assigning specific values to those parameters, we obtain some special cases of existing tensor product schemes as well as a new proposed scheme. The behavior of the schemes produced by the general formulae can be interpolating, approximating or relaxed. Approximating bivariate subdivision schemes produce surfaces different from those of interpolating bivariate subdivision schemes. Polynomial reproduction and polynomial generation are desirable properties of subdivision schemes. The capability of polynomial reproduction and polynomial generation is strongly connected with smoothness, sum rules, convergence and approximation order. We also calculate the polynomial generation and polynomial reproduction of a 9-point bivariate approximating subdivision scheme. A comparison of the polynomial reproduction, polynomial generation and continuity of the existing and proposed schemes has also been established. Some numerical examples are presented to show the behavior of the bivariate schemes. (author)

  4. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio but with a significant degradation of the object identification performance. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands; then, a wavelet-based algorithm is applied to each subspace; finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.

  5. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of fractal image compression (FIC) because it is time-consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio for the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block. The results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the better results, such as reduced encoding time with acceptable values of both compression quality parameters: CR (compression ratio) and PSNR (image quality). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
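
    A minimal sketch of the pool-reduction idea: keep only domain blocks whose entropy is close to the range block's before running the expensive full search. The tolerance `tol` and the 8-bit histogram are assumed parameters, not values from the paper.

```python
# Entropy-based domain pool filtering for fractal image compression (sketch).
import numpy as np

def entropy(block, bins=256):
    """Shannon entropy of an 8-bit image block, in bits."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return -(p * np.log2(p)).sum()

def candidate_domains(range_block, domain_blocks, tol=0.5):
    """Keep domains whose entropy is within `tol` bits of the range block's,
    shrinking the search pool before range-domain matching."""
    h_r = entropy(range_block)
    return [d for d in domain_blocks if abs(entropy(d) - h_r) <= tol]
```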

  6. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least-square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
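
    A sketch of least-square pixel prediction of the general kind the abstract names: fit weights over causal neighbours inside a causal training window, then predict the current pixel. The neighbour set, window size, and region handling are illustrative assumptions, not the paper's design.

```python
# LS-based pixel prediction from causal neighbours (sketch, interior pixels).
import numpy as np

def ls_predict(img, y, x, win=6):
    """Predict img[y, x] from its 4 causal neighbours (W, N, NW, NE) with
    weights fitted by least squares inside a causal training window."""
    def neigh(j, i):
        return [img[j, i - 1], img[j - 1, i], img[j - 1, i - 1], img[j - 1, i + 1]]
    rows, targets = [], []
    for j in range(max(1, y - win), y + 1):
        for i in range(1, img.shape[1] - 1):
            if j == y and i >= x:
                break                      # keep the window strictly causal
            rows.append(neigh(j, i))
            targets.append(img[j, i])
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return float(np.dot(w, neigh(y, x)))

img = (np.arange(64 * 64, dtype=float).reshape(64, 64)) % 97  # toy image
print(ls_predict(img, 10, 10))
```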

  7. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    Science.gov (United States)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

    The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme accelerates data mining, such as principal component analysis, as it can be performed on the compressed data representation, reducing the factorisation time of a single image from five minutes to under a second. Using this workflow, the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between the bone and tissue that provides improved mechanical load tolerance; a similar interface was found in the ligament construct.
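
    A sketch of the compressed-factorisation idea: project each spectrum onto a small random basis, then run PCA on the compressed representation. The sizes and the random projection below are illustrative placeholders, not the study's basis approximation.

```python
# PCA on randomly compressed Raman spectra (sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10_000, 1_024))          # pixels x Raman wavenumbers (toy)
R = rng.standard_normal((1_024, 64)) / np.sqrt(64)
Xc = X @ R                               # 64-D compressed representation

Xc -= Xc.mean(axis=0)                    # PCA runs on the small matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :5] * S[:5]                # first 5 principal-component scores
```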

  8. A blind reversible robust watermarking scheme for relational databases.

    Science.gov (United States)

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, referred to as "histogram shifting of adjacent pixel difference" (APD), is used to obtain reversibility. The proposed scheme successfully detects 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.
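
    A sketch of generic histogram-shifting embedding on a sequence of non-negative integer values (for an APD-style scheme these would be adjacent differences of database attributes); the full scheme's recovery bookkeeping, such as storing the payload length, is omitted here.

```python
# Histogram-shifting reversible embedding (sketch; non-negative int values).
import numpy as np

def hs_embed(vals, bits):
    vals = np.asarray(vals).copy()
    peak = np.bincount(vals).argmax()     # most frequent value
    vals[vals > peak] += 1                # open an empty bin at peak + 1
    it = iter(bits)
    for i in np.flatnonzero(vals == peak):
        try:
            vals[i] += next(it)           # embed one bit per peak element
        except StopIteration:
            break
    return vals, peak

def hs_extract(vals, peak):
    vals = np.asarray(vals)
    # Payload length must be stored separately; trailing zeros are padding.
    bits = [int(v == peak + 1) for v in vals if v in (peak, peak + 1)]
    restored = vals.copy()
    restored[restored > peak] -= 1        # shift back: fully reversible
    return bits, restored

diffs = np.array([3, 3, 4, 3, 5, 2, 3])
marked, peak = hs_embed(diffs, bits=[1, 0, 1])
print(hs_extract(marked, peak))           # bits recovered, diffs restored
```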

  9. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest-growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (radial basis function) network. The proposed technique reduces the number of color intensity levels using the moving average histogram technique and then corrects the color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing, but the proposed method has been tested on various image resolutions to allow a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and compared with existing algorithms in terms of CR (compression ratio), MSE (mean square error), PSNR (peak signal-to-noise ratio), and computational complexity. The outcome shows that the proposed methodology offers a better trade-off in terms of compression ratio, PSNR (which determines image quality), and computational complexity. (author)

  10. Proposal of a new classification scheme for periocular injuries

    Directory of Open Access Journals (Sweden)

    Devi Prasad Mohapatra

    2017-01-01

    Full Text Available Background: Eyelids are important structures that protect the globe from trauma and brightness, maintain the integrity of the tear film, move tears towards the lacrimal drainage system, and contribute to the aesthetic appearance of the face. Ophthalmic trauma is an important cause of morbidity among individuals and has also been responsible for additional healthcare costs. Periocular trauma involving the eyelids and adjacent structures has been found to have increased recently, probably due to the increased pace of life and increased dependence on machinery. A comprehensive classification of periocular trauma would help in stratifying these injuries as well as studying outcomes. Material and Methods: This study was carried out at our institute from June 2015 to December 2015. We searched multiple English-language databases for existing classification systems for periocular trauma and designed a system of classification of periocular soft tissue injuries based on clinico-anatomical presentations. This classification was applied prospectively to patients presenting with periocular soft tissue injuries to our department. Results: A comprehensive classification scheme was designed, consisting of five types of periocular injuries. A total of 38 eyelid injuries in 34 patients were evaluated in this study. According to the System for Peri-Ocular Trauma (SPOT) classification, Type V injuries were the most common; SPOT Type II injuries were the most common isolated injuries among all zones. Discussion: Classification systems are necessary in order to provide a framework in which to scientifically study the etiology, pathogenesis, and treatment of diseases in an orderly fashion. The SPOT classification takes into account periocular soft tissue injuries, i.e., upper eyelid, lower eyelid, medial and lateral canthus injuries, based on observed clinico-anatomical patterns of eyelid injuries. Conclusion: The SPOT classification seems to be a reliable scheme for classifying periocular soft tissue injuries.

  11. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of aircraft and diagnosing their failures. This work combines oversampling technology with a run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
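
    A minimal sketch of run-length encoding with an error factor: samples within `eps` of the running value join the current run, trading a bounded distortion for longer runs. The tolerance `eps` is an assumed parameter; the paper's FPGA pipeline and exact rule are not reproduced.

```python
# Run-length encoding with an error factor (lossy RLE), sketch.
def rle_encode(samples, eps=2):
    runs = []                              # list of (value, length) pairs
    for s in samples:
        if runs and abs(s - runs[-1][0]) <= eps:
            v, n = runs[-1]
            runs[-1] = (v, n + 1)          # extend the current run
        else:
            runs.append((s, 1))            # start a new run
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

telemetry = [100, 101, 99, 100, 180, 181, 180, 55]
print(rle_encode(telemetry))   # [(100, 4), (180, 3), (55, 1)]
```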

  12. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  13. SRIM Scheme: An Impression-Management Scheme for Privacy-Aware Photo-Sharing Users

    Directory of Open Access Journals (Sweden)

    Fenghua Li

    2018-02-01

    Full Text Available With the development of online social networks (OSNs and modern smartphones, sharing photos with friends has become one of the most popular social activities. Since people usually prefer to give others a positive impression, impression management during photo sharing is becoming increasingly important. However, most of the existing privacy-aware solutions have two main drawbacks: ① Users must decide manually whether to share each photo with others or not, in order to build the desired impression; and ② users run a high risk of leaking sensitive relational information in group photos during photo sharing, such as their position as part of a couple, or their sexual identity. In this paper, we propose a social relation impression-management (SRIM scheme to protect relational privacy and to automatically recommend an appropriate photo-sharing policy to users. To be more specific, we have designed a lightweight face-distance measurement that calculates the distances between users’ faces within group photos by relying on photo metadata and face-detection results. These distances are then transformed into relations using proxemics. Furthermore, we propose a relation impression evaluation algorithm to evaluate and manage relational impressions. We developed a prototype and employed 21 volunteers to verify the functionalities of the SRIM scheme. The evaluation results show the effectiveness and efficiency of our proposed scheme. Keywords: Impression management, Relational privacy, Photo sharing, Policy recommendation, Proxemics

  14. On Novel Access and Scheduling Schemes for IoT Communications

    Directory of Open Access Journals (Sweden)

    Zheng Jiang

    2016-01-01

    Full Text Available The Internet of Things (IoT) is expected to foster the development of 5G wireless networks and requires efficient support for a large number of simultaneous short-message communications. To address these challenges, some existing works utilize new waveforms and multiuser superposition transmission (MUST) schemes to improve the capacity of IoT communication. In this paper, we investigate the spatial degrees of freedom of IoT devices based on their distribution, extend multiuser shared access (MUSA), which is one of the typical MUST schemes, to the spatial domain, and propose two novel schemes, namely a preconfigured access scheme and a joint spatial- and code-domain scheduling scheme, to enhance IoT communication. The results indicate that the proposed schemes can reduce the collision rate dramatically during the IoT random access procedure and noticeably improve the performance of IoT communication. The simulation results also show that the proposed scheduling scheme can achieve performance similar to the corresponding brute-force scheduling but with lower complexity.

  15. Scheme for achieving coherent perfect absorption by anisotropic metamaterials

    KAUST Repository

    Zhang, Xiujuan

    2017-02-22

    We propose a unified scheme to achieve coherent perfect absorption of electromagnetic waves by anisotropic metamaterials. The scheme describes the condition on perfect absorption and offers an inverse design route based on effective medium theory in conjunction with retrieval method to determine practical metamaterial absorbers. The scheme is scalable to frequencies and applicable to various incident angles. Numerical simulations show that perfect absorption is achieved in the designed absorbers over a wide range of incident angles, verifying the scheme. By integrating these absorbers, we further propose an absorber to absorb energy from two coherent point sources.

  16. A robust cloud access scheme with mutual authentication

    Directory of Open Access Journals (Sweden)

    Chen Chin-Ling

    2016-01-01

    Full Text Available With the progress of network technology, we can access information through remote servers and also store and access large amounts of personal data on them. Protecting these data and resisting unauthorized access are therefore important issues. Several authentication schemes have been proposed, but security weaknesses still exist in them. This article is based on the concept of HDFS (Hadoop Distributed File System) and offers a robust authentication scheme. The proposed scheme achieves mutual authentication, prevents replay attacks, solves the asynchronization issue, and prevents offline password guessing attacks.

  17. A progressive diagonalization scheme for the Rabi Hamiltonian

    International Nuclear Information System (INIS)

    Pan, Feng; Guan, Xin; Wang, Yin; Draayer, J P

    2010-01-01

    A diagonalization scheme for the Rabi Hamiltonian, which describes a qubit interacting with a single-mode radiation field via a dipole interaction, is proposed. It is shown that the Rabi Hamiltonian can be solved almost exactly using a progressive scheme that involves a finite set of one-variable polynomial equations. The scheme is especially efficient for the lower part of the spectrum. Some low-lying energy levels of the model with several sets of parameters are calculated and compared to those provided by the recently proposed generalized rotating-wave approximation and a full matrix diagonalization.
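
    For reference, the Rabi Hamiltonian in a common convention (the paper's symbols may differ): a qubit of splitting omega_0 coupled with strength g to a single bosonic mode of frequency omega, with no rotating-wave approximation:

```latex
% Rabi Hamiltonian in a common convention.
H = \omega\, a^{\dagger} a + \frac{\omega_0}{2}\,\sigma_z
    + g\,\sigma_x \left(a^{\dagger} + a\right)
```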

  18. A privacy preserving secure and efficient authentication scheme for telecare medical information systems.

    Science.gov (United States)

    Mishra, Raghavendra; Barnwal, Amit Kumar

    2015-05-01

    The telecare medical information system (TMIS) provides effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. Later, it was proved insecure against various active and passive attacks. To remove the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we present an improved anonymous smart card based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, the proposed scheme provides efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overheads with relevant schemes.

  19. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a total size of 12 GB. In order to evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID:27965708

  20. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    Full Text Available We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a total size of 12 GB. In order to evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
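
    A sketch of the sliding-window encoder described above: at each position, try the longest n-gram (five-gram down to bigram) present in the dictionaries and emit its (n, id) code, falling back to a literal. The toy dictionaries and greedy longest-match policy below stand in for the paper's 12 GB dictionaries and best-stream search.

```python
# Greedy n-gram dictionary encoding (sketch).
def encode(words, dicts):
    out, i = [], 0
    while i < len(words):
        for n in range(5, 1, -1):                 # prefer longer n-grams
            gram = tuple(words[i:i + n])
            if len(gram) == n and gram in dicts[n]:
                out.append((n, dicts[n][gram]))   # 2-4 byte code in practice
                i += n
                break
        else:
            out.append((1, words[i]))             # unigram / literal fallback
            i += 1
    return out

dicts = {n: {} for n in range(2, 6)}
dicts[2][("hello", "world")] = 0                  # toy dictionary entry
print(encode("hello world again".split(), dicts)) # [(2, 0), (1, 'again')]
```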

  1. Modelling and optimization of seawater desalination process using mechanical vapour compression

    Directory of Open Access Journals (Sweden)

    V.P. Kravchenko

    2016-09-01

    Full Text Available Under global climate change, the shortage of fresh water is becoming an urgent problem for an increasing number of countries. One of the most promising technologies for seawater desalination is mechanical vapour compression (MVC), which provides low energy consumption owing to the heat-pump principle. Aim: The aim of this research is to identify reserves for increasing the efficiency of desalination systems based on mechanical vapour compression, by optimizing the scheme and parameters of installations with MVC. Materials and Methods: A new type of desalination installation is proposed, whose main element is a latent-heat exchanger. Sea water, after preliminary heating in heat exchangers, enters the evaporator-condenser, where it receives the main amount of heat from the condensing steam. Part of the sea water evaporates, while the strong salt solution (brine) leaves the evaporator and, after cooling, is discharged back into the sea. The steam formed is compressed by the compressor and enters the condenser. An essential feature of this scheme is that condensation happens at a higher temperature than evaporation. Thanks to this, the heat released during condensation is used to evaporate sea water. Thereby, in this class of desalination installations the heat-pump principle is implemented. Results: To achieve this aim, the following tasks were solved: the mathematical model of installations with MVC was modified and supplemented; the scheme of heat exchanger switching was modified; and the influence of the design parameters of the desalination installation on equipment cost and electric power was investigated. The detailed analysis of the main schemes of the installation and the mathematical model made it possible to define ways of decreasing energy consumption and the possible merit value. The influence of two key parameters - the specific power of the compressor and the specific surface area of the evaporator-condenser - on a

  2. Two-Factor User Authentication with Key Agreement Scheme Based on Elliptic Curve Cryptosystem

    Directory of Open Access Journals (Sweden)

    Juan Qu

    2014-01-01

    Full Text Available A password authentication scheme using a smart card is called a two-factor authentication scheme. Two-factor authentication is the most accepted and commonly used mechanism providing authorized users a secure and efficient method for accessing resources over insecure communication channels. Up to now, various two-factor user authentication schemes have been proposed. However, most of them are vulnerable to smart card loss attacks, offline password guessing attacks, impersonation attacks, and so on. In this paper, we design a password-based remote user authentication scheme with key agreement using an elliptic curve cryptosystem. Security analysis shows that the proposed scheme has a high level of security. Moreover, the proposed scheme is more practical and secure in contrast to some related schemes.

  3. Visual privacy by context: proposal and evaluation of a level-based visualisation scheme.

    Science.gov (United States)

    Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco

    2015-06-04

    Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as in telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.

  4. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    Science.gov (United States)

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems, even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction, weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, achieved with a substantial reduction of computational cost. © 2018 The Author(s).
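
    A sketch of the discrete sum: representative directions with discrete (area-weighted) densities, an exponential fibre energy of HGO type (an assumption here, not necessarily the paper's energy), and exclusion of any direction whose squared stretch indicates compression.

```python
# Discrete fibre dispersion sum with tension-only fibres (sketch).
import numpy as np

def fibre_energy(C, dirs, rho, k1=1.0, k2=1.0):
    """C: right Cauchy-Green tensor; dirs: (m, 3) unit representative
    directions; rho: (m,) discrete fibre densities (area-weighted)."""
    W = 0.0
    for n, w in zip(dirs, rho):
        I4 = n @ C @ n                    # squared fibre stretch
        if I4 > 1.0:                      # keep only fibres in tension
            W += w * (k1 / (2 * k2)) * (np.exp(k2 * (I4 - 1) ** 2) - 1)
    return W

# Uniaxial stretch along z: fibres near z are in tension, others excluded.
F = np.diag([0.95, 0.95, 1.2])
C = F.T @ F
dirs = np.array([[0, 0, 1], [1, 0, 0], [0.6, 0, 0.8]])
rho = np.array([0.5, 0.25, 0.25])
print(fibre_energy(C, dirs, rho))
```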

  5. A combined spectrum sensing and OFDM demodulation scheme

    NARCIS (Netherlands)

    Heskamp, M.; Slump, Cornelis H.

    2009-01-01

    In this paper we propose a combined signaling and spectrum sensing scheme for cognitive radio that can detect in-band primary users while the network's own signal is active. The signaling scheme uses OFDM with phase-shift-keying-modulated sub-carriers, and the detection scheme measures the deviation

  6. Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH.

    Science.gov (United States)

    Oh, Sukho; Hwang, DongYeop; Kim, Ki-Hyung; Kim, Kangseok

    2018-04-16

    Time Slotted Channel Hopping (TSCH) is widely used in industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency in TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as the routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously, utilizing only the local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network can transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance against existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and a higher packet delivery ratio than the existing schemes regardless of the network topology.
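
    A heavily simplified sketch of the core idea: along a forwarding path towards the sink, each hop gets the timeslot right after its child's, so a packet climbs the slotframe in consecutive slots. Channel offsets, conflict handling, and the RPL-derived local topology logic of the real scheme are omitted.

```python
# Consecutive timeslot assignment along a forwarding path (sketch).
def consecutive_schedule(path_to_sink, first_slot=0):
    """path_to_sink: e.g. ['leaf', 'relay1', 'relay2', 'sink'].
    Returns {(tx, rx): timeslot} with strictly consecutive slots, so a
    packet generated at the leaf reaches the sink within one slotframe."""
    return {(path_to_sink[i], path_to_sink[i + 1]): first_slot + i
            for i in range(len(path_to_sink) - 1)}

print(consecutive_schedule(['leaf', 'relay1', 'relay2', 'sink']))
# {('leaf', 'relay1'): 0, ('relay1', 'relay2'): 1, ('relay2', 'sink'): 2}
```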

  7. Quantum Distributed Ballot Scheme Based on Greenberger-Horne-Zeilinger State

    International Nuclear Information System (INIS)

    Shi Ronghua; Wu Ying; Guo Ying; Zeng Guihua

    2010-01-01

    Motivated by the complementary relations of the Greenberger-Horne-Zeilinger (GHZ) entangled triplet-particle states, a novel way of realizing a quantum distributed ballot scheme is proposed. The ballot information is encoded by local operations performed on the particles of entangled GHZ triplet states, which ensures the security of the present scheme. To guarantee this security, the checking phase is designed in detail on the basis of the entangled GHZ triplet state. The analysis shows the security of the proposed scheme. (general)

  8. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    Science.gov (United States)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

    A major experimental research area in material equation-of-state today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter, including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10^11 Pa without any complex target designs.

  9. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded using the ptychography technique and further compressed by non-uniform sampling within the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern, as well as the small number of measurements of the encrypted samples, serves as a secret key, making intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with existing techniques. Without the correct keys, the retrieved images reveal no information about the original. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  10. A user-driven treadmill control scheme for simulating overground locomotion.

    Science.gov (United States)

    Kim, Jonghyun; Stanley, Christopher J; Curatalo, Lindsey A; Park, Hyung-Soon

    2012-01-01

    Treadmill-based locomotor training should simulate overground walking as closely as possible for optimal skill transfer. The constant speed of a standard treadmill encourages automaticity rather than engagement and fails to simulate the variable speeds encountered during real-world walking. To address this limitation, this paper proposes a user-driven treadmill velocity control scheme that allows the user to experience natural fluctuations in walking velocity with minimal unwanted inertial force due to acceleration/deceleration of the treadmill belt. A smart estimation limiter in the scheme effectively attenuates the inertial force during velocity changes. The proposed scheme requires measurement of pelvic and swing foot motions, and is developed for a treadmill of typical belt length (1.5 m). The proposed scheme is quantitatively evaluated here with four healthy subjects by comparing it with the most advanced control scheme identified in the literature.

  11. Quantum Communication Scheme Using Non-symmetric Quantum Channel

    International Nuclear Information System (INIS)

    Cao Haijing; Chen Zhonghua; Song Heshan

    2008-01-01

    A theoretical quantum communication scheme based on entanglement swapping and superdense coding is proposed, with a 3-dimensional Bell state and a 2-dimensional Bell state function as the quantum channel. Quantum key distribution and quantum secure direct communication can be accomplished simultaneously in the scheme. The scheme is secure and has a high source capacity. Finally, we generalize the quantum communication scheme to a d-dimensional quantum channel.

  12. Arbitrated quantum signature scheme based on χ-type entangled states

    International Nuclear Information System (INIS)

    Zuo, Huijuan; Huang, Wei; Qin, Sujuan

    2013-01-01

    An arbitrated quantum signature scheme, which is mainly applied in electronic-payment systems, is proposed and investigated. The χ-type entangled states are used for quantum key distribution and quantum signature in this protocol. Compared with previous quantum signature schemes which also utilize χ-type entangled states, the proposed scheme provides higher efficiency. Finally, we also analyze its security under various kinds of attacks. (paper)

  13. An enhanced dynamic ID-based authentication scheme for telecare medical information systems

    Directory of Open Access Journals (Sweden)

    Ankita Chaturvedi

    2017-01-01

    Full Text Available Authentication schemes for telecare medical information systems (TMIS) try to ensure secure and authorized access. ID-based authentication schemes address secure communication, but privacy is not properly addressed. In recent times, dynamic ID-based remote user authentication schemes for TMIS have been presented to protect users' privacy, and they do so efficiently. Unfortunately, most of the existing dynamic ID-based authentication schemes for TMIS ignore the input-verifying condition. This makes the login and password change phases inefficient, and inefficiency of the password change phase may lead to a denial-of-service attack in the case of incorrect input. To overcome these weaknesses, we propose a new dynamic ID-based authentication scheme using a smart card. The proposed scheme can quickly detect incorrect inputs, which makes the login and password change phases efficient. We adopt this approach with the aim of protecting privacy and providing efficient login and password change phases. The proposed scheme also resists offline password guessing attacks and denial-of-service attacks. We also demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. In addition, our scheme is comparable in terms of communication and computational overheads with relevant schemes for TMIS.

  14. Adaptive Image Transmission Scheme over Wavelet-Based OFDM System

    Institute of Scientific and Technical Information of China (English)

    GAO Xinying; YUAN Dongfeng; ZHANG Haixia

    2005-01-01

    In this paper an adaptive image transmission scheme is proposed over a wavelet-based OFDM (WOFDM) system with unequal error protection (UEP) through the design of a non-uniform signal constellation in MLC. Two different data division schemes, byte-based and bit-based, are analyzed and compared. In the bit-based data division scheme, different bits are protected unequally according to their different contributions to the image quality, which makes UEP combined with this scheme more powerful than with the byte-based scheme. Simulation results demonstrate that image transmission by UEP with the bit-based data division scheme yields much higher PSNR values and noticeably better image quality. Furthermore, considering the trade-off between complexity and BER performance, the Haar wavelet, with the shortest compactly supported filter length, is the most suitable among the orthogonal Daubechies wavelet series in our proposed system.

  15. Robust anonymous authentication scheme for telecare medical information systems.

    Science.gov (United States)

    Xie, Qi; Zhang, Jun; Dong, Na

    2013-04-01

    Patients can obtain various healthcare delivery services via Telecare Medical Information Systems (TMIS). Authentication, security, protection of patients' privacy, and data confidentiality are important for patients and doctors accessing Electronic Medical Records (EMR). In 2012, Chen et al. showed that Khan et al.'s dynamic ID-based authentication scheme has some weaknesses and proposed an improved scheme, claiming that their scheme is more suitable for TMIS. However, we show that Chen et al.'s scheme also has weaknesses. In particular, it does not provide privacy protection or perfect forward secrecy, and it is vulnerable to offline password guessing attacks and impersonation attacks once a user's smart card is compromised. Further, we propose a secure anonymous authentication scheme that overcomes these weaknesses even when an adversary knows all the information stored in the smart card.

  16. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information in the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. First, the embedding intensity is adaptively strengthened with support vector machines (SVMs) trained on 1600 image blocks of different texture and luminance. Second, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual of each generation goes directly into the next generation, and the second-best individual participates in the crossover and mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks on the watermarked image (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increase (0.5, 0.6)), the extracted watermark was compared with the original one. The results demonstrate that the watermark can be effectively recovered after these attacks. Although the algorithm is weak against rotation attacks, it provides high imperceptibility and robustness, and hence it is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.

  17. A robust anonymous biometric-based remote user authentication scheme using smart cards

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Das

    2015-04-01

    Full Text Available Several biometric-based remote user authentication schemes using smart cards have been proposed in the literature in order to improve the security weaknesses in user authentication systems. In 2012, An proposed an enhanced biometric-based remote user authentication scheme using smart cards. It was claimed that the proposed scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server. In this paper, we first analyze the security of An's scheme and show that this scheme has three serious security flaws in its design: (i) a flaw in the user's biometric verification during the login phase, (ii) a flaw in the user's password verification during the login and authentication phases, and (iii) a flaw in the user's local password change at any time by the user. Due to these security flaws, An's scheme cannot support mutual authentication between the user and the server. Further, we show that An's scheme cannot prevent the insider attack. In order to remedy the security weaknesses found in An's scheme, we propose a new robust and secure anonymous biometric-based remote user authentication scheme using smart cards. Through informal and formal security analysis, we show that our scheme is secure against all possible known attacks, including the attacks found in An's scheme. The simulation results of our scheme using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool ensure that our scheme is secure against passive and active attacks. In addition, our scheme is also comparable in terms of communication and computational overheads with An's scheme and other related existing schemes. As a result, our scheme is more appropriate for practical applications compared to other approaches.

  18. Packet Payload Monitoring for Internet Worm Content Detection Using Deterministic Finite Automaton with Delayed Dictionary Compression

    Directory of Open Access Journals (Sweden)

    Divya Selvaraj

    2014-01-01

    Full Text Available Packet content scanning is crucial for network security and network monitoring applications. In monitoring applications, the payload of packets in a network is matched against a set of patterns in order to detect attacks such as worms and viruses, or to enforce protocol definitions. During network transfer, incoming and outgoing packets are monitored in depth to inspect the packet payload. In this paper, the regular expressions that are basically string patterns are analyzed for packet payloads in detecting worms. The grouping scheme for regular expression matching is then rewritten using a deterministic finite automaton (DFA). A DFA achieves better processing speed during regular expression matching, but requires more memory space for each state. In order to reduce memory utilization, a compression technique is used: delayed dictionary compression (DDC) is applied for achieving better speeds on the communication links. DDC incurs a decoding latency during compression of payload packets in the network. Experimental results show that the proposed approach provides better time consumption and memory utilization during the detection of Internet worm attacks.
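
    A sketch of the scanning core only: building and running a DFA for a single literal signature over a payload. The paper additionally groups regular expressions and compresses the transition table with DDC, neither of which is shown; the pattern and payload are hypothetical.

```python
# Payload scanning with a KMP-style DFA (sketch; one literal pattern).
def build_dfa(pattern: bytes):
    """States 0..m are the prefixes of `pattern`; delta[s][c] is the length
    of the longest pattern prefix that is a suffix of prefix(s) + c."""
    m = len(pattern)
    delta = [dict() for _ in range(m + 1)]
    for s in range(m + 1):
        for c in set(pattern):
            k = min(m, s + 1)
            while k and pattern[:k] != (pattern[:s] + bytes([c]))[-k:]:
                k -= 1
            delta[s][c] = k
    return delta, m

def scan(payload: bytes, pattern: bytes) -> bool:
    delta, accept = build_dfa(pattern)
    s = 0
    for b in payload:
        s = delta[s].get(b, 0)            # byte outside pattern: restart
        if s == accept:
            return True                   # signature found in payload
    return False

print(scan(b"GET /index.html?cmd=worm_xyz", b"worm"))   # True
```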

  19. New practicable Siberian Snake schemes

    International Nuclear Information System (INIS)

    Steffen, K.

    1983-07-01

    Siberian Snake schemes can be inserted in ring accelerators for making the spin tune almost independent of energy. Two such schemes are suggested here which lend themselves particularly well to practical application over a wide energy range. Being composed of horizontal and vertical bending magnets, the proposed snakes are designed to have a small maximum beam excursion in one plane. By applying in this plane a bending correction that varies with energy, they can be operated at fixed geometry in the other plane, where most of the bending occurs, thus avoiding complicated magnet motion or excessively large magnet apertures that would otherwise be needed for large energy variations. The first of the proposed schemes employs a pair of standard-type Siberian Snakes, i.e. of the usual 1st and 2nd kind, which rotate the spin about the longitudinal and the transverse horizontal axis, respectively. The second scheme employs a pair of novel-type snakes which rotate the spin about either one of the horizontal axes that are at 45° to the beam direction. In obvious reference to these axes, they are called left-pointed and right-pointed snakes. (orig.)

  20. Polar-coordinate lattice Boltzmann modeling of compressible flows

    Science.gov (United States)

    Lin, Chuandong; Xu, Aiguo; Zhang, Guangcai; Li, Yingjun; Succi, Sauro

    2014-01-01

    We present a polar coordinate lattice Boltzmann kinetic model for compressible flows. A method to recover the continuum distribution function from the discrete distribution function is indicated. Within the model, a hybrid scheme similar to, but different from, operator splitting is proposed. The temporal evolution is calculated analytically, and the convection term is solved via a modified Warming-Beam (MWB) scheme. Within the MWB scheme, a suitable switch function is introduced. The current model works not only for subsonic flows but also for supersonic flows. It is validated and verified via the following well-known benchmark tests: (i) the rotational flow, (ii) the stable shock tube problem, (iii) the Richtmyer-Meshkov (RM) instability, and (iv) the Kelvin-Helmholtz instability. As an original application, we studied the nonequilibrium characteristics of the system around three kinds of interfaces, the shock wave, the rarefaction wave, and the material interface, for two specific cases. In one of the two cases, the material interface is initially perturbed, and consequently the RM instability occurs. It is found that the macroscopic effects due to deviating from thermodynamic equilibrium around the material interface differ significantly from those around the mechanical interfaces. The initial perturbation at the material interface enhances the coupling of molecular motions in different degrees of freedom. The amplitude of deviation from thermodynamic equilibrium around the shock wave is much higher than those around the rarefaction wave and material interface. By comparing each component of the high-order moments and its value in equilibrium, we can draw qualitatively the main behavior of the actual distribution function. These results deepen our understanding of the mechanical and material interfaces at a more fundamental level, which is instructive for constructing macroscopic models and other kinds of kinetic models.
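
    The flavor of a switch-blended Warming-Beam convection step can be sketched on scalar linear advection. The paper's actual MWB scheme and switch function for the lattice Boltzmann moments are not reproduced here; the smoothness measure, the blending, and the periodic boundaries below are all illustrative assumptions.

```python
import numpy as np

def warming_beam_switch(u, c, eps=1e-12):
    """One step of u_t + a u_x = 0 (a > 0, Courant number c = a*dt/dx) using
    a Warming-Beam second-order upwind update, blended with first-order
    upwind through a smoothness switch (generic illustration)."""
    um1 = np.roll(u, 1)          # u_{i-1} (periodic boundaries for brevity)
    um2 = np.roll(u, 2)          # u_{i-2}
    low = u - c * (u - um1)                                 # first-order upwind
    high = low - 0.5 * c * (1.0 - c) * (u - 2*um1 + um2)    # Warming-Beam term
    # switch ~ 1 in smooth regions, ~ 0 near steep gradients
    theta = np.abs(u - 2*um1 + um2) / (np.abs(u - um1) + np.abs(um1 - um2) + eps)
    s = np.clip(1.0 - theta, 0.0, 1.0)
    return s * high + (1.0 - s) * low
```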

  1. Resonance ionization scheme development for europium

    Energy Technology Data Exchange (ETDEWEB)

    Chrysalidis, K., E-mail: katerina.chrysalidis@cern.ch; Goodacre, T. Day; Fedosseev, V. N.; Marsh, B. A. [CERN (Switzerland); Naubereit, P. [Johannes Gutenberg-Universität, Institut für Physik (Germany); Rothe, S.; Seiffert, C. [CERN (Switzerland); Kron, T.; Wendt, K. [Johannes Gutenberg-Universität, Institut für Physik (Germany)

    2017-11-15

    Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.

  2. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...
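
    A toy, low-rate instance conveys the XOR idea (this small construction is for illustration only, not one of the paper's three high-rate schemes): two data blocks are spread over four storage nodes so that any single node in isolation is uniformly random, while any single node failure remains repairable.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def encode(d1: bytes, d2: bytes):
    """4-node toy secure RAID: every share is masked by the random key z,
    so one eavesdropped node reveals nothing about (d1, d2)."""
    z = os.urandom(len(d1))             # fresh uniform key block
    return [z, xor(d1, z), xor(d2, z), xor(xor(d1, d2), z)]

def decode(c1, c2, c3, c4):
    """Recover (d1, d2); pass None for at most one lost share."""
    if c1 is None: c1 = xor(xor(c2, c3), c4)    # z      = c2 ^ c3 ^ c4
    if c2 is None: c2 = xor(xor(c1, c3), c4)    # d1 ^ z = c1 ^ c3 ^ c4
    if c3 is None: c3 = xor(xor(c1, c2), c4)    # d2 ^ z = c1 ^ c2 ^ c4
    return xor(c2, c1), xor(c3, c1)

d1, d2 = os.urandom(8), os.urandom(8)
c = encode(d1, d2)
assert decode(c[0], None, c[2], c[3]) == (d1, d2)   # survives losing node 2
```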

  3. A Non-blind Color Image Watermarking Scheme Resistant Against Geometric Attacks

    Directory of Open Access Journals (Sweden)

    A. Ghafoor

    2012-12-01

    Full Text Available A non-blind color image watermarking scheme using principal component analysis, discrete wavelet transform, and singular value decomposition is proposed. The color components are decorrelated using principal component analysis. The watermark is embedded into the singular values of the discrete wavelet transformed sub-band associated with the principal component containing most of the color information. The scheme was tested against various attacks (including histogram equalization, rotation, Gaussian noise, scaling, cropping, Y-shearing, X-shearing, median filtering, affine transformation, translation, salt & pepper, and sharpening) to check robustness. The results of the proposed scheme are compared with state-of-the-art existing color watermarking schemes using the normalized correlation coefficient and peak signal-to-noise ratio. The simulation results show that the proposed scheme is robust and imperceptible.

  4. Control of Warm Compression Stations Using Model Predictive Control: Simulation and Experimental Results

    Science.gov (United States)

    Bonne, F.; Alamir, M.; Bonnay, P.

    2017-02-01

    This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances, such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors like the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBT's WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.

  5. Improved Load Shedding Scheme considering Distributed Generation

    DEFF Research Database (Denmark)

    Das, Kaushik; Nitsas, Antonios; Altin, Müfit

    2017-01-01

    With high penetration of distributed generation (DG), conventional under-frequency load shedding (UFLS) faces many challenges and may not perform as expected. This article proposes new UFLS schemes, which are designed to overcome the shortcomings of the traditional load shedding scheme...

  6. A Novel Physical Layer Assisted Authentication Scheme for Mobile Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qiuhua Wang

    2017-02-01

    Full Text Available Physical-layer authentication can address physical layer vulnerabilities and security threats in wireless sensor networks, and has been considered an effective complementary enhancement to existing upper-layer authentication mechanisms. In this paper, to advance the existing research and improve the authentication performance, we propose a novel physical layer assisted authentication scheme for mobile wireless sensor networks. In our proposed scheme, we exploit the reciprocity and spatial uncorrelation of the wireless channel to verify the identities of the transmitting users involved and to decide whether all data frames are from the same sender. A new method is developed for the legitimate users to compare their received signal strength (RSS) records, which prevents the information from being disclosed to the adversary. Our proposed scheme can detect a spoofing attack even in a highly dynamic environment. We evaluate our scheme through experiments in indoor and outdoor environments. Experimental results show that our proposed scheme is more efficient and achieves a higher detection rate while keeping a lower false alarm rate.
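
    The detection principle (frames from the same sender should have RSS values consistent with the recent channel history, whereas a spoofer at a different location produces an abrupt jump) can be illustrated with a simple sliding-window test. This is a generic sketch, not the paper's comparison protocol; the window length and threshold are arbitrary assumptions.

```python
import numpy as np

def spoof_flags(rss, win=5, k=3.0):
    """Flag frames whose RSS deviates abruptly from the recent history of
    the claimed sender (k-sigma jump test over a sliding window)."""
    rss = np.asarray(rss, dtype=float)
    flags = np.zeros(len(rss), dtype=bool)
    for t in range(win, len(rss)):
        hist = rss[t - win:t]
        sigma = hist.std() + 1e-6          # guard against zero variance
        flags[t] = abs(rss[t] - hist.mean()) > k * sigma
    return flags

rss = [-52, -53, -51, -52, -53, -70, -52]  # the -70 dBm frame stands out
print(np.nonzero(spoof_flags(rss)))
```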

  8. Linear source approximation scheme for method of characteristics

    International Nuclear Information System (INIS)

    Tang Chuntao

    2011-01-01

    Method of characteristics (MOC) for solving the neutron transport equation based on unstructured mesh has already become one of the fundamental methods for lattice calculation in nuclear design code systems. However, most MOC codes are developed with the flat source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme and its corresponding modification for negative source distributions were proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)

  9. Identity based Encryption and Biometric Authentication Scheme for Secure Data Access in Cloud Computing

    DEFF Research Database (Denmark)

    Cheng, Hongbing; Rong, Chunming; Tan, Zheng-Hua

    2012-01-01

    Cloud computing will be a main information infrastructure in the future; it consists of many large datacenters which are usually geographically distributed and heterogeneous. How to design secure data access for a cloud computing platform is a big challenge. In this paper, we propose a secure data... access scheme based on identity-based encryption and biometric authentication for cloud computing. Firstly, we describe the security concerns of cloud computing and then propose an integrated data access scheme for cloud computing; the procedure of the proposed scheme includes parameter setup, key... distribution, feature template creation, cloud data processing and secure data access control. Finally, we compare the proposed scheme with other schemes through comprehensive analysis and simulation. The results show that the proposed data access scheme is feasible and secure for cloud computing....

  10. Numerical schemes for explosion hazards

    International Nuclear Information System (INIS)

    Therme, Nicolas

    2015-01-01

    In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance in the limit. High-order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality in the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called

  11. A novel quantum group signature scheme without using entangled states

    Science.gov (United States)

    Xu, Guang-Bao; Zhang, Ke-Jia

    2015-07-01

    In this paper, we propose a novel quantum group signature scheme. It allows the signer to sign a message on behalf of the group without the help of the group manager (the arbitrator), which differs from previous schemes. In addition, a signature can be verified again when its signer disavows having generated it. We analyze the validity and the security of the proposed signature scheme, and discuss the advantages and disadvantages of the new scheme and the existing ones. The results show that our scheme satisfies all the characteristics of a group signature and has more advantages than the previous ones. Like its classical counterpart, our scheme can be used in many application scenarios, such as e-government and e-business.

  12. CAC DPLB MCN: A Distributed Load Balancing Scheme in Multimedia Mobile Cellular Networks

    Directory of Open Access Journals (Sweden)

    Sharma Abhijit

    2016-11-01

    Full Text Available The problem of non-uniform traffic demand in different cells of a cellular network may lead to a gross imbalance in system performance, so that users in hot cells may suffer from low throughput. In this paper, a simple and effective load balancing scheme, CAC_DPLB_MCN, is proposed that can significantly reduce overall call blocking. The model handles multimedia traffic as well as time-varying geographical traffic distribution. The proposed scheme uses the concept of cell-tiering, thereby creating a fractional frequency reuse environment. A message-exchange-based distributed scheme, rather than a centralized one, is used, which also allows the proposed scheme to be implemented in an environment with multiple hot cells. Furthermore, the concept of dynamic pricing is used to serve the best interests of both the users and the service providers. The performance of the proposed scheme is compared with two other existing schemes in terms of call blocking probability and bandwidth utilization. Simulation results show that the proposed scheme can reduce call blocking significantly in a highly congested cell with the highest bandwidth utilization. The use of dynamic pricing also makes the scheme useful for increasing the revenue of the service providers in contrast with the compared schemes.

  13. NFC Secure Payment and Verification Scheme with CS E-Ticket

    Directory of Open Access Journals (Sweden)

    Kai Fan

    2017-01-01

    Full Text Available As one of the most important techniques in the IoT, NFC (Near Field Communication) is more interesting than ever. NFC is a short-range, high-frequency communication technology well suited for electronic tickets, micropayment, and access control functions, and is widely used in the financial industry, traffic transport, road ban control, and other fields. However, while NFC is becoming increasingly popular in these fields, its security problems, such as the man-in-the-middle attack and the brute force attack, have hindered its further development. To address these security problems and specific application scenarios, we propose an NFC mobile electronic ticket secure payment and verification scheme in this paper. The proposed scheme uses a CS E-Ticket and offline session key generation and distribution technology to prevent major attacks and increase the security of NFC. As a result, the proposed scheme can not only be a good alternative to mobile e-ticket systems but can also be used in many NFC fields. Furthermore, compared with other existing schemes, the proposed scheme provides higher security.

  14. On the data compression at filmless readout of the streamer chamber information

    International Nuclear Information System (INIS)

    Bajla, I.; Ososkov, G.A.; Prikhod'ko, V.I.

    1980-01-01

    It is assumed that the system for filmless detection and processing of the visual information from the "RISK" streamer chamber will comprise an effective on-line data compression algorithm. The role of the basic methodological principles of chamber film image processing in High Energy Physics in building up such a system is analysed. On the basis of this analysis, the main requirements that have to be fulfilled by the compression algorithm are formulated. The most important requirement consists in securing the possibility of reprocessing the input data if problems occur in the off-line recognition. Using a vector system representation of the primary data, an on-line data compression philosophy is proposed that embodies the following three principles: universality, parallelism and input data reconstructibility. Excluding the recognition procedure from the on-line compression algorithm reduces the compression factor. A hierarchic structure of the compression algorithm, consisting of (1) sorting, (2) filtering, and (3) compression, is proposed for an additional increase of the compression ratio.

  15. Terminology: resistance or stiffness for medical compression stockings?

    Directory of Open Access Journals (Sweden)

    André Cornu-Thenard

    2013-04-01

    Full Text Available Based on previous experimental work with medical compression stockings, it is proposed to restrict the term stiffness to measurements on the human leg, and rather to speak of resistance when characterizing the elastic properties of compression hosiery in the textile laboratory.

  16. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on the adaptive orthogonal and randomizing transforms respectively, are proposed. In both algorithms we employ the reduction of data volume due to the symmetry of the γ-ray spectra

  17. A multihop key agreement scheme for wireless ad hoc networks based on channel characteristics.

    Science.gov (United States)

    Hao, Zhuo; Zhong, Sheng; Yu, Nenghai

    2013-01-01

    A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers. Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks.
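
    The hop-by-hop idea behind the basic scheme can be sketched abstractly: on every hop, the end-to-end key material travels masked by that hop's channel-derived key, so an external eavesdropper on any single link learns nothing, while each intermediate relay necessarily sees the key, which is precisely the internal-eavesdropper/MITM weakness the improved scheme must address. A toy Python illustration, with channel-extracted hop keys simulated as random bytes (the message flow is an assumption for illustration, not the paper's exact protocol):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def transport(key, hop_keys):
    """Carry an end-to-end key across relays, one-time-pad masked per hop."""
    for k_hop in hop_keys:
        ciphertext = xor(key, k_hop)   # what actually crosses the air on this hop
        key = xor(ciphertext, k_hop)   # the hop's receiver unmasks with its key
    return key                         # note: every relay briefly holds the key

hop_keys = [os.urandom(16) for _ in range(3)]   # A-R1, R1-R2, R2-B channel keys
end_to_end = os.urandom(16)                     # key A wants to share with B
assert transport(end_to_end, hop_keys) == end_to_end
```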

  18. Simulation techniques for spatially evolving instabilities in compressible flow over a flat plate

    NARCIS (Netherlands)

    Wasistho, B.; Geurts, Bernardus J.; Kuerten, Johannes G.M.

    1997-01-01

    In this paper we present numerical techniques suitable for a direct numerical simulation in the spatial setting. We demonstrate the application to the simulation of compressible flat plate flow instabilities. We compare second and fourth order accurate spatial discretization schemes in combination

  19. Three-factor anonymous authentication and key agreement scheme for Telecare Medicine Information Systems.

    Science.gov (United States)

    Arshad, Hamed; Nikooghadam, Morteza

    2014-12-01

    Nowadays, with the comprehensive employment of the internet, healthcare delivery services are provided remotely by telecare medicine information systems (TMISs). A secure mechanism for authentication and key agreement is one of the most important security requirements for TMISs. Recently, Tan proposed a user anonymity preserving three-factor authentication scheme for TMIS. The present paper shows that Tan's scheme is vulnerable to replay attacks and Denial-of-Service attacks. In order to overcome these security flaws, a new and efficient three-factor anonymous authentication and key agreement scheme for TMIS is proposed. Security and performance analysis shows the superiority of the proposed scheme over previously proposed schemes related to the security of TMISs.

  20. Some Proxy Signature and Designated verifier Signature Schemes over Braid Groups

    OpenAIRE

    Lal, Sunder; Verma, Vandani

    2009-01-01

    Braid groups provide an alternative to number-theoretic public-key cryptography and can be implemented quite efficiently. The paper proposes five signature schemes based on braid groups: Proxy Signature, Designated Verifier Signature, Bi-Designated Verifier Signature, Designated Verifier Proxy Signature, and Bi-Designated Verifier Proxy Signature. We also discuss the security aspects of each of the proposed schemes.

  1. Methods for compressible multiphase flows and their applications

    Science.gov (United States)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.

  2. Compression of Multispectral Images with Comparatively Few Bands Using Posttransform Tucker Decomposition

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available Up to now, data compression for multispectral charge-coupled device (CCD) images with comparatively few bands (MSCFBs) has been done independently on each multispectral channel. This compression codec is called a "monospectral compressor." The monospectral compressor has no stage for removing spectral redundancy. To fill this gap, we propose an efficient compression approach for MSCFBs. In our approach, a one-dimensional discrete cosine transform (1D-DCT) is performed on the spectral dimension to exploit the spectral information, and the posttransform (PT) in the 2D-DWT domain is performed on each spectral band to exploit the spatial information. A deep coupling approach between the PT and Tucker decomposition (TD) is proposed to remove residual spectral redundancy between bands and residual spatial redundancy within each band. Experimental results on a multispectral CCD camera data set show that the proposed compression algorithm obtains better compression performance and significantly outperforms traditional TD-based compression algorithms in the 2D-DWT and 3D-DCT domains.
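
    The decorrelation front-end (spectral 1D-DCT followed by a spatial 2D-DWT per band) is easy to reproduce with SciPy and PyWavelets. The sketch below shows only that transform stage; the paper's posttransform/Tucker coupling and the coding stage are not reproduced, and the wavelet, level, and cube size are illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct      # SciPy's type-II/III DCT
import pywt                          # PyWavelets

def transform_cube(cube, wavelet="db4", level=2):
    """(bands, rows, cols) cube -> 1-D DCT along the spectral axis to remove
    spectral redundancy, then a 2-D DWT on every resulting band."""
    spectral = dct(cube, axis=0, norm="ortho")
    return [pywt.wavedec2(band, wavelet, level=level) for band in spectral]

def inverse_cube(coeffs, wavelet="db4"):
    spectral = np.stack([pywt.waverec2(c, wavelet) for c in coeffs])
    return idct(spectral, axis=0, norm="ortho")

cube = np.random.rand(4, 64, 64)              # toy 4-band image
rec = inverse_cube(transform_cube(cube))
assert np.allclose(rec, cube)                 # the transform stage is lossless
```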

  3. Signal compression in radar using FPGA

    OpenAIRE

    Escamilla Hernández, Enrique; Kravchenko, Víctor; Ponomaryov, Volodymyr; Duchen Sánchez, Gonzalo; Hernández Sánchez, David

    2010-01-01

    We present the hardware implementation of radar real-time processing procedures using a simple, fast technique based on an FPGA (Field Programmable Gate Array) architecture. This processing includes different window procedures during pulse compression in synthetic aperture radar (SAR). The radar signal compression processing is realized using a matched filter and classical as well as novel window functions, where we focus on solutions minimizing the sidelobe levels. The proposed architecture expl...
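
    The matched-filter-plus-window idea translates directly into a few lines of NumPy/SciPy (a software sketch of the signal path, not the FPGA design; the chirp parameters and the Hamming window are illustrative assumptions):

```python
import numpy as np
from scipy.signal import chirp, get_window

fs, T, f0, f1 = 1e6, 1e-3, 0.0, 100e3          # sample rate, pulse width, LFM band
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=f0, t1=T, f1=f1)              # transmitted linear-FM pulse

# Matched filter = time-reversed conjugate of the reference pulse; the
# weighting window trades main-lobe width for lower sidelobes.
ref = tx * get_window("hamming", len(tx))
mf = np.conj(ref[::-1])

echo = np.concatenate([np.zeros(500), tx, np.zeros(500)])   # toy delayed echo
compressed = np.convolve(echo, mf, mode="same")
peak = int(np.argmax(np.abs(compressed)))       # range bin of the target
```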

  4. Capacity-achieving CPM schemes

    OpenAIRE

    Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido

    2008-01-01

    The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...

  5. Entropy Stable Wall Boundary Conditions for the Compressible Navier-Stokes Equations

    Science.gov (United States)

    Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.

    2014-01-01

    Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite volume, finite difference, discontinuous Galerkin, and flux reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.

  6. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    Science.gov (United States)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods are presented together with the numerical results.

  7. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    Science.gov (United States)

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated using a hot spot of a compressed target. This can lead to the ignition of massive shells imploded with low velocity, a way of reaching higher gains than is possible with conventional ICF.

  8. A dynamic identity based authentication scheme using chaotic maps for telecare medicine information systems.

    Science.gov (United States)

    Wang, Zhiheng; Huo, Zhanqiang; Shi, Wenbo

    2015-01-01

    With the rapid development of computer technology and the wide use of mobile devices, the telecare medicine information system has become universal in the field of medical care. To protect patients' privacy and medical data security, many authentication schemes for the telecare medicine information system have been proposed. Due to their better performance, chaotic maps have been used in the design of authentication schemes for the telecare medicine information system. However, most of them cannot provide user anonymity. Recently, Lin proposed a dynamic identity based authentication scheme using chaotic maps for the telecare medicine information system and claimed that the scheme was secure against existential active attacks. In this paper, we demonstrate that their scheme can neither provide user anonymity nor resist the impersonation attack. Further, we propose an improved scheme to fix these security flaws in Lin's scheme and demonstrate that the proposed scheme can withstand various attacks.

  9. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at only a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to the changes of the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0-norm minimization to enhance the reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.
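
    The sparse-recovery step can be pictured with a generic greedy solver: a source vector with a few nonzero entries on a spatial grid is recovered from compressive measurements. The sketch below uses plain Orthogonal Matching Pursuit, not the paper's block-sparse approximate-l0 algorithm, and the grid size, measurement matrix, and source positions are toy assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # most correlated grid point
        support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None) # least-squares re-fit
        residual = y - As @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy setup: 40 measurements of a 200-point spatial grid with 2 sources.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 200)) / np.sqrt(40)
x_true = np.zeros(200); x_true[[17, 123]] = [1.0, 0.7]
x_hat = omp(A, A @ x_true, k=2)          # nonzeros land at grid points 17 and 123
```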

  10. The QKD network: model and routing scheme

    Science.gov (United States)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

    Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trusted-relay QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trusted-relay QKD network in detail. This paper focuses on the routing issues, builds a model of the trusted-relay QKD network for easily analysing and understanding this network, and proposes a dynamic routing scheme for it. From the viewpoint of designing a dynamic routing scheme in a classical network, the proposed scheme consists of three components: a Hello protocol that helps share network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.

  11. Improving Biometric-Based Authentication Schemes with Smart Card Revocation/Reissue for Wireless Sensor Networks.

    Science.gov (United States)

    Moon, Jongho; Lee, Donghoon; Lee, Youngsook; Won, Dongho

    2017-04-25

    User authentication in wireless sensor networks is more difficult than in traditional networks owing to sensor network characteristics such as unreliable communication, limited resources, and unattended operation. For these reasons, various authentication schemes have been proposed to provide secure and efficient communication. In 2016, Park et al. proposed a secure biometric-based authentication scheme with smart card revocation/reissue for wireless sensor networks. However, we found that their scheme was still insecure against the impersonation attack and had a problem in the smart card revocation/reissue phase. In this paper, we show how an adversary can impersonate a legitimate user or sensor node and perform illegal smart card revocation/reissue, and we prove that Park et al.'s scheme fails to provide secure revocation/reissue. In addition, we propose an enhanced scheme that provides efficiency as well as anonymity and security. Finally, we provide a security and performance analysis comparing previous schemes with the proposed scheme, together with a formal analysis based on the random oracle model. The security analysis shows that the proposed scheme resolves the weakness to the impersonation attack and the other security flaws. Furthermore, the performance analysis shows that its computational cost is lower than that of the previous scheme.

  12. Unequal error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. Simulations performed with additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base-layer recovery than the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.

  13. Anticollusion Attack Noninteractive Security Hierarchical Key Agreement Scheme in WHMS

    Directory of Open Access Journals (Sweden)

    Kefei Mao

    2016-01-01

    Full Text Available Wireless Health Monitoring Systems (WHMS) have the potential to change the way of health care and bring a number of benefits to patients, physicians, hospitals, and society. However, there are crucial barriers not only to transmitting the biometric information but also to protecting the privacy and security of the patients' information. The key agreement between two entities is an essential cryptographic operation for clearing these barriers. In particular, the noninteractive hierarchical key agreement scheme is an attractive direction in WHMS because each sensor node or gateway has limited resources and power. Recently, a noninteractive hierarchical key agreement scheme was proposed by Kim for WHMS. However, we show that Kim's cryptographic scheme is vulnerable to a collusion attack if the physicians can be corrupted, which is a more practical security condition. Therefore, we propose an improved key agreement scheme against this attack. Security proof, security analysis, and experimental results demonstrate that our proposed scheme gains enhanced security and more efficiency than Kim's previous scheme while inheriting its qualities of one-round communication and its security properties.

  14. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
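
    The idea of exact subset selection can be made concrete with a small dynamic program: keep both endpoints plus m interior samples, and choose them to minimize the total squared error of piecewise-linear interpolation. This is an illustrative reconstruction of the flavor of the approach (the paper's network-model formulation differs in detail); note the cubic overall cost, which matches the cubic dynamic programming remark above.

```python
import numpy as np

def interp_error(x, i, j):
    """Squared error of linearly interpolating x between kept samples i and j."""
    if j - i < 2:
        return 0.0
    t = np.arange(i + 1, j)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[t] - line) ** 2))

def best_samples(x, m):
    """Exactly pick m interior samples (endpoints always kept) minimizing the
    total linear-interpolation error, by DP over (last kept sample, count)."""
    n = len(x)
    err = [[interp_error(x, i, j) for j in range(n)] for i in range(n)]
    INF = float("inf")
    cost = [[INF] * (m + 3) for _ in range(n)]  # cost[j][k]: k kept, last at j
    prev = [[-1] * (m + 3) for _ in range(n)]
    cost[0][1] = 0.0                            # the left endpoint is always kept
    for j in range(1, n):
        for k in range(2, m + 3):
            for i in range(j):
                c = cost[i][k - 1] + err[i][j]
                if c < cost[j][k]:
                    cost[j][k], prev[j][k] = c, i
    sel, j, k = [], n - 1, m + 2                # the right endpoint closes the path
    while j >= 0 and k > 0:
        sel.append(j)
        j, k = prev[j][k], k - 1
    return sorted(sel)

x = np.sin(np.linspace(0, 6, 60))
kept = best_samples(x, m=8)    # 10 of 60 samples kept: 6x compression
```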

  15. Improvement of a Quantum Proxy Blind Signature Scheme

    Science.gov (United States)

    Zhang, Jia-Lei; Zhang, Jian-Zhong; Xie, Shu-Cui

    2018-06-01

    An improvement of a quantum proxy blind signature scheme is proposed in this paper. A six-qubit entangled state functions as the quantum channel. In our scheme, a trusted party, Trent, is introduced so as to avoid David's dishonest behavior; the receiver, David, verifies the signature with the help of Trent. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, delegation, signature, and verification. Security analysis proves that our scheme has the properties of undeniability, unforgeability, and anonymity, and can resist some common attacks.

  16. Compressive sensing of full wave field data for structural health monitoring applications

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; De Marchi, Luca; Perelli, Alessandro

    2015-01-01

    ; however, the acquisition process is generally time-consuming, posing a limit in the applicability of such approaches. To reduce the acquisition time, we use a random sampling scheme based on compressive sensing (CS) to minimize the number of points at which the field is measured. The CS reconstruction...

  17. Cryptanalysis of a semi-quantum secret sharing scheme based on Bell states

    Science.gov (United States)

    Gao, Gan; Wang, Yue; Wang, Dong

    2018-03-01

    In the paper [Mod. Phys. Lett. B 31 (2017) 1750150], Yin et al. proposed a semi-quantum secret sharing scheme using Bell states. We find that the proposed scheme cannot accomplish the quantum secret sharing task. In addition, we find that the scheme has a security loophole: an attack on the quantum channel by the dishonest participant, Charlie, will not be detected.

  18. An Improved Dynamic ID-Based Remote User Authentication with Key Agreement Scheme

    Directory of Open Access Journals (Sweden)

    Juan Qu

    2013-01-01

    Full Text Available In recent years, several dynamic ID-based remote user authentication schemes have been proposed. In 2012, Wen and Li proposed a dynamic ID-based remote user authentication with key agreement scheme. They claimed that their scheme can resist the impersonation attack and the insider attack and provide anonymity for the users. However, we show that Wen and Li's scheme cannot withstand the insider attack, does not provide forward secrecy or anonymity for the users, and handles erroneous password login inefficiently. In this paper, we propose a novel ECC-based remote user authentication scheme which is immune to various known types of attack and is more secure and practical for mobile clients.

  19. New Regenerative Cycle for Vapor Compression Refrigeration

    Energy Technology Data Exchange (ETDEWEB)

    Mark J. Bergander

    2005-08-29

    The main objective of this project is to confirm, on a well-instrumented prototype, the theoretically derived claims of higher efficiency and coefficient of performance for geothermal heat pumps based on a new regenerative thermodynamic cycle, compared to existing technology. In order to demonstrate the improved performance of the prototype, it will be compared to published parameters of commercially available geothermal heat pumps manufactured by US and foreign companies. Other objectives are to optimize the design parameters and to determine the economic viability of the new technology. Background (as stated in the proposal): The proposed technology closely relates to the EERE mission by improving energy efficiency, bringing clean, reliable and affordable heating and cooling to residential and commercial buildings, and reducing greenhouse gas emissions. It can provide the same amount of heating and cooling with considerably less use of electrical energy and consequently has the potential of reducing our nation's dependence on foreign oil. The theoretical basis for the proposed thermodynamic cycle was previously developed and was originally called a dynamic equilibrium method. This theory considers the dynamic equations of state of the working fluid and proposes methods for modifying the T-S trajectories of adiabatic transformation by changing the dynamic properties of the gas, such as flow rate, speed and acceleration. The substance of this proposal is a thermodynamic cycle characterized by the regenerative use of the potential energy of two-phase flow expansion, which in traditional systems is lost in expansion valves. The essential new features of the process are: (1) The application of two-step throttling of the working fluid and two-step compression of its vapor phase. (2) Use of a compressor as the initial compression step and a jet device as a second step, where throttling and compression are combined. (3) Controlled ratio of a working fluid at the first and

  20. Protection Scheme for Modular Multilevel Converters under Diode Open-Circuit Faults

    DEFF Research Database (Denmark)

    Deng, Fujin; Zhu, Rongwu; Liu, Dong

    2018-01-01

    devices. The diode open-circuit fault in the submodule (SM) is an important issue for the MMC, which affects the performance of the MMC and disrupts its operation. This paper analyzes the impact of diode open-circuit failures in the SMs on the performance of the MMC and proposes...... a protection scheme for the MMC under diode open-circuit faults. The proposed protection scheme not only can effectively eliminate the possible high voltage caused by the diode open-circuit fault but also can quickly detect the faulty SMs, which effectively avoids damage and protects the MMC....... The proposed protection scheme is verified with a downscale MMC prototype in the laboratory. The results confirm the effectiveness of the proposed protection scheme for the MMC under diode open-circuit faults....

  1. Compression of seismic data: filter banks and extended transforms, synthesis and adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Duval, L.

    2000-11-01

    Wavelet and wavelet packet transforms are the most commonly used algorithms for seismic data compression. Wavelet coefficients are generally quantized and encoded by classical entropy coding techniques. We first propose in this work a compression algorithm based on the wavelet transform, used together with a zero-tree type coding, applied for the first time to seismic data. Classical wavelet transforms nevertheless yield a quite rigid approach, since it is often desirable to adapt the transform stage to the properties of each type of signal. We thus propose a second algorithm using, instead of wavelets, a set of so-called 'extended transforms'. These transforms, originating from filter bank theory, are parameterized. Classical examples are Malvar's Lapped Orthogonal Transforms (LOT) or the Generalized Lapped Orthogonal Transforms (GenLOT) of de Queiroz et al. We propose several optimization criteria to build 'extended transforms' adapted to the properties of seismic signals. We further show that these transforms can be used with the same zero-tree type coding technique as used with wavelets. Both proposed algorithms provide exact compression-rate choice, block-wise compression (in the case of extended transforms) and partial decompression for quality control or visualization. Performance is tested on a set of actual seismic data and evaluated for several quality measures. We also compare the proposed algorithms to other seismic compression algorithms. (author)

  2. Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH

    Directory of Open Access Journals (Sweden)

    Sukho Oh

    2018-04-01

    Full Text Available Time Slotted Channel Hopping (TSCH) is widely used in industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency in TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as the routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously, utilizing only local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network can transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance against existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and a higher packet delivery ratio than the existing schemes regardless of the network topology.

  3. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...

  4. Applicability of higher-order TVD method to low Mach number compressible flows

    International Nuclear Information System (INIS)

    Akamatsu, Mikio

    1995-01-01

    Steep gradients of fluid density are an influential factor in spurious oscillations of numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows with a pressure-based formulation. In the present study, a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of the primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results of test problems of the moving interface of two-component gases with density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) profile of the scheme. (author)
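
    What a TVD update of a single scalar equation looks like can be sketched on linear advection with a minmod-limited MUSCL step. This is a generic scalar illustration of the TVD treatment, not the paper's pressure-based low Mach formulation; the limiter choice and periodic boundaries are assumptions.

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(q, c):
    """One TVD (MUSCL, minmod-limited) update of q_t + a q_x = 0 with a > 0
    and Courant number c = a*dt/dx <= 1; limited slopes suppress the
    wiggles that an unlimited second-order scheme produces at steep fronts."""
    qm1, qp1 = np.roll(q, 1), np.roll(q, -1)       # periodic boundaries
    slope = minmod(q - qm1, qp1 - q)               # limited cell slopes
    slope_m1 = np.roll(slope, 1)
    q_left = qm1 + 0.5 * (1.0 - c) * slope_m1      # face value at i - 1/2
    q_right = q + 0.5 * (1.0 - c) * slope          # face value at i + 1/2
    return q - c * (q_right - q_left)
```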

  5. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Due to the large number of backup users, reducing the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects. For example, data stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data; alone, neither meets the compression-efficiency needs of backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms for particular situations is also analyzed. The performance analysis shows that great improvement is achieved through the hybrid compression policy.
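
    The two-level idea is easy to prototype: a global content-addressed chunk store removes duplicate chunks across files and users, and a stream compressor squeezes each unique chunk. The sketch below (our illustration of the approach, not the paper's implementation) uses fixed-size chunking and zlib for brevity.

```python
import hashlib
import zlib

class HybridBackupStore:
    """Level 1: global de-duplication via SHA-256 content addressing.
    Level 2: zlib stream compression of each unique chunk."""
    CHUNK = 64 * 1024                       # fixed-size chunking for brevity

    def __init__(self):
        self.store = {}                     # sha256 hex digest -> compressed chunk

    def put(self, data: bytes):
        recipe = []
        for off in range(0, len(data), self.CHUNK):
            chunk = data[off:off + self.CHUNK]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.store:         # duplicate chunks stored only once
                self.store[h] = zlib.compress(chunk, 6)
            recipe.append(h)
        return recipe                       # the backup references these chunks

    def get(self, recipe):
        return b"".join(zlib.decompress(self.store[h]) for h in recipe)

store = HybridBackupStore()
r = store.put(b"same payload " * 10000)     # highly redundant toy file
assert store.get(r) == b"same payload " * 10000
```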

  6. Fast and predictable video compression in software design and implementation of an H.261 codec

    Science.gov (United States)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamic reduction of the number of processed macroblocks in peak situations. The absolute speed is also a point of interest. The question of whether and how software compression of high-quality video is feasible on today's desktop computers is examined.

  7. A Three Factor Remote User Authentication Scheme Using Collision Resist Fuzzy Extractor in Single Server Environment

    Directory of Open Access Journals (Sweden)

    Giri Debasis

    2017-01-01

    Full Text Available Due to the rapid growth of online applications, it is necessary to provide a facility by which communicating parties can obtain services in a secure way. As communications take place through an insecure channel like the Internet, any adversary can intercept and modify the communicated messages. Only an authentication procedure can overcome the aforementioned problem. Many authentication schemes have been proposed in the literature, but this paper shows that many of them are not usable in real-world application scenarios, because the existing schemes cannot resist all possible attacks. Therefore, this paper proposes a three-factor authentication scheme using a hash function and a fuzzy extractor. The security of the proposed scheme is further analyzed using the random oracle model; the analysis shows that the proposed scheme can resist all possible attacks. Furthermore, a comparison between the proposed scheme and related existing schemes shows that the proposed scheme has a better trade-off among storage, computational and communication costs.

  8. Blind Compressed Sensing Parameter Estimation of Non-cooperative Frequency Hopping Signal

    Directory of Open Access Journals (Sweden)

    Chen Ying

    2016-10-01

    Full Text Available To overcome the disadvantages of a non-cooperative frequency hopping communication system, such as a high sampling rate and inadequate prior information, parameter estimation based on Blind Compressed Sensing (BCS) is proposed. The signal is precisely reconstructed by alternating iterations of sparse coding and basis updating, and the hopping frequencies are directly estimated from the results. Compared with conventional compressive sensing, blind compressed sensing does not require prior information about the frequency hopping signals; hence, it offers an effective solution to the inadequate-prior-information problem. In the proposed method, the signal is first modeled and then reconstructed by Orthonormal Block Diagonal Blind Compressed Sensing (OBD-BCS), and the hopping frequencies and hop period are finally estimated. The simulation results suggest that the proposed method can reconstruct and estimate the parameters of non-cooperative frequency hopping signals at a low signal-to-noise ratio.

  9. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    Science.gov (United States)

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance: the enhanced set partitioning in hierarchical trees (ESPIHT) algorithm runs faster than SPIHT and reduces the number of bits in the stored or transmitted bit stream. The algorithm is applied to the compression of multichannel ECG data, together with a specific procedure, based on the modified algorithm, for more efficient multichannel compression. The method was evaluated on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for multichannel ECG compression. Furthermore, the proposed multichannel method can also be used efficiently to compress a single signal stored over a long period.
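
    The embedded bit-plane principle that SPIHT-type coders refine can be sketched as follows in NumPy. This is not ESPIHT: a real SPIHT/ESPIHT coder additionally exploits zerotree structure across wavelet subbands to avoid emitting most of these significance bits:

```python
import numpy as np

def bitplane_encode(coeffs, n_planes=8):
    """Minimal embedded bit-plane coder for integer wavelet coefficients.

    Emits, per plane, one significance bit per coefficient, most
    significant plane first. Truncating after p planes reconstructs
    magnitudes to within 2**(n_planes - p). Illustration only.
    """
    c = np.asarray(coeffs)
    signs = (c < 0).astype(np.uint8)
    mag = np.abs(c).astype(np.int64)
    T = 1 << (n_planes - 1)
    bits = []
    while T >= 1:
        bits.append(((mag & T) > 0).astype(np.uint8))
        T >>= 1
    return signs, np.array(bits)

signs, planes = bitplane_encode([7, -3, 0, 12], n_planes=4)
```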

  10. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven more efficient than DCT for several reasons; nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. The method also maintains the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employs the zero-padding technique, which also helps to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
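
    A minimal sketch of the hybrid idea in Python (the paper itself used MATLAB), assuming the PyWavelets and SciPy packages; the single DWT level, the Haar wavelet, and the `keep` fraction are illustrative choices, not the published method's parameters:

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.fft import dctn, idctn

def hybrid_compress(img, keep=0.25):
    """One hedged reading of a hybrid DWT-DCT codec: a single DWT level,
    DCT applied to the approximation subband, small DCT coefficients
    zeroed; detail subbands are dropped and zero-padded on decoding."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    C = dctn(cA, norm='ortho')
    thresh = np.quantile(np.abs(C), 1.0 - keep)   # keep top fraction
    C[np.abs(C) < thresh] = 0.0
    return C, cH.shape

def hybrid_decompress(C, detail_shape):
    cA = idctn(C, norm='ortho')
    zeros = np.zeros(detail_shape)                # zero-padded details
    return pywt.idwt2((cA, (zeros, zeros, zeros)), 'haar')
```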

  11. Enhanced arbitrated quantum signature scheme using Bell states

    International Nuclear Information System (INIS)

    Wang Chao; Liu Jian-Wei; Shang Tao

    2014-01-01

    We investigate the existing arbitrated quantum signature schemes as well as their cryptanalysis, including intercept-resend and denial-of-service attacks. By exploiting the loopholes of these schemes, a malicious signatory may successfully disavow signed messages, or the receiver may actively negate the signature from the signatory without being detected. By modifying the existing schemes, we develop countermeasures to these attacks using Bell states. The newly proposed scheme strengthens the security of arbitrated quantum signature. Furthermore, several valuable topics are presented for further research on quantum signature schemes.

  12. Performance Analysis of Virtual MIMO Relaying Schemes Based on Detect–Split–Forward

    KAUST Repository

    Al-Basit, Suhaib M.

    2014-10-29

    © 2014, Springer Science+Business Media New York. Virtual multi-input multi-output (vMIMO) schemes in wireless communication systems improve coverage, throughput, capacity, and quality of service. In this paper, we propose three uplink vMIMO relaying schemes based on detect–split–forward (DSF). In addition, we investigate the effect of several physical parameters such as distance, modulation type, and number of relays. Furthermore, an adaptive vMIMO DSF scheme based on VBLAST and STBC is proposed. To this end, we provide analytical tools to evaluate the performance of the proposed vMIMO relaying schemes.

  13. Performance Analysis of Virtual MIMO Relaying Schemes Based on Detect–Split–Forward

    KAUST Repository

    Al-Basit, Suhaib M.; Al-Ghadhban, Samir; Zummo, Salam A.

    2014-01-01

    © 2014, Springer Science+Business Media New York. Virtual multi-input multi-output (vMIMO) schemes in wireless communication systems improve coverage, throughput, capacity, and quality of service. In this paper, we propose three uplink vMIMO relaying schemes based on detect–split–forward (DSF). In addition, we investigate the effect of several physical parameters such as distance, modulation type, and number of relays. Furthermore, an adaptive vMIMO DSF scheme based on VBLAST and STBC is proposed. To this end, we provide analytical tools to evaluate the performance of the proposed vMIMO relaying schemes.

  14. Compressive Behavior of Fiber-Reinforced Concrete with End-Hooked Steel Fibers

    Directory of Open Access Journals (Sweden)

    Seong-Cheol Lee

    2015-03-01

    Full Text Available In this paper, the compressive behavior of fiber-reinforced concrete with end-hooked steel fibers has been investigated through a uniaxial compression test in which the variables were concrete compressive strength, fiber volumetric ratio, and fiber aspect ratio (length to diameter). In order to minimize the effect of specimen size on fiber distribution, 48 cylinder specimens 150 mm in diameter and 300 mm in height were prepared and then subjected to uniaxial compression. From the test results, it was shown that steel fiber-reinforced concrete (SFRC) specimens exhibited ductile behavior after reaching their compressive strength. It was also shown that the strain at the compressive strength generally increased along with an increase in the fiber volumetric ratio and fiber aspect ratio, while the elastic modulus decreased. With consideration for the effect of steel fibers, a model for the stress–strain relationship of SFRC under compression is proposed here. Simple formulae to predict the strain at the compressive strength and the elastic modulus of SFRC were developed as well. The proposed model and formulae will be useful for realistic predictions of the structural behavior of SFRC members or structures.

  15. Compressive Behavior of Fiber-Reinforced Concrete with End-Hooked Steel Fibers.

    Science.gov (United States)

    Lee, Seong-Cheol; Oh, Joung-Hwan; Cho, Jae-Yeol

    2015-03-27

    In this paper, the compressive behavior of fiber-reinforced concrete with end-hooked steel fibers has been investigated through a uniaxial compression test in which the variables were concrete compressive strength, fiber volumetric ratio, and fiber aspect ratio (length to diameter). In order to minimize the effect of specimen size on fiber distribution, 48 cylinder specimens 150 mm in diameter and 300 mm in height were prepared and then subjected to uniaxial compression. From the test results, it was shown that steel fiber-reinforced concrete (SFRC) specimens exhibited ductile behavior after reaching their compressive strength. It was also shown that the strain at the compressive strength generally increased along with an increase in the fiber volumetric ratio and fiber aspect ratio, while the elastic modulus decreased. With consideration for the effect of steel fibers, a model for the stress-strain relationship of SFRC under compression is proposed here. Simple formulae to predict the strain at the compressive strength and the elastic modulus of SFRC were developed as well. The proposed model and formulae will be useful for realistic predictions of the structural behavior of SFRC members or structures.
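
    As one hedged illustration of what such a compressive stress–strain model can look like, the sketch below uses a generic Popovics-type curve; the functional form and all numeric values are placeholders, not the formulae developed in the paper, whose fiber-dependent expressions for the peak strain and elastic modulus are not reproduced here:

```python
import numpy as np

def sfrc_stress(eps, f_c, eps_0, E_c):
    """Popovics-type compressive stress-strain curve, a common template
    for SFRC models. f_c: peak stress (MPa), eps_0: strain at peak,
    E_c: elastic modulus (MPa). Initial slope equals E_c and the curve
    peaks at (eps_0, f_c)."""
    n = 1.0 / (1.0 - f_c / (E_c * eps_0))   # curve-shape parameter
    x = eps / eps_0
    return f_c * n * x / (n - 1.0 + x**n)

# Illustrative numbers only, not the paper's test data.
strain = np.linspace(0.0, 0.008, 200)
stress = sfrc_stress(strain, f_c=40.0, eps_0=0.0028, E_c=28000.0)
```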

  16. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a lookup table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the lookup table and the concurrent binary search algorithm reconstruct all high-frequency AC-coefficients, while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, the proposed method is shown to be superior to both JPEG and JPEG2000.
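
    A hedged sketch of steps (1), (2) and (4) in Python/SciPy follows; zigzag ordering, the probability lookup table, the concurrent binary search and the arithmetic coder of steps (3) and (5) are omitted, and `keep_ac` (21 of 63 AC coefficients per 8x8 block, i.e. a 2/3 reduction) is only an illustrative reading of the description:

```python
import numpy as np
from scipy.fft import dctn

def compress_blocks(img, block=8, keep_ac=21):
    """Block DCT, AC-coefficient minimization keeping roughly one third
    of the AC terms, and delta coding of the DC components."""
    h, w = img.shape
    dcs, acs = [], []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            B = dctn(img[r:r+block, c:c+block].astype(float), norm='ortho')
            flat = B.flatten()           # simplification: no zigzag scan
            dcs.append(flat[0])
            ac = flat[1:]
            # High-frequency minimization: zero the smallest-magnitude
            # two thirds of the AC coefficients.
            drop = np.argsort(np.abs(ac))[:-keep_ac]
            ac[drop] = 0.0
            acs.append(ac)
    dcs = np.array(dcs)
    dc_delta = np.diff(dcs, prepend=0.0)  # step (4): delta operator
    return dc_delta, np.array(acs)
```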

  17. Compact Spreader Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low-amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery to multiple beam lines on a shot-to-shot basis. Fast kickers (FKs) or transverse electric field RF deflectors (RFDs) provide the low-amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and likely better amplitude reproducibility than FKs, which in turn cost less to build and operate. Both solutions represent an ideal approach for the design of compact beam distribution systems, resulting in space and cost savings while preserving flexibility and beam quality.

  18. Trinary signed-digit arithmetic using an efficient encoding scheme

    Science.gov (United States)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables parallel one-step TSD arithmetic operations, and it uses only a 5-combination coding table instead of the 625-combination table recently reported for the recoded TSD arithmetic technique.
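
    Since the TSD representation with digits {-1, 0, 1} coincides with balanced ternary, a direct decimal-to-TSD encoding can be sketched as below; this shows only the arithmetic idea, not the paper's digit-wise coding table or its optical implementation:

```python
def to_tsd(n: int) -> list[int]:
    """Encode an integer in trinary signed-digit (balanced ternary)
    form, digits in {-1, 0, 1}, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # digit -1, carry 1 into the next position
            digits.append(-1)
            n = (n + 1) // 3
        else:                 # digit 0 or 1, no carry
            digits.append(r)
            n //= 3
    return digits[::-1]

assert to_tsd(5) == [1, -1, -1]    # 9 - 3 - 1 = 5
assert to_tsd(-5) == [-1, 1, 1]    # -9 + 3 + 1 = -5
```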

  19. Cooperation schemes for rate enhancement in detect-and-forward relay channels

    KAUST Repository

    Benjillali, Mustapha

    2010-05-01

    To improve the spectral efficiency of "Detect-and-Forward" (DetF) half-duplex relaying in fading channels, we propose a cooperation scheme in which the relay uses a modulation of higher order than the source's. In a new common framework, we show that the proposed scheme offers considerable gains in achievable information rates compared with conventional DetF relaying schemes, for both orthogonal and non-orthogonal source/relay cooperation. This allows us to propose an adaptive cooperation scheme, based on maximizing the information rate at the destination, that needs to observe only the average signal-to-noise ratios of the direct and relaying links. ©2010 IEEE.
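
    As a crude illustration of the adaptive idea only: the sketch below picks, from average SNRs alone, whichever of direct transmission or half-duplex relaying promises the higher rate. The log2(1+SNR) expressions are generic capacity proxies, not the information-rate formulas derived in the paper:

```python
import numpy as np

def choose_scheme(snr_sd, snr_sr, snr_rd):
    """Pick a transmission mode from average SNRs of the
    source-destination, source-relay and relay-destination links."""
    r_direct = np.log2(1 + snr_sd)
    # Half-duplex relaying uses two time slots and is bottlenecked by
    # the weaker hop; the destination combines direct and relayed
    # observations (an MRC-style proxy).
    r_relay = 0.5 * min(np.log2(1 + snr_sr),
                        np.log2(1 + snr_sd + snr_rd))
    return ("relay", r_relay) if r_relay > r_direct else ("direct", r_direct)

print(choose_scheme(snr_sd=1.0, snr_sr=30.0, snr_rd=30.0))  # -> relay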

  20. A generalized scheme for designing multistable continuous ...

    Indian Academy of Sciences (India)

    In this paper, a generalized scheme is proposed for designing multistable continuous dynamical systems. The scheme is based on the concept of partial synchronization of states and the concept of constants of motion. The most important observation is that by coupling two mdimensional dynamical systems, multistable ...