WorldWideScience

Sample records for employs compressive sensing

  1. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from those of traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4 to 2 dB compared with the current state of the art, while maintaining low computational complexity.
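
    As a rough illustration of the two steps described above, the following numpy sketch takes random Gaussian measurements of a toy image block at a chosen sampling rate and then applies a uniform quantizer directly to the measurements, with no prior knowledge of the image. The sampling rate, quantizer step, and block size are illustrative choices, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image block": an 8x8 patch flattened to a length-64 vector.
x = rng.standard_normal(64)

# CS acquisition: M random Gaussian measurements, M set by the sampling rate.
sampling_rate = 0.25                    # illustrative value
M = int(sampling_rate * x.size)
Phi = rng.standard_normal((M, x.size)) / np.sqrt(M)
y = Phi @ x                             # compressive measurements

# Universal (uniform) quantization of the measurements, with no prior
# knowledge of the underlying image; only the step size is fixed in advance.
step = 0.1                              # illustrative quantizer step
y_q = step * np.round(y / step)

print("measurements:", M, "max quantization error:", np.max(np.abs(y - y_q)))
```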

  2. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  3. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  4. On the Feedback Reduction of Relay Multiuser Networks using Compressive Sensing

    KAUST Repository

    Elkhalil, Khalil; Eltayeb, Mohammed; Kammoun, Abla; Al-Naffouri, Tareq Y.; Bahrami, Hamid Reza

    2016-01-01

    This paper presents a comprehensive performance analysis of full-duplex multiuser relay networks employing opportunistic scheduling with noisy and compressive feedback. Specifically, two feedback techniques based on compressive sensing (CS) theory

  5. Energy-efficient sensing in wireless sensor networks using compressed sensing.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Dobson, Simon

    2014-02-12

    Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.

  6. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  7. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    . The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal...... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  8. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    Science.gov (United States)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gaussian matrix, is constructed by the visualization tool generator. (3) Fourier transform and Daubechies wavelet transform are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
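
    The solver's sparse-recovery step uses orthogonal matching pursuit. Below is a minimal, self-contained OMP sketch on a toy underdetermined system that stands in for the forward projection matrix; the real solver works on Fourier/wavelet-transformed transport data, which is not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse coefficient vector from y ~= A x by orthogonal
    matching pursuit: greedily pick the column most correlated with the
    residual, then re-fit by least squares on the chosen support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy underdetermined system standing in for the projection matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 128))
x_true = np.zeros(128)
x_true[[5, 60, 100]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.allclose(omp(A, y, 3), x_true, atol=1e-8))   # expect True
```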

  9. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state of the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  10. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...

  11. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.

    Science.gov (United States)

    Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-11-08

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, existing work on spatial-temporal data gathering using compressive sensing considers only multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signals can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.
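
    A minimal sketch of the Kronecker structure that KCS relies on is given below: measuring the spatial and temporal dimensions with separate matrices is equivalent to applying their Kronecker product to the vectorized spatial-temporal signal. The matrices and sizes are illustrative; the paper's actual measurements come from the random-walk collection process.

```python
import numpy as np

rng = np.random.default_rng(2)

N_s, N_t = 20, 30          # sensor nodes x time slots (toy sizes)
M_s, M_t = 8, 12           # spatial / temporal measurement counts

X = rng.standard_normal((N_s, N_t))          # spatial-temporal data matrix
Phi_s = rng.standard_normal((M_s, N_s))      # spatial measurement matrix
Phi_t = rng.standard_normal((M_t, N_t))      # temporal measurement matrix

# Kronecker sensing matrix acting on the vectorized (column-major) signal.
Phi = np.kron(Phi_t, Phi_s)
y = Phi @ X.flatten(order="F")

# Equivalent separable form: measure space and time independently.
Y = Phi_s @ X @ Phi_t.T
print(np.allclose(y, Y.flatten(order="F")))   # True
```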

  12. Approximate equiangular tight frames for compressed sensing and CDMA applications

    Science.gov (United States)

    Tsiligianni, Evaggelia; Kondi, Lisimachos P.; Katsaggelos, Aggelos K.

    2017-12-01

    Performance guarantees for recovery algorithms employed in sparse representations and compressed sensing highlight the importance of incoherence. Optimal bounds of incoherence are attained by equiangular unit norm tight frames (ETFs). Although ETFs are important in many applications, they do not exist for all dimensions, and their construction has proven extremely difficult. In this paper, we construct frames that are close to ETFs. According to results from frame and graph theory, the existence of an ETF depends on the existence of its signature matrix, that is, a symmetric matrix with a certain structure and a spectrum consisting of two distinct eigenvalues. We view the construction of a signature matrix as an inverse eigenvalue problem and propose a method that produces frames of any dimension that are close to ETFs. Due to the achieved equiangularity property, the frames so obtained can be employed as spreading sequences in synchronous code-division multiple access (s-CDMA) systems, besides compressed sensing.
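
    The incoherence property that ETFs optimize can be made concrete with a short numpy check: compute the mutual coherence of a unit-norm frame (the largest off-diagonal entry of its Gram matrix) and compare it with the Welch bound, which ETFs meet with equality. The random frame here is only a stand-in; the paper's construction solves an inverse eigenvalue problem for the signature matrix, which is not shown.

```python
import numpy as np

def mutual_coherence(F):
    """Largest absolute inner product between distinct unit-norm columns."""
    G = np.abs(F.T @ F)                 # Gram matrix
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(3)
M, N = 6, 16
F = rng.standard_normal((M, N))
F /= np.linalg.norm(F, axis=0)          # unit-norm columns

welch_bound = np.sqrt((N - M) / (M * (N - 1)))   # met with equality by ETFs
print(f"coherence = {mutual_coherence(F):.3f}, Welch bound = {welch_bound:.3f}")
```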

  13. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
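
    A hedged sketch of the generic idea behind sparse binary measurement matrices follows: each column contains only a few ones, so every measurement is a short sum of input samples and needs no multipliers. The construction below is a plain random sparse binary matrix with illustrative sizes; it is not the paper's deterministic QCAC design.

```python
import numpy as np

rng = np.random.default_rng(4)

N, M, d = 256, 64, 3     # samples per window, measurements, ones per column

# Sparse random binary measurement matrix: each column has d ones placed at
# random rows, so encoding needs only additions (no multipliers).
Phi = np.zeros((M, N), dtype=np.int8)
for col in range(N):
    Phi[rng.choice(M, size=d, replace=False), col] = 1

x = rng.standard_normal(N)              # toy neural-signal window
y = Phi @ x                             # each measurement is a small sum
print("ones per column:", d, "matrix density:", round(float(Phi.mean()), 3))
```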

  14. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.

  15. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm in traditional hyperspectral systems and in CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data is up to an order of magnitude less than that in conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster in CS-MUSI data.

  16. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

    Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal.  These measurements are usually much smaller than the number of samples that define the signal.  From these small numbers of measurements, the signal is then reconstructed by a non-linear procedure.  Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways.  In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.

  17. Efficient two-dimensional compressive sensing in MIMO radar

    Science.gov (United States)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
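
    The measurement-matrix design idea, minimizing the coherence of the sensing matrix by gradient descent, can be sketched in one dimension as follows: descend on the Frobenius-norm distance between the Gram matrix and the identity while renormalizing columns. This toy version ignores the 2D structure and the radar dictionary of the paper; sizes and step size are illustrative.

```python
import numpy as np

def coherence(Phi):
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(5)
M, N = 16, 64
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)

step = 1e-2
for _ in range(500):
    G = Phi.T @ Phi
    grad = 4.0 * Phi @ (G - np.eye(N))     # d/dPhi ||Phi^T Phi - I||_F^2
    Phi -= step * grad
    Phi /= np.linalg.norm(Phi, axis=0)     # keep unit-norm columns

print("coherence after descent:", round(float(coherence(Phi)), 3))
```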

  18. Application of Compressive Sensing to Gravitational Microlensing Experiments

    Science.gov (United States)

    Korde-Patel, Asmita; Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. This is an enabling technique for significant reduction in data bandwidth, and transmission power and hence, can greatly benefit spaceflight instruments. We apply this process to detect exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness and uncertainty caused by Compressive Sensing. Finally, we describe implications for spaceflight missions.

  19. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mA·s), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here CS is a state-of-the-art mathematical framework for solving inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of the detectability, the contrast-to-noise ratio (CNR), and the slice-sensitivity profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • A compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.

  20. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  1. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali

    2013-09-22

    Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration, based on L2-norm minimization of the misfit function, tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied a compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.
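
    The contrast between the L2 and L1 (Basis Pursuit) solutions can be reproduced on a toy linear system standing in for the migration operator: the minimum-L2-norm solution smears the sparse "reflectivity", while Basis Pursuit, written as a linear program, recovers it almost exactly. This is only an illustration of the principle, not the paper's seismic implementation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
M, N = 30, 100
A = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[[10, 40, 77]] = [1.0, -0.5, 2.0]   # sparse "reflectivity"
y = A @ x_true

# Basis Pursuit: min ||x||_1 subject to A x = y, written as a linear program
# with x = u - v and u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_bp = res.x[:N] - res.x[N:]

x_ls = np.linalg.pinv(A) @ y             # minimum-L2-norm solution (smeared)
print("BP error :", np.linalg.norm(x_bp - x_true))
print("L2 error :", np.linalg.norm(x_ls - x_true))
```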

  2. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2013-01-01

    Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration, based on L2-norm minimization of the misfit function, tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied a compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  3. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    . In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  4. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information about the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  5. Compressed sensing electron tomography

    International Nuclear Information System (INIS)

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A.; Holland, Daniel J.

    2013-01-01

    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform

  6. Fast electron microscopy via compressive sensing

    Science.gov (United States)

    Larson, Kurt W; Anderson, Hyrum S; Wheeler, Jason W

    2014-12-09

    Various technologies described herein pertain to compressive sensing electron microscopy. A compressive sensing electron microscope includes a multi-beam generator and a detector. The multi-beam generator emits a sequence of electron patterns over time. Each of the electron patterns can include a plurality of electron beams, where the plurality of electron beams is configured to impart a spatially varying electron density on a sample. Further, the spatially varying electron density varies between each of the electron patterns in the sequence. Moreover, the detector collects signals respectively corresponding to interactions between the sample and each of the electron patterns in the sequence.

  7. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    To reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of specifying a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high-quality image signals under under-sampling conditions.
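
    A minimal sketch of the alternating-minimization structure of blind compressed sensing is given below, assuming the usual formulation in which the compressed data Y = Phi D S are fit by alternating a sparse-coding step (dictionary fixed) with a dictionary gradient step (codes fixed). All sizes, the regularization weight, and the iteration count are illustrative, and the update rules are generic proximal/gradient steps rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

n, K, L, m = 64, 32, 40, 24        # signal dim, atoms, signals, measurements
lam, steps = 0.1, 200

# Ground truth: sparse codes over an unknown dictionary, observed compressively.
D_true = rng.standard_normal((n, K))
S_true = rng.standard_normal((K, L)) * (rng.random((K, L)) < 0.1)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Y = Phi @ D_true @ S_true

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
PhiN2 = np.linalg.norm(Phi, 2) ** 2

# Blind CS by alternating minimization of ||Y - Phi D S||^2 + lam*||S||_1.
D = rng.standard_normal((n, K))
S = np.zeros((K, L))
for _ in range(steps):
    A = Phi @ D
    eta = 1.0 / np.linalg.norm(A, 2) ** 2
    S = soft(S - eta * A.T @ (A @ S - Y), eta * lam)          # sparse-coding step
    etaD = 1.0 / (PhiN2 * np.linalg.norm(S, 2) ** 2 + 1e-12)
    D += etaD * Phi.T @ (Y - Phi @ D @ S) @ S.T               # dictionary step
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)         # normalize atoms

print("relative residual:", np.linalg.norm(Y - Phi @ D @ S) / np.linalg.norm(Y))
```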

  8. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

    Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary regarded as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis, and reconstruction algorithm on CS performance. Results show that the sparse linear prediction dictionary can improve the performance of speech signal reconstruction compared with the discrete cosine transform (DCT) matrix.

  9. Compressed Sensing Methods in Radio Receivers Exposed to Noise and Interference

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek

    , there is a problem of interference, which makes digitization of radio receivers even more difficult. High-order low-pass filters are needed to remove interfering signals and secure a high-quality reception. In the mid-2000s a new method of signal acquisition, called compressed sensing, emerged. Compressed sensing...... the downconverted baseband signal and interference, may be replaced by low-order filters. Additional digital signal processing is a price to pay for this feature. Hence, the signal processing is moved from the analog to the digital domain. Filtering compressed sensing, which is a new application of compressed sensing

  10. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available For wireless sensor network microseismic monitoring and the problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressive sensing (CS) theory used in the transmission process. The collected data are segmented according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment, the algorithm improves the accuracy of signal reconstruction, while taking advantage of compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone refactoring (Q-CSDR) algorithm as the reconstruction algorithm, when the signal sparsity is higher than 40 and the compression ratio is greater than 0.4, the mean square error is less than 0.01 and the network lifetime is prolonged by a factor of two.

  11. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  12. Wireless Sensor Networks Data Processing Summary Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Caiyun Huang

    2014-07-01

    Full Text Available As a newly proposed theory, compressive sensing (CS) is commonly used in the signal processing area. This paper investigates the applications of compressed sensing (CS) in wireless sensor networks (WSNs). First, the development and research status of compressed sensing technology and wireless sensor networks are described; then a detailed investigation of WSN research based on CS is conducted from the aspects of data fusion, signal acquisition, signal routing and transmission, and signal reconstruction. At the end of the paper, we conclude our survey and point out possible future research directions.

  13. Determining building interior structures using compressive sensing

    Science.gov (United States)

    Lagunas, Eva; Amin, Moeness G.; Ahmad, Fauzia; Nájar, Montse

    2013-04-01

    We consider imaging of the building interior structures using compressive sensing (CS) with applications to through-the-wall imaging and urban sensing. We consider a monostatic synthetic aperture radar imaging system employing stepped frequency waveform. The proposed approach exploits prior information of building construction practices to form an appropriate sparse representation of the building interior layout. We devise a dictionary of possible wall locations, which is consistent with the fact that interior walls are typically parallel or perpendicular to the front wall. The dictionary accounts for the dominant normal angle reflections from exterior and interior walls for the monostatic imaging system. CS is applied to a reduced set of observations to recover the true positions of the walls. Additional information about interior walls can be obtained using a dictionary of possible corner reflectors, which is the response of the junction of two walls. Supporting results based on simulation and laboratory experiments are provided. It is shown that the proposed sparsifying basis outperforms the conventional through-the-wall CS model, the wavelet sparsifying basis, and the block sparse model for building interior layout detection.

  14. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
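
    One way to picture the two-factor idea is that the secret information seeds the random measurement operator, so the stored template can be revoked by changing the seed. The sketch below is purely numerical and hypothetical; in the proposed system the projection is performed optically at capture and the comparison is done on restored images at the authentication server.

```python
import numpy as np

def measure(image_vec, secret_seed, m):
    """Compressive 'ciphering': project the biometric vector with a random
    matrix whose seed plays the role of the secret second factor."""
    rng = np.random.default_rng(secret_seed)
    Phi = rng.standard_normal((m, image_vec.size)) / np.sqrt(m)
    return Phi @ image_vec

vein = np.random.default_rng(0).random(400)      # toy stand-in for a vein image

enrol  = measure(vein, secret_seed=1234, m=100)  # stored template
verify = measure(vein, secret_seed=1234, m=100)  # same seed -> same projection
stolen = measure(vein, secret_seed=9999, m=100)  # re-enrolment with a new seed

print(np.allclose(enrol, verify), np.allclose(enrol, stolen))  # True False
```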

  15. 2nd International MATHEON Conference on Compressed Sensing and its Applications

    CERN Document Server

    Caire, Giuseppe; Calderbank, Robert; März, Maximilian; Kutyniok, Gitta; Mathar, Rudolf

    2017-01-01

    This contributed volume contains articles written by the plenary and invited speakers from the second international MATHEON Workshop 2015 that focus on applications of compressed sensing. Article authors address their techniques for solving the problems of compressed sensing, as well as connections to related areas like detecting community-like structures in graphs, cubatures on Grassmannians, and randomized tensor train singular value decompositions. Some of the novel applications covered include dimensionality reduction, information theory, random matrices, sparse approximation, and sparse recovery.  This book is aimed at both graduate students and researchers in the areas of applied mathematics, computer science, and engineering, as well as other applied scientists exploring the potential applications for the novel methodology of compressed sensing. An introduction to the subject of compressed sensing is also provided for researchers interested in the field who are not as familiar with it.

  16. Object specific reconstruction using compressively sensed data

    International Nuclear Information System (INIS)

    Mahalanobis, Abhijit

    2008-01-01

    Compressed sensing holds the promise for radically novel sensors that can perfectly reconstruct images using considerably fewer samples of data than required by the otherwise general Shannon sampling theorem. In surveillance systems, however, it is also desirable to cue regions of the image where objects of interest may exist. Thus in this paper, we are interested in imaging interesting objects in a scene, without necessarily seeking perfect reconstruction of the whole image. We show that our goals are achieved by minimizing a modified L2-norm criterion with good results when the reconstruction of only specific objects is of interest. The method yields a simple closed-form analytical solution that does not require iterative processing. Objects can be meaningfully sensed in considerable detail while heavily compressing the scene elsewhere. Essentially, this embeds the object detection and clutter discrimination function in the sensing and imaging process.

  17. Blind Compressed Sensing Parameter Estimation of Non-cooperative Frequency Hopping Signal

    Directory of Open Access Journals (Sweden)

    Chen Ying

    2016-10-01

    Full Text Available To overcome the disadvantages of a non-cooperative frequency hopping communication system, such as a high sampling rate and inadequate prior information, parameter estimation based on Blind Compressed Sensing (BCS is proposed. The signal is precisely reconstructed by the alternating iteration of sparse coding and basis updating, and the hopping frequencies are directly estimated based on the results. Compared with conventional compressive sensing, blind compressed sensing does not require prior information of the frequency hopping signals; hence, it offers an effective solution to the inadequate prior information problem. In the proposed method, the signal is first modeled and then reconstructed by Orthonormal Block Diagonal Blind Compressed Sensing (OBD-BCS, and the hopping frequencies and hop period are finally estimated. The simulation results suggest that the proposed method can reconstruct and estimate the parameters of noncooperative frequency hopping signals with a low signal-to-noise ratio.

  18. Compressive sensing scalp EEG signals: implementations and practical performance.

    Science.gov (United States)

    Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther

    2012-11-01

    Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.

  19. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed Sensing (CS) can use the sparseness of a target to obtain its image with much less data than that defined by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
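
    The acquisition side of block compressed sensing can be sketched as follows: the image is split into fixed-size blocks and each block is measured with the same small matrix, so the sensing matrix size is independent of the image size. Block size and sampling rate are illustrative; reconstruction per block would then use OMP or TV minimization as in the experiments.

```python
import numpy as np

rng = np.random.default_rng(9)

img = rng.random((64, 64))             # toy image
B = 16                                 # block size
rate = 0.25                            # illustrative sampling rate
m = int(rate * B * B)
Phi = rng.standard_normal((m, B * B))  # one measurement matrix reused per block

# Block compressed sensing: measure each BxB block independently, so the
# sensing matrix stays small regardless of the full image size.
measurements = []
for i in range(0, img.shape[0], B):
    for j in range(0, img.shape[1], B):
        block = img[i:i + B, j:j + B].ravel()
        measurements.append(Phi @ block)
Y = np.stack(measurements)
print("blocks:", Y.shape[0], "measurements per block:", Y.shape[1])
```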

  20. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.

    2013-12-01

    Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power efficient operation of an orthogonal frequency division multiple access cognitive radio system and realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Over-driven amplifiers result in in-band distortions and out of band interference. Out of band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  1. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.; Hammi, Oualid; Al-Naffouri, Tareq Y.

    2013-01-01

    Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power efficient operation of an orthogonal frequency division multiple access cognitive radio system and realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Over-driven amplifiers result in in-band distortions and out of band interference. Out of band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  2. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    Science.gov (United States)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  3. Application of Compressive Sensing to Gravitational Microlensing Data and Implications for Miniaturized Space Observatories

    Science.gov (United States)

    Korde-Patel, Asmita (Inventor); Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is a technique for simultaneous acquisition and compression of data that is sparse or can be made sparse in some domain. It is currently under intense development and has been profitably employed for industrial and medical applications. We here describe the use of this technique for the processing of astronomical data. We outline the procedure as applied to exoplanet gravitational microlensing and analyze measurement results and uncertainty values. We describe implications for on-spacecraft data processing for space observatories. Our findings suggest that application of these techniques may yield significant, enabling benefits especially for power and volume-limited space applications such as miniaturized or micro-constellation satellites.

  4. Compressive sensing sectional imaging for single-shot in-line self-interference incoherent holography

    Science.gov (United States)

    Weng, Jiawen; Clark, David C.; Kim, Myung K.

    2016-05-01

    A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for the 3D analysis of dynamic systems.

  5. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Vane, Zachary Phillips; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2017-07-01

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet in crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistently lower errors and faster computational times.
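
    A minimal illustration of the cross-validated choice of the regularization constant is sketched below, assuming a plain hold-out split and an ISTA solver for the LASSO problem. The random design matrix stands in for the polynomial chaos basis evaluations, and the solver, split, and lambda grid are illustrative rather than those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def lasso_ista(A, y, lam, iters=500):
    """Minimize 0.5||A x - y||^2 + lam*||x||_1 by ISTA (proximal gradient)."""
    x = np.zeros(A.shape[1])
    eta = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        z = x - eta * A.T @ (A @ x - y)
        x = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
    return x

# Toy surrogate: sparse "PC coefficients" observed through random model runs.
P, n = 120, 60                       # basis terms, model evaluations
A = rng.standard_normal((n, P))
c_true = np.zeros(P)
c_true[rng.choice(P, 6, replace=False)] = rng.standard_normal(6)
y = A @ c_true + 0.01 * rng.standard_normal(n)

# Hold-out cross-validation over a grid of regularization constants.
train, test = np.arange(0, 45), np.arange(45, 60)
lams = [1e-3, 1e-2, 1e-1, 1.0]
errs = []
for lam in lams:
    c = lasso_ista(A[train], y[train], lam)
    errs.append(np.linalg.norm(A[test] @ c - y[test]))
best = lams[int(np.argmin(errs))]
print("validation errors:", [round(e, 3) for e in errs], "chosen lambda:", best)
```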

  6. Identification of Coupled Map Lattice Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Dong Xie

    2016-01-01

    Full Text Available A novel approach for the parameter identification of a coupled map lattice (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated study topics and identify the weighted parameters using the relevant recovery algorithms in compressed sensing. Specifically, we first transform the parameter identification problem of the CML into the sparse recovery problem of an underdetermined linear system. In fact, compressed sensing provides a feasible method to solve an underdetermined linear system if the sensing matrix satisfies some suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. Then we give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of the CML and also prove that it satisfies the RIP from a theoretical point of view. If the weighted vector of each element is sparse in the CML system, our proposed approach can recover all the weighted parameters using only about M samples, which is far fewer than the number of lattice elements N. Another important and significant advantage is that our approach remains effective even if the observed data are contaminated with some types of noise. In the simulations, we mainly show the effects of the coupling parameter and noise on the recovery rate.

  7. Accelerated whole-brain multi-parameter mapping using blind compressed sensing.

    Science.gov (United States)

    Bhave, Sampada; Lingala, Sajan Goud; Johnson, Casey P; Magnotta, Vincent A; Jacob, Mathews

    2016-03-01

    To introduce a blind compressed sensing (BCS) framework to accelerate multi-parameter MR mapping, and demonstrate its feasibility in high-resolution, whole-brain T1ρ and T2 mapping. BCS models the evolution of magnetization at every pixel as a sparse linear combination of bases in a dictionary. Unlike compressed sensing, the dictionary and the sparse coefficients are jointly estimated from undersampled data. The large number of non-orthogonal bases in BCS accounts for more complex signals than low-rank representations. The low degree of freedom of BCS, attributed to sparse coefficients, translates to fewer artifacts at high acceleration factors (R). From 2D retrospective undersampling experiments, the mean square errors in T1ρ and T2 maps were observed to be within 0.1% up to R = 10. BCS was observed to be more robust to patient-specific motion as compared to other compressed sensing schemes and resulted in minimal degradation of parameter maps in the presence of motion. Our results suggested that BCS can provide an acceleration factor of 8 in prospective 3D imaging with reasonable reconstructions. BCS considerably reduces scan time for multi-parameter mapping of the whole brain with minimal artifacts, and is more robust to motion-induced signal changes compared to current compressed sensing and principal component analysis-based techniques. © 2015 Wiley Periodicals, Inc.

  8. Online sparse representation for remote sensing compressed-sensed video sampling

    Science.gov (United States)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from a sampling rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, it can directly and efficiently acquire compressed image data by randomly projecting the original data to obtain linear and non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into Key frames (K frames) and Non-Key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs) and each GOP consists of one K frame followed by several CS frames. Both of them are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique according to the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI. These learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, often evaluated by the Peak Signal-to-Noise Ratio (PSNR), has been compared with that of other online sparse representation algorithms. The simulation results show its advantages in reducing reconstruction time and its robustness in reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.

  9. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Science.gov (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, being able to achieve high quality image compression that is robust to noise and corruption.

  10. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and the MR image simultaneously based on their prior information. The parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but differs from compressed sensing in that the sensing matrix is unknown and an additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  11. Compressed sensing approach for wrist vein biometrics.

    Science.gov (United States)

    Lantsov, Aleksey; Ryabko, Maxim; Shchekin, Aleksey

    2018-04-01

    The work describes the features of the compressed sensing (CS) approach used to develop a wearable system for wrist vein recognition with single-pixel detection; we consider this system useful for biometric authentication purposes. The CS approach relies on spatial light modulation (SLM), which in our case can be performed in two ways: with a liquid crystal display or with a diffusely scattering medium. We show that compressed sensing combined with the above-mentioned means of SLM allows us to avoid using an optical system, a limiting factor for wearable devices. The trade-off between the two SLM approaches with respect to the practical implementation of CS for wrist vein recognition is discussed. A possible solution to the misalignment problem, a typical issue for imaging systems based upon 2D arrays of photodiodes, is also proposed. The proposed design of the wearable device for wrist vein recognition is based upon single-pixel detection. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of our proposed method, and the results prove its superiority to its counterparts.

  13. Biomedical sensor design using analog compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is their location-specific nature due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually battery-driven, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the sampling load by merging the sampling and compression steps, thereby reducing storage usage, transmission time, and power consumption, in order to expand current healthcare systems into Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals suitable for a variety of diagnostic and treatment purposes. At the transmitter side, the analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate a compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We evaluate the proposed algorithm on healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.

  14. Compressive sensing with a microwave photonic filter

    DEFF Research Database (Denmark)

    Chen, Ying; Yu, Xianbin; Chi, Hao

    2015-01-01

    In this letter, we present a novel approach to realizing photonics-assisted compressive sensing (CS) with the technique of microwave photonic filtering. In the proposed system, an input spectrally sparse signal to be captured and a random sequence are modulated on an optical carrier via two Mach...

  15. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by the airborne BRADAR (Brazilian SAR system operating in the X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. Real-time capability is desirable in this framework, so that the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms, so the volume of the original signal can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform is applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth in X-band with 2 m resolution was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband, and an iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was adjusted to recover the curvelet coefficients and then the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed, and, because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to provide efficient compression and recovery quality appropriate for InSAR applications.
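
    The IST recovery loop referenced above can be sketched in a few lines. The example below is a generic iterative soft-thresholding (ISTA) solver for y = A c with a sparse coefficient vector c; it assumes a dense random matrix and a fixed threshold, and it stands in for, rather than reproduces, the per-subband curvelet recovery used in the paper.

      import numpy as np

      def ista(A, y, lam=0.05, iters=200):
          # Iterative soft-thresholding for min 0.5*||A c - y||^2 + lam*||c||_1.
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          c = np.zeros(A.shape[1])
          for _ in range(iters):
              grad = A.T @ (A @ c - y)
              z = c - grad / L
              c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return c

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200)) / np.sqrt(60)
      c_true = np.zeros(200)
      c_true[rng.choice(200, 8, replace=False)] = 1.0
      c_hat = ista(A, A @ c_true)
      print("relative error:", np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))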

  16. Photonic compressive sensing enabled data efficient time stretch optical coherence tomography

    Science.gov (United States)

    Mididoddi, Chaitanya K.; Wang, Chao

    2018-03-01

    Photonic time stretch (PTS) has enabled real-time spectral-domain optical coherence tomography (OCT). However, this method generates a torrent of massive data at GHz stream rates, which must be captured at the Nyquist rate. If the OCT interferogram signal is sparse in the Fourier domain, which is always true for samples with a limited number of layers, it can be captured at a lower (sub-Nyquist) acquisition rate using the compressive sensing method. In this work we report a data-compressed PTS-OCT system based on photonic compressive sensing with 66% compression, a low acquisition rate of 50 MHz, and a measurement speed of 1.51 MHz per depth profile. A new method has also been proposed to improve the system with all-optical random pattern generation, which completely avoids the electronic bottleneck of traditional pseudorandom binary sequence (PRBS) generators.

  17. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction does not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
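
    The Arnold map scrambling step mentioned above permutes pixel (or coefficient) positions with the map (x, y) -> ((x + y) mod N, (x + 2y) mod N). The sketch below shows only that scrambling step for a square array; the function name and iteration count are illustrative, and the watermarking and CS measurement stages of the algorithm are not included.

      import numpy as np

      def arnold_scramble(a, iterations=3):
          # Apply the Arnold cat map to a square array: (x, y) -> ((x + y) mod n, (x + 2y) mod n).
          n = a.shape[0]
          assert a.shape[0] == a.shape[1], "Arnold map needs a square array"
          xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          out = a.copy()
          for _ in range(iterations):
              scrambled = np.empty_like(out)
              scrambled[(xs + ys) % n, (xs + 2 * ys) % n] = out
              out = scrambled
          return out

      coeffs = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for transform coefficients
      scrambled = arnold_scramble(coeffs)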

  18. Blind compressive sensing dynamic MRI

    Science.gov (United States)

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior on the coefficients. A Frobenius-norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and the Frobenius-norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column-norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding

  19. Compressed-sensing application - Pre-stack kirchhoff migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2013-01-01

    Least-squares migration is a linearized form of waveform inversion that aims to enhance the spatial resolution of the subsurface reflectivity distribution and reduce the migration artifacts due to limited recording aperture, coarse sampling of sources and receivers, and low subsurface illumination. Least-squares migration, however, due to the nature of its minimization process, tends to produce smoothed and dispersed versions of the reflectivity of the subsurface. Assuming that the subsurface reflectivity distribution is sparse, we propose the addition of a non-quadratic L1-norm penalty term on the model space in the objective function. This aims to preserve the sparse nature of the subsurface reflectivity series and enhance resolution. We further use a compressed-sensing algorithm to solve the linear system, which utilizes the sparsity assumption to produce highly resolved migrated images. Thus, the Kirchhoff migration implementation is formulated as a Basis Pursuit denoise (BPDN) problem to obtain the sparse reflectivity model. Applications on synthetic data show that reflectivity models obtained using this compressed-sensing algorithm are highly accurate with optimal resolution.
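
    For reference, the Basis Pursuit denoise (BPDN) formulation invoked above can be written compactly as follows, where m is the reflectivity model, d the recorded data, L the linearized (Kirchhoff/Born) modeling operator, and sigma a noise tolerance; the notation is generic rather than the authors' exact symbols.

      \min_{m} \; \|m\|_1 \quad \text{subject to} \quad \|Lm - d\|_2 \le \sigma ,

    which is closely related to the penalized form

      \min_{m} \; \tfrac{1}{2}\|Lm - d\|_2^2 + \lambda \|m\|_1 .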

  20. Design and analysis of compressed sensing radar detectors

    NARCIS (Netherlands)

    Anitori, L.; Maleki, A.; Otten, M.P.G.; Baraniuk, R.G.; Hoogeboom, P.

    2013-01-01

    We consider the problem of target detection from a set of Compressed Sensing (CS) radar measurements corrupted by additive white Gaussian noise. We propose two novel architectures and compare their performance by means of Receiver Operating Characteristic (ROC) curves. Using asymptotic arguments and

  1. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the reduced set of measurements of the encrypted samples serve as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original images. In addition, the proposed system can be robust even with partial encryption and under brute-force attacks.

  2. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the keys distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.

  3. Low-Complexity Spatial-Temporal Filtering Method via Compressive Sensing for Interference Mitigation in a GNSS Receiver

    Directory of Open Access Journals (Sweden)

    Chung-Liang Chang

    2014-01-01

    Full Text Available A compressive sensing based array processing method is proposed to lower the complexity and computational load of the array system and to maintain robust anti-jam performance in a global navigation satellite system (GNSS) receiver. First, the spatial and temporal compression matrices are multiplied with the array signal, which results in a small-size array system. Second, a 2-dimensional (2D) minimum variance distortionless response (MVDR) beamformer is employed in the proposed system to mitigate narrowband and wideband interference simultaneously. An iterative process is performed to find the optimal spatial and temporal gain vectors by the MVDR approach, which enhances the steering gain toward the direction of arrival (DOA) of interest while a null is placed at the DOA of the interference. Finally, a simulated navigation signal is generated offline by a graphical user interface tool and used in the proposed algorithm. The theoretical analysis of the proposed algorithm is verified by the simulation results.
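
    The MVDR weights referred to above follow the classical closed form w = R^{-1} a / (a^H R^{-1} a). The sketch below computes them for a simple uniform linear array with half-wavelength spacing; the array size, angles, and covariance estimate are illustrative, and the paper's joint 2D spatial-temporal formulation is not reproduced.

      import numpy as np

      def steering(n_ant, theta_deg):
          # Steering vector of a half-wavelength-spaced uniform linear array.
          theta = np.deg2rad(theta_deg)
          return np.exp(-1j * np.pi * np.arange(n_ant) * np.sin(theta))

      rng = np.random.default_rng(0)
      n_ant, n_snap = 8, 500
      jam = steering(n_ant, 40.0)[:, None] * (10.0 * rng.standard_normal(n_snap))   # strong interferer
      noise = (rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))) / np.sqrt(2)
      snapshots = jam + noise
      R = snapshots @ snapshots.conj().T / n_snap       # sample covariance

      a = steering(n_ant, 0.0)                          # desired DOA
      w = np.linalg.solve(R, a)
      w /= a.conj() @ w                                 # MVDR: w = R^-1 a / (a^H R^-1 a)
      print("gain toward 0 deg :", abs(w.conj() @ steering(n_ant, 0.0)))
      print("gain toward 40 deg:", abs(w.conj() @ steering(n_ant, 40.0)))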

  4. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    Science.gov (United States)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a means to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communication capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on upper floors of a structure are excited at the natural frequency, resulting in induced shaking within a specified narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform for the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform at the receiving node. This paper discusses the evaluation of the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective for reducing the amount of data while maintaining a small average error.
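
    The compress-then-restore cycle described above (DFT at the sensor, keep a narrow band, inverse DFT at the receiver) can be sketched as follows. The sampling rate, record length, and 1/32 retention ratio below are illustrative, a simple low-frequency band is kept instead of the paper's structure-specific band-pass selection, and the normalized average-error measure is an assumption since the paper's exact error definition is not given here.

      import numpy as np

      fs, n = 200.0, 2048                      # sampling rate [Hz] and record length (illustrative)
      t = np.arange(n) / fs
      accel = (np.sin(2 * np.pi * 2.5 * t)     # structural response near a low natural frequency
               + 0.2 * np.sin(2 * np.pi * 7.0 * t)
               + 0.05 * np.random.default_rng(0).standard_normal(n))

      spectrum = np.fft.rfft(accel)
      keep = max(1, len(spectrum) // 32)       # transmit only 1/32 of the spectral coefficients
      compressed = spectrum[:keep]             # sensor side: retained low-frequency band

      restored_spectrum = np.zeros_like(spectrum)
      restored_spectrum[:keep] = compressed    # receiver side: zero-fill and invert
      restored = np.fft.irfft(restored_spectrum, n=n)

      avg_error = np.mean(np.abs(accel - restored)) / np.max(np.abs(accel))
      print(f"normalized average error: {avg_error:.3f}")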

  5. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    Science.gov (United States)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearing are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearing. Firstly, CS is adopted for reducing the vibration data amount to improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze the experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.

  6. Compressive sensing based wireless sensor for structural health monitoring

    Science.gov (United States)

    Bao, Yuequan; Zou, Zilong; Li, Hui

    2014-03-01

    Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to CS theory, the raw signal can be effectively reconstructed from the received incomplete transformed signal provided that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as to show the efficacy of CS-based data loss recovery for real wireless SHM systems.
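
    A toy version of this recover-from-lost-packets idea is sketched below: the sensor transmits random projections of a signal that is sparse in the DCT basis, some projections are lost, and the receiver reconstructs from whatever arrives using orthogonal matching pursuit (here via scikit-learn). The signal model, loss rate, and sparsity level are illustrative and do not reflect the Imote2 implementation or the random demodulator construction.

      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n, m, k, loss = 256, 128, 6, 0.3                # signal length, projections sent, sparsity, loss ratio

      psi = idct(np.eye(n), axis=0, norm="ortho")     # x = psi @ c with sparse DCT coefficients c
      c_true = np.zeros(n)
      c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      x = psi @ c_true

      phi = rng.standard_normal((m, n)) / np.sqrt(m)  # sensor side: transmit z = phi @ x
      z = phi @ x

      received = rng.random(m) > loss                 # packet loss drops a random subset of projections
      A = phi[received] @ psi                         # effective sensing matrix seen by the receiver
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, z[received])
      x_rec = psi @ omp.coef_
      print("relative error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))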

  7. Statistical mechanics approach to 1-bit compressed sensing

    International Nuclear Information System (INIS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2013-01-01

    Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. We analyze an ℓ1-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica-symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good agreement with experimental results of an approximate recovery algorithm developed earlier. This suggests that the ℓ1-based recovery problem typically has many local optima of similar recovery accuracy, which can be reached by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large, the new algorithm offers better performance than the above-mentioned scheme and does so at a lower computational cost. (paper)

  8. Dynamical Functional Theory for Compressed Sensing

    DEFF Research Database (Denmark)

    Cakmak, Burak; Opper, Manfred; Winther, Ole

    2017-01-01

    the Thouless Anderson-Palmer (TAP) equations corresponding to the ensemble. Using a dynamical functional approach we are able to derive an effective stochastic process for the marginal statistics of a single component of the dynamics. This allows us to design memory terms in the algorithm in such a way...... that the resulting fields become Gaussian random variables allowing for an explicit analysis. The asymptotic statistics of these fields are consistent with the replica ansatz of the compressed sensing problem....

  9. Compressive Sensing: Analysis of Signals in Radio Astronomy

    Directory of Open Access Journals (Sweden)

    Gaigals G.

    2013-12-01

    Full Text Available The compressive sensing (CS theory says that for some kind of signals there is no need to keep or transfer all the data acquired accordingly to the Nyquist criterion. In this work we investigate if the CS approach is applicable for recording and analysis of radio astronomy (RA signals. Since CS methods are applicable for the signals with sparse (and compressible representations, the compressibility of RA signals is verified. As a result, we identify which RA signals can be processed using CS, find the parameters which can improve or degrade CS application to RA results, describe the optimum way how to perform signal filtering in CS applications. Also, a range of virtual LabVIEW instruments are created for the signal analysis with the CS theory.

  10. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significant negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of the average signal intensity to the average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have important applications in remote sensing and security areas.

  11. Sampling theory, a renaissance compressive sensing and other developments

    CERN Document Server

    2015-01-01

    Reconstructing or approximating objects from seemingly incomplete information is a frequent challenge in mathematics, science, and engineering. A multitude of tools designed to recover hidden information are based on Shannon’s classical sampling theorem, a central pillar of Sampling Theory. The growing need to efficiently obtain precise and tailored digital representations of complex objects and phenomena requires the maturation of available tools in Sampling Theory as well as the development of complementary, novel mathematical theories. Today, research themes such as Compressed Sensing and Frame Theory re-energize the broad area of Sampling Theory. This volume illustrates the renaissance that the area of Sampling Theory is currently experiencing. It touches upon trendsetting areas such as Compressed Sensing, Finite Frames, Parametric Partial Differential Equations, Quantization, Finite Rate of Innovation, System Theory, as well as sampling in Geometry and Algebraic Topology.

  12. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.

    2014-04-01

    In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique involves an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performances of all the proposed techniques are evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions. © 2013 Elsevier B.V.

  13. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    Science.gov (United States)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  14. Compressive sensing in a photonic link with optical integration

    DEFF Research Database (Denmark)

    Chen, Ying; Yu, Xianbin; Chi, Hao

    2014-01-01

    In this Letter, we present a novel structure to realize photonics-assisted compressive sensing (CS) with optical integration. In the system, a spectrally sparse signal modulates a multiwavelength continuous-wave light and then is mixed with a random sequence in optical domain. The optical signal......, which is equivalent to the function of integration required in CS. A proof-of-concept experiment with four wavelengths, corresponding to a compression factor of 4, is demonstrated. More simulation results are also given to show the potential of the technique....

  15. Bayesian signal reconstruction for 1-bit compressed sensing

    International Nuclear Information System (INIS)

    Xu, Yingying; Kabashima, Yoshiyuki; Zdeborová, Lenka

    2014-01-01

    The 1-bit compressed sensing framework enables the recovery of a sparse vector x from the sign information of each entry of its linear transformation. Discarding the amplitude information can significantly reduce the amount of data, which is highly beneficial in practical applications. In this paper, we present a Bayesian approach to signal reconstruction for 1-bit compressed sensing and analyze its typical performance using statistical mechanics. As a basic setup, we consider the case that the measuring matrix Φ has i.i.d. entries and the measurements y are noiseless. Utilizing the replica method, we show that the Bayesian approach enables better reconstruction than the ℓ1-norm minimization approach, asymptotically saturating the performance obtained when the non-zero entry positions of the signal are known, for signals whose non-zero entries follow zero mean Gaussian distributions. We also test a message passing algorithm for signal reconstruction on the basis of belief propagation. The results of numerical experiments are consistent with those of the theoretical analysis. (paper)

  16. A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data

    Directory of Open Access Journals (Sweden)

    Ya Ju Fan

    2016-08-01

    Full Text Available The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. We provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Our results show that despite longer times for reconstruction, compressed sensing techniques can provide near-perfect reconstruction over a range of data with varying sparsity.

  17. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve the scanning speed tremendously by going beyond the Shannon sampling theorem, but it still requires considerable time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments are carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.

  18. Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images

    Directory of Open Access Journals (Sweden)

    Vibha Tiwari

    2015-12-01

    Full Text Available Compressive sensing theory enables faithful reconstruction of signals that are sparse in a domain Ψ at a sampling rate lower than the Nyquist criterion, using a sampling or sensing matrix Φ which satisfies the restricted isometry property. The roles played by the sensing matrix Φ and the sparsity matrix Ψ are vital for faithful reconstruction. If the sensing matrix is dense, it takes a large storage space and leads to high computational cost. In this paper, an effort is made to design a sparse sensing matrix with the least incurred computational cost while maintaining the quality of the reconstructed image. The design approach followed is based on a sparse block circulant matrix (SBCM) with a few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of the reconstructed medical images using the various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix ΦΨ is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.
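
    A generic way to build a sparse block-circulant sensing matrix and check how close the Gram matrix of ΦΨ is to the identity is sketched below. The block size, number of nonzeros per seed, and the DCT sparsity basis are illustrative assumptions; the construction is a plain block-circulant variant, not the authors' exact modified SBCM.

      import numpy as np
      from scipy.linalg import circulant
      from scipy.fft import dct

      rng = np.random.default_rng(0)
      n, block, m = 128, 32, 64                    # signal length, circulant block size, measurement rows

      def sparse_circulant_block(size, nonzeros, rng):
          # One circulant block generated from a sparse random +/-1 seed vector.
          seed = np.zeros(size)
          seed[rng.choice(size, nonzeros, replace=False)] = rng.choice([-1.0, 1.0], nonzeros)
          return circulant(seed)

      phi = np.vstack([
          np.hstack([sparse_circulant_block(block, 3, rng) for _ in range(n // block)])
          for _ in range(m // block)
      ]) / np.sqrt(m)                              # sparse block-circulant sensing matrix (m x n)

      psi = dct(np.eye(n), axis=0, norm="ortho")   # sparsity basis (DCT as a stand-in)
      gram = (phi @ psi).T @ (phi @ psi)
      off_diag = np.max(np.abs(gram - np.diag(np.diag(gram))))
      print("largest off-diagonal entry of the Gram matrix:", round(float(off_diag), 3))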

  19. Compressive Sensing for Spread Spectrum Receivers

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Jensen, Tobias Lindstrøm; Larsen, Torben

    2013-01-01

    With the advent of ubiquitous computing there are two design parameters of wireless communication devices that become very important: power efficiency and production cost. Compressive sensing enables the receiver in such devices to sample below the Shannon-Nyquist sampling rate, which may lead...... the bit error rate performance is degraded by the subsampling in the CS-enabled receivers, this may be remedied by including quantization in the receiver model.We also study the computational complexity of the proposed receiver design under different sparsity and measurement ratios. Our work shows...

  20. Compressed Sensing with Linear Correlation Between Signal and Measurement Noise

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Larsen, Torben

    2014-01-01

    Existing convex relaxation-based approaches to reconstruction in compressed sensing assume that noise in the measurements is independent of the signal of interest. We consider the case of noise being linearly correlated with the signal and introduce a simple technique for improving compressed sensing reconstruction from such measurements. The technique is based on a linear model of the correlation of additive noise with the signal. The modification of the reconstruction algorithm based on this model is very simple and has negligible additional computational cost compared to standard reconstruction algorithms, but is not known in existing literature. The proposed technique reduces reconstruction error considerably in the case of linearly correlated measurements and noise. Numerical experiments confirm the efficacy of the technique. The technique is demonstrated with application to low...

  1. Making sense of employer collectivism

    DEFF Research Database (Denmark)

    Ibsen, Christian Lyhne

    2016-01-01

    This conceptual article argues that preferences of employers for collective action cannot be reduced to rational actors making decisions based on market structures or institutional logics. Both markets and institutions are inherently ambiguous and employers therefore have to settle for plausible...... – rather than accurate – rational strategies among many alternatives through so-called sensemaking. Sensemaking refers to the process by which employers continuously make sense of their competitive environment by building causal stories of competitive advantages. The article therefore tries to provide......, unlike countries in similar situations, for example Finland and Sweden, Danish employers retained a coordinated industry-level bargaining system, which makes it an interesting paradox to study from the vantage point of sensemaking....

  2. MATLAB simulation software used for the PhD thesis "Acquisition of Multi-Band Signals via Compressed Sensing

    DEFF Research Database (Denmark)

    2014-01-01

    MATLAB simulation software used for the PhD thesis "Acquisition of Multi-Band Signals via Compressed Sensing".

  3. Harmonic analysis in integrated energy system based on compressed sensing

    International Nuclear Information System (INIS)

    Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia

    2016-01-01

    Highlights: • We propose a harmonic/inter-harmonic analysis scheme based on compressed sensing theory. • The sparseness of harmonic signals in electrical power systems is proved. • The ratio formula for the sparsity of the fundamental and harmonic components is presented. • A Spectral Projected Gradient with Fundamental Filter (SPG-FF) reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of Integrated Energy Systems enables various distributed energy sources to access the system through different power electronic devices, which has made the harmonic environment more complex. Harmonic detection and analysis methods of low complexity and high precision are needed to improve power quality. To overcome the shortcomings of large data storage requirements and high compression complexity in sampling under the Nyquist framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the Discrete Fourier Transform (DFT) basis is numerically calculated first, followed by a proof that the necessary conditions for compressed sensing are satisfied. A binary sparse measurement matrix is then leveraged to reduce the storage space of the sampling unit. In the recovery process, a novel reconstruction algorithm called the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good
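
    The sparsity claim above (a power signal dominated by a fundamental plus a few harmonics has very few significant DFT coefficients) can be checked with a short numeric experiment. The 50 Hz fundamental, harmonic orders, sampling rate, and 1% significance threshold below are illustrative choices, not values from the paper.

      import numpy as np

      fs, cycles = 3200.0, 10                          # sampling rate [Hz], number of fundamental cycles
      t = np.arange(int(cycles * fs / 50.0)) / fs
      signal = (np.sin(2 * np.pi * 50 * t)             # fundamental
                + 0.15 * np.sin(2 * np.pi * 150 * t)   # 3rd harmonic
                + 0.08 * np.sin(2 * np.pi * 250 * t))  # 5th harmonic

      spectrum = np.abs(np.fft.rfft(signal))
      significant = np.sum(spectrum > 0.01 * spectrum.max())
      print(f"{significant} significant DFT coefficients out of {spectrum.size}")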

  4. A method of vehicle license plate recognition based on PCANet and compressive sensing

    Science.gov (United States)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction of traditional methods for vehicle license plates is not robust to diverse variations. Meanwhile, the high dimension of the features extracted with Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method performs better than a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with the scheme without compressive sensing, the proposed method has a lower feature dimension and therefore higher efficiency.
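
    The reduce-then-classify step can be sketched with scikit-learn as below, using a sparse random projection in place of the paper's specific RIP-compliant matrix and random vectors standing in for PCANet features; the feature size, number of classes, and projected dimension are illustrative.

      import numpy as np
      from sklearn.random_projection import SparseRandomProjection
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      n_samples, n_features, n_classes = 600, 2048, 10
      X = rng.standard_normal((n_samples, n_features))      # stand-in for PCANet features
      y = rng.integers(0, n_classes, n_samples)
      X[np.arange(n_samples), y] += 4.0                      # inject simple class-dependent structure

      proj = SparseRandomProjection(n_components=128, random_state=0)
      X_low = proj.fit_transform(X)                          # sparse random projection reduces the dimension
      clf = LinearSVC().fit(X_low, y)
      print("training accuracy:", clf.score(X_low, y))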

  5. An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

    Directory of Open Access Journals (Sweden)

    Hamza Djelouat

    2017-01-01

    Full Text Available The last decade has witnessed tremendous efforts to shape Internet of Things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and the robustness of the recovery process.

  6. A Computational model for compressed sensing RNAi cellular screening

    Directory of Open Access Journals (Sweden)

    Tan Hua

    2012-12-01

    Full Text Available Abstract. Background: RNA interference (RNAi) has become an increasingly important and effective genetic tool to study the function of target genes by suppressing specific genes of interest. This system approach helps identify signaling pathways and cellular phase types by tracking intensity and/or morphological changes of cells. The traditional RNAi screening scheme, in which one siRNA is designed to knock down one specific mRNA target, needs a large library of siRNAs and turns out to be time-consuming and expensive. Results: In this paper, we propose a conceptual model, called compressed sensing RNAi (csRNAi), which employs a unique combination of a group of small interfering RNAs (siRNAs) to knock down a much larger set of genes. This strategy is based on the fact that one gene can be partially bound by several siRNAs and, conversely, one siRNA can bind to a few genes with distinct binding affinities. This model constructs a multi-to-multi correspondence between siRNAs and their targets, with far fewer siRNAs than mRNA targets compared with the conventional scheme. Mathematically this problem involves an underdetermined system of equations (linear or nonlinear), which is ill-posed in general. However, the recently developed compressed sensing (CS) theory can solve this problem. We present a mathematical model to describe the csRNAi system based on both CS theory and biological concerns. To build this model, we first search nucleotide motifs in a target gene set. Then we propose a machine learning based method to find the effective siRNAs with novel features, such as image features and speech features, to describe an siRNA sequence. Numerical simulations show that we can reduce the siRNA library to one third of that in the conventional scheme. In addition, the features used to describe siRNAs outperform the existing ones substantially. Conclusions: This csRNAi system is very promising in saving both time and cost for large-scale RNAi

  7. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    Science.gov (United States)

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attraction because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has lower complexity than the OMP algorithm by 75% but also achieves better positioning performance than the OMP algorithm especially in noisy environments. This study also designed and implemented the algorithm by using Vertex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
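
    For reference, the orthogonal matching pursuit (OMP) baseline discussed above can be written in a few lines of NumPy. This is a textbook OMP sketch with a fixed sparsity level, not the two-stage processor proposed in the paper or its FPGA implementation.

      import numpy as np

      def omp(A, y, k):
          # Textbook orthogonal matching pursuit: build a k-sparse estimate of x from y = A x.
          residual = y.astype(float).copy()
          support = []
          coef = np.zeros(0)
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))     # column most correlated with the residual
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256)) / 8.0
      x_true = np.zeros(256)
      x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
      x_hat = omp(A, A @ x_true, k=5)
      print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))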

  8. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    Directory of Open Access Journals (Sweden)

    Kuei-Chi Tsao

    2018-04-01

    Full Text Available Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attraction because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has lower complexity than the OMP algorithm by 75% but also achieves better positioning performance than the OMP algorithm especially in noisy environments. This study also designed and implemented the algorithm by using Vertex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.

  9. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  10. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    Science.gov (United States)

    2013-04-01

  11. Compressed Sensing-Based Direct Conversion Receiver

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas; Larsen, Torben

    2012-01-01

    Due to the continuously increasing computational power of modern data receivers it is possible to move more and more processing from the analog to the digital domain. This paper presents a compressed sensing approach to relaxing the analog filtering requirements prior to the ADCs in a direct......-converted radio signals. As shown in an experiment presented in the article, when the proposed method is used, it is possible to relax the requirements for the quadrature down-converter filters. A random sampling device and an additional digital signal processing module is the price to pay for these relaxed...

  12. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction

    Directory of Open Access Journals (Sweden)

    Michael M. Abdel-Sayed

    2016-11-01

    Full Text Available Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence, excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly low computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP superior performance is illustrated with both noiseless and noisy samples.

  13. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    Science.gov (United States)

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence, excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly low computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP superior performance is illustrated with both noiseless and noisy samples.

  14. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  15. A Novel 1D Hybrid Chaotic Map-Based Image Compression and Encryption Using Compressed Sensing and Fibonacci-Lucas Transform

    Directory of Open Access Journals (Sweden)

    Tongfeng Zhang

    2016-01-01

    Full Text Available A one-dimensional (1D) hybrid chaotic system is constructed by three different 1D chaotic maps in a parallel-then-cascade fashion. The proposed chaotic map has a larger key space and exhibits better uniform distribution properties in some parametric ranges compared with existing 1D chaotic maps. Meanwhile, with the combination of compressive sensing (CS) and the Fibonacci-Lucas transform (FLT), a novel image compression and encryption scheme is proposed that exploits the advantages of the 1D hybrid chaotic map. The whole encryption procedure includes compression by CS, scrambling with FLT, and diffusion after linear scaling. The Bernoulli measurement matrix in CS is generated by the proposed 1D hybrid chaotic map due to its excellent uniform distribution. To enhance security and complexity, the transform kernel of FLT varies in each permutation round according to the generated chaotic sequences. Further, the key streams used in the diffusion process depend on the chaotic map as well as the plain image, which resists chosen-plaintext attacks (CPA). Experimental results and security analyses demonstrate the validity of our scheme in terms of high security and robustness against noise and cropping attacks.
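
    The chaotic generation of a Bernoulli measurement matrix mentioned above can be illustrated with the standard logistic map, used here as a stand-in for the paper's 1D hybrid map: iterate the map, discard a transient, and threshold the orbit into ±1 entries. The map parameter, transient length, and matrix size are illustrative.

      import numpy as np

      def chaotic_bernoulli_matrix(m, n, x0=0.37, mu=3.99, transient=1000):
          # Generate an m x n +/-1 measurement matrix by thresholding a logistic-map orbit.
          x = x0
          for _ in range(transient):                 # discard the transient part of the orbit
              x = mu * x * (1.0 - x)
          entries = np.empty(m * n)
          for i in range(m * n):
              x = mu * x * (1.0 - x)
              entries[i] = 1.0 if x > 0.5 else -1.0
          return entries.reshape(m, n) / np.sqrt(m)  # scale rows for CS-style measurements

      phi = chaotic_bernoulli_matrix(64, 256)
      print("mean entry sign:", float(np.mean(np.sign(phi))))   # roughly balanced +/- entries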

  16. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Liantao Wu

    2015-08-01

    Full Text Available Reliable data transmission over lossy communication links is expensive due to the overheads of error protection. For signals that have inherently sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss on the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the effect of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission over lossy links.
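
    The role of interleaving against burst loss can be shown in a few lines of Python: samples are permuted before packetization, so a lost burst of consecutive packets maps back to scattered sample positions, which matches the random-sampling model assumed by the CS decoder. The packet length and loss pattern below are arbitrary assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n, packet_len = 1024, 32
      perm = rng.permutation(n)        # interleaver: shuffle the sample order

      # Position p of the transmitted stream carries original sample index perm[p].
      # Simulate a burst loss of 4 consecutive packets on the lossy link.
      lost_packets = range(10, 14)
      lost_positions = np.concatenate([np.arange(p * packet_len, (p + 1) * packet_len)
                                       for p in lost_packets])
      lost_original_idx = np.sort(perm[lost_positions])

      # Without interleaving the same burst would erase one contiguous block of
      # 128 samples; with interleaving the erasures are spread out.
      surviving = np.setdiff1d(np.arange(n), lost_original_idx)
      print("largest gap between surviving samples:", np.max(np.diff(surviving)))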

  17. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. Applying compressed sensing theory to traditional imaging procedures not only reduces the storage space but also greatly reduces the demand for detector resolution. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information better. To verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, and typical reconstruction algorithms are compared and analyzed under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is solved by the alternating direction method. Experimental results show that, compared with the traditional classical TV-based algorithm, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
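
    A minimal Python sketch of TV-regularized compressive-sensing reconstruction is given below. It uses plain gradient descent on a smoothed total-variation objective rather than the augmented Lagrangian / alternating direction solver described above, and the step size, smoothing constant, and boundary handling are ad hoc assumptions.

      import numpy as np

      def tv_cs_reconstruct(y, A, shape, lam=0.1, step=1e-3, iters=500, eps=1e-6):
          """Gradient descent on ||Ax - y||^2 + lam * smoothed TV(x).
          A: (m, H*W) sensing matrix acting on the flattened image."""
          x = np.zeros(np.prod(shape))
          for _ in range(iters):
              img = x.reshape(shape)
              gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
              gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
              mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
              # Approximate (sub)gradient of smoothed TV: minus the divergence of
              # the normalized gradient field (boundary handling is simplified).
              div = (np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
                     + np.diff(gy / mag, axis=0, prepend=(gy / mag)[:, :1]))
              grad = 2 * A.T @ (A @ x - y) - lam * div.ravel()
              x -= step * grad
          return x.reshape(shape)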

  18. A Compressed Sensing Framework for Magnetic Resonance Fingerprinting

    OpenAIRE

    Davies, Mike; Puy, Gilles; Vandergheynst, Pierre; Wiaux, Yves

    2013-01-01

    Inspired by the recently proposed Magnetic Resonance Fingerprinting (MRF) technique, we develop a principled compressed sensing framework for quantitative MRI. The three key components are: a random pulse excitation sequence following the MRF technique; a random EPI subsampling strategy and an iterative projection algorithm that imposes consistency with the Bloch equations. We show that theoretically, as long as the excitation sequence possesses an appropriate form of persistent excitation, w...

  19. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    Science.gov (United States)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively is considered. The review concludes by signposting other imaging acceleration techniques under present development before considering the potential impact of, and obstacles to, bringing compressed sensing into routine use in clinical MRI.
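
    One of the practical steps mentioned above, designing a k-space undersampling scheme, can be sketched in a few lines of Python. The snippet builds a simple variable-density random mask with a fully sampled centre and randomly chosen outer phase-encode lines; the sizes, acceleration factor, and centre fraction are illustrative assumptions only.

      import numpy as np

      def variable_density_mask(ny, nx, accel=4, centre_frac=0.08, seed=0):
          """Random phase-encode undersampling mask with a fully sampled centre."""
          rng = np.random.default_rng(seed)
          mask = np.zeros((ny, nx), dtype=bool)
          centre = int(round(ny * centre_frac))
          lo, hi = ny // 2 - centre // 2, ny // 2 + (centre + 1) // 2
          mask[lo:hi, :] = True                     # fully sampled centre lines
          target_lines = ny // accel                # total lines for the target acceleration
          remaining = max(target_lines - (hi - lo), 0)
          outer = np.setdiff1d(np.arange(ny), np.arange(lo, hi))
          picked = rng.choice(outer, size=min(remaining, outer.size), replace=False)
          mask[picked, :] = True                    # random outer phase-encode lines
          return mask

      mask = variable_density_mask(256, 256, accel=4)
      print("effective acceleration:", mask.shape[0] / mask[:, 0].sum())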

  20. Compressive sensing for high resolution profiles with enhanced Doppler performance

    NARCIS (Netherlands)

    Anitori, L.; Hoogeboom, P.; Chevalier, F. Le; Otten, M.P.G.

    2012-01-01

    In this paper we demonstrate how Compressive Sensing (CS) can be used in pulse-Doppler radars to improve the Doppler performance while preserving range resolution. We investigate here two types of stepped frequency waveforms, the coherent frequency bursts and successive frequency ramps, which can be

  1. A knitted glove sensing system with compression strain for finger movements

    Science.gov (United States)

    Ryu, Hochung; Park, Sangki; Park, Jong-Jin; Bae, Jihyun

    2018-05-01

    The development of fabric-structure strain sensors has received considerable attention due to their broad application in healthcare monitoring and human–machine interfaces. In a knitted textile structure, it is critical to understand how surface structural deformation caused by different body motions induces the electrical signal characteristics. Here, we report the electromechanical properties of a knitted glove sensing system, focusing on the compressive strain behavior. Compared with the electrical response to tensile strain, the compressive strain shows much higher sensitivity, stability, and linearity across different finger motions. Additionally, the sensor exhibits constant electrical properties after repeated cyclic tests and washing processes. The proposed knitted glove sensing system can readily be extended to scalable and cost-effective production owing to the use of a commercialized manufacturing system.

  2. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while preserving the details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. By combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only noticeably improves the quality of the reconstructed image but also retains more details of the original image.

  3. Pilotless recovery of clipped OFDM signals by compressive sensing over reliable data carriers

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-06-01

    In this paper we propose a novel method of clipping mitigation in OFDM using compressive sensing that completely avoids using reserved tones or channel-estimation pilots. The method builds on selecting the most reliable perturbations from the constellation lattice upon decoding at the receiver (in the frequency domain), and performs compressive sensing over these observations in order to completely recover the sparse nonlinear distortion in the time domain. As such, the method provides a practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, and the ability to recover the distorted signal in one shot. © 2012 IEEE.

  5. Direct current force sensing device based on compressive spring, permanent magnet, and coil-wound magnetostrictive/piezoelectric laminate.

    Science.gov (United States)

    Leung, Chung Ming; Or, Siu Wing; Ho, S L

    2013-12-01

    A force sensing device capable of sensing dc (or static) compressive forces is developed based on a NAS106N stainless steel compressive spring, a sintered NdFeB permanent magnet, and a coil-wound Tb(0.3)Dy(0.7)Fe(1.92)/Pb(Zr, Ti)O3 magnetostrictive/piezoelectric laminate. The dc compressive force sensing in the device is evaluated theoretically and experimentally and is found to originate from a unique force-induced, position-dependent, current-driven dc magnetoelectric effect. The sensitivity of the device can be increased by increasing the spring constant of the compressive spring, the size of the permanent magnet, and/or the driving current for the coil-wound laminate. Devices of low-force (20 N) and high-force (200 N) types, showing high output voltages of 262 and 128 mV peak, respectively, are demonstrated at a low driving current of 100 mA peak by using different combinations of compressive spring and permanent magnet.

  6. Accelerated Air-coupled Ultrasound Imaging of Wood Using Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yiming Fang

    2015-12-01

    Full Text Available Air-coupled ultrasound has shown excellent sensitivity and specificity for the nondestructive imaging of wood-based materials. However, it is time-consuming due to the high scanning density required by the Nyquist law. This study investigated the feasibility of applying compressed sensing techniques to air-coupled ultrasound imaging, aiming to reduce the number of scanning lines and thus accelerate the imaging. Firstly, an undersampled scanning strategy specified by a random binary matrix was proposed to address the limitation of the compressed sensing framework. The undersampled scanning can be easily implemented, requiring only minor modification of the existing imaging system. Then, the discrete cosine transform was selected experimentally as the representation basis. Finally, the orthogonal matching pursuit algorithm was utilized to reconstruct the wood images. Experiments on three real air-coupled ultrasound (ACU) images indicated the potential of the present method to accelerate air-coupled ultrasound imaging of wood: the same quality of ACU images can be obtained with the scanning time cut in half.

  7. Optical image encryption using chaos-based compressed sensing and phase-shifting interference in fractional wavelet domain

    Science.gov (United States)

    Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua

    2018-02-01

    In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve the encryption efficiency, the data volume of the original image is reduced by compressed sensing. Then the compacted image is encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences, generated by a three-dimensional chaos map, are used as the measurement matrix of compressed sensing and the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies the storage and transmission of the keys, but also enhances the nonlinearity of the cryptosystem to resist some common attacks. Further, the holograms, which are obtained by two-step-only quadrature phase-shifting interference, make the cryptosystem immune to noise and occlusion attacks. Compression and encryption are thus achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.

  8. On the Feedback Reduction of Relay Multiuser Networks using Compressive Sensing

    KAUST Repository

    Elkhalil, Khalil

    2016-01-29

    This paper presents a comprehensive performance analysis of full-duplex multiuser relay networks employing opportunistic scheduling with noisy and compressive feedback. Specifically, two feedback techniques based on compressive sensing (CS) theory are introduced and their effect on the system performance is analyzed. The problem of joint user identity and signal-to-noise ratio (SNR) estimation at the base-station is cast as a block sparse signal recovery problem in CS. Using existing CS block recovery algorithms, the identities of the strong users are obtained and their corresponding SNRs are estimated using the best linear unbiased estimator (BLUE). To minimize the effect of feedback noise on the estimated SNRs, a back-off strategy that optimally backs off on the noisy estimated SNRs is introduced, and the error covariance matrix of the noise after CS recovery is derived. Finally, closed-form expressions for the end-to-end SNRs of the system are derived. Numerical results show that the proposed techniques drastically reduce the feedback air-time and achieve a rate close to that obtained by scheduling techniques that require dedicated error-free feedback from all network users. Key findings of this paper suggest that the choice of half-duplex or full-duplex SNR feedback depends on the channel coherence interval, and at low coherence intervals, full-duplex feedback is superior to interference-free half-duplex feedback.

  9. Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera

    Directory of Open Access Journals (Sweden)

    Gregory A. Howland

    2013-02-01

    Full Text Available We implement a double-pixel compressive-sensing camera to efficiently characterize, at high resolution, the spatially entangled fields that are produced by spontaneous parametric down-conversion. This technique leverages sparsity in spatial correlations between entangled photons to improve acquisition times over raster scanning by a scaling factor of up to n^2/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the entangled photons’ classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates that compressive sensing can be especially effective for higher-order measurements on correlated systems.

  10. Compressed Sensing in Vibration Monitoring Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Osvaldo Casares-Quirós

    2014-12-01

    After an experimental test using Waspmotes, the fixed-variable variant reduces power consumption by 56.58% while introducing a maximum error of ±0.00195 g and compressing the number of samples by 52.44%. This algorithm increased the network energy autonomy from 17 hours to 26.5 hours. Through mathematical analysis, the variable-fixed technique reduces the power consumption of sensing-node transmissions by 74.81% and decreases the number of samples by 90%.

  11. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    Science.gov (United States)

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve the BCI performance, with an accuracy improvement of approximately 3.5% over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.

    Science.gov (United States)

    Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2013-04-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Compressed sensing for high-resolution nonlipid suppressed 1H FID MRSI of the human brain at 9.4T.

    Science.gov (United States)

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data, rather a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.

  14. Smoothly Clipped Absolute Deviation (SCAD) regularization for compressed sensing MRI Using an augmented Lagrangian scheme

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Rad, Hamidreza Saligheh; Rahmim, Arman; Ay, Mohammad Reza; Zaidi, Habib

    2013-01-01

    Purpose: Compressed sensing (CS) provides a promising framework for MR image reconstruction from highly undersampled data, thus reducing data acquisition time. In this context, sparsity-promoting regularization techniques exploit the prior knowledge that MR images are sparse or compressible in a

  15. Network Traffic Prediction Based on Deep Belief Network and Spatiotemporal Compressive Sensing in Wireless Mesh Backbone Networks

    Directory of Open Access Journals (Sweden)

    Laisen Nie

    2018-01-01

    Full Text Available Wireless mesh networks are prevalent for providing decentralized access for users and other intelligent devices. Meanwhile, they can be employed as the infrastructure for the last few miles of connectivity in various network applications, for example, the Internet of Things (IoT) and mobile networks. The wireless mesh backbone network has received extensive attention because of its large capacity and low cost. Network traffic prediction is important for network planning and routing configurations that are implemented to improve the quality of service for users. This paper proposes a network traffic prediction method based on a deep learning architecture and the Spatiotemporal Compressive Sensing method. The proposed method first adopts the discrete wavelet transform to extract the low-pass component of network traffic, which describes its long-range dependence. Then, a prediction model is built by learning a deep architecture based on the deep belief network from the extracted low-pass component. Meanwhile, for the remaining high-pass component, which captures the bursty and irregular fluctuations of network traffic, the Spatiotemporal Compressive Sensing method is adopted for prediction. Based on the predictors of the two components, a predictor of the overall network traffic is obtained. Simulations show that the proposed prediction method outperforms three existing methods.
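
    The first step described above, separating the traffic into a low-pass trend and a high-pass residual with a discrete wavelet transform, can be sketched as follows. The snippet assumes the PyWavelets package and a Daubechies wavelet, neither of which is named in the abstract, and it deliberately omits the deep belief network and spatiotemporal CS predictors.

      import numpy as np
      import pywt  # PyWavelets; an assumed choice of DWT implementation

      def split_traffic(traffic, wavelet="db4", level=3):
          """Return the low-pass (long-range dependent) component and the
          high-pass (irregular fluctuation) residual of a traffic series."""
          coeffs = pywt.wavedec(traffic, wavelet, level=level)
          approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
          low_pass = pywt.waverec(approx_only, wavelet)[: len(traffic)]
          return low_pass, traffic - low_pass

      # Toy usage on a synthetic traffic trace with a daily cycle and noise.
      t = np.arange(2048)
      traffic = (10 + 0.01 * t + np.sin(2 * np.pi * t / 288)
                 + np.random.default_rng(2).normal(0, 0.3, t.size))
      low, high = split_traffic(traffic)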

  16. Compressive Sensing of Roller Bearing Faults via Harmonic Detection from Under-Sampled Vibration Signals.

    Science.gov (United States)

    Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei

    2015-10-09

    The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals. Challenges are often encountered as a result of the cumbersome data monitoring; thus, a novel method focused on compressed vibration signals for detecting roller bearing faults is developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued as attempts are conducted to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which are typically detected directly from the compressed data well before reconstruction is complete. The process of sampling and detection may then be performed simultaneously without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments.
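
    The idea of detecting characteristic harmonics directly from the compressed measurements, before any full reconstruction, can be sketched by correlating the compressed vector with compressed Fourier atoms at candidate frequencies. The sampling rate, fault frequency, and matrix sizes below are illustrative assumptions, and the simple matched-filter scan stands in for the compressive matching pursuit strategy of the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      fs, n, m = 8000, 2048, 256              # sample rate, record length, measurements
      t = np.arange(n) / fs
      fault_freq = 157.0                      # hypothetical characteristic frequency
      signal = (np.sin(2 * np.pi * fault_freq * t)
                + 0.5 * np.sin(2 * np.pi * 2 * fault_freq * t)
                + 0.3 * rng.standard_normal(n))

      Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
      y = Phi @ signal                                 # compressed vibration record

      # Scan candidate frequencies and correlate y with compressed Fourier atoms.
      freqs = np.arange(10.0, 500.0, 1.0)
      scores = [np.hypot(y @ (Phi @ np.cos(2 * np.pi * f * t)),
                         y @ (Phi @ np.sin(2 * np.pi * f * t))) for f in freqs]
      print("detected harmonic near", freqs[int(np.argmax(scores))], "Hz")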

  17. Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.

    Science.gov (United States)

    Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C

    2015-01-01

    Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectral imaging (DSI), which is commonly used as the benchmark for comparisons in the reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

  18. A compressive sensing approach to the calculation of the inverse data space

    KAUST Repository

    Khan, Babar Hasan

    2012-01-01

    Seismic processing in the Inverse Data Space (IDS) has its advantages; for example, the task of removing multiples simply becomes muting the zero-offset and zero-time data in the inverse domain. Calculating the inverse data space by sparse inversion techniques has been shown to mitigate some artifacts. We reformulate the problem by taking advantage of developments from the field of Compressive Sensing. The seismic data are compressed at the sensor level by recording projections of the traces. We then process this compressed data directly to estimate the inverse data space. Due to the smaller data set, we also gain in terms of computational complexity.

  19. Compressed-sensing wavenumber-scanning interferometry

    Science.gov (United States)

    Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli

    2018-01-01

    The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode-hop, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using the randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI determines the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.

  20. Acquisition of STEM Images by Adaptive Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash; Stevens, Andrew; Browning, Nigel D.

    2017-07-01

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However
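
    A crude Python sketch of the variance metric mentioned above is given below: given a partial reconstruction and a mask of already-measured pixels, it ranks unmeasured pixels by the local variance of the reconstruction and returns the next batch to acquire. The window size and budget are arbitrary, and the BPFA reconstruction itself is not reproduced here.

      import numpy as np

      def next_pixels_by_variance(partial, measured, budget, win=5):
          """Pick the `budget` unmeasured pixels with the highest local variance
          of a partial reconstruction. `measured` is a boolean mask of pixels
          that have already been acquired."""
          h, w = partial.shape
          pad = win // 2
          padded = np.pad(partial, pad, mode="reflect")
          var = np.empty_like(partial, dtype=float)
          for i in range(h):
              for j in range(w):
                  var[i, j] = padded[i:i + win, j:j + win].var()
          var[measured] = -np.inf        # never re-measure a pixel
          flat = np.argsort(var, axis=None)[::-1][:budget]
          return np.unravel_index(flat, partial.shape)

      # Usage idea: start from 10% random pixels, reconstruct, then request
      # next_pixels_by_variance(reconstruction, mask, budget=int(0.01 * mask.size)).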

  1. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

    Full Text Available The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.

  2. On Compressed Sensing and the Estimation of Continuous Parameters From Noisy Observations

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2012-01-01

    Compressed sensing (CS) has in recent years become a very popular way of sampling sparse signals. This sparsity is measured with respect to some known dictionary consisting of a finite number of atoms. Most models for real world signals, however, are parametrised by continuous parameters corresponding to a dictionary with an infinite number of atoms. Examples of such parameters are the temporal and spatial frequency. In this paper, we analyse how CS affects the estimation performance of any unbiased estimator when we assume such infinite dictionaries. We base our analysis on the Cramer

  3. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    Science.gov (United States)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which the two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitize the encrypted information, compressive sensing can significantly reduce the data volume, and the final encrypted image is a real-valued function owing to phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts the security and has high robustness against noise and occlusion attacks.

  4. Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Xin Meng

    2014-02-01

    Full Text Available Compressed sensing is a novel signal sampling theory for signals that are sparse or compressible. Existing recovery algorithms based on gradient projection either need prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, referred to as Quasi Gradient Projection. The algorithm uses a quasi-gradient direction and two step-size schemes along this direction, and it does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm recovers the signal more accurately than GPSR, which also does not need prior knowledge, while having a lower computational complexity.
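
    For reference, a generic gradient-based sparse recovery loop is sketched below in Python: a gradient step on the data-fidelity term followed by soft thresholding (ISTA). It uses a single fixed step size and is only a baseline illustration; it is not the Quasi Gradient Projection algorithm with its quasi-gradient direction and two step-size schemes.

      import numpy as np

      def soft_threshold(v, tau):
          return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

      def ista_recover(y, A, lam=0.05, iters=300):
          """Minimize ||Ax - y||^2 / 2 + lam * ||x||_1 by iterative
          soft-thresholding with a fixed step size."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of A^T A
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
          return x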

  5. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited-angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown potential for better results, as in the prior image constrained compressed sensing algorithm. While a prior full scan of the same patient is not always available, a large number of well-reconstructed images of different patients can easily be obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm is proposed to improve the image quality by using prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and this space is used. A bi-criterion convex program which combines the feature constraint and a total variation constraint is proposed for the reconstruction procedure, and a flexible method is adopted to obtain a good solution. Numerical simulations on both phantom and real clinical patient images were performed to validate our algorithm. Promising results are shown for limited-angle problems. (paper)

  6. Block Compressed Sensing of Images Using Adaptive Granular Reconstruction

    Directory of Open Access Journals (Sweden)

    Ran Li

    2016-01-01

    Full Text Available In the framework of block Compressed Sensing (CS), the reconstruction algorithm based on the Smoothed Projected Landweber (SPL) iteration can achieve better rate-distortion performance with a low computational complexity, especially when using Principal Component Analysis (PCA) to perform the adaptive hard-thresholding shrinkage. However, neglecting the stationary local structural characteristics of the image while learning the PCA matrix degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules depending on the structural features of the patches. Then, we perform PCA to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in the patches. Because the patches in each granule share stationary local structural characteristics, our method can effectively improve the performance of the hard-thresholding shrinkage. Experimental results indicate that the image reconstructed by the proposed algorithm has better objective quality than several traditional algorithms. The edge and texture details in the reconstructed image are better preserved, which guarantees better visual quality. Besides, our method still has a low computational complexity of reconstruction.

  7. Compressive sensing using optimized sensing matrix for face verification

    Science.gov (United States)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics appears to be one of the solutions capable of addressing the problems that occur with password-based data access; for example, passwords may be forgotten, and it is hard to recall many different passwords. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether the user has the authority to access the data or not. Facial biometrics is chosen for its low-cost implementation and its ability to generate quite accurate results for user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce the dimensionality as well as encrypt the data in the form of the facial test image, where the image is represented by sparse signals. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, namely Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification research. The reconstruction results of the sparse signals are then used to compute the Euclidean norm with respect to the sparse signal of the user previously saved in the system, in order to determine the validity of the facial test image. The system accuracies obtained in this research are 99% for IRLS with a face verification response time of 4.917 seconds and 96.33% for OMP with a response time of 0.4046 seconds using a non-optimized sensing matrix, and 99% for IRLS with a response time of 13.4791 seconds and 98.33% for OMP with a response time of 3.1571 seconds using an optimized sensing matrix.

  8. Study of key technology of ghost imaging via compressive sensing for a phase object based on phase-shifting digital holography

    International Nuclear Information System (INIS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2015-01-01

    In this article, the compressive sensing algorithm is used to improve the imaging resolution and to realize ghost imaging of a phase object, based on a theoretical analysis of lensless Fourier imaging with the ghost imaging algorithm based on phase-shifting digital holography. The ghost imaging via compressive sensing algorithm based on phase-shifting digital holography uses a bucket detector to measure the total light intensity of the interference, and the four-step phase-shifting method is used to obtain the total light intensity of the differential interference light. An experimental platform is built on the basis of software simulation, and the results show that ghost imaging via compressive sensing based on phase-shifting digital holography can obtain a high-resolution phase distribution of the phase object. With the same number of samplings, the phase clarity of the phase distribution obtained by ghost imaging via compressive sensing based on phase-shifting digital holography is higher than that obtained by ghost imaging based on phase-shifting digital holography alone. This study further extends the application range of ghost imaging and obtains the phase distribution of the phase object. (letter)

  9. Impact of Sink Node Placement onto Wireless Sensor Networks Performance Regarding Clustering Routing and Compressive Sensing Theory

    Directory of Open Access Journals (Sweden)

    Shima Pakdaman Tirani

    2016-01-01

    Full Text Available Wireless Sensor Networks (WSNs) consist of several sensor nodes with sensing, computation, and wireless communication capabilities. The energy constraint is one of the most important issues in these networks; thus, the data-gathering process should be carefully designed to conserve energy. In this situation, a load-balancing strategy can enhance resource utilization and, consequently, increase the network lifetime. Furthermore, the sparse nature of data in WSNs has recently motivated the use of compressive sensing as an efficient data-gathering technique, since applying compressive sensing theory significantly decreases the volume of transmitted data. Taking the above challenges into account, the main goal of this paper is to jointly consider the compressive sensing method and load balancing in WSNs. In this regard, using a conventional network model, we analyze the network performance in several different states, which challenge the sink location in terms of the number of transmissions. Numerical results demonstrate the efficiency of load balancing for the network performance.

  10. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    Science.gov (United States)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement scheme and a joint reconstruction algorithm for these image signals are proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate the reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality than the independent reconstruction algorithm at the same image compression rate.

  11. New trends in applied harmonic analysis sparse representations, compressed sensing, and multifractal analysis

    CERN Document Server

    Cabrelli, Carlos; Jaffard, Stephane; Molter, Ursula

    2016-01-01

    This volume is a selection of written notes corresponding to courses taught at the CIMPA School: "New Trends in Applied Harmonic Analysis: Sparse Representations, Compressed Sensing and Multifractal Analysis". New interactions between harmonic analysis and signal and image processing have seen striking development in the last 10 years, and several technological deadlocks have been solved through the resolution of deep theoretical problems in harmonic analysis. New Trends in Applied Harmonic Analysis focuses on two particularly active areas that are representative of such advances: multifractal analysis, and sparse representation and compressed sensing. The contributions are written by leaders in these areas and cover both theoretical aspects and applications. This work should prove useful not only to PhD students and postdocs in mathematics and signal and image processing, but also to researchers working in related topics.

  12. Correspondence normalized ghost imaging on compressive sensing

    International Nuclear Information System (INIS)

    Zhao Sheng-Mei; Zhuang Peng

    2014-01-01

    Ghost imaging (GI) offers great potential with respect to conventional imaging techniques. It is an open problem in GI systems that a long acquisition time is required for reconstructing images with good visibility and signal-to-noise ratios (SNRs). In this paper, we propose a new scheme to achieve good performance with a shorter acquisition time, which we call correspondence normalized ghost imaging based on compressive sensing (CCNGI). In the scheme, we enhance the signal-to-noise performance by normalizing the reference beam intensity to eliminate the noise caused by laser power fluctuations, and we reduce the reconstruction time by using both compressive sensing (CS) and time-correspondence imaging (CI) techniques. It is shown that the quality of the images is improved and the reconstruction time is reduced using the CCNGI scheme. For the two-grayscale "double-slit" image, the mean square errors (MSEs) obtained by the GI and normalized GI (NGI) schemes with 5000 measurements are 0.237 and 0.164, respectively, whereas the MSE of the CCNGI scheme with 2500 measurements is 0.021. For the eight-grayscale "lena" object, the peak signal-to-noise ratios (PSNRs) are 10.506 and 13.098 using the GI and NGI schemes, respectively, while the value rises to 16.198 using the CCNGI scheme. The results also show that a high-fidelity GI reconstruction has been achieved using only 44% of the number of measurements corresponding to the Nyquist limit for the two-grayscale "double-slit" object. The quality of the images reconstructed using CCNGI is almost the same as that from GI via sparsity constraints (GISC), with a shorter reconstruction time. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
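
    The normalization step described above can be illustrated with a basic normalized ghost-imaging estimate in Python: each bucket value is divided by the total intensity of its reference pattern before correlation, which suppresses source-power fluctuations. This sketch covers only the NGI baseline; the compressive sensing and time-correspondence parts of the CCNGI scheme are not reproduced, and the object and pattern sizes are arbitrary.

      import numpy as np

      def normalized_gi(patterns, bucket):
          """Correlate bucket values with reference patterns, normalizing each
          shot by its total reference intensity (patterns: (K, H, W), bucket: (K,))."""
          ref_sum = patterns.reshape(patterns.shape[0], -1).sum(axis=1)
          ratio = bucket / ref_sum
          weights = ratio - ratio.mean()
          return np.tensordot(weights, patterns, axes=1) / patterns.shape[0]

      # Toy usage: 2500 random speckle patterns measuring a double-slit object.
      rng = np.random.default_rng(4)
      obj = np.zeros((32, 32))
      obj[:, 10] = 1.0
      obj[:, 21] = 1.0
      patterns = rng.random((2500, 32, 32))
      bucket = np.einsum("kij,ij->k", patterns, obj)
      image = normalized_gi(patterns, bucket)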

  13. Photonic compressive sensing with a micro-ring-resonator-based microwave photonic filter

    DEFF Research Database (Denmark)

    Chen, Ying; Ding, Yunhong; Zhu, Zhijing

    2015-01-01

    A novel approach to realize photonic compressive sensing (CS) with a multi-tap microwave photonic filter is proposed and demonstrated. The system takes both advantages of CS and photonics to capture wideband sparse signals with sub-Nyquist sampling rate. The low-pass filtering function required...

  14. Data compressive paradigm for multispectral sensing using tunable DWELL mid-infrared detectors.

    Science.gov (United States)

    Jang, Woo-Yong; Hayat, Majeed M; Godoy, Sebastián E; Bender, Steven C; Zarkesh-Ha, Payman; Krishna, Sanjay

    2011-09-26

    While quantum dots-in-a-well (DWELL) infrared photodetectors have the feature that their spectral responses can be shifted continuously by varying the applied bias, the width of the spectral response at any applied bias is not sufficiently narrow for use in multispectral sensing without the aid of spectral filters. To achieve higher spectral resolutions without using physical spectral filters, algorithms have been developed for post-processing the DWELL's bias-dependent photocurrents resulting from probing an object of interest repeatedly over a wide range of applied biases. At the heart of these algorithms is the ability to approximate an arbitrary spectral filter, which we desire the DWELL-algorithm combination to mimic, by forming a weighted superposition of the DWELL's non-orthogonal spectral responses over a range of applied biases. However, these algorithms assume availability of abundant DWELL data over a large number of applied biases (>30), leading to large overall acquisition times in proportion with the number of biases. This paper reports a new multispectral sensing algorithm to substantially compress the number of necessary bias values subject to a prescribed performance level across multiple sensing applications. The algorithm identifies a minimal set of biases to be used in sensing only the relevant spectral information for remote-sensing applications of interest. Experimental results on target spectrometry and classification demonstrate a reduction in the number of required biases by a factor of 7 (e.g., from 30 to 4). The tradeoff between performance and bias compression is thoroughly investigated. © 2011 Optical Society of America

  15. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Science.gov (United States)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  16. The Physics of Compressive Sensing and the Gradient-Based Recovery Algorithms

    OpenAIRE

    Dai, Qi; Sha, Wei

    2009-01-01

    The physics of compressive sensing (CS) and the gradient-based recovery algorithms are presented. First, the different forms for CS are summarized. Second, the physical meanings of coherence and measurement are given. Third, the gradient-based recovery algorithms and their geometric explanations are provided. Finally, we conclude the report and give some suggestions for future work.

  17. Learning-based compressed sensing for infrared image super resolution

    Science.gov (United States)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches can be obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and these sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.

  18. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    Science.gov (United States)

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.

  19. Compressive sensing of full wave field data for structural health monitoring applications

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; De Marchi, Luca; Perelli, Alessandro

    2015-01-01

    ; however, the acquisition process is generally time-consuming, posing a limit in the applicability of such approaches. To reduce the acquisition time, we use a random sampling scheme based on compressive sensing (CS) to minimize the number of points at which the field is measured. The CS reconstruction...

  20. Compressed RSS Measurement for Communication and Sensing in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Yanchao Zhao

    2017-01-01

    Full Text Available The receiving signal strength (RSS) is crucial for the Internet of Things (IoT), as it is the key foundation for communication resource allocation, localization, interference management, sensing, and so on. Aside from its significance, the measurement process can be tedious, time consuming, inaccurate, and reliant on human operations. State-of-the-art works usually apply the fashion of “measure a few, predict many,” using measurement-calibrated models to generate the RSS for the whole network. However, these methods still cannot provide accurate results within a short duration at low measurement cost. In addition, they require careful scheduling of the measurements, which is vulnerable to measurement conflicts. In this paper, we propose a compressive sensing (CS-based RSS measurement solution, which is conflict-tolerant, time-efficient, and accuracy-guaranteed without any model-calibration operation. The CS-based solution takes advantage of compressive sensing theory to enable simultaneous measurement in the same channel, which reduces the time cost to the level of O(log N) (where N is the network size) and works well for sparse networks. Extensive experiments based on real data traces are conducted to show the efficiency of the proposed solutions.

  1. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. The participant who possesses only the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit (OMP) CS reconstruction, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and the inverse Fresnel transform results in a remarkable peak at the central location of the nonlinear correlation coefficient distribution between the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. In this case, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  2. Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning.

    Science.gov (United States)

    Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D

    2013-02-01

    Fetal ECG (FECG) telemonitoring is an important branch of telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings, such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with far fewer nonzero entries to compress the recordings; in particular, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU execution time in the data compression stage.
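
    A minimal numerical sketch of the kind of sparse binary sensing matrix described above, assuming exactly two nonzero entries per column placed at random rows; the dimensions and the synthetic signal are illustrative and not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def sparse_binary_sensing_matrix(m, n, ones_per_column=2):
            """Build an m x n binary matrix with a fixed number of ones in every column."""
            phi = np.zeros((m, n))
            for col in range(n):
                rows = rng.choice(m, size=ones_per_column, replace=False)
                phi[rows, col] = 1.0
            return phi

        n, m = 512, 128                     # hypothetical frame length and measurement count
        x = rng.standard_normal(n)          # stand-in for one channel of a raw FECG segment
        phi = sparse_binary_sensing_matrix(m, n)
        y = phi @ x                         # each measurement sums just two raw samples
        print(int(phi.sum()), y.shape)      # 2 * n nonzeros in total, m measurements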

  3. Optimized Projection Matrix for Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Jianping Xu

    2010-01-01

    Full Text Available Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. To date, papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the problem cannot be solved exactly due to its complexity, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
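
    To make the quantity being minimized concrete, the sketch below computes the mutual coherence of the equivalent dictionary formed by a projection matrix and a sparsifying basis, and compares it with the Welch lower bound that an equiangular tight frame attains; the random Gaussian projection stands in for the baseline case discussed above, not for the optimized design.

        import numpy as np

        def mutual_coherence(D):
            """Largest absolute inner product between distinct unit-normalized columns of D."""
            Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
            G = np.abs(Dn.T @ Dn)
            np.fill_diagonal(G, 0.0)
            return G.max()

        rng = np.random.default_rng(1)
        m, n = 32, 128                                   # illustrative sizes
        phi = rng.standard_normal((m, n))                # random projection matrix (baseline)
        psi = np.eye(n)                                  # sparsifying basis, identity for simplicity
        welch_bound = np.sqrt((n - m) / (m * (n - 1)))   # coherence of an ideal ETF of this size
        print(mutual_coherence(phi @ psi), welch_bound)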

  4. Monitoring and diagnosis of Alzheimer's disease using noninvasive compressive sensing EEG

    Science.gov (United States)

    Morabito, F. C.; Labate, D.; Morabito, G.; Palamara, I.; Szu, H.

    2013-05-01

    The majority of elderly people with Alzheimer's Disease (AD) receive care at home from caregivers. In contrast to standard tethered clinical settings, wireless, real-time, body-area, smartphone-based remote monitoring of the electroencephalogram (EEG) can be extremely advantageous for the home care of those patients. Such wearable tools pave the way to personalized medicine, for example by giving the opportunity to track the progression of the disease and the effect of drugs. By applying Compressive Sensing (CS) techniques it is in principle possible to overcome the difficulty raised by the smartphone's spatial-temporal throughput rate bottleneck. Unfortunately, EEG and other physiological signals are often non-sparse. In this paper, it is instead shown that the EEG of AD patients actually becomes more compressible with the progression of the disease. The EEG of Mild Cognitive Impairment (MCI) subjects also shows a clear tendency toward enhanced compressibility. This feature favors the use of CS techniques and ultimately the use of telemonitoring with wearable sensors.

  5. Stainless steel component with compressed fiber Bragg grating for high temperature sensing applications

    Science.gov (United States)

    Jinesh, Mathew; MacPherson, William N.; Hand, Duncan P.; Maier, Robert R. J.

    2016-05-01

    A smart metal component with the potential for high-temperature strain sensing is reported. The stainless steel (SS316) structure is made by selective laser melting (SLM). A fiber Bragg grating (FBG) is embedded into a 3D-printed U-groove by high-temperature brazing using a silver-based alloy, achieving an axial FBG compression of 13 millistrain at room temperature. Initial results show that the test component can be used at temperatures up to 700°C for sensing applications.

  6. Compressive Sensing for Feedback Reduction in Wireless Multiuser Networks

    KAUST Repository

    Elkhalil, Khalil

    2015-05-01

    User/relay selection is a simple technique that achieves spatial diversity in multiuser networks. However, for user/relay selection algorithms to make a selection decision, channel state information (CSI) from all cooperating users/relays is usually required at a central node. This requirement poses two important challenges. Firstly, CSI acquisition generates a great deal of feedback overhead (air-time) that could result in significant transmission delays. Secondly, the fed-back channel information is usually corrupted by additive noise. This could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback information. Motivated by the aforementioned challenges, we propose a limited feedback user/relay selection scheme that is based on the theory of compressed sensing. Firstly, we introduce a limited feedback relay selection algorithm for a multicast relay network. The proposed algorithm exploits the theory of compressive sensing to first obtain the identity of the “strong” relays with limited feedback air-time. Following that, the CSI of the selected relays is estimated using minimum mean square error estimation without any additional feedback. To minimize the effect of noise on the fed-back CSI, we introduce a back-off strategy that optimally backs-off on the noisy received CSI. In the second part of the thesis, we propose a feedback reduction scheme for full-duplex relay-aided multiuser networks. The proposed scheme permits the base station (BS) to obtain channel state information (CSI) from a subset of strong users under substantially reduced feedback overhead. More specifically, we cast the problem of user identification and CSI estimation as a block sparse signal recovery problem in compressive sensing (CS). Using existing CS block recovery algorithms, we first obtain the identity of the strong users and then estimate their CSI using the best linear unbiased estimator (BLUE). Moreover, we derive the

  7. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng

    2017-06-20

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  8. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng; Xu, Weiyu; Yang, Yang

    2017-01-01

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  9. Vibration-based monitoring and diagnostics using compressive sensing

    Science.gov (United States)

    Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.

    2017-04-01

    Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high volume data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with a large array of sensors. This paper shows that both volume of data and number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirement of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.
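
    The following sketch illustrates the core idea of random temporal sampling combined with frequency-domain sparsity: a two-tone vibration signal is observed at a small random subset of time instants and its dominant frequencies are recovered greedily against a restricted sinusoidal dictionary. It is a toy illustration under assumed sizes, not the authors' experimental pipeline.

        import numpy as np

        rng = np.random.default_rng(2)

        n = 1024                                     # nominal full-rate record length
        t = np.arange(n)
        x = np.sin(2 * np.pi * 50 * t / n) + 0.5 * np.sin(2 * np.pi * 120 * t / n)

        m = 128                                      # number of random samples actually acquired
        keep = np.sort(rng.choice(n, size=m, replace=False))
        y = x[keep]                                  # compressive measurements

        freqs = np.arange(1, n // 2)                 # candidate frequency bins (skip DC)
        A = np.hstack([np.cos(2 * np.pi * np.outer(keep, freqs) / n),
                       np.sin(2 * np.pi * np.outer(keep, freqs) / n)])
        A = A / np.linalg.norm(A, axis=0)            # unit-norm atoms

        def omp(A, y, k):
            """Plain orthogonal matching pursuit: pick k atoms greedily, refit by least squares."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            return support

        support = omp(A, y, k=2)
        print(sorted(freqs[s % len(freqs)] for s in support))   # expect [50, 120] for this example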

  10. Compressed Sensing mm-Wave SAR for Non-Destructive Testing Applications Using Multiple Weighted Side Information

    Directory of Open Access Journals (Sweden)

    Mathias Becquaert

    2018-05-01

    Full Text Available This work explores an innovative strategy for increasing the efficiency of compressed sensing applied to mm-wave SAR sensing using multiple weighted side information. The approach is tested on synthetic and on real non-destructive testing measurements performed on a 3D-printed object with defects, while taking advantage of multiple previous SAR images of the object with different degrees of similarity. The tested algorithm autonomously attributes weights to the side information at two levels: (1) between the components within each piece of side information and (2) between the different pieces of side information. The reconstruction is thereby almost immune to poor-quality side information while exploiting the relevant components hidden inside the added side information. The presented results prove that, in contrast to common compressed sensing, good SAR image reconstruction is achieved at subsampling rates far below the Nyquist rate. Moreover, the algorithm is shown to be much more robust to low-quality side information compared to coherent background subtraction.

  11. Determination of nonlinear genetic architecture using compressed sensing.

    Science.gov (United States)

    Ho, Chiu Man; Hsu, Stephen D H

    2015-01-01

    One of the fundamental problems of modern genomics is to extract the genetic architecture of a complex trait from a data set of individual genotypes and trait values. Establishing this important connection between genotype and phenotype is complicated by the large number of candidate genes, the potentially large number of causal loci, and the likely presence of some nonlinear interactions between different genes. Compressed Sensing methods obtain solutions to under-constrained systems of linear equations. These methods can be applied to the problem of determining the best model relating genotype to phenotype, and generally deliver better performance than simply regressing the phenotype against each genetic variant, one at a time. We introduce a Compressed Sensing method that can reconstruct nonlinear genetic models (i.e., including epistasis, or gene-gene interactions) from phenotype-genotype (GWAS) data. Our method uses L1-penalized regression applied to nonlinear functions of the sensing matrix. The computational and data resource requirements for our method are similar to those necessary for reconstruction of linear genetic models (or identification of gene-trait associations), assuming a condition of generalized sparsity, which limits the total number of gene-gene interactions. An example of a sparse nonlinear model is one in which a typical locus interacts with several or even many others, but only a small subset of all possible interactions exist. It seems plausible that most genetic architectures fall in this category. We give theoretical arguments suggesting that the method is nearly optimal in performance, and demonstrate its effectiveness on broad classes of nonlinear genetic models using simulated human genomes and the small amount of currently available real data. A phase transition (i.e., dramatic and qualitative change) in the behavior of the algorithm indicates when sufficient data is available for its successful application. Our results indicate
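
    As a rough illustration of L1-penalized regression applied to nonlinear functions of the sensing (genotype) matrix, the sketch below expands a small synthetic genotype matrix with pairwise interaction terms and fits a Lasso model; scikit-learn's Lasso is used here as a convenient stand-in for the authors' solver, and all sizes, effect sizes and the penalty are invented for the example.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(3)

        n_individuals, n_snps = 400, 30         # toy sizes, far smaller than a real GWAS
        G = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)   # 0/1/2 genotypes

        # Simulated trait: two additive loci plus one gene-gene (epistatic) interaction, plus noise.
        y = G[:, 2] - 0.8 * G[:, 7] + 1.5 * G[:, 4] * G[:, 11] + 0.3 * rng.standard_normal(n_individuals)

        # "Nonlinear functions of the sensing matrix": append all pairwise interaction terms.
        expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
        X = expand.fit_transform(G)

        model = Lasso(alpha=0.05, max_iter=10000).fit(X, y)    # L1-penalized (sparse) regression
        names = expand.get_feature_names_out([f"snp{i}" for i in range(n_snps)])
        selected = [(name, round(w, 2)) for name, w in zip(names, model.coef_) if abs(w) > 0.1]
        print(selected)    # should single out snp2, snp7 and the snp4*snp11 interaction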

  12. Enhanced acoustic sensing through wave compression and pressure amplification in anisotropic metamaterials.

    Science.gov (United States)

    Chen, Yongyao; Liu, Haijun; Reilly, Michael; Bae, Hyungdae; Yu, Miao

    2014-10-15

    Acoustic sensors play an important role in many areas, such as homeland security, navigation, communication, health care and industry. However, the fundamental pressure detection limit hinders the performance of current acoustic sensing technologies. Here, through analytical, numerical and experimental studies, we show that anisotropic acoustic metamaterials can be designed to have strong wave compression effect that renders direct amplification of pressure fields in metamaterials. This enables a sensing mechanism that can help overcome the detection limit of conventional acoustic sensing systems. We further demonstrate a metamaterial-enhanced acoustic sensing system that achieves more than 20 dB signal-to-noise enhancement (over an order of magnitude enhancement in detection limit). With this system, weak acoustic pulse signals overwhelmed by the noise are successfully recovered. This work opens up new vistas for the development of metamaterial-based acoustic sensors with improved performance and functionalities that are highly desirable for many applications.

  13. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network that connects different wireless nodes in a local area. WLANs suffer from important problems such as network load balancing, high energy consumption, and a heavy sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good result for WLANs, and increases the Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good basis for establishing high-quality local area networks. The architecture enables continuous data acquisition and compression of WLAN signals and is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate a compressed version of the input signal. At the receiver side of the wireless node, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) by 15% at each wireless node.

  14. Optical scanning holography based on compressive sensing using a digital micro-mirror device

    Science.gov (United States)

    A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; xin, Zhou

    2017-02-01

    Optical scanning holography (OSH) is a distinct digital holography technique which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, these 2D scanning processes take the form of mechanical scanning, and the quality of the recorded hologram may be degraded by the limited accuracy of mechanical scanning and the unavoidable vibration caused by the stepper motor's start-stop motion. In this paper, we propose a new framework that replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH can reconstruct the hologram of an object through the use of compressive sensing theory and then restore the image of the object itself. Numerical simulation results confirm that this new type of OSH can obtain a reconstructed image with favorable visual quality even at a low sampling rate.

  15. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL) systems. The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori (MAP) metric is used for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise mitigation in DSL signals. © 2013 IEEE.

  16. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian; Dutta, Aritra; Sun, Qiyu; Foroosh, Hassan

    2017-01-01

    Target detection of potential threats at night can be deployed on a costly infrared focal plane array with high resolution. Due to the compressibility of infrared image patches, the high resolution requirement could be reduced with target detection capability preserved. For this reason, a compressive midwave infrared imager (MWIR) with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches could be learned from the training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of the standard full reconstruction methods, but at a fraction of the execution time.

  17. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian

    2017-05-02

    Target detection of potential threats at night can be deployed on a costly infrared focal plane array with high resolution. Due to the compressibility of infrared image patches, the high resolution requirement could be reduced with target detection capability preserved. For this reason, a compressive midwave infrared imager (MWIR) with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches could be learned from the training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of the standard full reconstruction methods, but at a fraction of the execution time.

  18. FPGA Implementation of Real-Time Compressive Sensing with Partial Fourier Dictionary

    Directory of Open Access Journals (Sweden)

    Yinghui Quan

    2016-01-01

    Full Text Available This paper presents a novel real-time compressive sensing (CS) reconstruction which employs a high-density field-programmable gate array (FPGA) for hardware acceleration. Traditionally, CS can be implemented using a high-level computer language on a personal computer (PC) or on multicore platforms, such as graphics processing units (GPUs) and digital signal processors (DSPs). However, reconstruction algorithms are computationally demanding, and software implementations of these algorithms are extremely slow and power consuming. In this paper, the orthogonal matching pursuit (OMP) algorithm is refined to solve the sparse decomposition optimization for a partial Fourier dictionary, which is commonly adopted in radar imaging and detection applications. OMP reconstruction can be divided into two main stages: a correlation step, which finds the most closely correlated atoms, and a least-squares problem. For a large-scale dictionary, the implementation of the correlation step is time consuming, since it often requires a large number of matrix multiplications. Solving the least-squares problem likewise needs a scalable matrix decomposition operation. To solve these problems efficiently, the correlation step is implemented with the fast Fourier transform (FFT) and the large-scale least-squares problem is solved with the Conjugate Gradient (CG) technique. The proposed method is verified by an FPGA (Xilinx Virtex-7 XC7VX690T) realization, revealing its effectiveness in real-time applications.
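
    A software sketch of the two refinements described above, under assumed sizes: for a partial Fourier dictionary the correlation step of OMP can be evaluated with a single inverse FFT, and the small least-squares refit is handed to a library solver in place of the hardware Conjugate Gradient unit.

        import numpy as np

        rng = np.random.default_rng(4)
        N, m, k = 1024, 128, 5                  # dictionary size, measurements, sparsity (all illustrative)

        rows = np.sort(rng.choice(N, size=m, replace=False))          # observed DFT rows
        x = np.zeros(N, dtype=complex)
        x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

        F = np.exp(-2j * np.pi * np.outer(rows, np.arange(N)) / N)    # partial Fourier dictionary
        y = F @ x                                                     # compressive measurements

        def correlate_fft(residual):
            """Compute F^H r with one inverse FFT instead of an m x N matrix product."""
            z = np.zeros(N, dtype=complex)
            z[rows] = residual
            return N * np.fft.ifft(z)

        support, residual = [], y.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(correlate_fft(residual)))))   # FFT-based correlation step
            coef, *_ = np.linalg.lstsq(F[:, support], y, rcond=None)          # least-squares refit (CG on the FPGA)
            residual = y - F[:, support] @ coef

        print(sorted(support) == sorted(np.flatnonzero(x).tolist()))          # True when the support is recovered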

  19. Sparse Vector Distributions and Recovery from Compressed Sensing

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored, and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions, and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.

  20. Linear chemically sensitive electron tomography using DualEELS and dictionary-based compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    AlAfeef, Ala, E-mail: a.al-afeef.1@research.gla.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Bobynko, Joanna [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Cockshott, W. Paul. [School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Craven, Alan J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Zuazo, Ian; Barges, Patrick [ArcelorMittal Maizières Research, Maizières-lès-Metz 57283 (France); MacLaren, Ian, E-mail: ian.maclaren@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)

    2016-11-15

    We have investigated the use of DualEELS in elementally sensitive tilt series tomography in the scanning transmission electron microscope. A procedure is implemented using deconvolution to remove the effects of multiple scattering, followed by normalisation by the zero loss peak intensity. This is performed to produce a signal that is linearly dependent on the projected density of the element in each pixel. This method is compared with one that does not include deconvolution (although normalisation by the zero loss peak intensity is still performed). Additionally, we compare the 3D reconstruction using a new compressed sensing algorithm, DLET, with the well-established SIRT algorithm. VC precipitates, which are extracted from a steel on a carbon replica, are used in this study. It is found that the use of this linear signal results in a very even density throughout the precipitates. However, when deconvolution is omitted, a slight density reduction is observed in the cores of the precipitates (a so-called cupping artefact). Additionally, it is clearly demonstrated that the 3D morphology is much better reproduced using the DLET algorithm, with very little elongation in the missing wedge direction. It is therefore concluded that reliable elementally sensitive tilt tomography using EELS requires the appropriate use of DualEELS together with a suitable reconstruction algorithm, such as the compressed sensing based reconstruction algorithm used here, to make the best use of the limited data volume and signal to noise inherent in core-loss EELS. - Highlights: • DualEELS is essential for chemically sensitive electron tomography using EELS. • A new compressed sensing based algorithm (DLET) gives high fidelity reconstruction. • This combination of DualEELS and DLET will give reliable results from few projections.

  1. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach that performs peak-picking of the spectra and denoising of the m/z-images simultaneously, whereas state-of-the-art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and the individual channels of the hyperspectral image, and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)

  2. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is the critical factor that affects photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image and uses the K-SVD algorithm, which is based on the sparse characteristics of the digital image. Then, the sparse representation coefficient of the low-resolution image is obtained by solving the l0-norm minimization problem, and the sparse coefficient and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that the proposed method effectively increases image resolution, increases image information content, and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effects and quantitative indicators.

  3. Compressed sensing method for human activity recognition using tri-axis accelerometer on mobile phone

    Institute of Scientific and Technical Information of China (English)

    Song Hui; Wang Zhongmin

    2017-01-01

    The diversity of phone placements in different mobile users' daily lives increases the difficulty of recognizing human activities from mobile phone accelerometer data. To solve this problem, a compressed sensing method to recognize human activities, based on compressed sensing theory and utilizing both raw mobile phone accelerometer data and phone placement information, is proposed. First, an over-complete dictionary matrix is constructed using sufficient raw tri-axis acceleration data labeled with phone placement information. Then, the sparse coefficient is evaluated for the samples to be tested by solving an L1 minimization. Finally, residual values are calculated and the minimum value is selected as the indicator to obtain the recognition results. Experimental results show that this method can achieve a recognition accuracy of 89.86%, which is higher than that of a recognition method that does not adopt the phone placement information in the recognition process. The recognition accuracy of the proposed method is thus effective and satisfactory.
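
    A toy sketch of the residual-based decision rule described above: dictionary atoms are labeled by class, a sparse code is computed for a test sample (here with scikit-learn's Lasso standing in for the exact L1 minimization), and the class whose atoms give the smallest reconstruction residual is reported. The dictionary, classes and noise level are synthetic.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)

        n_classes, per_class, dim = 4, 20, 60        # e.g. activity/placement classes, atoms per class, feature length
        labels = np.repeat(np.arange(n_classes), per_class)

        # Synthetic over-complete dictionary: atoms of a class cluster around that class's prototype.
        prototypes = rng.standard_normal((n_classes, dim))
        D = np.stack([prototypes[c] + 0.2 * rng.standard_normal(dim) for c in labels], axis=1)
        D = D / np.linalg.norm(D, axis=0)            # unit-norm atoms

        test = prototypes[2] + 0.2 * rng.standard_normal(dim)     # a sample drawn from class 2

        code = Lasso(alpha=0.01, fit_intercept=False, max_iter=20000).fit(D, test).coef_

        residuals = []
        for c in range(n_classes):
            coef_c = np.where(labels == c, code, 0.0)             # keep only this class's coefficients
            residuals.append(np.linalg.norm(test - D @ coef_c))   # class-restricted reconstruction error
        print(int(np.argmin(residuals)))                          # expected: 2 for this synthetic example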

  4. Deterministic Compressed Sensing

    Science.gov (United States)

    2011-11-01

    [Fragmentary record: the retrieved text contains only table-of-contents and list-of-algorithms excerpts (4.3 Digital Communications, 4.4 Group Testing, deterministic design matrices, Iterative Hard Thresholding Algorithm) and a sentence fragment stating that sensing is information-theoretically possible using any (2k, ε)-RIP sensing matrix, citing celebrated results of Candès, Romberg and Tao.]

  5. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.; Ali, Anum Z.; Quadeer, Ahmed Abdul; Al-Safadi, Ebrahim B.; Hammi, Oualid; Al-Naffouri, Tareq Y.

    2014-01-01

    -domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional

  6. Accelerated radial Fourier-velocity encoding using compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    Hilbert, Fabian; Han, Dietbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wech, Tobias; Koestler, Herbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wuerzburg Univ. (Germany). Comprehensive Heart Failure Center (CHFC)

    2014-10-01

    Purpose:Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods:We imaged the femoral artery of healthy volunteers with ECG - triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results:Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6 - fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity

  7. Accelerated radial Fourier-velocity encoding using compressed sensing

    International Nuclear Information System (INIS)

    Hilbert, Fabian; Han, Dietbert

    2014-01-01

    Purpose:Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods:We imaged the femoral artery of healthy volunteers with ECG - triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results:Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6 - fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity

  8. Accelerated radial Fourier-velocity encoding using compressed sensing.

    Science.gov (United States)

    Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert

    2014-09-01

    Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels in the order of the voxel size. Thus

  9. Micro-Doppler Ambiguity Resolution Based on Short-Time Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Jing-bo Zhuang

    2015-01-01

    Full Text Available When a long range radar (LRR) is used to track a target with micromotion, the micro-Doppler embodied in the radar echoes may suffer from an ambiguity problem. In this paper, we propose a novel method based on compressed sensing (CS) to resolve micro-Doppler ambiguity. In accordance with the RIP requirement, a sparse probing pulse train with random transmission times is designed. After matched filtering, the slow-time echo signals of the micromotion target can be viewed as a randomly sparse sampling of the Doppler spectrum. Several successive pulses are selected to form a short-time window, and the CS sensing matrix is built according to the time stamps of these pulses. Orthogonal Matching Pursuit (OMP) is then performed to obtain the unambiguous micro-Doppler spectrum. The proposed algorithm is verified using echo signals generated according to the theoretical model and signals with a micro-Doppler signature produced using the commercial electromagnetic simulation software FEKO.
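
    A small numerical illustration of why random pulse timing resolves the ambiguity (a sketch with invented parameters, not the paper's simulation): a single micro-Doppler tone above half the nominal PRF is still identified correctly when the dictionary is built from the actual random transmit time stamps.

        import numpy as np

        rng = np.random.default_rng(6)

        prf_mean = 1000.0                        # nominal average pulse repetition frequency, Hz
        n_pulses = 64
        t = np.sort(rng.uniform(0.0, n_pulses / prf_mean, size=n_pulses))   # random transmit times

        f_true = 1700.0                          # micro-Doppler frequency, well above prf_mean / 2
        y = np.exp(2j * np.pi * f_true * t)      # slow-time echoes of a single micro-Doppler tone

        f_grid = np.arange(0.0, 4000.0, 5.0)     # Doppler search grid extending past the nominal PRF
        A = np.exp(2j * np.pi * np.outer(t, f_grid))      # sensing dictionary built from the time stamps
        correlation = np.abs(A.conj().T @ y)              # matched-filter / first OMP correlation step
        print(f_grid[int(np.argmax(correlation))])        # expected near 1700 Hz, with no aliased copy winning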

  10. A Compressed Sensing Perspective of Hippocampal Function

    Directory of Open Access Journals (Sweden)

    Panagiotis ePetrantonakis

    2014-08-01

    Full Text Available The hippocampus is one of the most important information processing units in the brain. Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned out back to the cortex. Here, we review evidence for the hypothesis that information flow and processing in the hippocampus comply with the principles of Compressed Sensing (CS). CS theory comprises a mathematical framework that describes how, and under which conditions, restricted sampling of information (a data set) can lead to condensed, yet concise, forms of the initial, subsampled information entity (i.e., of the original data set). In this work, hippocampus-related regions and their respective circuitry are presented as a CS-based system whose different components collaborate to realize efficient memory encoding and decoding processes. This proposition introduces a unifying mathematical framework for hippocampal function and opens new avenues for exploring coding and decoding strategies in the brain.

  11. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    OpenAIRE

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan

    2012-01-01

    In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N)-phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the ...

  12. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.

  13. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    Science.gov (United States)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an
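
    A schematic sketch of how such an adaptive relaxation map could be built in software, assuming simple intensity thresholds for the air/soft-tissue/bone classification and an exponential decay of the weight with distance from the mismatched voxels; the thresholds, decay law and volume are illustrative assumptions, not the published implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def classify(volume, air_thr=-500.0, bone_thr=300.0):
            """Label voxels as 0 = air, 1 = soft tissue, 2 = bone from HU-like intensities."""
            return np.digitize(volume, bins=[air_thr, bone_thr])

        def relaxation_map(prior, current, scale=5.0):
            """Voxel-wise weights: near 1 where the anatomy changed, decaying toward 0 where it matches."""
            mismatch = classify(prior) != classify(current)
            dist_to_change = distance_transform_edt(~mismatch)    # distance to the nearest mismatched voxel
            return np.exp(-dist_to_change / scale)

        rng = np.random.default_rng(7)
        prior = rng.normal(0.0, 50.0, size=(64, 64, 64))          # synthetic soft-tissue-like prior volume
        current = prior.copy()
        current[30:40, 30:40, 30:40] += 800.0                     # a bone-like change appears in the new scan
        w = relaxation_map(prior, current)
        print(round(float(w[35, 35, 35]), 3), round(float(w[5, 5, 5]), 6))   # ~1 inside the change, ~0 far away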

  14. Direction-of-Arrival Estimation for Coprime Array Using Compressive Sensing Based Array Interpolation

    Directory of Open Access Journals (Sweden)

    Aihua Liu

    2017-01-01

    Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) signal model of the coprime array is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are clearly not fully exploited. To effectively utilize the extent of the DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.

  15. Evaluation of heterogeneous metabolic profile in an orthotopic human glioblastoma xenograft model using compressed sensing hyperpolarized 3D 13C magnetic resonance spectroscopic imaging.

    Science.gov (United States)

    Park, Ilwoo; Hu, Simon; Bok, Robert; Ozawa, Tomoko; Ito, Motokazu; Mukherjee, Joydeep; Phillips, Joanna J; James, C David; Pieper, Russell O; Ronen, Sabrina M; Vigneron, Daniel B; Nelson, Sarah J

    2013-07-01

    High resolution compressed sensing hyperpolarized (13)C magnetic resonance spectroscopic imaging was applied in orthotopic human glioblastoma xenografts for quantitative assessment of spatial variations in (13)C metabolic profiles and comparison with histopathology. A new compressed sensing sampling design with a factor of 3.72 acceleration was implemented to enable a factor of 4 increase in spatial resolution. Compressed sensing 3D (13)C magnetic resonance spectroscopic imaging data were acquired from a phantom and 10 tumor-bearing rats following injection of hyperpolarized [1-(13)C]-pyruvate using a 3T scanner. The (13)C metabolic profiles were compared with hematoxylin and eosin staining and carbonic anhydrase 9 staining. The high-resolution compressed sensing (13)C magnetic resonance spectroscopic imaging data enabled the differentiation of distinct (13)C metabolite patterns within abnormal tissues with high specificity in similar scan times compared to the fully sampled method. The results from pathology confirmed the different characteristics of (13)C metabolic profiles between viable, non-necrotic, nonhypoxic tumor, and necrotic, hypoxic tissue. Copyright © 2012 Wiley Periodicals, Inc.

  16. Foot and ankle compression improves joint position sense but not bipedal stance in older people

    NARCIS (Netherlands)

    Hijmans, J.M.; Zijlstra, W.; Geertzen, J.H.; Hof, A.L.; Postema, K.

    This study investigates the effects of foot and ankle compression on joint position sense (JPS) and balance in older people and young adults. 12 independently living healthy older persons (77-93 years) were recruited from a senior accommodation facility. 15 young adults (19-24 years) also

  17. Development of a Neutron Spectroscopic System Utilizing Compressed Sensing Measurements

    Directory of Open Access Journals (Sweden)

    Vargas Danilo

    2016-01-01

    Full Text Available A new approach to neutron detection capable of gathering spectroscopic information has been demonstrated. The approach relies on an asymmetrical arrangement of materials and geometry, and the ability to change the orientation of the detector with respect to the neutron field. Measurements are used to unfold the energy characteristics of the neutron field using the new theoretical framework of compressed sensing. Recent theoretical results show that the number of multiplexed samples can be lower than the full number of traditional samples while still providing some super-resolution capability. Furthermore, the solution approach does not require a priori information or the inclusion of physics models. Utilizing the MCNP code, a number of candidate detector geometries and materials were modeled. Simulations were carried out for a number of neutron energies and distributions with preselected orientations of the detector. The resulting matrix A consists of n rows associated with orientation and m columns associated with energy and distribution, where n < m. The library of known responses is used for new measurements Y (n × 1), and the solver is able to determine the system Y = Ax, where x is a sparse vector. Therefore, energy spectrum measurements are a combination of the energy distribution information of the identified elements of A. This approach allows for the determination of neutron spectroscopic information using a single detector system with analog multiplexing. The analog multiplexing allows the use of a compressed sensing solution similar to approaches used in other areas of imaging. A single detector assembly provides improved flexibility and is expected to reduce the uncertainty associated with current neutron spectroscopy measurements.
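
    A compact sketch of solving the underdetermined system Y = Ax for a sparse, non-negative x by linear programming (basis pursuit), with a random stand-in for the MCNP-derived response library; the sizes and the two-bin neutron field are invented for the example.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(8)
        n_orientations, n_bins = 12, 40          # n detector orientations (rows) < m energy bins (columns)

        A = np.abs(rng.standard_normal((n_orientations, n_bins)))   # stand-in for the simulated response library
        x_true = np.zeros(n_bins)
        x_true[[6, 23]] = [1.0, 0.4]             # a neutron field concentrated in two energy bins
        y = A @ x_true                           # one count-rate measurement per orientation

        # Basis pursuit: minimize ||x||_1 subject to A x = y with x >= 0 (fluence cannot be negative).
        result = linprog(c=np.ones(n_bins), A_eq=A, b_eq=y,
                         bounds=[(0, None)] * n_bins, method="highs")
        print(np.flatnonzero(result.x > 1e-6))   # expected: bins 6 and 23 for this well-posed example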

  18. A Fully Integrated Wireless Compressed Sensing Neural Signal Acquisition System for Chronic Recording and Brain Machine Interface.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Xiong, Tao; Richardson, Andrew G; Lucas, Timothy H; Chin, Peter S; Etienne-Cummings, Ralph; Tran, Trac D; Van der Spiegel, Jan

    2016-07-18

    Reliable, multi-channel neural recording is critical to neuroscience research and clinical treatment. However, most hardware development of fully integrated, multi-channel wireless neural recorders to date is still in the proof-of-concept stage. To be ready for practical use, the trade-offs between performance, power consumption, device size, robustness, and compatibility need to be carefully taken into account. This paper presents an optimized wireless compressed sensing neural signal recording system. The system takes advantage of both custom integrated circuits and universally compatible wireless solutions. The proposed system includes an implantable wireless system-on-chip (SoC) and an external wireless relay. The SoC integrates 16-channel low-noise neural amplifiers, programmable filters and gain stages, a SAR ADC, a real-time compressed sensing module, and a near-field wireless power and data transmission link. The external relay integrates a 32-bit low-power microcontroller with a Bluetooth 4.0 wireless module, a programming interface, and an inductive charging unit. The SoC achieves high signal recording quality with minimized power consumption, while reducing the risk of infection from through-skin connectors. The external relay maximizes compatibility and programmability. The proposed compressed sensing module is highly configurable, featuring an SNDR of 9.78 dB with a compression ratio of 8×. The SoC has been fabricated in a 180 nm standard CMOS technology, occupying 2.1 mm × 0.6 mm of silicon area. A pre-implantable system has been assembled to demonstrate the proposed paradigm. The developed system has been successfully used for long-term wireless neural recording in a freely behaving rhesus monkey.

  19. Compressive sensing for feedback reduction in MIMO broadcast channels

    KAUST Repository

    Eltayeb, Mohammed E.

    2014-09-01

    In multi-antenna broadcast networks, the base stations (BSs) rely on the channel state information (CSI) of the users to perform user scheduling and downlink transmission. However, in networks with a large number of users, obtaining CSI from all users is arduous, if not impossible, in practice. This paper proposes channel feedback reduction techniques based on the theory of compressive sensing (CS), which permits the BS to obtain CSI with acceptable recovery guarantees under substantially reduced feedback overhead. Additionally, assuming noisy CS measurements at the BS, inexpensive ways of improving post-CS detection are explored. The proposed techniques are shown to reduce the feedback overhead, improve CS detection at the BS, and achieve a sum-rate close to that obtained by noiseless dedicated feedback channels.

  20. Compressed sensing cine imaging with high spatial or high temporal resolution for analysis of left ventricular function.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-08-01

    To assess two compressed sensing cine magnetic resonance imaging (MRI) sequences with high spatial or high temporal resolution in comparison to a reference steady-state free precession cine (SSFP) sequence for reliable quantification of left ventricular (LV) volumes. LV short axis stacks of two compressed sensing breath-hold cine sequences with high spatial resolution (SPARSE-SENSE HS: temporal resolution: 40 msec, in-plane resolution: 1.0 × 1.0 mm²) and high temporal resolution (SPARSE-SENSE HT: temporal resolution: 11 msec, in-plane resolution: 1.7 × 1.7 mm²) and of a reference cine SSFP sequence (standard SSFP: temporal resolution: 40 msec, in-plane resolution: 1.7 × 1.7 mm²) were acquired in 16 healthy volunteers on a 1.5T MR system. LV parameters were analyzed semiautomatically twice by one reader and once by a second reader. The volumetric agreement between sequences was analyzed using paired t-test, Bland-Altman plots, and Passing-Bablok regression. Small differences were observed between standard SSFP and SPARSE-SENSE HS for stroke volume (SV; -7 ± 11 ml; P = 0.024), ejection fraction (EF; -2 ± 3%; P = 0.019), and myocardial mass (9 ± 9 g; P = 0.001), but not for end-diastolic volume (EDV; P = 0.079) and end-systolic volume (ESV; P = 0.266). No significant differences were observed between standard SSFP and SPARSE-SENSE HT regarding EDV (P = 0.956), SV (P = 0.088), and EF (P = 0.103), but significant differences were found for ESV (3 ± 5 ml; P = 0.039) and myocardial mass (8 ± 10 g; P = 0.007). Bland-Altman analysis showed good agreement between the sequences (maximum bias ≤ -8%). Two compressed sensing cine sequences, one with high spatial resolution and one with high temporal resolution, showed good agreement with standard SSFP for LV volume assessment. J. Magn. Reson. Imaging 2016;44:366-374. © 2016 Wiley Periodicals, Inc.

  1. Undersampling strategies for compressed sensing accelerated MR spectroscopic imaging

    Science.gov (United States)

    Vidya Shankar, Rohini; Hu, Houchun Harry; Bikkamane Jayadev, Nutandev; Chang, John C.; Kodibagkar, Vikram D.

    2017-03-01

    Compressed sensing (CS) can accelerate magnetic resonance spectroscopic imaging (MRSI), facilitating its widespread clinical integration. The objective of this study was to assess the effect of different undersampling strategies on CS-MRSI reconstruction quality. Phantom data were acquired on a Philips 3 T Ingenia scanner. Four types of undersampling masks, one for each strategy (low resolution, variable density, iterative design, and a priori), were simulated in Matlab and retrospectively applied to the test 1X MRSI data to generate undersampled datasets corresponding to 2X-5X and 7X accelerations for each type of mask. Reconstruction parameters were kept the same in each case (all masks and accelerations) to ensure that any resulting differences could be attributed to the type of mask being employed. The reconstructed datasets from each mask were statistically compared with the reference 1X and assessed using metrics such as the root mean square error and metabolite ratios. Simulation results indicate that both the a priori and variable density undersampling masks maintain high fidelity with the 1X up to five-fold acceleration. The low resolution mask based reconstructions showed statistically significant differences from the 1X, with the reconstruction failing at 3X, while the iterative design reconstructions maintained fidelity with the 1X up to 4X acceleration. In summary, a pilot study was conducted to identify an optimal sampling mask in CS-MRSI. Simulation results demonstrate that the a priori and variable density masks can provide statistically similar results to the fully sampled reference. Future work would involve implementing these two masks prospectively on a clinical scanner.
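
    Of the four mask types studied above, the variable density mask is the easiest to reproduce generically. The sketch below generates a 2D variable-density random undersampling mask whose keep-probability decays with distance from the k-space centre; the decay exponent, grid size, and target acceleration are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def variable_density_mask(ny, nx, accel, decay=3.0, seed=0):
    """Keep k-space points with a probability that decays away from the centre."""
    rng = np.random.default_rng(seed)
    y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    r = np.sqrt(x**2 + y**2) / np.sqrt(2)          # normalized radius in [0, 1]
    prob = (1.0 - r) ** decay                      # denser sampling near the centre
    prob *= (ny * nx / accel) / prob.sum()         # scale to the target acceleration
    return rng.random((ny, nx)) < np.clip(prob, 0, 1)

mask = variable_density_mask(64, 64, accel=4)
print(mask.mean())   # fraction of k-space retained, roughly 1/4
```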

  2. Balanced sparse model for tight frames in compressed sensing magnetic resonance imaging.

    Directory of Open Access Journals (Sweden)

    Yunsong Liu

    Full Text Available Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this new technology, magnetic resonance images are usually reconstructed by enforcing their sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of the frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight frame based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model yields the solutions of all three models. It is found that the balanced model has performance comparable to the analysis model. Moreover, both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B).
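
    For reference, the three sparse models discussed above are commonly written as follows, with y the undersampled k-space data, F_u the undersampled Fourier operator, Ψ the tight-frame analysis operator (Ψ* its adjoint), λ a regularization parameter, and γ the balancing parameter. This is a generic textbook formulation rather than the exact notation of the paper.

```latex
% Sketch of the three sparse models for tight-frame CS-MRI (generic notation).
\begin{align}
\text{synthesis:} \quad & \min_{\alpha} \tfrac{1}{2}\|F_u \Psi^{*}\alpha - y\|_2^2 + \lambda \|\alpha\|_1 \\
\text{analysis:}  \quad & \min_{x} \tfrac{1}{2}\|F_u x - y\|_2^2 + \lambda \|\Psi x\|_1 \\
\text{balanced:}  \quad & \min_{\alpha} \tfrac{1}{2}\|F_u \Psi^{*}\alpha - y\|_2^2
                          + \tfrac{\gamma}{2}\|(I - \Psi\Psi^{*})\alpha\|_2^2 + \lambda \|\alpha\|_1
\end{align}
```

    In this generic form, setting γ = 0 recovers the synthesis model, while letting γ grow forces the coefficients α into the range of Ψ and recovers the analysis model, which is the bridging behaviour described above.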

  3. Two-Dimensional DOA Estimation in Compressed Sensing with Compressive-Reduced Dimension-lp-MUSIC

    Directory of Open Access Journals (Sweden)

    Weijian Si

    2015-01-01

    Full Text Available This paper presents a novel two-dimensional (2D) direction of arrival (DOA) estimation method in compressed sensing (CS) to remove the estimation failure problem and achieve superior performance. The proposed method separates the steering vector into two parts to construct two corresponding noise subspaces by introducing electric angles. Then, electric angles are estimated based on the constructed noise subspaces. In order to estimate the azimuth and elevation angles in terms of estimates of electric angles, arc-tangent operations are exploited. The arc-tangent is a one-to-one function and allows the value of the argument to be larger than unity so that the proposed method never fails. The proposed method can avoid pair matching to reduce the computational complexity and extend the number of snapshots to improve performance. Simulation results show that the proposed method can avoid estimation failure occurrence and has superior performance as compared to existing methods.

  4. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

    Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To further improve on these, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm using a 2D digital phantom and 2D in vivo MR brain images of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function can achieve better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
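
    The ITA above extends a conventional iterative thresholding scheme with phase correction and a region-of-support matrix, details that are not reproduced here. The sketch below shows only the baseline iterative soft-thresholding loop for partial-Fourier CS-MRI, assuming (purely for brevity) that the image itself is sparse; the mask, step size, and threshold are illustrative assumptions.

```python
import numpy as np

def ista_cs_mri(kspace, mask, lam=0.05, n_iter=100, step=1.0):
    """Minimize 0.5*||M F x - y||^2 + lam*||x||_1 by iterative soft thresholding.

    Assumes the image itself is approximately sparse; a practical reconstruction
    would threshold in a wavelet or finite-difference domain instead.
    """
    x = np.zeros_like(kspace)
    for _ in range(n_iter):
        # Gradient of the data-fidelity term (F is the unitary 2D FFT, M the mask).
        grad = np.fft.ifft2(mask * (mask * np.fft.fft2(x, norm="ortho") - kspace), norm="ortho")
        z = x - step * grad
        # Complex soft thresholding.
        mag = np.abs(z)
        x = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * np.maximum(mag - step * lam, 0)
    return x

# Toy example: a sparse "image" sampled on a random k-space mask.
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0
mask = rng.random((64, 64)) < 0.3
y = mask * np.fft.fft2(img, norm="ortho")
recon = ista_cs_mri(y, mask)
print(np.linalg.norm(np.abs(recon) - img) / np.linalg.norm(img))  # relative error
```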

  5. Energy Analysis of Decoders for Rakeness-Based Compressed Sensing of ECG Signals.

    Science.gov (United States)

    Pareschi, Fabio; Mangia, Mauro; Bortolotti, Daniele; Bartolini, Andrea; Benini, Luca; Rovatti, Riccardo; Setti, Gianluca

    2017-12-01

    In recent years, compressed sensing (CS) has proved to be effective in lowering the power consumption of sensing nodes in biomedical signal processing devices. This is due to the fact that CS is capable of reducing the amount of data that must be transmitted while ensuring correct reconstruction of the acquired waveforms. Rakeness-based CS has been introduced to further reduce the amount of transmitted data by exploiting the uneven distribution of the sensed signal energy. Yet, so far no thorough analysis exists on the impact of its adoption on CS decoder performance. The latter point is of great importance, since body-area sensor network architectures may include intermediate gateway nodes that receive and reconstruct signals to provide local services before relaying data to a remote server. In this paper, we fill this gap by showing that rakeness-based design also improves reconstruction performance. We quantify these findings in the case of ECG signals and when a variety of reconstruction algorithms are run on either a low-power microcontroller or a heterogeneous mobile computing platform.

  6. A Novel Object Tracking Algorithm Based on Compressed Sensing and Entropy of Information

    Directory of Open Access Journals (Sweden)

    Ding Ma

    2015-01-01

    Full Text Available Object tracking has always been a hot research topic in the field of computer vision; its purpose is to track objects with specific characteristics or representations and to estimate information about them, such as their locations, sizes, and rotation angles, in the current frame. Object tracking in complex scenes will usually encounter various sorts of challenges, such as location change, dimension change, illumination change, perception change, and occlusion. This paper proposes a novel object tracking algorithm based on compressed sensing and information entropy to address these challenges. First, objects are characterized by the Haar (Haar-like) and ORB features. Second, the dimensionality of the computation space of the Haar and ORB features is effectively reduced through compressed sensing. Then the above-mentioned features are fused based on information entropy. Finally, in the particle filter framework, the object location is obtained by selecting candidate object locations in the current frame from the local context neighboring the optimal location in the last frame. Our extensive experimental results demonstrated that this method was able to effectively address the challenges of perception change, illumination change, and large area occlusion, enabling it to achieve better performance than existing approaches such as MIL and CT.

  7. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy that is attainable is limited in practice due to the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  8. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as a compressed version of the high-resolution (HR) image. Dictionary Training and Sparse Recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training purposes, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can then be reconstructed as a linear combination of the dictionary atoms weighted by the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.

  9. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited, and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  10. A Robust Parallel Algorithm for Combinatorial Compressed Sensing

    Science.gov (United States)

    Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian

    2018-04-01

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k$ nonzero entries can be recovered from its sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A))$ time using the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.

  11. Towards 4D intervention guidance using compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    Kuntz, Jan; Bartling, Soenke [Deutsches Krebsforschungszentrum DKFZ, Heidelberg (Germany); Brehm, Marcus; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    Interventional radiology is nowadays usually guided with projection radiography using mono- or biplane systems. Due to the projective nature of this guidance imaging, certain intraprocedural situations remain unclear. Although helpful, the use of 3D CT is limited due to radiation dose. Using advanced reconstruction techniques incorporating prior knowledge, one could overcome these limitations without exceeding dose limits. Intervention guidance is especially well suited to such algorithms, because the constraints on useful guidance images differ markedly from those of other CT applications. These are: key relevance of high-contrast structures, sparse temporal updates, and little relevance of absolute CT values. In this paper the principal usability of reconstruction algorithms for intervention guidance is tested. The compressed sensing algorithms PICCS and ASD-POCS are compared to the McKinnon-Bates and Feldkamp-Davis-Kress algorithms. Animal experiments as well as simulations are performed. An outlook towards 4D intervention guidance is provided. (orig.)

  12. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node

    Directory of Open Access Journals (Sweden)

    Kan Luo

    2018-01-01

    Full Text Available Energy efficiency is still the main obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing (CS) based single-spot Bluetooth ECG node is proposed to address this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifications are evidenced by experiments using ECG signals sampled by the proposed node during daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the radio energy consumption decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
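
    The node's exact sensing matrix dimensions are not restated in the record. The sketch below builds a generic sparse binary measurement matrix of the SBM type, with a small fixed number of ones per column so that on-node encoding needs only additions; the frame length, measurement count, and number of ones per column are assumptions. The BSBL/DCT recovery runs off-node and is not shown.

```python
import numpy as np

def sparse_binary_matrix(m, n, ones_per_col=2, seed=0):
    """Each column has a fixed, small number of ones; encoding is add-only."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n), dtype=np.int8)
    for col in range(n):
        rows = rng.choice(m, size=ones_per_col, replace=False)
        Phi[rows, col] = 1
    return Phi

n, m = 512, 256                  # assumed frame length and measurement count (CR = 2)
Phi = sparse_binary_matrix(m, n)
ecg_frame = np.random.default_rng(3).standard_normal(n)   # stand-in for one ECG frame
y = Phi @ ecg_frame              # on-node encoding: only additions, no multiplications
print(y.shape, Phi.sum(axis=0).min(), Phi.sum(axis=0).max())
```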

  13. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.

    Science.gov (United States)

    Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing

    2018-01-01

    Energy efficiency is still the main obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing (CS) based single-spot Bluetooth ECG node is proposed to address this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifications are evidenced by experiments using ECG signals sampled by the proposed node during daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the radio energy consumption decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.

  14. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, which is evaluated by a theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.

  15. Compressed sensing of ECG signal for wireless system with new fast iterative method.

    Science.gov (United States)

    Tawfic, Israa; Kayhan, Sema

    2015-12-01

    Recent experiments in wireless body area networks (WBANs) show that compressive sensing (CS) is a promising tool for compressing the electrocardiogram (ECG) signal. The performance of CS depends on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and presence of noise, respectively: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring sparsity knowledge. We derive improved restricted isometry property (RIP) based conditions over the best known results. The basic procedures are evaluated observationally and analytically on different ECG signals downloaded from the PhysioBank ATM. Experimental results show that significant performance in terms of reconstruction quality and compression rate can be obtained by these two newly proposed algorithms, helping the specialist gather the necessary information from the patient in less time, whether in a Magnetic Resonance Imaging (MRI) application or when reconstructing patient data after transmission through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  17. Spectrum Sensing and Primary User Localization in Cognitive Radio Networks via Sparsity

    Directory of Open Access Journals (Sweden)

    Lanchao Liu

    2016-01-01

    Full Text Available The theory of compressive sensing (CS) has recently been employed to detect available spectrum resources in cognitive radio (CR) networks. Capitalizing on the spectrum resource underutilization and spatial sparsity of primary user (PU) locations, CS enables the identification of the unused spectrum bands and PU locations at a low sampling rate. Although CS has been studied in the cooperative spectrum sensing mechanism in which CR nodes work collaboratively to accomplish the spectrum sensing and PU localization task, many important issues remain unsettled. Does the designed compressive spectrum sensing mechanism satisfy the Restricted Isometry Property, which guarantees a successful recovery of the original sparse signal? Can the spectrum sensing results help the localization of PUs? What are the characteristics of localization errors? To answer those questions, we try to justify the applicability of the CS theory to the compressive spectrum sensing framework in this paper, and propose a design of PU localization utilizing the spectrum usage information. The localization error is analyzed by the Cramér-Rao lower bound, which can be exploited to improve the localization performance. Detailed analysis and simulations are presented to support the claims and demonstrate the efficacy and efficiency of the proposed mechanism.

  18. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    Science.gov (United States)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes captured with only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of Gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).

  19. Information theoretic bounds for compressed sensing in SAR imaging

    International Nuclear Information System (INIS)

    Jingxiong, Zhang; Ke, Yang; Jianzhong, Guo

    2014-01-01

    Compressed sensing (CS) is a new framework for sampling and reconstructing sparse signals from measurements significantly fewer than those prescribed by Nyquist rate in the Shannon sampling theorem. This new strategy, applied in various application areas including synthetic aperture radar (SAR), relies on two principles: sparsity, which is related to the signals of interest, and incoherence, which refers to the sensing modality. An important question in CS-based SAR system design concerns sampling rate necessary and sufficient for exact or approximate recovery of sparse signals. In the literature, bounds of measurements (or sampling rate) in CS have been proposed from the perspective of information theory. However, these information-theoretic bounds need to be reviewed and, if necessary, validated for CS-based SAR imaging, as there are various assumptions made in the derivations of lower and upper bounds on sub-Nyquist sampling rates, which may not hold true in CS-based SAR imaging. In this paper, information-theoretic bounds of sampling rate will be analyzed. For this, the SAR measurement system is modeled as an information channel, with channel capacity and rate-distortion characteristics evaluated to enable the determination of sampling rates required for recovery of sparse scenes. Experiments based on simulated data will be undertaken to test the theoretic bounds against empirical results about sampling rates required to achieve certain detection error probabilities

  20. Opportunities and challenges in applying the compressive sensing framework to nuclear science and engineering

    International Nuclear Information System (INIS)

    Mille, Matthew; Su, Lin; Yazici, Birsen; Xu, X. George

    2011-01-01

    Compressive sensing is a 5-year old theory that has already resulted in an extremely large number of publications in the literature and that has the potential to impact every field of engineering and applied science that has to do with data acquisition and processing. This paper introduces the mathematics, presents a simple demonstration of radiation dose reduction in x-ray CT imaging, and discusses potential application in nuclear science and engineering. (author)

  1. Compressive sensing for sparse time-frequency representation of nonstationary signals in the presence of impulsive noise

    Science.gov (United States)

    Orović, Irena; Stanković, Srdjan; Amin, Moeness

    2013-05-01

    A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
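
    The sorting-and-weighting step described above can be sketched generically: samples with the largest magnitudes are treated as likely impulse-noise outliers and discarded, and the surviving samples become the observation set for the subsequent sparse reconstruction. The trimming fraction and test signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def l_statistics_trim(samples, keep_fraction=0.7):
    """Discard the largest-magnitude samples, which are likely impulse-noise outliers.

    Returns the indices of the retained samples and their values; these would serve
    as the observations for the sparse TFR reconstruction.
    """
    order = np.argsort(np.abs(samples))            # sort by magnitude
    n_keep = int(keep_fraction * samples.size)
    kept = np.sort(order[:n_keep])                 # retained observation indices
    return kept, samples[kept]

rng = np.random.default_rng(4)
clean = rng.standard_normal(256)                   # stand-in for ambiguity-domain samples
impulses = np.zeros(256)
impulses[rng.choice(256, 20, replace=False)] = 50 * rng.standard_normal(20)
kept_idx, observations = l_statistics_trim(clean + impulses)
print(kept_idx.size, np.abs(observations).max())   # the huge outliers are gone
```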

  2. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Huichen Yan

    2015-10-01

    Full Text Available Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined nature of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the highly coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.

  3. Single-snapshot DOA estimation by using Compressed Sensing

    Science.gov (United States)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smoothed ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with those of the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
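
    For reference, the single-snapshot ℓ1 (LASSO) approach referred to above is commonly written on a grid of candidate angles as follows, with y the single array snapshot, A the steering dictionary, s the sparse source amplitude vector, and λ a regularization parameter. This is a generic on-grid statement, not the exact formulation of the paper.

```latex
% Single-snapshot, on-grid sparse DOA estimation (generic LASSO form).
% y \in \mathbb{C}^{N}: one array snapshot; A = [a(\theta_1),\dots,a(\theta_G)]:
% steering dictionary over G candidate angles; s: sparse source amplitudes.
\begin{equation}
\hat{s} = \arg\min_{s \in \mathbb{C}^{G}}
          \tfrac{1}{2}\, \| y - A s \|_2^2 + \lambda \| s \|_1 ,
\qquad
\hat{\Theta} = \{ \theta_g : |\hat{s}_g| \text{ is among the largest peaks} \}
\end{equation}
```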

  4. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As the iterations increase, IST usually over-smooths the solution and converges prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several state-of-the-art CS techniques.

  5. Study on the effects of sample selection on spectral reflectance reconstruction based on the algorithm of compressive sensing

    International Nuclear Information System (INIS)

    Zhang, Leihong; Liang, Dong

    2016-01-01

    To address the limited efficiency and precision of spectral reflectance reconstruction, this paper selects different training samples for reconstructing spectral reflectance and provides a new spectral reflectance reconstruction method based on the compressive sensing algorithm. Four matte color cards with different numbers of color patches, namely the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone coated-paper spot color card, and the Munsell color card, are chosen as training samples; the spectral image is reconstructed with the compressive sensing, pseudo-inverse, and Wiener algorithms, and the results are compared. These methods of spectral reconstruction are evaluated by root mean square error and color difference accuracy. The experiments show that, under the same reconstruction conditions, the cumulative contribution rate and color difference of the Munsell color card are better than those of the other three color cards, and that the accuracy of the spectral reconstruction is affected by the number of colors in the training sample. This indicates that the uniformity and representativeness of the training sample selection are of key importance for reconstruction. In this paper, the influence of the sample selection on the spectral image reconstruction is studied. The precision of the spectral reconstruction based on the compressive sensing algorithm is higher than that of the traditional spectral reconstruction algorithms. The MATLAB simulation results show that spectral reconstruction precision and efficiency are affected by the number of colors in the training sample. (paper)

  6. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    Directory of Open Access Journals (Sweden)

    Yuri Álvarez López

    2017-01-01

    Full Text Available One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in the spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  7. Multimode waveguide speckle patterns for compressive sensing.

    Science.gov (United States)

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.

  8. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  9. Accelerated Compressed Sensing Based CT Image Reconstruction

    Directory of Open Access Journals (Sweden)

    SayedMasoud Hashemi

    2015-01-01

    Full Text Available In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  10. PROMISE: parallel-imaging and compressed-sensing reconstruction of multicontrast imaging using SharablE information.

    Science.gov (United States)

    Gong, Enhao; Huang, Feng; Ying, Kui; Wu, Wenchuan; Wang, Shi; Yuan, Chun

    2015-02-01

    A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. The multicontrast scheme requires a long scanning time. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans. However, there are several unsolved problems in existing methods. The goal of this work is to improve existing CS-PPI methods for multicontrast imaging, especially for two-dimensional imaging. If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. It is proposed in this study to use manifold sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, which were extracted from previously reconstructed images, were applied to enhance the following reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting manifold sharable information. Moreover, the advantage of PROMISE persists in the presence of interscan motion. Using the sharable information among multicontrast images can enhance CS-PPI with tolerance to motion. © 2014 Wiley Periodicals, Inc.

  11. Combating Impairments in Multi-carrier Systems: A Compressed Sensing Approach

    KAUST Repository

    Al-Shuhail, Shamael

    2015-05-01

    Multi-carrier systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and keep up with the capacity/rate demands. Compressed sensing (CS) is one such tool that allows recovering any sparse signal, requiring only a few measurements in a domain that is incoherent with the domain of sparsity. Almost all signals of interest have some degree of sparsity, and in this work we utilize the sparsity of impairments in orthogonal frequency division multiplexing (OFDM) and its variants (i.e., orthogonal frequency division multiple access (OFDMA) and single-carrier frequency-division multiple access (SC-FDMA)) to combat them using CS. We start with the problem of peak-to-average power ratio (PAPR) reduction in OFDM. OFDM signals suffer from high PAPR, and clipping is the simplest PAPR reduction scheme. However, clipping introduces in-band distortions that compromise performance and hence need to be mitigated at the receiver. Due to the high PAPR nature of the OFDM signal, only a few samples are clipped, and these clipping distortions can be recovered at the receiver by employing CS. We then extend the proposed clipping recovery scheme to an interleaved OFDMA system. Interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions. In this work, we prove that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a CS system that recovers the clipping distortions on each user. Finally, we address the problem of narrowband interference (NBI) in SC-FDMA. Unlike OFDM and OFDMA systems, SC-FDMA does not suffer from high PAPR, but (as the data is encoded in the time domain) is seriously vulnerable to information loss owing to NBI. Utilizing the sparse nature of NBI (in the frequency domain), we combat its effect on the SC-FDMA system using CS recovery.
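
    The clipping-recovery idea above rests on the premise that, because clipping events are rare, the clipping distortion is sparse in the time domain. The sketch below illustrates only that premise on a synthetic OFDM symbol; the subcarrier count, constellation, and clipping threshold are assumptions, and the receiver-side CS recovery step is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 256                                   # number of subcarriers (assumption)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk, norm="ortho")       # time-domain OFDM symbol

# Clip the envelope at a threshold relative to the RMS amplitude.
clip_level = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.abs(x)
x_clipped = np.where(mag > clip_level, x / np.maximum(mag, 1e-12) * clip_level, x)

d = x_clipped - x                         # clipping distortion
support = np.count_nonzero(np.abs(d) > 1e-12)
print(f"clipped samples: {support} of {N}")   # only a small fraction is hit, so d is sparse
```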

  12. Feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory.

    Science.gov (United States)

    Wang, Haoyu; Miao, Yanwei; Zhou, Kun; Yu, Yanming; Bao, Shanglian; He, Qiang; Dai, Yongming; Xuan, Stephanie Y; Tarabishy, Bisher; Ye, Yongquan; Hu, Jiani

    2010-09-01

    To investigate the feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory. Two experiments were designed to investigate the feasibility of using reference image based compressed sensing (RICS) technique in DCE-MRI of the breast. The first experiment examined the capability of RICS to faithfully reconstruct uptake curves using undersampled data sets extracted from fully sampled clinical breast DCE-MRI data. An average approach and an approach using motion estimation and motion compensation (ME/MC) were implemented to obtain reference images and to evaluate their efficacy in reducing motion related effects. The second experiment, an in vitro phantom study, tested the feasibility of RICS for improving temporal resolution without degrading the spatial resolution. For the uptake-curve reconstruction experiment, there was a high correlation between uptake curves reconstructed from fully sampled data by Fourier transform and from undersampled data by RICS, indicating high similarity between them. The mean Pearson correlation coefficients for RICS with the ME/MC approach and RICS with the average approach were 0.977 +/- 0.023 and 0.953 +/- 0.031, respectively. The comparisons of final reconstruction results between RICS with the average approach and RICS with the ME/MC approach suggested that the latter was superior to the former in reducing motion related effects. For the in vitro experiment, compared to the fully sampled method, RICS improved the temporal resolution by an acceleration factor of 10 without degrading the spatial resolution. The preliminary study demonstrates the feasibility of RICS for faithfully reconstructing uptake curves and improving temporal resolution of breast DCE-MRI without degrading the spatial resolution.

  13. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The large volume of a faulty roller bearing's vibration data is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
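
    The peak-selecting down-sampling step described above can be sketched generically: each time segment is represented by its largest-magnitude sample so that impulsive fault signatures survive the rate reduction. The segment length and synthetic vibration signal below are illustrative assumptions; the subsequent compressive sensing recovery via orthogonal matching pursuit is not shown.

```python
import numpy as np

def peak_downsample(signal, seg_len):
    """Represent each time segment by its largest-magnitude sample (sign preserved)."""
    n_seg = len(signal) // seg_len
    segments = signal[: n_seg * seg_len].reshape(n_seg, seg_len)
    idx = np.argmax(np.abs(segments), axis=1)
    return segments[np.arange(n_seg), idx]

# Toy vibration signal: periodic fault impulses plus noise, sampled at 12 kHz (assumed).
rng = np.random.default_rng(6)
fs, dur = 12_000, 1.0
t = np.arange(int(fs * dur)) / fs
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 0.01) < 0.0005)   # ~100 Hz fault rate
vibration = impulses + 0.1 * rng.standard_normal(t.size)

reduced = peak_downsample(vibration, seg_len=16)   # 16x fewer samples, peaks preserved
print(vibration.size, reduced.size)
```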

  14. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The large volume of a faulty roller bearing's vibration data is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  15. Development of a compressive sampling hyperspectral imager prototype

    Science.gov (United States)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the possibility of sampling signals at a lower rate than required by traditional sampling theory. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the other hand, the main CS disadvantage is the intensive off-line data processing necessary to obtain the desired source estimate. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and 2D detector arrays with high frame rates. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".

  16. Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ri-Qi Su

    2014-07-01

    Full Text Available We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network back to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, network connections and all coupling functions, as well as their weights. The working and efficiency of the method are illustrated by using networks of non-identical FitzHugh–Nagumo neurons with randomly-distributed coupling weights.

  17. Compressive sensing of high betweenness centrality nodes in networks

    Science.gov (United States)

    Mahyar, Hamidreza; Hasheminezhad, Rouzbeh; Ghalebi K., Elahe; Nazemian, Ali; Grosu, Radu; Movaghar, Ali; Rabiee, Hamid R.

    2018-05-01

    Betweenness centrality is a prominent centrality measure expressing the importance of a node within a network in terms of the fraction of shortest paths passing through that node. Nodes with high betweenness centrality have significant impacts on the spread of influence and ideas in social networks, the user activity in mobile phone networks, the contagion process in biological networks, and the bottlenecks in communication networks. Thus, identifying the k highest betweenness centrality nodes in a network is of great interest in many applications. In this paper, we introduce CS-HiBet, a new method to efficiently detect the top-k betweenness centrality nodes in networks, using compressive sensing. CS-HiBet can perform as a distributed algorithm by using only the local information at each node. Hence, it is applicable to large real-world and unknown networks in which global approaches are usually unrealizable. The performance of the proposed method is evaluated by extensive simulations on several synthetic and real-world networks. The experimental results demonstrate that CS-HiBet outperforms the best existing methods with notable improvements.

  18. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Science.gov (United States)

    Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division modulation (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel causes severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by very few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without reporting the posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without incurring additional computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimates that arises from observation noise or correlation among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods. PMID:24983012
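
    To make pilot-based sparse channel estimation concrete, here is a minimal numpy sketch (a simplification of generic OMP-based SCE, not the Bayesian method proposed in the paper; the FFT size, pilot spacing, channel length, and noise level are illustrative): a few dominant taps are recovered from pilot subcarriers with a plain OMP loop.

    import numpy as np

    rng = np.random.default_rng(1)
    N, L, K = 256, 64, 4                   # FFT size, channel length, dominant taps

    h = np.zeros(L, dtype=complex)         # sparse channel impulse response
    taps = rng.choice(L, K, replace=False)
    h[taps] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

    pilots = np.arange(0, N, 8)            # evenly spaced pilot subcarriers
    F = np.exp(-2j * np.pi * np.outer(pilots, np.arange(L)) / N)   # partial DFT
    x_p = np.ones(len(pilots))             # known pilot symbols (all ones)
    A = np.diag(x_p) @ F                   # measurement matrix seen by the receiver
    noise = 0.01 * (rng.standard_normal(len(pilots)) + 1j * rng.standard_normal(len(pilots)))
    y = A @ h + noise                      # received pilot observations

    # Plain orthogonal matching pursuit over the pilot observations.
    support, r = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef

    h_hat = np.zeros(L, dtype=complex)
    h_hat[support] = coef
    print(sorted(support), sorted(taps.tolist()))   # estimated vs. true tap positions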

  19. Accelerated two-dimensional cine DENSE cardiovascular magnetic resonance using compressed sensing and parallel imaging.

    Science.gov (United States)

    Chen, Xiao; Yang, Yang; Cai, Xiaoying; Auger, Daniel A; Meyer, Craig H; Salerno, Michael; Epstein, Frederick H

    2016-06-14

    Cine Displacement Encoding with Stimulated Echoes (DENSE) provides accurate quantitative imaging of cardiac mechanics with rapid displacement and strain analysis; however, image acquisition times are relatively long. Compressed sensing (CS) with parallel imaging (PI) can generally provide high-quality images recovered from data sampled below the Nyquist rate. The purposes of the present study were to develop CS-PI-accelerated acquisition and reconstruction methods for cine DENSE, to assess their accuracy for cardiac imaging using retrospective undersampling, and to demonstrate their feasibility for prospectively-accelerated 2D cine DENSE imaging in a single breathhold. An accelerated cine DENSE sequence with variable-density spiral k-space sampling and golden angle rotations through time was implemented. A CS method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was combined with sensitivity encoding (SENSE) for the reconstruction of under-sampled multi-coil spiral data. Seven healthy volunteers and 7 patients underwent 2D cine DENSE imaging with fully-sampled acquisitions (14-26 heartbeats in duration) and with prospectively rate-2 and rate-4 accelerated acquisitions (14 and 8 heartbeats in duration). Retrospectively- and prospectively-accelerated data were reconstructed using BLOSM-SENSE and SENSE. Image quality of retrospectively-undersampled data was quantified using the relative root mean square error (rRMSE). Myocardial displacement and circumferential strain were computed for functional assessment, and linear correlation and Bland-Altman analyses were used to compare accelerated acquisitions to fully-sampled reference datasets. For retrospectively-undersampled data, BLOSM-SENSE provided similar or lower rRMSE at rate-2 and lower rRMSE at rate-4 acceleration compared to SENSE. Prospectively-accelerated cine DENSE provided good image quality and expected values of displacement and strain. BLOSM-SENSE-accelerated spiral cine DENSE imaging with 2D displacement encoding can be performed in a single breathhold.

  20. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-01-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced. (paper)

  1. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    Science.gov (United States)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
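
    The abstracts above do not spell the solver out, but the general flavour of a matching-pursuit-type approach to inverse planning can be sketched as follows: dose is modeled as a linear map from candidate seed weights to voxel doses, and candidates are greedily selected to reduce the residual between the prescribed and the currently delivered dose. Everything below (the dose-influence matrix, prescription, and selection rule) is a hypothetical illustration, not the published algorithm.

    import numpy as np

    rng = np.random.default_rng(2)
    n_voxels, n_candidates, max_seeds = 500, 200, 12

    # Hypothetical dose-influence matrix: column j holds the dose pattern of candidate j.
    D = np.abs(rng.standard_normal((n_voxels, n_candidates)))
    prescription = np.full(n_voxels, 1.0)      # desired dose in every voxel

    selected, residual = [], prescription.copy()
    for _ in range(max_seeds):
        j = int(np.argmax(D.T @ residual))     # candidate best aligned with the dose deficit
        if j in selected:
            break
        selected.append(j)
        # Non-negative least squares would be more faithful; plain LS keeps the sketch short.
        w, *_ = np.linalg.lstsq(D[:, selected], prescription, rcond=None)
        residual = prescription - D[:, selected] @ w

    print("candidates chosen:", selected)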

  2. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers have been based on compressed Haar-like features, and how to compress other, more powerful high-dimensional features is worth investigating. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and precision.
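
    The compression step itself is just a random projection. The sketch below (with illustrative dimensions and a placeholder feature vector, not the paper's tracker) projects a high-dimensional block-difference feature onto a much lower-dimensional compressed feature with a sparse random Gaussian measurement matrix.

    import numpy as np

    rng = np.random.default_rng(3)
    d_high, d_low, density = 10000, 50, 0.01   # hypothetical dimensions and sparsity

    # Sparse random Gaussian measurement matrix: most entries are exactly zero.
    mask = rng.random((d_low, d_high)) < density
    R = np.where(mask, rng.standard_normal((d_low, d_high)), 0.0)

    feature = rng.random(d_high)               # stand-in for a normalized block difference feature
    compressed = R @ feature                   # 50-dimensional CNBD-style feature
    print(compressed.shape)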

  3. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    Science.gov (United States)

    Ermeydan, Esra Şengün; ćankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a Cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In a single-pixel imaging scheme, N = r · c samples must be taken for an r × c pixel image. CS is a popular technique asserting that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires. CS is therefore a good candidate for solving the slow data acquisition problem in Terahertz (THz) single-pixel imaging. However, changing the mask for each measurement is challenging, since there are no commercial Spatial Light Modulators (SLMs) for the THz band yet; circular masks are therefore suggested, so that shifting by one or two columns is enough to change the mask between measurements. Within the framework of this study, the CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, while CS reduces acquisition time and energy since it allows the image to be reconstructed from fewer samples.
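
    The practical appeal of cyclic masks is that every measurement pattern is a shift of the same strip. The toy code below (a generic circulant construction, not the paper's exact cyclic-S design) builds a measurement matrix by cyclically shifting one pseudo-random binary base row, so a physical mask only has to slide by one position between measurements.

    import numpy as np

    rng = np.random.default_rng(4)
    n_pixels = 63                               # e.g., a 9 x 7 image, flattened
    base_row = rng.integers(0, 2, n_pixels)     # stand-in for one cyclic-S row

    # Circulant matrix: row k is the base row cyclically shifted by k positions.
    masks = np.stack([np.roll(base_row, k) for k in range(n_pixels)])

    m = n_pixels // 2                           # keep roughly 50% of the rows (compression)
    A = masks[:m]
    print(A.shape)                              # (31, 63) measurement matrix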

  4. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second step is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the ground, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image carries the complexity. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    Science.gov (United States)

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before starting each data gathering epoch, thus ignoring the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both an ocean temperature dataset and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.

  6. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    Directory of Open Access Journals (Sweden)

    Lei Yu

    2016-02-01

    Full Text Available Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, a large amount of data needs to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also allows the reconstructed accelerometer signals to be used for quantitative assessment without any loss of useful information.

  7. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    Science.gov (United States)

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-01-01

    Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, a large amount of data needs to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also allows the reconstructed accelerometer signals to be used for quantitative assessment without any loss of useful information. PMID:26861337
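
    To give a rough sense of the data reduction reported above, the snippet below compresses one window of accelerometer samples to under a third of its length with a sparse binary sensing matrix. The window length, compression ratio, sensing matrix, and the synthetic motion signal are placeholders of my own, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 512                                      # samples per accelerometer window
    M = N // 3 - 1                               # compressed length < 1/3 of the raw length

    # Sparse binary sensing matrix: a few ones per row, cheap to apply on the node.
    Phi = np.zeros((M, N))
    for row in Phi:
        row[rng.choice(N, 8, replace=False)] = 1.0

    acc = np.sin(2 * np.pi * 1.2 * np.arange(N) / 50.0)   # stand-in handshake motion
    acc += 0.05 * rng.standard_normal(N)

    y = Phi @ acc                                # what the node actually transmits
    print(N, "raw samples ->", len(y), "transmitted values")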

  8. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    Science.gov (United States)

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with δ = n/N and ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to each of four different sets, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
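
    A single point of such a phase diagram can be probed in a few lines; the sketch below (a crude illustration with arbitrary dimensions, not the authors' experimental framework) draws a Gaussian matrix, generates a k-sparse vector, and solves basis pursuit as a linear program.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(5)
    N, n, k = 200, 100, 20                 # ambient dimension, measurements, sparsity

    A = rng.standard_normal((n, N)) / np.sqrt(n)
    x0 = np.zeros(N)
    x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    y = A @ x0

    # Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v and u, v >= 0.
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x_hat = res.x[:N] - res.x[N:]

    print("recovered:", np.allclose(x_hat, x0, atol=1e-4))   # here (delta, rho) = (0.5, 0.2)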

  9. Sparse-View Ultrasound Diffraction Tomography Using Compressed Sensing with Nonuniform FFT

    Directory of Open Access Journals (Sweden)

    Shaoyan Hua

    2014-01-01

    Full Text Available Accurate reconstruction of the object from sparse-view sampling data is an appealing problem for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomical structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively solved by conjugate gradient with the nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly presented with only 16 views. Compared to the interpolation and multiband methods, the proposed method can provide higher resolution and lower artifacts with the same number of views. The robustness to noise and the computational complexity are also discussed.

  10. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    International Nuclear Information System (INIS)

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D

    2011-01-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
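
    The key saving in the method is deciding which y-f locations are dynamic before running the reconstruction. The snippet below is only a guess at how such an energy-based labeling could look (the threshold and the stand-in y-f data are made up), not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(6)
    n_y, n_f = 128, 32                      # spatial positions x temporal frequencies

    # Stand-in for a sliding-window y-f reconstruction: mostly static background
    # plus a band of positions with genuine temporal dynamics (e.g., the heart region).
    yf = 0.05 * rng.random((n_y, n_f))
    yf[50:70, 1:] += rng.random((20, n_f - 1))

    # Energy outside the DC frequency bin indicates dynamic content at that position.
    dynamic_energy = np.sum(yf[:, 1:] ** 2, axis=1)
    is_dynamic = dynamic_energy > 0.2 * dynamic_energy.max()   # ad hoc threshold

    print(int(is_dynamic.sum()), "of", n_y, "positions labeled dynamic")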

  11. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  12. Statistical Prior Aided Separate Compressed Image Sensing for Green Internet of Multimedia Things

    Directory of Open Access Journals (Sweden)

    Shaohua Wu

    2017-01-01

    Full Text Available In this paper, we propose an image compression and reconstruction strategy under the compressed sensing (CS) framework to enable green computation and communication for the Internet of Multimedia Things (IoMT). The core idea is to explore the statistics of image representations in the wavelet domain to aid the design of the reconstruction method. Specifically, the energy distribution of natural images in the wavelet domain is well characterized by an exponential decay model and then used in a two-step separate image reconstruction method, by which the row-wise (or column-wise) intermediates and column-wise (or row-wise) final results are reconstructed sequentially. Both the intermediates and the final results are constrained to conform with the statistical prior by using a weight matrix. Two recovery strategies with different levels of complexity, namely, direct recovery with a fixed weight matrix (DR-FM) and iterative recovery with a refined weight matrix (IR-RM), are designed to obtain different levels of recovery quality. Extensive simulations show that both DR-FM and IR-RM achieve much better image reconstruction quality with much faster recovery speed than traditional methods.

  13. Performance characterization of compressed sensing positron emission tomography detectors and data acquisition system

    Science.gov (United States)

    Chang, Chen-Ming; Grant, Alexander M.; Lee, Brian J.; Kim, Ealgoo; Hong, KeyJo; Levin, Craig S.

    2015-08-01

    In the field of information theory, compressed sensing (CS) has been developed to recover signals at a lower sampling rate than suggested by the Nyquist-Shannon theorem, provided the signals have a sparse representation with respect to some basis. CS has recently emerged as a method to multiplex PET detector readouts thanks to the sparse nature of 511 keV photon interactions in a typical PET study. We have shown in our previous numerical studies that, at the same multiplexing ratio, CS achieves a higher signal-to-noise ratio (SNR) compared to Anger and cross-strip multiplexing. In addition, unlike Anger logic, multiplexing by CS preserves the capability to resolve multi-hit events, in which multiple pixels are triggered within the resolving time of the detector. In this work, we characterized the time, energy and intrinsic spatial resolution of two CS detectors and a data acquisition system we have developed for a PET insert system for simultaneous PET/MRI. The CS detector comprises a 2 × 4 mosaic of 4 × 4 arrays of 3.2 × 3.2 × 20 mm³ lutetium-yttrium orthosilicate crystals coupled one-to-one to eight 4 × 4 silicon photomultiplier arrays. The total number of 128 pixels is multiplexed down to 16 readout channels by CS. The energy, coincidence time and intrinsic spatial resolution achieved by the two CS detectors were 15.4 ± 0.1% FWHM at 511 keV, 4.5 ns FWHM and 2.3 mm FWHM, respectively. A series of experiments was conducted to measure the sources of time jitter that limit the time resolution of the current system, which provides guidance for potential system design improvements. These findings demonstrate the feasibility of compressed sensing as a promising multiplexing method for PET detectors.
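
    The multiplexing idea can be seen with plain linear algebra: the 128 pixel signals are mixed down to 16 channels by a fixed sensing matrix, and because only one or two pixels fire per event, the hit pattern remains recoverable. The sketch below only illustrates the encoding side, with made-up amplitudes and a random binary mixing matrix rather than the detector's actual readout network.

    import numpy as np

    rng = np.random.default_rng(8)
    n_pixels, n_channels = 128, 16

    # Fixed binary mixing (sensing) matrix, wired into the readout electronics.
    A = rng.integers(0, 2, size=(n_channels, n_pixels)).astype(float)

    # A single event: one or two crystals collect light (a sparse pixel vector).
    s = np.zeros(n_pixels)
    s[rng.choice(n_pixels, 2, replace=False)] = [511.0, 120.0]   # keV-scale amplitudes

    y = A @ s                               # the 16 values actually digitized per event
    print(y.shape)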

  14. XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing.

    Science.gov (United States)

    Feng, Li; Axel, Leon; Chandarana, Hersh; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo

    2016-02-01

    To develop a novel framework for free-breathing MRI called XD-GRASP, which sorts dynamic data into extra motion-state dimensions using the self-navigation properties of radial imaging and reconstructs the multidimensional dataset using compressed sensing. Radial k-space data are continuously acquired using the golden-angle sampling scheme and sorted into multiple motion-states based on respiratory and/or cardiac motion signals derived directly from the data. The resulting undersampled multidimensional dataset is reconstructed using a compressed sensing approach that exploits sparsity along the new dynamic dimensions. The performance of XD-GRASP is demonstrated for free-breathing three-dimensional (3D) abdominal imaging, two-dimensional (2D) cardiac cine imaging and 3D dynamic contrast-enhanced (DCE) MRI of the liver, comparing against reconstructions without motion sorting in both healthy volunteers and patients. XD-GRASP separates respiratory motion from cardiac motion in cardiac imaging, and respiratory motion from contrast enhancement in liver DCE-MRI, which improves image quality and reduces motion-blurring artifacts. XD-GRASP represents a new use of sparsity for motion compensation and a novel way to handle motions in the context of a continuous acquisition paradigm. Instead of removing or correcting motion, extra motion-state dimensions are reconstructed, which improves image quality and also offers new physiological information of potential clinical value. © 2015 Wiley Periodicals, Inc.

  15. Compressed sensing electron tomography of needle-shaped biological specimens – Potential for improved reconstruction fidelity with reduced dose

    Energy Technology Data Exchange (ETDEWEB)

    Saghi, Zineb, E-mail: saghizineb@gmail.com [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Divitini, Giorgio [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Winter, Benjamin [Center for Nanoanalysis and Electron Microscopy (CENEM), Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstraße 6, 91058 Erlangen (Germany); Leary, Rowan [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Spiecker, Erdmann [Center for Nanoanalysis and Electron Microscopy (CENEM), Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstraße 6, 91058 Erlangen (Germany); Ducati, Caterina [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Midgley, Paul A., E-mail: pam33@cam.ac.uk [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom)

    2016-01-15

    Electron tomography is an invaluable method for 3D cellular imaging. The technique is, however, limited by the specimen geometry, with a loss of resolution due to a restricted tilt range, an increase in specimen thickness with tilt, and a resultant need for subjective and time-consuming manual segmentation. Here we show that 3D reconstructions of needle-shaped biological samples exhibit isotropic resolution, facilitating improved automated segmentation and feature detection. By using scanning transmission electron tomography, with small probe convergence angles, high spatial resolution is maintained over large depths of field and across the tilt range. Moreover, the application of compressed sensing methods to the needle data demonstrates how high fidelity reconstructions may be achieved with far fewer images (and thus greatly reduced dose) than needed by conventional methods. These findings open the door to high fidelity electron tomography over critically relevant length-scales, filling an important gap between existing 3D cellular imaging techniques. - Highlights: • On-axis electron tomography of a needle-shaped biological sample is presented. • A reconstruction with isotropic resolution is achieved. • Compressed sensing methods are compared to conventional reconstruction algorithms. • High fidelity reconstructions are achieved with greatly undersampled datasets.

  16. Compressed sensing electron tomography of needle-shaped biological specimens – Potential for improved reconstruction fidelity with reduced dose

    International Nuclear Information System (INIS)

    Saghi, Zineb; Divitini, Giorgio; Winter, Benjamin; Leary, Rowan; Spiecker, Erdmann; Ducati, Caterina; Midgley, Paul A.

    2016-01-01

    Electron tomography is an invaluable method for 3D cellular imaging. The technique is, however, limited by the specimen geometry, with a loss of resolution due to a restricted tilt range, an increase in specimen thickness with tilt, and a resultant need for subjective and time-consuming manual segmentation. Here we show that 3D reconstructions of needle-shaped biological samples exhibit isotropic resolution, facilitating improved automated segmentation and feature detection. By using scanning transmission electron tomography, with small probe convergence angles, high spatial resolution is maintained over large depths of field and across the tilt range. Moreover, the application of compressed sensing methods to the needle data demonstrates how high fidelity reconstructions may be achieved with far fewer images (and thus greatly reduced dose) than needed by conventional methods. These findings open the door to high fidelity electron tomography over critically relevant length-scales, filling an important gap between existing 3D cellular imaging techniques. - Highlights: • On-axis electron tomography of a needle-shaped biological sample is presented. • A reconstruction with isotropic resolution is achieved. • Compressed sensing methods are compared to conventional reconstruction algorithms. • High fidelity reconstructions are achieved with greatly undersampled datasets.

  17. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    Science.gov (United States)

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a huge amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. First, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than the state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
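
    Mutual coherence is cheap to compute, so a brute-force stand-in for the matrix construction step is easy to sketch. The random search below only illustrates the selection criterion (smallest mutual coherence among sparse binary candidates) and is not the paper's minimal mutual coherence pursuit algorithm; all sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(9)
    M, N, ones_per_col, trials = 64, 256, 4, 200

    def coherence(A):
        G = A / np.linalg.norm(A, axis=0)    # column-normalize
        gram = np.abs(G.T @ G)
        np.fill_diagonal(gram, 0.0)
        return gram.max()                    # largest off-diagonal correlation

    best, best_mu = None, np.inf
    for _ in range(trials):
        A = np.zeros((M, N))
        for j in range(N):                   # a few ones per column keeps on-node encoding cheap
            A[rng.choice(M, ones_per_col, replace=False), j] = 1.0
        mu = coherence(A)
        if mu < best_mu:
            best, best_mu = A, mu

    print("best coherence found:", round(best_mu, 3))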

  18. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation

    Directory of Open Access Journals (Sweden)

    Gang Wang

    2018-05-01

    Full Text Available With the application of the coal mine Internet of Things (IoT), mobile measurement devices, such as intelligent mine lamps, have greatly increased the volume of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking the gas data in the mobile measurement data as an example, two network models for the transmission of the gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.

  19. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation.

    Science.gov (United States)

    Wang, Gang; Zhao, Zhikai; Ning, Yongjie

    2018-05-28

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices, such as intelligent mine lamps, have greatly increased the volume of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking the gas data in the mobile measurement data as an example, two network models for the transmission of the gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.

  20. Non-Destructive Detection of Wire Rope Discontinuities from Residual Magnetic Field Images Using the Hilbert-Huang Transform and Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Juwei Zhang

    2017-03-01

    Full Text Available Electromagnetic methods are commonly employed to detect wire rope discontinuities. However, determining the residual strength of wire rope based on the quantitative recognition of discontinuities remains problematic. We have designed a prototype device based on the residual magnetic field (RMF) of ferromagnetic materials, which overcomes the disadvantages associated with in-service inspections, such as large volume, inconvenient operation, low precision, and poor portability, by providing a relatively small and lightweight device with improved detection precision. A novel filtering system consisting of the Hilbert-Huang transform and compressed sensing wavelet filtering is presented. Digital image processing was applied to achieve the localization and segmentation of defect RMF images. The statistical texture and invariant moment characteristics of the defect images were extracted as the input of a radial basis function neural network. Experimental results show that the RMF device can detect defects in various types of wire rope and, by accommodating a high lift-off distance, prolongs the service life of the test equipment by reducing the friction between the detection device and the wire rope.

  1. Informational analysis for compressive sampling in radar imaging.

    Science.gov (United States)

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling or compressed sensing (CS) works on the assumption that the underlying signal is sparse or compressible, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.

  2. A Novel Image Authentication with Tamper Localization and Self-Recovery in Encrypted Domain Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2018-01-01

    Full Text Available This paper proposes a novel tamper detection, localization, and recovery scheme for encrypted images using the Discrete Wavelet Transform (DWT) and Compressive Sensing (CS). The original image is first transformed into the DWT domain and divided into an important part, the low-frequency part, and an unimportant part, the high-frequency part. Because the low-frequency part contains the main information of the image, traditional chaotic encryption is employed for it. The high-frequency part is then encrypted with CS to vacate space for the watermark. The scheme takes the processed original image content as the watermark, from which the characteristic digest values are generated. Compared with existing image authentication algorithms, the proposed scheme can realize not only tamper detection and localization but also tamper recovery. Moreover, tamper recovery is based on block division, and the recovery accuracy varies with the content that has been tampered with. If either the watermark or the low-frequency part is tampered with, the recovery accuracy is 100%. The experimental results show that the scheme can not only distinguish the type of tampering and find the tampered blocks but also recover the main information of the original image. With great robustness and security, the scheme can adequately meet the need for secure image transmission under unreliable conditions.
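
    The first step, separating an image into an important low-frequency part and a less important high-frequency part, is a one-level wavelet decomposition. The snippet below performs a single 2-D Haar step with plain numpy as an illustration; the paper's actual choice of wavelet and decomposition depth is not specified here.

    import numpy as np

    def haar_split(img):
        """One-level 2-D Haar transform: returns the low-frequency and high-frequency parts."""
        a = (img[:, 0::2] + img[:, 1::2]) / 2.0    # horizontal averages
        d = (img[:, 0::2] - img[:, 1::2]) / 2.0    # horizontal differences
        ll = (a[0::2, :] + a[1::2, :]) / 2.0       # low-low band: the "important" part
        lh = (a[0::2, :] - a[1::2, :]) / 2.0
        hl = (d[0::2, :] + d[1::2, :]) / 2.0
        hh = (d[0::2, :] - d[1::2, :]) / 2.0
        return ll, (lh, hl, hh)

    img = np.arange(64, dtype=float).reshape(8, 8)   # tiny placeholder image (even dimensions)
    low, high = haar_split(img)
    print(low.shape, [band.shape for band in high])  # (4, 4) and three (4, 4) detail bands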

  3. Peeling Decoding of LDPC Codes with Applications in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijun Zeng

    2016-01-01

    Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis in the context of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: below this threshold the recovery algorithm succeeds, whereas above it the algorithm fails. Simulation results are provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding, which focuses on the failure probability of the recovery algorithm, our approach gives an accurate evolution of performance for different measurement matrix parameters and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
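
    For intuition, a bare-bones peeling recovery loop is shown below under simplifying assumptions of my own (a sparse binary measurement matrix and a nonnegative sparse signal, so that a zero measurement certifies zeros and a degree-one measurement reveals one value); it is a toy decoder, not the analytical framework of the paper.

    import numpy as np

    rng = np.random.default_rng(10)
    M, N, k, d = 120, 300, 15, 3            # measurements, signal length, sparsity, ones per column

    A = np.zeros((M, N), dtype=int)
    for j in range(N):                      # sparse bipartite graph: d ones per column
        A[rng.choice(M, d, replace=False), j] = 1

    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.random(k) + 0.5   # nonnegative k-sparse signal
    y = A.astype(float) @ x

    x_hat = np.full(N, np.nan)              # nan marks still-unresolved entries
    A_work, y_work = A.copy(), y.copy()
    for _ in range(M + N):
        deg = A_work.sum(axis=1)            # unresolved neighbors of each measurement
        zero_rows = np.where(np.isclose(y_work, 0.0) & (deg > 0))[0]
        one_rows = np.where(deg == 1)[0]
        if len(zero_rows):                  # zero residual => every unresolved neighbor is zero
            x_hat[A_work[zero_rows[0]] == 1] = 0.0
        elif len(one_rows):                 # a degree-one measurement reveals one entry
            col = int(np.where(A_work[one_rows[0]] == 1)[0][0])
            x_hat[col] = y_work[one_rows[0]]
        else:
            break                           # nothing left to peel
        solved = ~np.isnan(x_hat)
        y_work = y - A[:, solved].astype(float) @ x_hat[solved]   # peel off resolved entries
        A_work = A.copy()
        A_work[:, solved] = 0

    print("recovered exactly:", np.allclose(np.nan_to_num(x_hat), x))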

  4. Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases

    DEFF Research Database (Denmark)

    Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.

    2017-01-01

    The assumption that the signal is sparse in some basis is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal using sparsifying bases in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple estimates of a 2-D signal (image) from compressive measurements utilizing multiple sparsifying bases as well as the fact that images usually have a sparse gradient.

  5. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Energy Technology Data Exchange (ETDEWEB)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch [University of Applied Sciences and Arts Northwestern Switzerland FHNW, 5210 Windisch (Switzerland)

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  6. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Science.gov (United States)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  7. Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography.

    Science.gov (United States)

    Wenger, S; Ament, M; Guthe, S; Lorenz, D; Tillmann, A; Weiskopf, D; Magnor, M

    2012-12-01

    The 3D visualization of astronomical nebulae is a challenging problem since only a single 2D projection is observable from our fixed vantage point on Earth. We attempt to generate plausible and realistic looking volumetric visualizations via a tomographic approach that exploits the spherical or axial symmetry prevalent in some relevant types of nebulae. Different types of symmetry can be implemented by using different randomized distributions of virtual cameras. Our approach is based on an iterative compressed sensing reconstruction algorithm that we extend with support for position-dependent volumetric regularization and linear equality constraints. We present a distributed multi-GPU implementation that is capable of reconstructing high-resolution datasets from arbitrary projections. Its robustness and scalability are demonstrated for astronomical imagery from the Hubble Space Telescope. The resulting volumetric data is visualized using direct volume rendering. Compared to previous approaches, our method preserves a much higher amount of detail and visual variety in the 3D visualization, especially for objects with only approximate symmetry.

  8. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-07-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.

  9. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.; Al-Naffouri, Tareq Y.

    2012-01-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.

  10. A reweighted ℓ1-minimization based compressed sensing for the spectral estimation of heart rate variability using the unevenly sampled data.

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    Full Text Available In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ this novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessment. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage of difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data accordingly in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from incomplete RR measurements.

  11. Enhanced compressed sensing for visual target tracking in wireless visual sensor networks

    Science.gov (United States)

    Qiang, Guo

    2017-11-01

    Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the limited resources of the sensor nodes, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single one of these conflicting optimization criteria. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes a compelling trade-off among energy dissipation for wireless transmission, bandwidth, and storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework with regard to energy and speed under the resource limitations of a visual sensor node.

  12. Quantitative Inspection of Remanence of Broken Wire Rope Based on Compressed Sensing.

    Science.gov (United States)

    Zhang, Juwei; Tan, Xiaojiang

    2016-08-25

    Most traditional strong magnetic inspection equipment has disadvantages such as big excitation devices, high weight, low detection precision, and inconvenient operation. This paper presents the design of a giant magneto-resistance (GMR) sensor array collection system. The remanence signal is collected to acquire two-dimensional magnetic flux leakage (MFL) data on the surface of wire ropes. Through the use of compressed sensing wavelet filtering (CSWF), the image expression of wire ropes MFL on the surface was obtained. Then this was taken as the input of the designed back propagation (BP) neural network to extract three kinds of MFL image geometry features and seven invariant moments of defect images. Good results were obtained. The experimental results show that nondestructive inspection through the use of remanence has higher accuracy and reliability compared with traditional inspection devices, along with smaller volume, lighter weight and higher precision.

  13. Data Collection Method for Mobile Control Sink Node in Wireless Sensor Network Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ling Yongfa

    2016-01-01

    Full Text Available The paper proposes a mobile control sink node data collection method for wireless sensor networks based on compressive sensing. In this method, the sink node moves along a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path using a quantum genetic algorithm, and thus determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a huge amount of data with balanced energy consumption in the network.

  14. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2014-01-01

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results of various subsurface reflectivity models revealed that solutions computed using the CS based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.

  15. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali

    2014-05-08

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results of various subsurface reflectivity models revealed that solutions computed using the CS based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.

  16. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    Science.gov (United States)

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSNs) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
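
    To make the greedy-pursuit idea concrete, below is a minimal NumPy sketch of plain orthogonal matching pursuit on a synthetic problem. It is illustrative only: the vOMMP described above adds a prior-driven first phase and a matrix-inversion-free QR update, neither of which is reproduced here, and the sparsity level, matrix sizes and random data are assumptions made for the demo.

    import numpy as np

    def omp(Phi, y, k):
        """Recover a k-sparse x from y = Phi @ x by greedy support selection."""
        m, n = Phi.shape
        residual = y.copy()
        support = []
        x_hat = np.zeros(n)
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            j = int(np.argmax(np.abs(Phi.T @ residual)))
            if j not in support:
                support.append(j)
            # Least-squares fit on the current support, then update the residual.
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat[support] = coef
        return x_hat

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 8
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = Phi @ x
        print("exact recovery:", np.allclose(omp(Phi, y, k), x, atol=1e-8))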

  17. Quantitative Inspection of Remanence of Broken Wire Rope Based on Compressed Sensing

    Science.gov (United States)

    Zhang, Juwei; Tan, Xiaojiang

    2016-01-01

    Most traditional strong magnetic inspection equipment has disadvantages such as big excitation devices, high weight, low detection precision, and inconvenient operation. This paper presents the design of a giant magneto-resistance (GMR) sensor array collection system. The remanence signal is collected to acquire two-dimensional magnetic flux leakage (MFL) data on the surface of wire ropes. Through the use of compressed sensing wavelet filtering (CSWF), the image expression of wire ropes MFL on the surface was obtained. Then this was taken as the input of the designed back propagation (BP) neural network to extract three kinds of MFL image geometry features and seven invariant moments of defect images. Good results were obtained. The experimental results show that nondestructive inspection through the use of remanence has higher accuracy and reliability compared with traditional inspection devices, along with smaller volume, lighter weight and higher precision. PMID:27571077

  18. Quantitative Inspection of Remanence of Broken Wire Rope Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Juwei Zhang

    2016-08-01

    Full Text Available Most traditional strong magnetic inspection equipment has disadvantages such as big excitation devices, high weight, low detection precision, and inconvenient operation. This paper presents the design of a giant magneto-resistance (GMR) sensor array collection system. The remanence signal is collected to acquire two-dimensional magnetic flux leakage (MFL) data on the surface of wire ropes. Through the use of compressed sensing wavelet filtering (CSWF), the image expression of wire ropes MFL on the surface was obtained. Then this was taken as the input of the designed back propagation (BP) neural network to extract three kinds of MFL image geometry features and seven invariant moments of defect images. Good results were obtained. The experimental results show that nondestructive inspection through the use of remanence has higher accuracy and reliability compared with traditional inspection devices, along with smaller volume, lighter weight and higher precision.

  19. A compressed sensing X-ray camera with a multilayer architecture

    Science.gov (United States)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  20. Phase Imaging: A Compressive Sensing Approach

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Sebastian; Stevens, Andrew; Browning, Nigel D.; Pohl, Darius; Nielsch, Kornelius; Rellinghaus, Bernd

    2017-07-01

    Since Wolfgang Pauli posed the question in 1933, whether the probability densities |Ψ(r)|² (real-space image) and |Ψ(q)|² (reciprocal-space image) uniquely determine the wave function Ψ(r) [1], the so-called Pauli Problem has sparked numerous methods in all fields of microscopy [2, 3]. Reconstructing the complete wave function Ψ(r) = a(r)e^(-iφ(r)), with the amplitude a(r) and the phase φ(r), from the recorded intensity makes it possible to directly study the electric and magnetic properties of the sample through the phase. In transmission electron microscopy (TEM), electron holography is by far the most established method for phase reconstruction [4]. Because it requires high microscope stability, in addition to the installation of a biprism in the TEM, holography cannot be applied to any microscope straightforwardly. Recently, a phase retrieval approach was proposed using conventional TEM electron diffractive imaging (EDI). Using the SAD aperture as a reciprocal-space constraint, a localized sample structure can be reconstructed from its diffraction pattern and a real-space image using the hybrid input-output algorithm [5]. We present an alternative approach using compressive phase retrieval [6]. Our approach does not require a real-space image. Instead, random complementary pairs of checkerboard masks are cut into a 200 nm Pt foil covering a conventional TEM aperture (cf. Figure 1). Used as the SAD aperture, diffraction patterns are subsequently recorded from the same sample area. Thereby, every mask blocks different parts of gold particles on a carbon support (cf. Figure 2). The compressive sensing problem has the following formulation. First, we note that the complex-valued reciprocal-space wave function is the Fourier transform of the (also complex-valued) real-space wave function, Ψ(q) = F[Ψ(r)], and subsequently the diffraction pattern image is given by |Ψ(q)|² = |F[Ψ(r)]|². We want to find Ψ(r) given a few differently coded diffraction pattern measurements y_n

  1. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network-based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structural insights of optimization-based methods and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ1-norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.
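
    For orientation, the classical ISTA iteration that ISTA-Net unrolls can be written in a few lines of NumPy. The sketch below is not the network itself: the sparsifying transform is taken as the identity, and the step size, regularization weight and toy problem are all illustrative assumptions.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam=0.05, n_iter=500):
        """Plain ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
            x = soft_threshold(x - grad / L, lam / L)
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((64, 128)) / 8.0
        x_true = np.zeros(128)
        x_true[:5] = [1.0, -2.0, 1.5, 0.5, -1.0]
        y = A @ x_true
        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))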

  2. Evaluation of Digital Compressed Sensing for Real-Time Wireless ECG System with Bluetooth low Energy.

    Science.gov (United States)

    Wang, Yishan; Doleschel, Sammy; Wunderlich, Ralf; Heinen, Stefan

    2016-07-01

    In this paper, a wearable and wireless ECG system is first designed with Bluetooth Low Energy (BLE). It can detect 3-lead ECG signals and is completely wireless. Second, digital Compressed Sensing (CS) is implemented to increase the energy efficiency of the wireless ECG sensor. Different sparsifying bases, various compression ratios (CR) and several reconstruction algorithms are simulated and discussed. Finally, the reconstruction is done by the Android application (App) on a smartphone to display the signal in real time. The power efficiency is measured and compared with the system without CS. The optimal sparsifying basis, built from 3-level db4 wavelet decomposition coefficients, a 1-bit Bernoulli random matrix and the most suitable reconstruction algorithm are selected by the simulations and applied on the sensor node and App. The signal is successfully reconstructed and displayed on the smartphone App. The battery life of the sensor node is extended from 55 h to 67 h; that is, the presented wireless ECG system with CS extends the battery life by 22 %. With its compact form and long working time, the system provides a feasible solution for long-term homecare use.

  3. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely expressed. The designs of the sparse matrix and measurement matrix are accomplished by expressing the echo signal sparsely, and subsequently, the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) of the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.

  4. Cyclops: single-pixel imaging lidar system based on compressive sensing

    Science.gov (United States)

    Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.

    2017-11-01

    Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide very accurate, wide-range, high-resolution distance measurements as required for the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although these devices have been commercially available and used for a long time in diverse metrology and ranging applications, their size, mass and power consumption are still far from being suitable and attractive for space exploratory missions. Here, we describe a compact Single-Pixel Imaging LIDAR System that is based on a compressive sensing technique. The application of compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster-scanning and array-based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large-format photodetector arrays. Furthermore, the event of failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range Finder: outputs the average distance between the system and the area of the target under illumination; Attitude Meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D Imager: produces 3D ranged

  5. A secure approach for encrypting and compressing biometric information employing orthogonal code and steganography

    Science.gov (United States)

    Islam, Muhammad F.; Islam, Mohammed N.

    2012-04-01

    The objective of this paper is to develop a novel approach for encryption and compression of biometric information utilizing orthogonal coding and steganography techniques. Multiple biometric signatures are encrypted individually using orthogonal codes and then multiplexed together to form a single image, which is then embedded in a cover image using the proposed steganography technique. The proposed technique employs three least significant bits for this purpose and a secret key is developed to choose one from among these bits to be replaced by the corresponding bit of the biometric image. The proposed technique offers secure transmission of multiple biometric signatures in an identification document which will be protected from unauthorized steganalysis attempt.
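
    The key-driven least-significant-bit idea can be illustrated with a toy NumPy sketch: a pseudo-random key selects one of the three lowest bits of each cover pixel to carry one payload bit. This loosely mirrors the description above; the orthogonal-code encryption and multiplexing of multiple biometric signatures are not reproduced, and all sizes and seeds are hypothetical.

    import numpy as np

    def embed(cover, payload_bits, key_seed):
        """Hide payload_bits in the cover image, one bit per pixel."""
        rng = np.random.default_rng(key_seed)
        stego = cover.copy().ravel()
        # The secret key decides which of the 3 least significant bits is replaced.
        positions = rng.integers(0, 3, size=payload_bits.size)
        for i, (bit, pos) in enumerate(zip(payload_bits, positions)):
            stego[i] = (stego[i] & ~np.uint8(1 << pos)) | np.uint8(bit << pos)
        return stego.reshape(cover.shape)

    def extract(stego, n_bits, key_seed):
        """Recover the payload using the same key-driven bit positions."""
        rng = np.random.default_rng(key_seed)
        positions = rng.integers(0, 3, size=n_bits)
        flat = stego.ravel()
        return np.array([(flat[i] >> pos) & 1 for i, pos in enumerate(positions)],
                        dtype=np.uint8)

    if __name__ == "__main__":
        cover = np.random.default_rng(2).integers(0, 256, size=(8, 8), dtype=np.uint8)
        bits = np.random.default_rng(3).integers(0, 2, size=32, dtype=np.uint8)
        stego = embed(cover, bits, key_seed=42)
        print("payload recovered:", np.array_equal(extract(stego, 32, key_seed=42), bits))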

  6. Economics of compressed air energy storage employing thermal energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, S.C.; Reilly, R.W.

    1979-11-01

    The approach taken in this study is to adopt system design and capital cost estimates from three independent CAES studies (eight total designs) and, by supplying a common set of fuel/energy costs and economic assumptions in conjunction with a common methodology, to arrive at a series of levelized energy costs over the system's lifetime. In addition, some analyses are provided to gauge the sensitivity of these levelized energy costs to fuel and compression energy costs and to system capacity factors. The systems chosen for comparison are of four generic types: conventional CAES, hybrid CAES, adiabatic CAES, and an advanced-design gas turbine (GT). In conventional CAES systems the heat of compression generated during the storage operation is rejected to the environment, and later, during the energy-generation phase, turbine fuel must be burned to reheat the compressed air. In the hybrid systems some of the heat of compression is stored and reapplied later during the generation phase, thereby reducing turbine fuel requirements. The adiabatic systems store adequate thermal energy to eliminate the need for turbine fuel entirely. The gas turbine is included within the report for comparison purposes; it is an advanced-design turbine, one that is expected to be available by 1985.

  7. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  8. Massive-MIMO Sparse Uplink Channel Estimation Using Implicit Training and Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Babar Mansoor

    2017-01-01

    Full Text Available Massive multiple-input multiple-output (massive-MIMO) is foreseen as a potential technology for future 5G cellular communication networks due to its substantial benefits in terms of increased spectral and energy efficiency. These advantages of massive-MIMO are a consequence of equipping the base station (BS) with quite a large number of antenna elements, thus resulting in an aggressive spatial multiplexing. In order to effectively reap the benefits of massive-MIMO, an adequate estimate of the channel impulse response (CIR) between each transmit–receive link is of utmost importance. It has been established in the literature that certain specific multipath propagation environments lead to a sparse structured CIR in spatial and/or delay domains. In this paper, implicit training and compressed sensing based CIR estimation techniques are proposed for the case of massive-MIMO sparse uplink channels. In the proposed superimposed training (SiT) based techniques, a periodic and low power training sequence is superimposed (arithmetically added) over the information sequence, thus avoiding any dedicated time/frequency slots for the training sequence. For the estimation of such massive-MIMO sparse uplink channels, two greedy pursuits based compressed sensing approaches are proposed, viz: SiT based stage-wise orthogonal matching pursuit (SiT-StOMP) and gradient pursuit (SiT-GP). In order to demonstrate the validity of proposed techniques, a performance comparison in terms of normalized mean square error (NCMSE) and bit error rate (BER) is performed with a notable SiT based least squares (SiT-LS) channel estimation technique. The effect of channels’ sparsity, training-to-information power ratio (TIR) and signal-to-noise ratio (SNR) on BER and NCMSE performance of proposed schemes is thoroughly studied. For a simulation scenario of 4 × 64 massive-MIMO with a channel sparsity level of 80 % and signal-to-noise ratio (SNR) of 10 dB, a performance gain of 18 dB and 13 d

  9. Overlapped block-based compressive sensing imaging on mobile handset devices

    Directory of Open Access Journals (Sweden)

    Irene Manotas Gutiérrez

    2014-01-01

    Full Text Available Compressive Sensing (CS) is a new technique that simultaneously compresses and samples an image by taking a set of random projections of a scene. An optimization algorithm is employed to reconstruct the image from the random projections. Different optimization algorithms have been designed to efficiently obtain a correct reconstruction of the original signal. In practice, these algorithms have been restricted to CS implementations on high-performance computing architectures, such as desktop computers or graphics processing units, because of the large number of operations required by the reconstruction process. This work extends the application of CS to an architecture with limited memory and processing capacity, such as a mobile device. Specifically, an overlapped block-based algorithm that allows the image to be reconstructed on a mobile device is described, and an analysis of the energy consumption of the algorithms used is presented. The results show the computational time and the reconstruction quality for images of 128x128 and 256x256 pixels.

  10. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    Science.gov (United States)

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.

  11. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was proposed, developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
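
    The flavour of such a hybrid scheme can be sketched with PyWavelets and SciPy (both assumed to be available): one DWT level splits the image into subbands, and the detail subbands are then DCT-coded and hard-thresholded. This is only a rough illustration; the zero-padding replacement for thresholding/quantization described above and any entropy-coding stage are not reproduced.

    import numpy as np
    import pywt
    from scipy.fft import dctn, idctn

    def hybrid_compress(img, keep=0.1):
        """One DWT level, then DCT + hard thresholding of the detail subbands."""
        LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), "haar")
        coded = []
        for band in (LH, HL, HH):
            c = dctn(band, norm="ortho")
            thr = np.quantile(np.abs(c), 1.0 - keep)   # keep only the largest coefficients
            coded.append(np.where(np.abs(c) >= thr, c, 0.0))
        return LL, coded

    def hybrid_decompress(LL, coded):
        details = [idctn(c, norm="ortho") for c in coded]
        return pywt.idwt2((LL, tuple(details)), "haar")

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64)).astype(float)
        LL, coded = hybrid_compress(img, keep=0.2)
        rec = hybrid_decompress(LL, coded)
        print("reconstruction RMSE:", np.sqrt(np.mean((rec - img) ** 2)))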

  12. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    International Nuclear Information System (INIS)

    Fannjiang, Albert C

    2011-01-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for the exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition assumes additionally an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. Rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm guarantees to recover, with high probability, s scatterers with n = O(s²) random sampling and incident directions and sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane, MUSIC guarantees to recover, with high probability, s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localization, both BPDN and MUSIC guarantee, with high probability, to identify exactly the frequencies of random signals with the number n = O(s) of sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well-resolved case while
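
    As a small illustration of the noise-subspace idea analyzed above, the following NumPy/SciPy sketch estimates the frequencies of a few sinusoids from noisy snapshots via the MUSIC pseudospectrum. The thresholding rule and the comparison with BPDN are not reproduced, and the signal model, grid and noise level are assumptions chosen for the demo.

    import numpy as np
    from scipy.signal import find_peaks

    def music_spectrum(snapshots, s, grid):
        """MUSIC pseudospectrum; snapshots is (n, T), s is the number of sources."""
        n, T = snapshots.shape
        R = snapshots @ snapshots.conj().T / T            # sample covariance
        _, vecs = np.linalg.eigh(R)                       # eigenvalues in ascending order
        En = vecs[:, : n - s]                             # noise subspace
        k = np.arange(n)[:, None]
        A = np.exp(2j * np.pi * k * grid[None, :])        # steering vectors on the grid
        return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, T = 32, 200
        freqs = np.array([0.11, 0.27, 0.38])
        t = np.arange(n)[:, None]
        X = np.exp(2j * np.pi * t * freqs[None, :]) @ rng.standard_normal((3, T)) \
            + 0.05 * (rng.standard_normal((n, T)) + 1j * rng.standard_normal((n, T)))
        grid = np.linspace(0.0, 0.5, 1000)
        P = music_spectrum(X, s=3, grid=grid)
        peaks, _ = find_peaks(P)
        top = peaks[np.argsort(P[peaks])[-3:]]            # three strongest local maxima
        print("estimated frequencies:", np.sort(grid[top]))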

  13. Accelerated echo-planar J-resolved spectroscopic imaging in the human brain using compressed sensing: a pilot validation in obstructive sleep apnea.

    Science.gov (United States)

    Sarma, M K; Nagarajan, R; Macey, P M; Kumar, R; Villablanca, J P; Furuyama, J; Thomas, M A

    2014-06-01

    Echo-planar J-resolved spectroscopic imaging is a fast spectroscopic technique to record the biochemical information in multiple regions of the brain, but for clinical applications, time is still a constraint. Investigations of neural injury in obstructive sleep apnea have revealed structural changes in the brain, but determining the neurochemical changes requires more detailed measurements across multiple brain regions, demonstrating a need for faster echo-planar J-resolved spectroscopic imaging. Hence, we have extended the compressed sensing reconstruction of prospectively undersampled 4D echo-planar J-resolved spectroscopic imaging to investigate metabolic changes in multiple brain locations of patients with obstructive sleep apnea and healthy controls. Nonuniform undersampling was imposed along 1 spatial and 1 spectral dimension of 4D echo-planar J-resolved spectroscopic imaging, and test-retest reliability of the compressed sensing reconstruction of the nonuniform undersampling data was tested by using a brain phantom. In addition, 9 patients with obstructive sleep apnea and 11 healthy controls were investigated by using a 3T MR imaging/MR spectroscopy scanner. Significantly reduced metabolite differences were observed between patients with obstructive sleep apnea and healthy controls in multiple brain regions: NAA/Cr in the left hippocampus; total Cho/Cr and Glx/Cr in the right hippocampus; total NAA/Cr, taurine/Cr, scyllo-Inositol/Cr, phosphocholine/Cr, and total Cho/Cr in the occipital gray matter; total NAA/Cr and NAA/Cr in the medial frontal white matter; and taurine/Cr and total Cho/Cr in the left frontal white matter regions. The 4D echo-planar J-resolved spectroscopic imaging technique using the nonuniform undersampling-based acquisition and compressed sensing reconstruction in patients with obstructive sleep apnea and healthy brain is feasible in a clinically suitable time. In addition to brain metabolite changes previously reported by 1D MR

  14. Lossless Compression of Classification-Map Data

    Science.gov (United States)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification- map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.

  15. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
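
    As a small worked example of the score compression described above, consider the simplified case of Gaussian data whose mean depends linearly on two parameters while the covariance is fixed; the n = 2 compressed statistics then reduce to t = (dμ/dθ)ᵀ C⁻¹ (d − μ), evaluated at a fiducial parameter point. The NumPy sketch below uses this simplifying assumption and toy numbers only.

    import numpy as np

    def score_compress(d, mu_fid, dmu_dtheta, C_inv):
        """Compress N data points d to one statistic per parameter (score compression)."""
        return dmu_dtheta.T @ C_inv @ (d - mu_fid)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 500)                   # N = 500 data points
        theta_true = np.array([1.3, 0.4])                # slope and offset
        mu = lambda th: th[0] * x + th[1]
        sigma = 0.05
        d = mu(theta_true) + sigma * rng.standard_normal(x.size)
        theta_fid = np.array([1.0, 0.5])                 # fiducial expansion point
        dmu = np.stack([x, np.ones_like(x)], axis=1)     # d mu / d theta, shape (500, 2)
        C_inv = np.eye(x.size) / sigma ** 2              # fixed (diagonal) covariance
        t = score_compress(d, mu(theta_fid), dmu, C_inv)
        print("n = 2 compressed statistics for 2 parameters:", t)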

  16. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    Science.gov (United States)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
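
    The linearized Bregman iteration itself is short; the NumPy sketch below applies it to a toy sparse-recovery problem rather than to FWI (no Born modelling or source encoding), and the step size, penalty weight and problem sizes are illustrative assumptions.

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def linearized_bregman(A, y, mu, n_iter=5000):
        """Linearized Bregman iteration: converges to the minimizer of
        mu*||x||_1 + ||x||_2^2 / (2*delta) subject to A x = y, which approximates
        the l1 (basis pursuit) solution when mu*delta is large."""
        delta = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # safe step size
        v = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = delta * soft(v, mu)                       # shrinkage (primal) step
            v += A.T @ (y - A @ x)                        # dual accumulation step
        return delta * soft(v, mu)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[rng.choice(200, 6, replace=False)] = 3.0 * rng.standard_normal(6)
        y = A @ x_true
        x_hat = linearized_bregman(A, y, mu=50.0 * np.max(np.abs(A.T @ y)))
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))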

  17. Effective Data Acquisition Protocol for Multi-Hop Heterogeneous Wireless Sensor Networks Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ahmed M. Khedr

    2015-10-01

    Full Text Available In designing wireless sensor networks (WSNs), it is important to reduce energy dissipation and prolong network lifetime. Clustering of nodes is one of the most effective approaches for conserving energy in WSNs. Cluster formation protocols generally consider the heterogeneity of sensor nodes in terms of their energy differences but ignore their different transmission ranges. In this paper, we propose an effective data acquisition clustered protocol using compressive sensing (EDACP-CS) for heterogeneous WSNs that aims to conserve the energy of sensor nodes in the presence of energy and transmission range heterogeneity. In EDACP-CS, cluster heads are selected based on the distance from the base station and sensor residual energy. Simulation results show that our protocol offers a much better performance than the existing protocols in terms of energy consumption, stability, network lifetime, and throughput.

  18. Near-source noise suppression of AMT by compressive sensing and mathematical morphology filtering

    Science.gov (United States)

    Li, Guang; Xiao, Xiao; Tang, Jing-Tian; Li, Jin; Zhu, Hui-Jie; Zhou, Cong; Yan, Fa-Bao

    2017-12-01

    In deep mineral exploration, the acquisition of audio magnetotelluric (AMT) data is severely affected by ambient noise near the observation sites; this near-field noise restricts investigation depths. Mathematical morphological filtering (MMF) has proved effective in suppressing large-scale, strong, and variably shaped noise, typically low-frequency noise, but it cannot deal with the pulse noise in AMT data. We therefore combine compressive sensing and MMF. First, we use MMF to suppress the large-scale strong ambient noise; second, we use the improved orthogonal matching pursuit (IOMP) algorithm to remove the residual pulse noise. To remove the noise and protect the useful AMT signal, a redundant dictionary that matches spikes and is insensitive to the useful signal is designed. Synthetic and field data from the Luzong field suggest that the proposed method suppresses the near-source noise and preserves the signal well; thus, better results are obtained than with either MMF or IOMP alone.

  19. Sparse Channel Estimation for MIMO-OFDM Two-Way Relay Network with Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Aihua Zhang

    2013-01-01

    Full Text Available An accurate channel impulse response (CIR) is required for equalization and can help improve communication service quality in next-generation wireless communication systems. An example of an advanced system is the amplify-and-forward multiple-input multiple-output two-way relay network, which is modulated by orthogonal frequency-division multiplexing. Linear channel estimation methods, for example, least squares and expectation conditional maximization, have been proposed previously for the system. However, these methods do not take advantage of channel sparsity, which degrades estimation performance. We propose a sparse channel estimation scheme, which is different from the linear methods, at the end users under the relay channel to enable us to exploit sparsity. First, we formulate the sparse channel estimation problem as a compressed sensing problem by using sparse decomposition theory. Second, the CIR is reconstructed by the CoSaMP and OMP algorithms. Finally, computer simulations are conducted to confirm the superiority of the proposed methods over traditional linear channel estimation methods.

  20. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We proposed a new method based on TCSPC and Compressive Sensing to achieve high-resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we are able to modulate all the frames of data using binary random patterns and later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We proposed a new CS reconstruction algorithm which is able to denoise at the same time as it reconstructs measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and a version corrupted by Poisson noise. The results show how the integration over the layers influences the image quality and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  1. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    Science.gov (United States)

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, there are still many difficulties because of the lack of sufficient protein structural and functional information. It is highly desirable to develop methods based only on amino acid sequences for predicting PPIs. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with the redundancy of sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space, taking the sparsity property of the original signal into account. What makes compressed sensing much more attractive in protein sequence analysis is that its compressed signal can be reconstructed from far fewer measurements than what is usually considered necessary in traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional protein discrete models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility that compression directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests to assess their usability in medical video libraries. The subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For the objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  3. Single exposure optically compressed imaging and visualization using random aperture coding

    Energy Technology Data Exchange (ETDEWEB)

    Stern, A [Electro Optical Unit, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Rivenson, Yair [Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Javidi, Bahram [Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157 (United States)], E-mail: stern@bgu.ac.il

    2008-11-01

    The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps in a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured which can be decoded by proper decoding algorithm.

  4. Surpassing the Theoretical 1-Norm Phase Transition in Compressive Sensing by Tuning the Smoothed L0 Algorithm

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas

    2013-01-01

    Reconstruction of an undersampled signal is at the root of compressive sensing: when is an algorithm capable of reconstructing the signal? what quality is achievable? and how much time does reconstruction require? We have considered the worst-case performance of the smoothed ℓ0 norm reconstruction algorithm in a noiseless setup. Through an empirical tuning of its parameters, we have improved the phase transition (capabilities) of the algorithm for fixed quality and required time. In this paper, we present simulation results that show a phase transition surpassing that of the theoretical ℓ1 approach: the proposed modified algorithm obtains the 1-norm phase transition with greatly reduced required computation time.

  5. Feasibility study for image reconstruction in circular digital tomosynthesis (CDTS) from limited-scan angle data based on compressed-sensing theory

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Hong, Daeki; Park, Chulkyu; Cho, Heemoon; Choi, Sungil; Woo, Taeho

    2015-03-21

    In this work, we performed a feasibility study for image reconstruction in circular digital tomosynthesis (CDTS) from limited-scan-angle data based on compressed-sensing (CS) theory. Here, the X-ray source moves along an arc within a limited scan angle (≤ 180°) on a circular path set perpendicularly to the axial direction during the image acquisition. This geometry, compared to full-angle (360°) scan geometry, allows the imaging system to be designed more compactly and gives better tomographic quality than conventional linear digital tomosynthesis (DTS). We implemented an efficient CS-based reconstruction algorithm for the proposed geometry and performed systematic simulations to investigate the image characteristics. We successfully reconstructed CDTS images with incomplete projections acquired at several selected limited scan angles of 45°, 90°, 135°, and 180° for a given tomographic angle of 80° and evaluated the reconstruction quality. Our simulation results indicate that the proposed method can provide superior tomographic quality for the axial view, and even for the other views (i.e., sagittal and coronal), as in computed tomography, compared with conventional DTS. - Highlights: • Image reconstruction is done in circular digital tomosynthesis (CDTS). • The designed geometry allows a more compact imaging system and better image quality. • An efficient compressed-sensing (CS)-based reconstruction algorithm is implemented. • The proposed method can provide superior tomographic quality for the axial view.

  6. Silicone sensing phase for detection of aromatic hydrocarbons in water employing near-infrared spectroscopy.

    Science.gov (United States)

    Albuquerque, Jackson S; Pimentel, M Fernanda; Silva, Valdinete L; Raimundo, Ivo M; Rohwedder, Jarbas J R; Pasquini, Celio

    2005-01-01

    The use of silicone for detection of aromatic hydrocarbons in water using near-infrared spectroscopy is proposed. A sensing phase of poly(dimethylsiloxane) (PDMS) was prepared, and a rod of this material was adapted to a transflectance probe for measurements from 850 to 1800 nm. Deionized water samples contaminated separately with known amounts of benzene, toluene, ethylbenzene, and m-xylene were used for evaluation of the PDMS sensing phase, and measurements were made in a closed reactor with constant stirring. Equilibrium states were obtained after 90, 180, 360, and 405 min for benzene, toluene, ethylbenzene, and m-xylene, respectively. The PDMS sensing phase showed a reversible response, presenting linear response ranges up to 360, 290, 100, and 80 mg L(-1), with detection limits of 8.0, 7.0, 2.6, and 3.0 mg L(-1) for benzene, toluene, ethylbenzene, and m-xylene, respectively. Reference spectra obtained with different rods showed a relative standard deviation of 0.5%, indicating repeatability in the sensing phase preparation. A relative standard deviation of 6.7% was obtained for measurements performed with six different rods, using a 52 mg L(-1) toluene aqueous solution. The sensing phase was evaluated for identification of sources of contamination of water in simulated studies, employing Brazilian gasoline type A (without ethanol), gasoline type C (with 25% of anhydrous ethanol), and diesel fuel. Principal component analysis was able to classify the water in distinct groups, contaminated by gasoline A, gasoline C, or diesel fuel.

  7. Dry adhesives with sensing features

    International Nuclear Information System (INIS)

    Krahn, J; Menon, C

    2013-01-01

    Geckos are capable of detecting detachment of their feet. Inspired by this basic observation, a novel functional dry adhesive is proposed, which can be used to measure the instantaneous forces and torques acting on an adhesive pad. Such a novel sensing dry adhesive could potentially be used by climbing robots to quickly realize and respond appropriately to catastrophic detachment conditions. The proposed torque and force sensing dry adhesive was fabricated by mixing Carbon Black (CB) and Polydimethylsiloxane (PDMS) to form a functionalized adhesive with mushroom caps. The addition of CB to PDMS resulted in conductive PDMS which, when under compression, tension or torque, resulted in a change in the resistance across the adhesive patch terminals. The proposed design of the functionalized dry adhesive enables distinguishing an applied torque from a compressive force in a single adhesive pad. A model based on beam theory was used to predict the change in resistance across the terminals as either a torque or compressive force was applied to the adhesive patch. Under a compressive force, the sensing dry adhesive was capable of measuring compression stresses from 0.11 Pa to 20.9 kPa. The torque measured by the adhesive patch ranged from 2.6 to 10 mN m, at which point the dry adhesives became detached. The adhesive strength was 1.75 kPa under an applied preload of 1.65 kPa for an adhesive patch with an adhesive contact area of 7.07 cm². (paper)

  8. Dry adhesives with sensing features

    Science.gov (United States)

    Krahn, J.; Menon, C.

    2013-08-01

    Geckos are capable of detecting detachment of their feet. Inspired by this basic observation, a novel functional dry adhesive is proposed, which can be used to measure the instantaneous forces and torques acting on an adhesive pad. Such a novel sensing dry adhesive could potentially be used by climbing robots to quickly realize and respond appropriately to catastrophic detachment conditions. The proposed torque and force sensing dry adhesive was fabricated by mixing Carbon Black (CB) and Polydimethylsiloxane (PDMS) to form a functionalized adhesive with mushroom caps. The addition of CB to PDMS resulted in conductive PDMS which, when under compression, tension or torque, resulted in a change in the resistance across the adhesive patch terminals. The proposed design of the functionalized dry adhesive enables distinguishing an applied torque from a compressive force in a single adhesive pad. A model based on beam theory was used to predict the change in resistance across the terminals as either a torque or compressive force was applied to the adhesive patch. Under a compressive force, the sensing dry adhesive was capable of measuring compression stresses from 0.11 Pa to 20.9 kPa. The torque measured by the adhesive patch ranged from 2.6 to 10 mN m, at which point the dry adhesives became detached. The adhesive strength was 1.75 kPa under an applied preload of 1.65 kPa for an adhesive patch with an adhesive contact area of 7.07 cm2.

  9. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Directory of Open Access Journals (Sweden)

    Christian Schou Oxvig

    2014-10-01

    Full Text Available Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate the researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research.

  10. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
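
    The underlying idea can be sketched as follows: a tone-sparse waveform is observed only at randomly chosen instants of a fine time grid (standing in for the high-resolution time-basis) and is recovered in a DCT dictionary with a plain iterative soft-thresholding loop. ADC quantization, the actual microcontroller firmware and the specific reconstruction algorithm used by the authors are not modelled; all sizes and constants are assumptions for the demo.

    import numpy as np
    from scipy.fft import idct

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        N, M = 512, 96                                    # fine grid vs. acquired samples
        Psi = idct(np.eye(N), norm="ortho", axis=0)       # DCT synthesis dictionary
        coeffs = np.zeros(N)
        coeffs[[17, 113, 240]] = [1.0, -0.7, 0.5]         # three spectral lines
        signal = Psi @ coeffs
        rows = np.sort(rng.choice(N, M, replace=False))   # random sampling instants
        A, y = Psi[rows, :], signal[rows]
        # Plain ISTA on min_c 0.5*||y - A c||^2 + lam*||c||_1, then resynthesize.
        L = np.linalg.norm(A, 2) ** 2
        c = np.zeros(N)
        for _ in range(2000):
            c = soft(c - A.T @ (A @ c - y) / L, 1e-3 / L)
        rel_err = np.linalg.norm(Psi @ c - signal) / np.linalg.norm(signal)
        print("relative reconstruction error:", rel_err)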

  11. Stable and efficient Q-compensated least-squares migration with compressive sensing, sparsity-promoting, and preconditioning

    Science.gov (United States)

    Chai, Xintao; Wang, Shangxu; Tang, Genyang; Meng, Xiangcui

    2017-10-01

    The anelastic effects of subsurface media decrease the amplitude and distort the phase of the propagating wave. These effects, also referred to as the earth's Q filtering effects, diminish seismic resolution. Ignoring anelastic effects during the seismic imaging process generates an image with reduced amplitude and incorrect positions of reflectors, especially for highly absorptive media. The numerical instability and the expensive computational cost are major concerns when compensating for anelastic effects during migration. We propose a stable and efficient Q-compensated imaging methodology with compressive sensing, sparsity-promoting, and preconditioning. The stability is achieved by using the Born operator for forward modeling and the adjoint operator for back propagating the residual wavefields. Constructing the attenuation-compensated operators by reversing the sign of the attenuation operator is avoided. The proposed method is always stable. To reduce the computational cost, which is proportional to the number of wave equations to be solved (and thereby the number of frequencies, source experiments, and iterations), we first subsample over both frequencies and source experiments. We mitigate the artifacts caused by the dimensionality reduction via promoting sparsity of the imaging solutions. We further employ depth- and Q-preconditioning operators to accelerate the convergence rate of iterative migration. We adopt a relatively simple linearized Bregman method to solve the sparsity-promoting imaging problem. Singular value decomposition analysis of the forward operator reveals that attenuation increases the condition number of the migration operator, making the imaging problem more ill-conditioned. The visco-acoustic imaging problem converges slower than the acoustic case. The stronger the attenuation, the slower the convergence rate. The preconditioning strategy evidently decreases the condition number of the migration operator, which makes the imaging problem less ill-conditioned and

  12. A compressive sensing based secure watermark detection and privacy preserving storage framework.

    Science.gov (United States)

    Qia Wang; Wenjun Zeng; Jun Tian

    2014-03-01

    Privacy is a critical issue when the data owners outsource data storage or processing to a third party computing service, such as the cloud. In this paper, we identify a cloud computing application scenario that requires simultaneously performing secure watermark detection and privacy preserving multimedia data storage. We then propose a compressive sensing (CS)-based framework using secure multiparty computation (MPC) protocols to address such a requirement. In our framework, the multimedia data and secret watermark pattern are presented to the cloud for secure watermark detection in a CS domain to protect the privacy. During CS transformation, the privacy of the CS matrix and the watermark pattern is protected by the MPC protocols under the semi-honest security model. We derive the expected watermark detection performance in the CS domain, given the target image, watermark pattern, and the size of the CS matrix (but without the CS matrix itself). The correctness of the derived performance has been validated by our experiments. Our theoretical analysis and experimental results show that secure watermark detection in the CS domain is feasible. Our framework can also be extended to other collaborative secure signal processing and data-mining applications in the cloud.

  13. Airship Sparse Array Antenna Radar Real Aperture Imaging Based on Compressed Sensing and Sparsity in Transform Domain

    Directory of Open Access Journals (Sweden)

    Li Liechen

    2016-02-01

    Full Text Available A conformal sparse array based on a combined Barker code is designed for an airship platform. The performance of the designed array, such as its signal-to-noise ratio, is analyzed. Using the hovering characteristics of the airship, an interferometry operation can be applied to the real aperture imaging results of two pulses, which eliminates the random backscatter phase and makes the image sparse in the transform domain. By building the relationship between the echo and the transform coefficients, Compressed Sensing (CS) theory can be introduced to solve the formulation and achieve imaging. The image quality of the proposed method can approach that of full-array imaging. The simulation results show the effectiveness of the proposed method.

  14. Opportunistic Relay Selection in Multicast Relay Networks using Compressive Sensing

    KAUST Repository

    Elkhalil, Khalil

    2014-12-01

    Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. However, for relay selection algorithms to make a selection decision, channel state information (CSI) from all cooperating relays is usually required at a central node. This requirement poses two important challenges. Firstly, CSI acquisition generates a great deal of feedback overhead (air-time) that could result in significant transmission delays. Secondly, the fed back channel information is usually corrupted by additive noise. This could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback information. In this paper, we introduce a limited feedback relay selection algorithm for a multicast relay network. The proposed algorithm exploits the theory of compressive sensing to first obtain the identity of the “strong” relays with limited feedback. Following that, the CSI of the selected relays is estimated using linear minimum mean square error estimation. To minimize the effect of noise on the fed back CSI, we introduce a back-off strategy that optimally backs-off on the noisy estimated CSI. For a fixed group size, we provide closed form expressions for the scaling law of the maximum equivalent SNR for both Decode and Forward (DF) and Amplify and Forward (AF) cases. Numerical results show that the proposed algorithm drastically reduces the feedback air-time and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback channels.
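
    A toy version of the two-step feedback idea, first identifying the few "strong" relays from a handful of noisy random feedback projections and then refining their CSI with a linear MMSE estimate, can be written as follows. The Gaussian feedback matrix, the relay counts, the noise levels and the use of OMP for the support step are illustrative assumptions, not the paper's actual air-time protocol or back-off strategy.

      import numpy as np

      # Illustrative sketch: CS support identification followed by LMMSE refinement.
      rng = np.random.default_rng(2)
      K, S, M = 64, 4, 20                       # relays, strong relays, feedback slots
      sigma_f, sigma_n = 1.0, 0.1               # prior std of strong gains, feedback noise

      f = np.zeros(K)                           # sparse vector of above-threshold gains
      support_true = rng.choice(K, S, replace=False)
      f[support_true] = sigma_f * rng.standard_normal(S)

      Phi = rng.standard_normal((M, K)) / np.sqrt(M)
      y = Phi @ f + sigma_n * rng.standard_normal(M)

      # step 1: OMP recovers the identity of the strong relays
      resid, support = y.copy(), []
      for _ in range(S):
          support.append(int(np.argmax(np.abs(Phi.T @ resid))))
          coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
          resid = y - Phi[:, support] @ coef

      # step 2: LMMSE estimate of the gains on the identified support
      Ps = Phi[:, support]
      C = sigma_f**2 * Ps @ Ps.T + sigma_n**2 * np.eye(M)
      f_hat = sigma_f**2 * Ps.T @ np.linalg.solve(C, y)

      print("identified relays:", sorted(support), " true:", sorted(support_true))
      print("estimated gains on the support:", np.round(f_hat, 3))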

  15. Deconvolution of serum cortisol levels by using compressed sensing.

    Directory of Open Access Journals (Sweden)

    Rose T Faghih

    Full Text Available The pulsatile release of cortisol from the adrenal glands is controlled by a hierarchical system that involves corticotropin releasing hormone (CRH) from the hypothalamus, adrenocorticotropin hormone (ACTH) from the pituitary, and cortisol from the adrenal glands. Determining the number, timing, and amplitude of the cortisol secretory events and recovering the infusion and clearance rates from serial measurements of serum cortisol levels is a challenging problem. Despite many years of work on this problem, a completely satisfactory solution has been elusive. We formulate this question as a non-convex optimization problem, and solve it using a coordinate descent algorithm that has a principled combination of (i) compressed sensing for recovering the amplitude and timing of the secretory events, and (ii) generalized cross validation for choosing the regularization parameter. Using only the observed serum cortisol levels, we model cortisol secretion from the adrenal glands using a second-order linear differential equation with pulsatile inputs that represent cortisol pulses released in response to pulses of ACTH. Using our algorithm and the assumption that the number of pulses is between 15 and 22 over 24 hours, we successfully deconvolve both simulated datasets and actual 24-hr serum cortisol datasets sampled every 10 minutes from 10 healthy women. Assuming a one-minute resolution for the secretory events, we obtain physiologically plausible timings and amplitudes of each cortisol secretory event with R² above 0.92. Identification of the amplitude and timing of pulsatile hormone release allows (i) quantifying normal and abnormal secretion patterns towards the goal of understanding pathological neuroendocrine states, and (ii) potentially designing optimal approaches for treating hormonal disorders.
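
    The recovery step can be pictured as a sparse, non-negative spike deconvolution through a known impulse response, which the short sketch below solves with iterative soft thresholding (ISTA). The bi-exponential two-compartment response, its time constants, the pulse count, the noise level and the regularization weight are illustrative assumptions; the study's coordinate-descent solver and generalized-cross-validation tuning are not reproduced here.

      import numpy as np

      # Illustrative ISTA deconvolution of sparse secretory impulses.
      rng = np.random.default_rng(3)
      T, dt = 24 * 60, 10                      # 24 h sampled every 10 min
      n = T // dt
      t = np.arange(n) * dt

      tau_fast, tau_slow = 15.0, 80.0          # assumed infusion/clearance constants (min)
      h = np.exp(-t / tau_slow) - np.exp(-t / tau_fast)
      A = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])

      u_true = np.zeros(n)
      u_true[rng.choice(n, 18, replace=False)] = rng.uniform(1.0, 3.0, 18)   # ~18 pulses
      y = A @ u_true + 0.05 * rng.standard_normal(n)

      # ISTA for  min 0.5*||A u - y||^2 + lam*||u||_1  with u >= 0
      L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
      lam = 0.05
      u = np.zeros(n)
      for _ in range(2000):
          u = u - (A.T @ (A @ u - y)) / L
          u = np.maximum(u - lam / L, 0.0)     # non-negative soft thresholding

      print("recovered pulse times (min):", t[u > 0.2])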

  16. Scout-view assisted interior digital tomosynthesis (iDTS) based on compressed-sensing theory

    Science.gov (United States)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Seo, C. W.; Je, U. K.; Park, C. K.; Lim, H. W.; Kim, K. S.; Lee, D. Y.; Lee, H. W.; Kang, S. Y.; Park, J. E.; Woo, T. H.; Lee, M. S.

    2017-12-01

    Conventional digital tomosynthesis (DTS) based on filtered-backprojection (FBP) reconstruction requires a full field-of-view scan and relatively dense projections, which still results in a high dose for medical imaging purposes. In this work, to overcome these difficulties, we propose a new type of DTS examination, the so-called scout-view assisted interior DTS (iDTS), in which the x-ray beam covers only a small region-of-interest (ROI) containing the diagnostic target, while a few scout views are used in the reconstruction to add information about the interior ROI that is otherwise absent in conventional iDTS reconstruction methods. We considered an effective iterative algorithm based on compressed-sensing theory, rather than an FBP-based algorithm, for more accurate iDTS reconstruction. We implemented the proposed algorithm, performed a systematic simulation and experiment, and investigated the image characteristics. We successfully reconstructed iDTS images of substantially high accuracy and no truncation artifacts by using the proposed method, preserving superior image homogeneity, edge sharpness, and in-plane spatial resolution.

  17. Detection of Defective Sensors in Phased Array Using Compressed Sensing and Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Shafqat Ullah Khan

    2016-01-01

    Full Text Available A compressed sensing based array diagnosis technique is presented. The technique starts from collecting measurements of the far-field pattern. The system of equations linking the difference between the field measured with the healthy reference array and the field radiated by the array under test is solved using a genetic algorithm (GA), a parallel coordinate descent (PCD) algorithm, and then a hybridized GA with the PCD algorithm. These algorithms are applied to fully and partially defective antenna arrays. The simulation results indicate that the proposed hybrid algorithm outperforms the others in localizing element failures from a small number of measurements. In the proposed algorithm, the slow and premature convergence of the GA is avoided by combining it with the PCD algorithm. It is shown that the hybrid GA-PCD algorithm provides a more accurate diagnosis of fully and partially defective sensors than GA or PCD alone. Different simulations are provided to validate the performance of the designed algorithms in diversified scenarios.

  18. Enhanced recovery of subsurface geological structures using compressed sensing and the Ensemble Kalman filter

    KAUST Repository

    Sana, Furrukh

    2015-07-26

    Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to the low-permeability rock formations in their surroundings, enabling formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has been shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.
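
    The core idea, carrying out the ensemble Kalman update on sparse-domain coefficients of the field instead of on the field itself, can be illustrated with a small 1-D toy problem. In the sketch below an orthonormal DCT stands in for the sparsifying transform, and the grid size, "channel" geometry, well locations, ensemble size and noise levels are all assumptions made purely for illustration.

      import numpy as np

      # Illustrative EnKF update performed on DCT coefficients of a 1-D permeability field.
      rng = np.random.default_rng(4)
      N, Ne = 128, 60                          # grid cells, ensemble members

      # orthonormal DCT-II matrix: analysis s = C @ x, synthesis x = C.T @ s
      k = np.arange(N)[:, None]
      n_ = np.arange(N)[None, :]
      C = np.sqrt(2.0 / N) * np.cos(np.pi * (n_ + 0.5) * k / N)
      C[0, :] /= np.sqrt(2.0)

      x_true = np.ones(N)
      x_true[40:70] = 4.0                      # high-permeability "flow channel"

      obs_idx = np.array([10, 30, 50, 64, 80, 100, 120])
      H = np.zeros((len(obs_idx), N)); H[np.arange(len(obs_idx)), obs_idx] = 1.0
      sig_obs = 0.1
      d = H @ x_true + sig_obs * rng.standard_normal(len(obs_idx))

      ens = 1.0 + np.cumsum(0.05 * rng.standard_normal((N, Ne)), axis=0)   # prior members

      S = C @ ens                              # ensemble of sparse-domain states
      Hs = H @ C.T                             # observation operator in the DCT domain

      # standard stochastic EnKF update, applied to the DCT coefficients
      Sa = S - S.mean(axis=1, keepdims=True)
      Y = Hs @ S
      Ya = Y - Y.mean(axis=1, keepdims=True)
      K = (Sa @ Ya.T) @ np.linalg.inv(Ya @ Ya.T + (Ne - 1) * sig_obs**2 * np.eye(len(obs_idx)))
      D = d[:, None] + sig_obs * rng.standard_normal((len(obs_idx), Ne))   # perturbed obs
      S_post = S + K @ (D - Y)
      S_post = np.where(np.abs(S_post) > 0.05, S_post, 0.0)   # mild sparsity promotion (illustrative)

      x_post = (C.T @ S_post).mean(axis=1)     # back to the physical domain
      print("prior RMSE :", np.sqrt(np.mean((ens.mean(axis=1) - x_true) ** 2)))
      print("update RMSE:", np.sqrt(np.mean((x_post - x_true) ** 2)))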

  19. Enhanced recovery of subsurface geological structures using compressed sensing and the Ensemble Kalman filter

    KAUST Repository

    Sana, Furrukh; Katterbauer, Klemens; Al-Naffouri, Tareq Y.; Hoteit, Ibrahim

    2015-01-01

    Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to the low-permeability rock formations in their surroundings, enabling formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has been shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.

  20. Effective Low-Power Wearable Wireless Surface EMG Sensor Design Based on Analog-Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Mohammadreza Balouchestani

    2014-12-01

    Full Text Available Surface Electromyography (sEMG) is a non-invasive measurement process that does not involve tools and instruments that break the skin or physically enter the body to investigate and evaluate the muscular activity produced by skeletal muscles. The main drawbacks of existing sEMG systems are: (1) they are not able to provide real-time monitoring; (2) they suffer from long processing times and low speed; and (3) they are not effective for wireless healthcare systems because they consume a large amount of power. In this work, we present an analog-based Compressed Sensing (CS) architecture, which consists of three novel algorithms for the design and implementation of a wearable wireless sEMG bio-sensor. At the transmitter side, two new algorithms are presented in order to apply the analog-CS theory before the Analog to Digital Converter (ADC). At the receiver side, a robust reconstruction algorithm based on a combination of ℓ1-ℓ1-optimization and the Block Sparse Bayesian Learning (BSBL) framework is presented to reconstruct the original bio-signals from the compressed bio-signals. The proposed architecture allows reducing the sampling rate to 25% of the Nyquist Rate (NR). In addition, the proposed architecture reduces the power consumption to 40%, the Percentage Residual Difference (PRD) to 24%, the Root Mean Squared Error (RMSE) to 2%, and the computation time from 22 s to 9.01 s, which provides a good foundation for establishing wearable wireless healthcare systems. The proposed architecture achieves robust performance at low Signal-to-Noise Ratio (SNR) in the reconstruction process.

  1. Optically compressed sensing by under sampling the polar Fourier plane

    International Nuclear Information System (INIS)

    Stern, A; Levi, O; Rivenson, Y

    2010-01-01

    In a previous work we presented a compressed imaging approach that uses a row of rotating sensors to capture indirectly polar strips of the Fourier transform of the image. Here we present further developments of this technique together with new results. The advantages of our technique, compared to other optically compressed imaging techniques, are that its optical implementation is relatively easy, it does not require complicated calibrations, and it can be implemented in near-real time.

  2. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Science.gov (United States)

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  3. Less is More: Bigger Data from Compressive Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Browning, Nigel D.

    2017-07-01

    Compressive sensing approaches are beginning to take hold in (scanning) transmission electron microscopy (S/TEM) [1,2,3]. Compressive sensing is a mathematical theory about acquiring signals in a compressed form (measurements) and the probability of recovering the original signal by solving an inverse problem [4]. The inverse problem is underdetermined (more unknowns than measurements), so it is not obvious that recovery is possible. Compression is achieved by taking inner products of the signal with measurement weight vectors. Both Gaussian random weights and Bernoulli (0,1) random weights form a large class of measurement vectors for which recovery is possible. The measurements can also be designed through an optimization process. The key insight for electron microscopists is that compressive sensing can be used to increase acquisition speed and reduce dose. Building on work initially developed for optical cameras, this new paradigm will allow electron microscopists to solve more problems in the engineering and life sciences. We will be collecting orders of magnitude more data than previously possible. The reason that we will have more data is that we will have increased temporal/spatial/spectral sampling rates, and we will be able to interrogate larger classes of samples that were previously too beam sensitive to survive the experiment. For example, consider an in-situ experiment that takes 1 minute. With traditional sensing, we might collect 5 images per second for a total of 300 images. With compressive sensing, each of those 300 images can be expanded into 10 more images, making the collection rate 50 images per second, and the decompressed data a total of 3000 images [3]. But what are the implications, in terms of data, for this new methodology? Acquisition of compressed data will require downstream reconstruction to be useful. The reconstructed data will be much larger than traditional data, we will need space to store the reconstructions during

  4. Compressed sensing and the reconstruction of ultrafast 2D NMR data: Principles and biomolecular applications.

    Science.gov (United States)

    Shrot, Yoav; Frydman, Lucio

    2011-04-01

    A topic of active investigation in 2D NMR relates to the minimum number of scans required for acquiring this kind of spectra, particularly when these are dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling used to monitor the indirect domain, and relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way to a minimum number of one, by spatially encoding the indirect domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains - often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensed reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of the gradient strengths demanded in this type of experiments by up to 80%. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas

    2014-01-01

    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment and thereby accelerating the acquisition of AFM images. Magni also pr...... as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research....

  6. Becoming Self-Employed.

    Science.gov (United States)

    Lee, Grant; Cochran, Larry

    1997-01-01

    Explored how persons become self-employed. In critical incident interviews with five self-employed persons the critical events that assisted or hindered progress toward self-employment were listed in chronological order. In general, becoming self-employed involved establishing conditions of action that enhanced a sense of agency, thus enabling…

  7. Fast and low-dose computed laminography using compressive sensing based technique

    Science.gov (United States)

    Abbas, Sajid; Park, Miran; Cho, Seungryong

    2015-03-01

    Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL can be one of the viable options to overcome such issues. In this work, a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse-view data are visually comparable with the images reconstructed using the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes, in the reconstructed images from sparsely sampled data are comparable with the corresponding structures present in the fully sampled data case. A blurring effect can be seen near the edges of a few pores at the bottom of the reconstructed images from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.
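
    The reconstruction step can be sketched by alternating a gradient step on the data fidelity with a small total-variation (TV) descent step, in the spirit of ASD-POCS-type schemes. In the sketch below a tiny piecewise-constant phantom and a random Gaussian matrix stand in for the welding phantom and the laminography projector, and the step sizes and iteration count are illustrative; it is not the authors' reconstruction code.

      import numpy as np

      # Illustrative sparse-measurement, TV-regularized reconstruction.
      rng = np.random.default_rng(5)
      N = 32                                   # phantom is N x N
      img = np.zeros((N, N))
      img[8:24, 8:24] = 1.0                    # a "weldment" block
      img[14:18, 14:18] = 0.4                  # a small inclusion
      x_true = img.ravel()

      n = N * N
      m = n // 3                               # ~3x undersampling (assumed)
      A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the projector
      y = A @ x_true

      # finite-difference operators via Kronecker products
      D1 = np.eye(N, k=1) - np.eye(N)
      D1[-1, :] = 0.0
      Dx = np.kron(D1, np.eye(N))              # differences along image rows
      Dy = np.kron(np.eye(N), D1)              # differences along image columns

      def tv_grad(x, eps=1e-6):
          gx, gy = Dx @ x, Dy @ x
          w = np.sqrt(gx**2 + gy**2 + eps)     # smoothed isotropic TV
          return Dx.T @ (gx / w) + Dy.T @ (gy / w)

      x = np.zeros(n)
      step_data = 1.0 / np.linalg.norm(A, 2) ** 2
      for _ in range(300):
          x = x - step_data * (A.T @ (A @ x - y))   # data-fidelity gradient step
          x = x - 0.002 * tv_grad(x)                # small TV-descent step
          x = np.clip(x, 0.0, None)                 # non-negativity

      err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
      print("relative reconstruction error:", round(float(err), 3))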

  8. Fast and low-dose computed laminography using compressive sensing based technique

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Sajid, E-mail: scho@kaist.ac.kr; Park, Miran, E-mail: scho@kaist.ac.kr; Cho, Seungryong, E-mail: scho@kaist.ac.kr [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 305-701 (Korea, Republic of)

    2015-03-31

    Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL can be one of the viable options to overcome such issues. In this work, a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse-view data are visually comparable with the images reconstructed using the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes, in the reconstructed images from sparsely sampled data are comparable with the corresponding structures present in the fully sampled data case. A blurring effect can be seen near the edges of a few pores at the bottom of the reconstructed images from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.

  9. Fast and low-dose computed laminography using compressive sensing based technique

    International Nuclear Information System (INIS)

    Abbas, Sajid; Park, Miran; Cho, Seungryong

    2015-01-01

    Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL can be one of the viable options to overcome such issues. In this work, a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse-view data are visually comparable with the images reconstructed using the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes, in the reconstructed images from sparsely sampled data are comparable with the corresponding structures present in the fully sampled data case. A blurring effect can be seen near the edges of a few pores at the bottom of the reconstructed images from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.

  10. Dense sampled transmission matrix for high resolution angular spectrum imaging through turbid media via compressed sensing (Conference Presentation)

    Science.gov (United States)

    Jang, Hwanchol; Yoon, Changhyeong; Choi, Wonshik; Eom, Tae Joong; Lee, Heung-No

    2016-03-01

    We provide an approach to improve the quality of image reconstruction in wide-field imaging through turbid media (WITM). In WITM, a calibration stage that measures the transmission matrix (TM), the set of responses of the turbid medium to a set of plane waves with different incident angles, precedes the image recovery. The TM is then used for estimation of the object image in the image recovery stage. In this work, we aim to estimate a highly resolved angular spectrum and use it for high-quality image reconstruction. To this end, we propose to perform dense sampling for the TM measurement in the calibration stage with finer incident-angle spacing. In conventional approaches, the incident-angle spacing is made large enough that the columns in the TM are outside the memory effect of the turbid medium; otherwise, the columns in the TM are correlated and the inversion becomes difficult. We employ compressed sensing (CS) for successful high-resolution angular spectrum recovery with the densely sampled TM. CS is a relatively new information acquisition and reconstruction framework and has been shown to provide superb performance in ill-conditioned inverse problems. We observe that image quality metrics such as contrast-to-noise ratio and mean squared error improve, and that the perceptual image quality improves with reduced speckle noise in the reconstructed image. These results show that the WITM performance can be improved simply by performing dense sampling in the calibration stage together with an efficient signal reconstruction framework, without elaborating the overall optical imaging system.

  11. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    Science.gov (United States)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with a 4f optical system. To the best of our knowledge, it is the first report of embedding a color watermark into a grayscale host image, which an attacker would not expect. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of noise and occlusion robustness.

  12. Effects of Compression by Means of Sports Socks on the Ankle Kinesthesia

    Directory of Open Access Journals (Sweden)

    Tatsuya Hayami

    2011-10-01

    Full Text Available The purpose of this study was to clarify the effects of compression by means of sports socks (CG socks) on ankle kinesthesia. Thirteen subjects participated. To accomplish this purpose, we assessed position sense, movement sense, force sense, and sensorimotor function under three different conditions: subjects wore generally distributed normal socks (normal socks condition), wore the CG socks (CG socks condition), or did not wear any socks (barefoot condition). The position sense and force sense were assessed in a reproduction task of ankle joint angle and force output during plantar/dorsiflexion, respectively. The movement sense was assessed by the threshold of detection for passive movement. The sensorimotor function was assessed during our original Kinetic-Equilibrating task. The results showed that the movement sense, force sense, and sensorimotor function significantly improved in the CG socks condition compared to the other two conditions. These results suggested that compression by means of the CG socks might improve the perception of changes in joint angle and the extent of force output, and that improvement of these senses enhances the sensorimotor function that is based on them.

  13. Beam dynamics of the Neutralized Drift Compression Experiment-II (NDCX-II),a novel pulse-compressing ion accelerator

    International Nuclear Information System (INIS)

    Friedman, A.; Barnard, J.J.; Cohen, R.H.; Grote, D.P.; Lund, S.M.; Sharp, W.M.; Faltens, A.; Henestroza, E.; Jung, J.-Y.; Kwan, J.W.; Lee, E.P.; Leitner, M.A.; Logan, B.G.; Vay, J.-L.; Waldron, W.L.; Davidson, R.C.; Dorf, M.; Gilson, E.P.; Kaganovich, I.D.

    2009-01-01

    Intense beams of heavy ions are well suited for heating matter to regimes of emerging interest. A new facility, NDCX-II, will enable studies of warm dense matter at ∼1 eV and near-solid density, and of heavy-ion inertial fusion target physics relevant to electric power production. For these applications the beam must deposit its energy rapidly, before the target can expand significantly. To form such pulses, ion beams are temporally compressed in neutralizing plasma; current amplification factors of ∼50-100 are routinely obtained on the Neutralized Drift Compression Experiment (NDCX) at LBNL. In the NDCX-II physics design, an initial non-neutralized compression renders the pulse short enough that existing high-voltage pulsed power can be employed. This compression is first halted and then reversed by the beam's longitudinal space-charge field. Downstream induction cells provide acceleration and impose the head-to-tail velocity gradient that leads to the final neutralized compression onto the target. This paper describes the discrete-particle simulation models (1-D, 2-D, and 3-D) employed and the space-charge-dominated beam dynamics being realized.

  14. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    Science.gov (United States)

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal from relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with the case without the wavelet transform, especially at low and medium sampling rates. For sampling rates between 0.2 and 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and for rates between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs in order to compute the essential dynamic instability parameters, i.e., the transition frequencies and especially the growth and shrinkage rates. The results show that using compressed sensing along with the peak detection technique and the wavelet transform reduces the recovery errors for these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus, and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

  16. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensing with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.

  17. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    Science.gov (United States)

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI and of four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. A strong visual correlation as well as a high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.
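
    The two kinds of patterns compared above, unconstrained draws from a PDF versus draws whose per-band sampling ratios are constrained to follow the PDF, can be generated with a few lines of code. The polynomial radial PDF, the 32 x 32 grid, the 4x acceleration and the 2-pixel radial bands in the sketch below are illustrative assumptions, not the study's actual pattern-design parameters.

      import numpy as np

      # Illustrative generation of Bernoulli and PDF-constrained k-space sampling masks.
      rng = np.random.default_rng(6)
      N, R = 32, 4                                  # grid size, undersampling factor
      ky, kx = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2, indexing="ij")
      r = np.sqrt(kx**2 + ky**2)
      pdf = (1.0 - r / r.max()) ** 3 + 0.02         # denser sampling near the k-space centre
      pdf *= (N * N / R) / pdf.sum()                # so the expected sample count is ~N^2/R
      pdf = np.clip(pdf, 0.0, 1.0)

      # (a) independent Bernoulli draws from the PDF
      mask_bernoulli = rng.random((N, N)) < pdf

      # (b) PDF-constrained draws: fix the sample count inside each radial band
      mask_constrained = np.zeros((N, N), dtype=bool)
      edges = np.arange(0, r.max() + 2, 2)          # 2-pixel-wide radial bands
      for lo, hi in zip(edges[:-1], edges[1:]):
          band = np.flatnonzero((r.ravel() >= lo) & (r.ravel() < hi))
          quota = int(round(pdf.ravel()[band].sum()))
          chosen = rng.choice(band, size=min(quota, band.size), replace=False)
          rows, cols = np.unravel_index(chosen, (N, N))
          mask_constrained[rows, cols] = True

      print("Bernoulli samples  :", int(mask_bernoulli.sum()))
      print("constrained samples:", int(mask_constrained.sum()))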

  18. Communication analysis for feedback control of civil infrastructure using cochlea-inspired sensing nodes

    Science.gov (United States)

    Peckens, Courtney A.; Cook, Ireana; Lynch, Jerome P.

    2016-04-01

    Wireless sensor networks (WSNs) have emerged as a reliable, low-cost alternative to the traditional wired sensing paradigm. While such networks have made significant progress in the field of structural monitoring, significantly less development has occurred for feedback control applications. Previous work in WSNs for feedback control has highlighted many of the challenges of using this technology including latency in the wireless communication channel and computational inundation at the individual sensing nodes. This work seeks to overcome some of those challenges by drawing inspiration from the real-time sensing and control techniques employed by the biological central nervous system and in particular the mammalian cochlea. A novel bio-inspired wireless sensor node was developed that employs analog filtering techniques to perform time-frequency decomposition of a sensor signal, thus encompassing the functionality of the cochlea. The node then utilizes asynchronous sampling of the filtered signal to compress the signal prior to communication. This bio-inspired sensing architecture is extended to a feedback control application in order to overcome the traditional challenges currently faced by wireless control. In doing this, however, the network experiences high bandwidths of low-significance information exchange between nodes, resulting in some lost data. This study considers the impact of this lost data on the control capabilities of the bio-inspired control architecture and finds that it does not significantly impact the effectiveness of control.

  19. CMOS Compressed Imaging by Random Convolution

    OpenAIRE

    Jacques, Laurent; Vandergheynst, Pierre; Bibet, Alexandre; Majidzadeh, Vahid; Schmid, Alexandre; Leblebici, Yusuf

    2009-01-01

    We present a CMOS imager with built-in capability to perform Compressed Sensing. The adopted sensing strategy is the random convolution due to J. Romberg. It is achieved by a shift register set in a pseudo-random configuration, which acts as a convolutive filter on the imager focal plane: the current issued from each CMOS pixel undergoes a pseudo-random redirection controlled by each component of the filter sequence. A pseudo-random triggering of the ADC reading is finally applied to comp...
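
    The sensing strategy can be emulated in software as a circular convolution with a pseudo-random sequence followed by a pseudo-random readout of a subset of outputs. The sketch below uses a real +/-1 filter (instead of Romberg's random-phase construction) and simply checks that the forward and adjoint operators are consistent, which is what any downstream CS solver would rely on; the image size, the filter and the 8x compression ratio are illustrative assumptions.

      import numpy as np

      # Illustrative random-convolution measurement operator and its adjoint.
      rng = np.random.default_rng(7)
      N = 64                                        # image is N x N
      x = rng.random((N, N))                        # stand-in scene

      h = rng.choice([-1.0, 1.0], size=(N, N))      # pseudo-random filter sequence
      H = np.fft.fft2(h)

      def forward(img, keep_idx):
          conv = np.real(np.fft.ifft2(np.fft.fft2(img) * H))   # circular convolution
          return conv.ravel()[keep_idx]                        # pseudo-random readout

      def adjoint(meas, keep_idx):
          full = np.zeros(N * N)
          full[keep_idx] = meas
          return np.real(np.fft.ifft2(np.fft.fft2(full.reshape(N, N)) * np.conj(H)))

      m = (N * N) // 8                              # 8x compression
      keep_idx = np.sort(rng.choice(N * N, m, replace=False))
      y = forward(x, keep_idx)

      # adjoint consistency check:  <A x, z>  ==  <x, A^T z>
      z = rng.standard_normal(m)
      lhs = float(y @ z)
      rhs = float(np.sum(x * adjoint(z, keep_idx)))
      print("measurements:", y.shape[0], " adjoint check passed:", np.isclose(lhs, rhs))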

  20. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes of the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate ℓ0-norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation results and experimental results, where substantial improvement in localization performance can be obtained in noisy and reverberant conditions.

  1. Experimental study of a DMD based compressive line sensing imaging system in the turbulence environment

    Science.gov (United States)

    Ouyang, Bing; Hou, Weilin; Gong, Cuiling; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.

    2016-05-01

    The Compressive Line Sensing (CLS) active imaging system has been demonstrated to be effective in scattering media, such as turbid coastal water, through simulations and test tank experiments. Since turbulence is encountered in many atmospheric and underwater surveillance applications, a new CLS imaging prototype was developed to investigate the effectiveness of the CLS concept in a turbulence environment. Compared with the earlier optical bench-top prototype, the new system is significantly more robust and compact. A series of experiments were conducted at the Naval Research Lab's optical turbulence test facility with the imaging path subjected to various turbulence intensities. In addition to validating the system design, we obtained some unexpected and exciting results: in the strong turbulence environment, time-averaged measurements using the new CLS imaging prototype improved both the SNR and the resolution of the reconstructed images. We will discuss the implications of the new findings, the challenges of acquiring data through a strong turbulence environment, and future enhancements.

  2. An effective approach to attenuate random noise based on compressive sensing and curvelet transform

    International Nuclear Information System (INIS)

    Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang

    2016-01-01

    Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate the random noise attenuation problem as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparsifying transform in the optimization problem to regularize the sparse coefficients, in order to separate signal and noise, and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem, which offers an easy implementation and fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edges of seismic events during the noise attenuation process and has high computational efficiency compared with the traditional curvelet thresholding and iterative soft thresholding based denoising methods. In addition, compared with f-x deconvolution, the proposed denoising method is capable of eliminating random noise more effectively while preserving more of the useful signal. (paper)
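
    With an orthonormal sparsifying transform, the L1-regularized denoising problem reduces to soft thresholding of the transform coefficients, which conveys the essence of the scheme in a few lines. The sketch below substitutes an orthonormal 2-D DCT for the curvelet frame (for which an iterative solver such as GPSR is needed) and uses a synthetic section that is exactly sparse in the DCT domain; both substitutions are illustrative assumptions rather than the authors' setup.

      import numpy as np
      from scipy.fft import dct, idct

      # Illustrative transform-domain soft-thresholding denoiser (DCT stand-in for curvelets).
      W = lambda u: dct(dct(u, axis=0, norm="ortho"), axis=1, norm="ortho")
      Wt = lambda c: idct(idct(c, axis=0, norm="ortho"), axis=1, norm="ortho")

      rng = np.random.default_rng(8)
      nt, nx = 128, 64
      coef_true = np.zeros((nt, nx))
      coef_true[rng.integers(0, 12, 8), rng.integers(0, 12, 8)] = rng.uniform(3.0, 6.0, 8)
      section = Wt(coef_true)                   # clean data, sparse in the DCT domain
      noisy = section + 0.2 * rng.standard_normal((nt, nx))

      lam = 0.8                                 # ~ sigma * sqrt(2 log n), universal threshold
      coef = W(noisy)
      denoised = Wt(np.sign(coef) * np.maximum(np.abs(coef) - lam, 0.0))

      snr = lambda ref, est: 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))
      print("SNR noisy    [dB]:", round(float(snr(section, noisy)), 2))
      print("SNR denoised [dB]:", round(float(snr(section, denoised)), 2))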

  3. Employment as a Price or a Prize of Equality: A Descriptive Analysis

    Directory of Open Access Journals (Sweden)

    Erling Barth

    2012-06-01

    Full Text Available To put Scandinavian employment in perspective, we ask whether wage compression hampers employment rates, or not. We answer by reviewing the most important theoretical arguments and the most informative regularities across countries with different wage distributions. The pattern seems to be that countries with compressed wage distributions tend to have higher employment, and countries with higher wage inequality tend to have lower employment. This also holds when we consider the rate of labor force participation. In line with the theoretical arguments, coordination in wage bargaining seems to contribute to both employment expansion and wage compression. There is a clear positive correlation between coordination and employment even when we control for inequality, country, and year-specific effects.

  4. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture that senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI, the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA), which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner by exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, traditional coded apertures and GCA are compared with respect to noise tolerance.

  5. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    Science.gov (United States)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing to reduce the number of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints on the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, numbers of shots, and super-resolution factors.
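
    The role of the coherence constraint can be illustrated with a much simpler search than the paper's gradient-descent design: draw random binary apertures, form the sensing matrix each one induces, and keep the candidate with the smallest mutual coherence. The element-wise masking of a Gaussian stand-in system matrix, the sizes, the transmittance and the number of candidates in the sketch below are illustrative assumptions.

      import numpy as np

      # Illustrative random search for a low-coherence coded aperture.
      rng = np.random.default_rng(9)
      m, n, trials = 48, 128, 200
      transmittance = 0.5                            # fraction of open aperture elements
      G = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the tomographic system

      def mutual_coherence(A):
          An = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
          gram = np.abs(An.T @ An)
          np.fill_diagonal(gram, 0.0)
          return gram.max()

      best_code, best_mu = None, np.inf
      for _ in range(trials):
          code = (rng.random((m, n)) < transmittance).astype(float)   # candidate aperture
          mu = mutual_coherence(code * G)            # coherence of the induced CS matrix
          if mu < best_mu:
              best_code, best_mu = code, mu

      print("best mutual coherence over", trials, "random apertures:", round(float(best_mu), 3))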

  6. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basis coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  7. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    International Nuclear Information System (INIS)

    Vaegler, Sven; Sauer, Otto; Stsepankou, Dzmitry; Hesser, Juergen

    2015-01-01

    The reduction of dose in cone beam computed tomography (CBCT) arises from decreasing the tube current for each projection as well as from reducing the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations are not appropriately considered in PICCS so far. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS that additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounts for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views by simulations with a computer phantom as well as on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, of Compressed Sensing (CS), and of PICCS. To show the gain in image quality, we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections is very small. The images contained less streaking, blurring, and fewer inaccurately reconstructed structures compared to the images reconstructed by FDK, CS, and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm

  8. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, a compressed sampling method that uses a random demodulator was adopted, which can greatly decrease the sampling rate. In addition, four switches were used to replace the multiplier in the random demodulator. As a result, not only can the sampling rate be much smaller than the signal excitation frequency, but the circuit structure is also simpler and its power consumption lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through the four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
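
    The measurement chain, pseudo-random +/-1 mixing through the switches, low-pass filtering and low-rate sampling, can be simulated directly, and the tone frequencies and amplitudes then recovered through the equivalent sensing matrix. In the sketch below the low-pass filter is an integrate-and-dump stage, the candidate frequencies lie on a 1 kHz grid and recovery uses orthogonal matching pursuit; these are illustrative simplifications, not the prototype's actual filter, grid or reconstruction software.

      import numpy as np

      # Illustrative random-demodulator acquisition and frequency/amplitude recovery.
      rng = np.random.default_rng(10)
      fs_fine, fs_out, T = 1.6e6, 50e3, 1e-3        # fine rate, output rate, duration
      N = int(fs_fine * T)                          # 1600 fine-grid samples
      L = int(fs_fine / fs_out)                     # 32 fine samples per output sample
      M = N // L                                    # 50 low-rate samples
      t = np.arange(N) / fs_fine

      x = 1.0 * np.cos(2 * np.pi * 200e3 * t) + 0.6 * np.cos(2 * np.pi * 75e3 * t)

      p = rng.choice([-1.0, 1.0], size=N)           # pseudo-random chipping sequence
      y = (p * x).reshape(M, L).sum(axis=1)         # mix, integrate-and-dump, decimate

      # equivalent sensing matrix over a 1 kHz candidate frequency grid (cosine atoms)
      freqs = np.arange(1, 251) * 1e3
      Psi = np.cos(2 * np.pi * t[:, None] * freqs[None, :])
      A = (p[:, None] * Psi).reshape(M, L, -1).sum(axis=1)

      def omp(A, y, k):
          resid, sup = y.copy(), []
          for _ in range(k):
              sup.append(int(np.argmax(np.abs(A.T @ resid))))
              coef, *_ = np.linalg.lstsq(A[:, sup], y, rcond=None)
              resid = y - A[:, sup] @ coef
          return sup, coef

      sup, amp = omp(A, y, k=2)
      print("recovered tones [kHz]:", freqs[sup] / 1e3, " amplitudes:", np.round(amp, 2))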

  9. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  10. Sensor employing internal reference electrode

    DEFF Research Database (Denmark)

    2013-01-01

    The present invention concerns a novel internal reference electrode as well as a novel sensing electrode for an improved internal reference oxygen sensor and the sensor employing same.

  11. Clinical Feasibility of Free-Breathing Dynamic T1-Weighted Imaging With Gadoxetic Acid-Enhanced Liver Magnetic Resonance Imaging Using a Combination of Variable Density Sampling and Compressed Sensing.

    Science.gov (United States)

    Yoon, Jeong Hee; Yu, Mi Hye; Chang, Won; Park, Jin-Young; Nickel, Marcel Dominik; Son, Yohan; Kiefer, Berthold; Lee, Jeong Min

    2017-10-01

    The purpose of the study was to investigate the clinical feasibility of free-breathing dynamic T1-weighted imaging (T1WI) using Cartesian sampling, compressed sensing, and iterative reconstruction in gadoxetic acid-enhanced liver magnetic resonance imaging (MRI). This retrospective study was approved by our institutional review board, and the requirement for informed consent was waived. A total of 51 patients at high risk of breath-holding failure underwent dynamic T1WI in a free-breathing manner using volumetric interpolated breath-hold (BH) examination with compressed sensing reconstruction (CS-VIBE) and hard gating. Timing, motion artifacts, and image quality were evaluated by 4 radiologists on a 4-point scale. For patients with low image quality scores, an additional XD reconstruction was performed and reviewed in the same manner. In addition, in 68.6% (35/51) of patients who had previously undergone liver MRI, image quality and motion artifacts on the dynamic phases using CS-VIBE were compared with previous BH-T1WIs. In all patients, adequate arterial-phase timing was obtained at least once. Overall image quality of free-breathing T1WI was 3.30 ± 0.59 on precontrast and 2.68 ± 0.70, 2.93 ± 0.65, and 3.30 ± 0.49 on early arterial, late arterial, and portal venous phases, respectively. In 13 patients with lower than average image quality, XD-reconstructed CS-VIBE significantly reduced motion artifacts; XD reconstruction showed fewer motion artifacts and better image quality on precontrast, arterial, and portal venous phases (P < 0.0001-0.013). Volumetric interpolated breath-hold examination with compressed sensing has the potential to provide consistent, motion-corrected free-breathing dynamic T1WI for liver MRI in patients at high risk of breath-holding failure.

  12. Compressed sensing reconstruction of cardiac cine MRI using golden angle spiral trajectories.

    Science.gov (United States)

    Tolouee, Azar; Alirezaie, Javad; Babyn, Paul

    2015-11-01

    In dynamic cardiac cine Magnetic Resonance Imaging (MRI), the spatiotemporal resolution is limited by the low imaging speed. Compressed sensing (CS) theory has been applied to improve the imaging speed and thus the spatiotemporal resolution. The purpose of this paper is to improve CS reconstruction of undersampled data by exploiting spatiotemporal sparsity and efficient spiral trajectories. We extend the k-t sparse algorithm to spiral trajectories to achieve high spatiotemporal resolution in cardiac cine imaging. We exploit the spatiotemporal sparsity of cardiac cine MRI by applying a 2D+time wavelet-Fourier transform. For efficient coverage of k-space, we use a modified version of multi-shot (interleaved) spiral trajectories. In order to reduce incoherent aliasing artifacts, we use a different random undersampling pattern for each temporal frame. Finally, we use the nonuniform fast Fourier transform (NUFFT) algorithm to reconstruct the image from the non-uniformly acquired samples. The proposed approach was tested on simulated data and cardiac cine MRI data. Results show that higher acceleration factors with improved image quality can be obtained with the proposed approach in comparison to the existing state-of-the-art method. The flexibility of the introduced method should allow it to be used not only for the challenging case of cardiac imaging, but also for other applications where the patient moves or breathes during acquisition. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. BasP allows interpretation as a channel-coding problem and guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, that degrade the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) that generates a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton that decreases a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, for which it outputs a shortest instanton (error-vector) pattern of length 11.
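
    As an editorial illustration of the Basis Pursuit decoder analysed in this record, the following minimal Python sketch solves min ||x||_1 subject to Ax = y as a linear program with scipy.optimize.linprog; the dimensions and the random Gaussian matrix are illustrative only and are not the 512 × 120 matrix or the instanton-search procedure of the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy Basis Pursuit decoder: minimize ||x||_1 subject to A @ x = y.
m, n, k = 40, 120, 5                      # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# LP variables z = [x, t]: minimize sum(t) s.t. -t <= x <= t and A @ x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])      # x - t <= 0 and -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=bounds, method="highs")
x_hat = res.x[:n]
print("BasP recovery error:", np.linalg.norm(x_hat - x_true))
```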

  14. Long-term surface EMG monitoring using K-means clustering and compressive sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    In this work, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory in combination with the K-Singular Value Decomposition (K-SVD) method for clustering long-term recordings of surface electromyography (sEMG) signals. Long-term monitoring of sEMG signals aims to record the electrical activity produced by muscles, which is a very useful procedure for treatment and diagnostic purposes as well as for the detection of various pathologies. The proposed algorithm is examined for three scenarios of sEMG signals: a healthy person (sEMG-Healthy), a patient with myopathy (sEMG-Myopathy), and a patient with neuropathy (sEMG-Neuropathy). The proposed algorithm can easily scan large datasets of long-term sEMG recordings. We test the proposed algorithm with Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC) dimensionality reduction methods. Then, the output of the proposed algorithm is fed to K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers in order to calculate the clustering performance. The proposed algorithm achieves a classification accuracy of 99.22%, corresponding to reductions of 17% in Average Classification Error (ACE), 9% in Training Error (TE), and 18% in Root Mean Square Error (RMSE). The proposed algorithm also reduces clustering energy consumption by 14% compared to the existing K-means clustering algorithm.
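
    For illustration only, the sketch below shows the generic idea of clustering compressive measurements: synthetic signals stand in for the sEMG recordings, a random Gaussian matrix provides the CS projections, and scikit-learn's K-means clusters the compressed windows. The K-SVD dictionary stage, the PCA/LCC reduction and the K-NN/PNN classifiers of the paper are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-ins for windowed sEMG recordings from three signal classes,
# differing only in their dominant frequency content.
n_per_class, win_len = 50, 512
t = np.arange(win_len) / 1000.0
freqs = [60.0, 120.0, 200.0]              # hypothetical class-specific frequencies
windows = []
for f in freqs:
    for _ in range(n_per_class):
        windows.append(np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(win_len))
X = np.asarray(windows)

# Compressive measurements: a random Gaussian projection reduces each window
# from 512 samples to 64 measurements.
m = 64
Phi = rng.standard_normal((m, win_len)) / np.sqrt(m)
Y = X @ Phi.T

# Cluster directly in the compressed domain.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Y)
print("cluster sizes:", np.bincount(km.labels_))
```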

  15. LIGO sensing system performance

    CERN Document Server

    Landry, M

    2002-01-01

    The optical sensing subsystem of a LIGO interferometer is described. The system includes two complex interferometric sensing schemes to control test masses in length and alignment. The length sensing system is currently employed on all LIGO interferometers to lock coupled cavities on resonance. Auto-alignment is to be accomplished by a wavefront-sensing scheme which automatically corrects for angular fluctuations of the test masses. Improvements in lock stability and duration are noted when the wavefront auto-alignment system is employed. Preliminary results from the commissioning of the 2 km detector in Washington are shown.

  16. Higher resolution cine imaging with compressed sensing for accelerated clinical left ventricular evaluation.

    Science.gov (United States)

    Lin, Aaron C W; Strugnell, Wendy; Riley, Robyn; Schmitt, Benjamin; Zenge, Michael; Schmidt, Michaela; Morris, Norman R; Hamilton-Craig, Christian

    2017-06-01

    To assess the clinical feasibility of a compressed sensing cine magnetic resonance imaging (MRI) sequence of both high temporal and spatial resolution (CS_bSSFP) in comparison to a balanced steady-state free precession cine (bSSFP) sequence for reliable quantification of left ventricular (LV) volumes and mass. Segmented MRI cine images were acquired on a 1.5T scanner in 50 patients in the LV short-axis stack orientation using a retrospectively gated conventional bSSFP sequence (generalized autocalibrating partially parallel acquisition [GRAPPA] acceleration factor 2), followed by a prospectively triggered CS_bSSFP sequence with a net acceleration factor of 8. Image quality was assessed by published criteria. Comparison of sequences was made in LV volumes and mass, image quality score, quantitative regional myocardial wall motion, and imaging time using Pearson's correlation, Bland-Altman analysis, and the paired two-tailed Student's t-test. Differences (bSSFP minus CS_bSSFP, mean ± SD) and Pearson's correlations were 14.8 ± 16.3 (P = 0.31) and r = 0.98, respectively. Cine CS_bSSFP accurately and reliably quantitates LV volumes and mass, shortens acquisition times, and is clinically feasible. Level of Evidence: 1. Technical Efficacy: Stage 2. J. MAGN. RESON. IMAGING 2017;45:1693-1699. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Continuous diffusion signal, EAP and ODF estimation via Compressive Sensing in diffusion MRI.

    Science.gov (United States)

    Merlet, Sylvain L; Deriche, Rachid

    2013-07-01

    In this paper, we exploit the ability of Compressed Sensing (CS) to recover the whole 3D Diffusion MRI (dMRI) signal from a limited number of samples while efficiently recovering important diffusion features such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF). Some attempts to use CS for estimating diffusion signals have been made recently. However, these mainly provided experimental insight into CS capabilities in dMRI, and the CS theory has not been fully exploited. In this work, we also propose to study the impact of sparsity, incoherence, and the RIP on the reconstruction of diffusion signals. We show that an efficient use of the CS theory enables a drastic reduction in the number of measurements commonly used in dMRI acquisitions. Only 20-30 measurements, optimally spread over several b-value shells, are shown to be necessary, which is fewer than in previous attempts to recover the diffusion signal using CS. This opens an attractive perspective for measuring diffusion signals in white matter within a reduced acquisition time and shows that CS holds great promise and opens new and exciting perspectives in diffusion MRI. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. TH-E-17A-06: Anatomical-Adaptive Compressed Sensing (AACS) Reconstruction for Thoracic 4-Dimensional Cone-Beam CT

    International Nuclear Information System (INIS)

    Shieh, C; Kipritidis, J; OBrien, R; Cooper, B; Kuncic, Z; Keall, P

    2014-01-01

    Purpose: The Feldkamp-Davis-Kress (FDK) algorithm currently used for clinical thoracic 4-dimensional (4D) cone-beam CT (CBCT) reconstruction suffers from noise and streaking artifacts due to projection under-sampling. Compressed sensing theory enables reconstruction of under-sampled datasets via total-variation (TV) minimization, but TV-minimization algorithms such as adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) often converge slowly and are prone to over-smoothing anatomical details. These disadvantages can be overcome by incorporating general anatomical knowledge via anatomy segmentation. Based on this concept, we have developed an anatomical-adaptive compressed sensing (AACS) algorithm for thoracic 4D-CBCT reconstruction. Methods: AACS is based on the ASD-POCS framework, where each iteration consists of a TV-minimization step and a data fidelity constraint step. Prior to every AACS iteration, four major thoracic anatomical structures - soft tissue, lungs, bony anatomy, and pulmonary details - were segmented from the updated solution image. Based on the segmentation, an anatomical-adaptive weighting was applied to the TV-minimization step, so that TV-minimization was enhanced at noisy/streaky regions and suppressed at anatomical structures of interest. The image quality and convergence speed of AACS was compared to conventional ASD-POCS using an XCAT digital phantom and a patient scan. Results: For the XCAT phantom, the AACS image represented the ground truth better than the ASD-POCS image, giving a higher structural similarity index (0.93 vs. 0.84) and lower absolute difference (1.1×10^4 vs. 1.4×10^4). For the patient case, while both algorithms resulted in much less noise and streaking than FDK, the AACS image showed considerably better contrast and sharpness of the vessels, tumor, and fiducial marker than the ASD-POCS image. In addition, AACS converged over 50% faster than ASD-POCS in both cases. Conclusions: The proposed AACS algorithm

  19. TH-E-17A-06: Anatomical-Adaptive Compressed Sensing (AACS) Reconstruction for Thoracic 4-Dimensional Cone-Beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Shieh, C; Kipritidis, J; OBrien, R; Cooper, B; Kuncic, Z; Keall, P [The University of Sydney, Sydney, New South Wales (Australia)

    2014-06-15

    Purpose: The Feldkamp-Davis-Kress (FDK) algorithm currently used for clinical thoracic 4-dimensional (4D) cone-beam CT (CBCT) reconstruction suffers from noise and streaking artifacts due to projection under-sampling. Compressed sensing theory enables reconstruction of under-sampled datasets via total-variation (TV) minimization, but TV-minimization algorithms such as adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) often converge slowly and are prone to over-smoothing anatomical details. These disadvantages can be overcome by incorporating general anatomical knowledge via anatomy segmentation. Based on this concept, we have developed an anatomical-adaptive compressed sensing (AACS) algorithm for thoracic 4D-CBCT reconstruction. Methods: AACS is based on the ASD-POCS framework, where each iteration consists of a TV-minimization step and a data fidelity constraint step. Prior to every AACS iteration, four major thoracic anatomical structures - soft tissue, lungs, bony anatomy, and pulmonary details - were segmented from the updated solution image. Based on the segmentation, an anatomical-adaptive weighting was applied to the TV-minimization step, so that TV-minimization was enhanced at noisy/streaky regions and suppressed at anatomical structures of interest. The image quality and convergence speed of AACS was compared to conventional ASD-POCS using an XCAT digital phantom and a patient scan. Results: For the XCAT phantom, the AACS image represented the ground truth better than the ASD-POCS image, giving a higher structural similarity index (0.93 vs. 0.84) and lower absolute difference (1.1×10^4 vs. 1.4×10^4). For the patient case, while both algorithms resulted in much less noise and streaking than FDK, the AACS image showed considerably better contrast and sharpness of the vessels, tumor, and fiducial marker than the ASD-POCS image. In addition, AACS converged over 50% faster than ASD-POCS in both cases. Conclusions: The proposed AACS
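
    The following toy Python sketch illustrates only the spatially weighted TV-smoothing idea behind AACS, with a hypothetical weight map standing in for the anatomy segmentation; it is a denoising-style gradient descent on a smoothed TV penalty, not the ASD-POCS/AACS reconstruction with its data-fidelity constraint step.

```python
import numpy as np

def weighted_tv_smooth(x0, weights, lam=0.2, eps=1e-3, n_iter=100, step=0.2):
    """Gradient descent on 0.5*||x - x0||^2 + lam * sum(w * sqrt(|grad x|^2 + eps)).

    `weights` is a per-pixel map: large where smoothing should be strong
    (noisy/streaky regions), small where anatomy must be preserved.
    """
    x = x0.copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=0, append=x[-1:, :])      # forward differences
        gy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = weights * gx / mag, weights * gy / mag
        # Divergence via backward differences (periodic wrap kept for brevity).
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        x = x - step * ((x - x0) - lam * div)
    return x

rng = np.random.default_rng(2)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0                              # simple "anatomy"
noisy = truth + 0.2 * rng.standard_normal(truth.shape)

# Hypothetical anatomical weighting: relax TV inside the structure, enforce it outside.
w = np.where(truth > 0.5, 0.3, 1.0)
smoothed = weighted_tv_smooth(noisy, w)
print("MAE before/after:", np.abs(noisy - truth).mean(), np.abs(smoothed - truth).mean())
```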

  20. On music genre classification via compressive sampling

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    Recent work \cite{Chang2010} combines low-level acoustic features and random projection (referred to as "compressed sensing" in \cite{Chang2010}) to create a music genre classification system showing an accuracy among the highest reported for a benchmark dataset. This not only contradicts previ...

  1. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on adaptive orthogonal and randomizing transforms, are proposed. In both algorithms, the reduction of data volume exploits the symmetry of the γ-ray spectra

  2. Quantitative mapping of chemical compositions with MRI using compressed sensing.

    Science.gov (United States)

    von Harbou, Erik; Fabich, Hilary T; Benning, Martin; Tayler, Alexander B; Sederman, Andrew J; Gladden, Lynn F; Holland, Daniel J

    2015-12-01

    In this work, a magnetic resonance (MR) imaging method for accelerating the acquisition of two-dimensional concentration maps of different chemical species in mixtures by the use of compressed sensing (CS) is presented. Whilst 2D concentration maps with a high spatial resolution are prohibitively time-consuming to acquire using full k-space sampling techniques, CS enables the reconstruction of quantitative concentration maps from sub-sampled k-space data. First, the method was tested by reconstructing simulated data. Then, the CS algorithm was used to reconstruct concentration maps of binary mixtures of 1,4-dioxane and cyclooctane in different samples with a field-of-view of 22 mm and a spatial resolution of 344 μm × 344 μm. Spiral-based trajectories were used as sampling schemes. For the data acquisition, eight scans with slightly different trajectories were applied, resulting in a total acquisition time of about 8 min. In contrast, a conventional chemical shift imaging experiment at the same resolution would require about 17 h. To obtain quantitative results, a careful choice of the regularisation parameter (via the L-curve approach) or contrast-enhancing Bregman iterations is applied for the reconstruction of the concentration maps. Both approaches yield relative errors of the concentration map of less than 2 mol% without any calibration prior to the measurement. The accuracy of the reconstructed concentration maps deteriorates when the reconstruction model is biased by systematic errors such as large inhomogeneities in the static magnetic field. The presented method is a powerful tool for the fast acquisition of concentration maps that can provide valuable information for the investigation of many phenomena in chemical engineering applications. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
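
    As a generic illustration of the L-curve approach mentioned above, the sketch below sweeps the regularisation parameter of a Tikhonov-regularised least-squares toy problem, records the (log residual norm, log solution norm) points and picks the corner of maximum discrete curvature; the forward operator and noise level are invented, and the paper's actual CS reconstruction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned toy inverse problem  y = A @ x + noise.
n = 80
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -6, n)                    # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 1e-4 * rng.standard_normal(n)

# Sweep lambda and record the L-curve points (log residual norm, log solution norm).
lams = 10.0 ** np.linspace(-10, 2, 60)
pts = []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    pts.append((np.log10(np.linalg.norm(A @ x - y)), np.log10(np.linalg.norm(x))))
pts = np.asarray(pts)

# Pick the corner as the point of maximum discrete curvature of the curve.
d1 = np.gradient(pts, axis=0)
d2 = np.gradient(d1, axis=0)
curvature = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / \
            ((d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12)
corner = np.argmax(curvature[5:-5]) + 5              # ignore the sweep end points
print("L-curve corner at lambda ~ %.2e" % lams[corner])
```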

  3. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes; current sensors tend to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge-detection mechanism pays attention to changes in movement, which naturally produces organized sparseness, because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to build retrievable graphical indexes. We refer to this organized sparseness as Compressive Sampling: sensing while skipping over redundancy without altering the original image. We illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships, and we note the similarity between mathematical Compressive Sensing and this biological survival mechanism. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed it up further, our mixed-signal circuit design for frame differencing is built into on-chip processing hardware. A CMOS transconductance amplifier generates a linear current output from a pair of differential input voltages taken from two photon detectors for change detection, one holding the previous value and the other the subsequent value ("write" synaptic weights by Hebbian outer products; "read" by inner product and pointwise nonlinear threshold), to localize and track threat targets.
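
    A software-only toy of the frame-differencing idea described above is sketched below: two synthetic frames are differenced and only the pixels whose change exceeds a threshold are kept, producing the "organized sparseness" the record refers to. The mixed-signal CMOS implementation is, of course, not modelled here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic frames: a small bright "target" moves a few pixels between them.
h, w = 64, 64
frame_prev = 0.05 * rng.standard_normal((h, w))
frame_next = 0.05 * rng.standard_normal((h, w))
frame_prev[20:26, 20:26] += 1.0
frame_next[22:28, 23:29] += 1.0

# Organized sparseness: report only the pixels whose change exceeds a threshold,
# mimicking retinal neurons that stay silent for stagnant edges.
diff = frame_next - frame_prev
mask = np.abs(diff) > 0.5
indices = np.flatnonzero(mask)            # locations that "fire"
values = diff.flat[indices]               # and the changes they report

kept = indices.size
print(f"kept {kept} of {h * w} pixels ({100.0 * kept / (h * w):.1f}% of the frame)")
```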

  4. Joint synthetic aperture radar plus ground moving target indicator from single-channel radar using compressive sensing

    Science.gov (United States)

    Thompson, Douglas; Hallquist, Aaron; Anderson, Hyrum

    2017-10-17

    The various embodiments presented herein relate to utilizing an operational single-channel radar to collect and process synthetic aperture radar (SAR) and ground moving target indicator (GMTI) imagery from a same set of radar returns. In an embodiment, data is collected by randomly staggering a slow-time pulse repetition interval (PRI) over a SAR aperture such that a number of transmitted pulses in the SAR aperture is preserved with respect to standard SAR, but many of the pulses are spaced very closely enabling movers (e.g., targets) to be resolved, wherein a relative velocity of the movers places them outside of the SAR ground patch. The various embodiments of image reconstruction can be based on compressed sensing inversion from undersampled data, which can be solved efficiently using such techniques as Bregman iteration. The various embodiments enable high-quality SAR reconstruction, and high-quality GMTI reconstruction from the same set of radar returns.

  5. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.

  6. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compress sensing methods

    Science.gov (United States)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th, 2013, a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is a rare mega-earthquake to have occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega-earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improvement on the traditional backprojection method that more accurately locates subevents (energy bursts) during earthquake rupture and determines the rupture speeds. The total rupture duration of this earthquake is about 35 s with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega-earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and the rupture process. The results from both methods are generally similar. In the next step, we will use data from dense arrays in southwest China as well as global stations for further analysis in order to more comprehensively study the rupture process of this deep mega-earthquake. Reference: [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for

  7. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

    Stress analysis has been carried out on glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental set-up were analyzed, namely the parallel and series methods, in which the specimen is compressed by tilted jigs that apply the combined stresses to the specimen. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two set-ups. In the parallel system, the shear strength first increased with compressive stress and then decreased. On the contrary, in the series system the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses are discussed.
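
    For reference, the standard Tsai-Hill failure index for combined in-plane stresses is sketched below; the strength values are assumed for illustration, and the paper's modified criterion (and its treatment of compressive strengths) may differ.

```python
def tsai_hill_index(sigma1, sigma2, tau12, X, Y, S):
    """Standard Tsai-Hill failure index for a unidirectional ply (failure at >= 1).

    sigma1, sigma2 : in-plane normal stresses (MPa); tau12 : in-plane shear (MPa)
    X, Y, S        : longitudinal, transverse and shear strengths (MPa)
    A full treatment would switch to compressive strengths when the normal
    stresses are negative; that refinement is omitted in this sketch.
    """
    return (sigma1 / X) ** 2 - (sigma1 * sigma2) / X ** 2 \
        + (sigma2 / Y) ** 2 + (tau12 / S) ** 2

# Assumed GFRP strengths and a shear stress of 50 MPa combined with increasing
# transverse compression (all numbers hypothetical).
X, Y, S = 600.0, 120.0, 70.0
for compression in (0.0, -40.0, -80.0):
    idx = tsai_hill_index(0.0, compression, 50.0, X, Y, S)
    print(f"transverse stress {compression:6.1f} MPa -> failure index {idx:.2f}")
```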

  8. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed. A suitable data filing system and precise data compression are therefore needed. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven compression parameter modes. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using the CCD-sensor film reader in MediFile 1000. These images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed image in several regions of interest by four radiologists. Compression and extension of radiological images were performed promptly with the local filing system. Image quality was affected much more by the data compression ratio than by the parameter mode itself; in other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 for plain radiographs and about 8:1 for contrast studies. Image compression by this technique should be acceptable for diagnostic radiology. (author)

  9. Lossless compression of hyperspectral images with pre-byte processing and intra-bands correlation

    OpenAIRE

    Sarinova, Assiya; Zamyatin, Alexander; Cabral, Pedro

    2015-01-01

    This paper considers an approach to the compression of hyperspectral remote sensing data using an original multistage algorithm that increases the compression ratio by auxiliary processing of the data in its byte representation as well as by exploiting intra-band correlation. A set of experimental results estimating the effectiveness of the proposed approach and comparing it with well-known universal and specialized compression algorithms is presented.

  10. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  11. Feedback Reduction in Broadcast and two Hop Multiuser Networks: A Compressed Sensing Approach

    KAUST Repository

    Shibli, Hussain J.

    2013-05-21

    In multiuser wireless networks, the base stations (BSs) rely on the channel state information (CSI) of the users in order to perform user scheduling and downlink transmission. While the downlink channels can be easily estimated at all user terminals via a single broadcast, several key challenges are faced during uplink (feedback) transmission. Firstly, the noisy and fading feedback channels are usually unknown at the base station, and therefore channel training is usually required from all users. Secondly, the amount of air-time required for feedback transmission grows linearly with the number of users. This domination of the network resources by feedback information leads to increased scheduling delay and outdated CSI at the BS. In this thesis, we tackle the above challenges and propose feedback reduction algorithms based on the theory of compressive sensing (CS). The proposed algorithms encompass both single- and dual-hop wireless networks, and: (i) permit the BS to obtain CSI with acceptable recovery guarantees under substantially reduced feedback overhead; (ii) are agnostic to the statistics of the feedback channels; and (iii) utilize the a priori statistics of the additive noise to identify strong users. Numerical results show that the proposed algorithms are able to reduce the feedback overhead, improve detection at the BS, and achieve a sum-rate close to that obtained by noiseless dedicated feedback algorithms.
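
    The sketch below illustrates the generic CS principle behind such feedback reduction: only a few users have strong SNRs, so the aggregate feedback is a sparse vector that the BS can recover from far fewer channel uses with a greedy decoder. The orthogonal-matching-pursuit decoder, the dimensions, and the noiseless-signature model are illustrative assumptions, not the algorithms of the thesis.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x + noise."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(5)
n_users, n_strong, n_feedback = 200, 6, 40

# Only a few users exceed the SNR threshold; everyone else feeds back zero.
snr = np.zeros(n_users)
strong = rng.choice(n_users, n_strong, replace=False)
snr[strong] = 20.0 + 5.0 * rng.standard_normal(n_strong)

# Each user modulates its value onto a shared random signature, so the BS sees a
# short compressed superposition instead of one feedback slot per user.
A = rng.standard_normal((n_feedback, n_users)) / np.sqrt(n_feedback)
y = A @ snr + 0.1 * rng.standard_normal(n_feedback)

snr_hat = omp(A, y, n_strong)
print("true strong users:     ", np.sort(strong))
print("recovered strong users:", np.sort(np.flatnonzero(snr_hat)))
```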

  12. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximally correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  13. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice...

  14. Economic and technical feasibility study of compressed air storage

    Energy Technology Data Exchange (ETDEWEB)

    1976-03-01

    The results of a study of the economic and technical feasibility of compressed air energy storage (CAES) are presented. The study, which concentrated primarily on the application of underground air storage with combustion turbines, consisted of two phases. In the first phase a general assessment of the technical alternatives, economic characteristics and the institutional constraints associated with underground storage of compressed air for utility peaking application was carried out. The goal of this assessment was to identify potential barrier problems and to define the incentive for the implementation of compressed air storage. In the second phase, the general conclusions of the assessment were tested by carrying out the conceptual design of a CAES plant at two specific sites, and a program of further work indicated by the assessment study was formulated. The conceptual design of a CAES plant employing storage in an aquifer and that of a plant employing storage in a conventionally excavated cavern employing a water leg to maintain constant pressure are shown. Recommendations for further work, as well as directions of future turbo-machinery development, are made. It is concluded that compressed air storage is technically feasible for off-peak energy storage, and, depending on site conditions, CAES plants may be favored over simple cycle turbine plants to meet peak demands. (LCL)

  15. A Data-Gathering Scheme with Joint Routing and Compressive Sensing Based on Modified Diffusion Wavelets in Wireless Sensor Networks.

    Science.gov (United States)

    Gu, Xiangping; Zhou, Xiaofeng; Sun, Yanjing

    2018-02-28

    Compressive sensing (CS)-based data gathering is a promising method to reduce energy consumption in wireless sensor networks (WSNs). Traditional CS-based data-gathering approaches require a large number of sensor nodes to participate in each CS measurement task, resulting in high energy consumption, and do not guarantee load balance. In this paper, we propose a sparsifying analysis based on modified diffusion wavelets, which exploits the spatial correlation of sensor readings in WSNs. In particular, a novel data-gathering scheme with joint routing and CS is presented. A modified ant colony algorithm is adopted, where next-hop node selection takes a node's residual energy and path length into consideration simultaneously. Moreover, in order to speed up the coverage rate and avoid local optima of the algorithm, an improved pheromone impact factor is put forward. More importantly, theoretical proof is given that the generated equivalent sensing matrix satisfies the restricted isometry property (RIP). The simulation results demonstrate that the modified diffusion wavelets sparsify the sensor signal more effectively and achieve better reconstruction performance than the DFT. Furthermore, our data-gathering scheme with joint routing and CS can dramatically reduce the energy consumption of WSNs, balance the load, and prolong the network lifetime in comparison to state-of-the-art CS-based methods.

  16. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-Slim

    2017-01-01

    Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited to large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and providing better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  17. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain

    2017-01-09

    Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited to large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and providing better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  18. Characterization of statistical prior image constrained compressed sensing (PICCS): II. Application to dose reduction

    International Nuclear Information System (INIS)

    Lauzier, Pascal Thériault; Chen Guanghong

    2013-01-01

    Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as the one studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo-point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness

  19. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    Science.gov (United States)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing the image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. The applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with far fewer data, while preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R² from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
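
    In the spirit of the record above, the following sketch solves a toy l1-regularised least-squares problem with a DCT sparsifying transform using plain ISTA; the random forward operator stands in for the resistivity sensitivity matrix, and the paper's block-oriented DCT and primal-dual interior-point solver are not reproduced.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(6)

# Toy "resistivity" profile: piecewise constant, hence compressible under a DCT.
n = 128
model = np.concatenate([np.full(40, 50.0), np.full(50, 300.0), np.full(38, 80.0)])

# Stand-in linear forward operator and noisy data (far fewer data than unknowns).
m = 48
G = rng.standard_normal((m, n)) / np.sqrt(m)
d = G @ model + 0.5 * rng.standard_normal(m)

# ISTA on 0.5*||G @ idct(c) - d||^2 + lam*||c||_1, c being DCT coefficients.
B = idct(np.eye(n), axis=0, norm="ortho")            # inverse-DCT synthesis matrix
A = G @ B                                            # effective sensing matrix
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 1.0
c = np.zeros(n)
for _ in range(500):
    c = c - step * (A.T @ (A @ c - d))               # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft threshold

model_hat = B @ c
print("relative model error:", np.linalg.norm(model_hat - model) / np.linalg.norm(model))
```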

  20. A compressed sensing approach for resolution improvement in fiber-bundle based endomicroscopy

    Science.gov (United States)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2018-02-01

    Endomicroscopy techniques such as confocal, multi-photon, and wide-field imaging have all been demonstrated using coherent fiber-optic imaging bundles. While the narrow diameter and flexibility of fiber bundles is clinically advantageous, the number of resolvable points in an image is conventionally limited to the number of individual fibers within the bundle. We are introducing concepts from the compressed sensing (CS) field to fiber bundle based endomicroscopy, to allow images to be recovered with more resolvable points than fibers in the bundle. The distal face of the fiber bundle is treated as a low-resolution sensor with circular pixels (fibers) arranged in a hexagonal lattice. A spatial light modulator is located conjugate to the object and distal face, applying multiple high resolution masks to the intermediate image prior to propagation through the bundle. We acquire images of the proximal end of the bundle for each (known) mask pattern and then apply CS inversion algorithms to recover a single high-resolution image. We first developed a theoretical forward model describing image formation through the mask and fiber bundle. We then imaged objects through a rigid fiber bundle and demonstrate that our CS endomicroscopy architecture can recover intra-fiber details while filling inter-fiber regions with interpolation. Finally, we examine the relationship between reconstruction quality and the ratio of the number of mask elements to the number of fiber cores, finding that images could be generated with approximately 28,900 resolvable points for a 1,000 fiber region in our platform.

  1. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  2. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  3. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    Science.gov (United States)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and it is also a burden for data processing and transmission. The theory of compressive sensing (CS) has been proposed for almost a decade, and massive experiments show that CS has favorable performance in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, some inherent characteristics such as non-negativity and smoothness are known. Therefore, the goal of this paper is to present a novel measurement matrix that relaxes the RIP requirement. The new sensing matrix consists of two parts: the standard Nyquist sampling matrix for thumbnails and the conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is short, many previously captured remote sensing images of the same place are available in advance. This drives us to reconstruct remote sensing images through a deep learning approach with those measurements from the new framework. Therefore, we propose a novel deep convolutional neural network (CNN) architecture which takes undersampled measurements as input and outputs an intermediate reconstruction image. Although training the network takes a long time, the training step only needs to be done once, which makes the approach attractive for a host of sparse recovery problems.

  4. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to exact-repeat and reverse-repeat fragments of the DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.
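
    As a toy illustration of the basic idea of assigning fixed binary codes to bases (2 bits per base), the snippet below packs and unpacks a DNA string; DNABIT Compress itself goes further by assigning unique bit codes to exact-repeat and reverse-repeat fragments, which this sketch does not attempt.

```python
# Toy 2-bit packing of a DNA sequence (A, C, G, T only); repeat handling omitted.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    # Prepend the base count so leading zeros and length are preserved.
    return len(seq).to_bytes(4, "big") + bits.to_bytes((2 * len(seq) + 7) // 8, "big")

def unpack(blob: bytes) -> str:
    n = int.from_bytes(blob[:4], "big")
    bits = int.from_bytes(blob[4:], "big")
    return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

seq = "ACGTACGTTTGACCTGA"
packed = pack(seq)
assert unpack(packed) == seq
print(f"{len(seq)} bases ({len(seq)} bytes as text) -> {len(packed)} bytes packed")
```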

  5. On-line compression of symmetrical multidimensional γ-ray spectra using adaptive orthogonal transforms

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2008-01-01

    An efficient algorithm to compress multidimensional symmetrical γ-ray events is presented. The reduction of data volume is achieved thanks to both the symmetry of the γ-ray spectra and the compression capabilities of the employed adaptive orthogonal transform. Illustrative examples demonstrate the advantages of the proposed compression algorithm. The algorithm was implemented for on-line compression of events; the acquired compressed data can later be processed interactively.

  6. Tactile Sensor Array with Fiber Bragg Gratings in Quasi-Distributed Sensing

    Directory of Open Access Journals (Sweden)

    Marcelo A. Pedroso

    2018-01-01

    Full Text Available This work describes the development of a quasi-distributed real-time tactile sensing system with a reduced number of fiber Bragg grating-based sensors and reports its use with a reconstruction method based on differential evolution. The sensing system comprises six fiber Bragg gratings encapsulated in silicone elastomer to form a tactile sensor array with total dimensions of 60 × 80 mm, divided into eight sensing cells with dimensions of 20 × 30 mm. Forces applied at the central position of the sensor array resulted in linear response curves for the gratings, highlighting their coupled responses and allowing the application of compressive sensing. The reduced number of sensors relative to the number of sensing cells results in an underdetermined inverse problem, solved with a compressive sensing algorithm aided by the differential evolution method. The system is capable of identifying and quantifying up to four different loads at four different cells with relative errors lower than 10.5% and a signal-to-noise ratio better than 12 dB.
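
    The sketch below illustrates the reconstruction idea on invented numbers: eight cell loads are recovered from six coupled sensor readings by minimising a residual-plus-l1 objective with scipy's differential_evolution. The 6 × 8 coupling matrix is random and stands in for the calibrated FBG response curves reported in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(7)

n_fbg, n_cells = 6, 8
# Hypothetical linear coupling between cell loads (N) and Bragg wavelength shifts.
C = rng.uniform(0.2, 1.0, size=(n_fbg, n_cells))

# Ground truth: loads applied on only two of the eight cells.
loads_true = np.zeros(n_cells)
loads_true[[1, 5]] = [3.0, 5.0]
shifts = C @ loads_true + 0.01 * rng.standard_normal(n_fbg)

# Underdetermined inverse problem: an l1 term promotes sparse load patterns.
def objective(loads):
    return np.sum((C @ loads - shifts) ** 2) + 0.05 * np.sum(np.abs(loads))

result = differential_evolution(objective, bounds=[(0.0, 10.0)] * n_cells,
                                seed=0, tol=1e-8, maxiter=2000)
print("estimated loads:", np.round(result.x, 2))
print("true loads:     ", loads_true)
```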

  7. Block compressed sensing for feedback reduction in relay-aided multiuser full duplex networks

    KAUST Repository

    Elkhalil, Khalil

    2016-08-11

    Opportunistic user selection is a simple technique that exploits the spatial diversity in multiuser relay-aided networks. Nonetheless, channel state information (CSI) from all users (and cooperating relays) is generally required at a central node in order to make selection decisions. Practically, CSI acquisition generates a great deal of feedback overhead that could result in significant transmission delays. In addition to this, the presence of a full-duplex cooperating relay corrupts the fed back CSI by additive noise and the relay's loop (or self) interference. This could lead to transmission outages if user selection is based on inaccurate feedback information. In this paper, we propose an opportunistic full-duplex feedback algorithm that tackles the above challenges. We cast the problem of joint user signal-to-noise ratio (SNR) and the relay loop interference estimation at the base-station as a block sparse signal recovery problem in compressive sensing (CS). Using existing CS block recovery algorithms, the identity of the strong users is obtained and their corresponding SNRs are estimated. Numerical results show that the proposed technique drastically reduces the feedback overhead and achieves a rate close to that obtained by techniques that require dedicated error-free feedback from all users. Numerical results also show that there is a trade-off between the feedback interference and load, and for short coherence intervals, full-duplex feedback achieves higher throughput when compared to interference-free (half-duplex) feedback. © 2016 IEEE.

  8. Block compressed sensing for feedback reduction in relay-aided multiuser full duplex networks

    KAUST Repository

    Elkhalil, Khalil; Eltayeb, Mohammed; Kammoun, Abla; Al-Naffouri, Tareq Y.; Bahrami, Hamid Reza

    2016-01-01

    Opportunistic user selection is a simple technique that exploits the spatial diversity in multiuser relay-aided networks. Nonetheless, channel state information (CSI) from all users (and cooperating relays) is generally required at a central node in order to make selection decisions. Practically, CSI acquisition generates a great deal of feedback overhead that could result in significant transmission delays. In addition to this, the presence of a full-duplex cooperating relay corrupts the fed back CSI by additive noise and the relay's loop (or self) interference. This could lead to transmission outages if user selection is based on inaccurate feedback information. In this paper, we propose an opportunistic full-duplex feedback algorithm that tackles the above challenges. We cast the problem of joint user signal-to-noise ratio (SNR) and the relay loop interference estimation at the base-station as a block sparse signal recovery problem in compressive sensing (CS). Using existing CS block recovery algorithms, the identity of the strong users is obtained and their corresponding SNRs are estimated. Numerical results show that the proposed technique drastically reduces the feedback overhead and achieves a rate close to that obtained by techniques that require dedicated error-free feedback from all users. Numerical results also show that there is a trade-off between the feedback interference and load, and for short coherence intervals, full-duplex feedback achieves higher throughput when compared to interference-free (half-duplex) feedback. © 2016 IEEE.
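
    For illustration, the following numpy sketch performs a simple block-OMP style recovery, showing how block sparsity (each active user contributing a small block of unknowns, e.g. its SNR and a loop-interference term) can be exploited at the base station; the block structure, dimensions and greedy decoder are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def block_omp(A, y, block, n_active):
    """Greedy block-sparse recovery of x from y = A @ x (x nonzero on few blocks)."""
    n_blocks = A.shape[1] // block
    residual, chosen = y.copy(), []
    for _ in range(n_active):
        scores = [np.linalg.norm(A[:, b * block:(b + 1) * block].T @ residual)
                  for b in range(n_blocks)]
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(b * block, (b + 1) * block)
                               for b in sorted(set(chosen))])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[cols] = coef
    return x_hat

rng = np.random.default_rng(8)
n_users, block, m = 50, 2, 24             # two unknowns per user, 24 feedback slots
x = np.zeros(n_users * block)
active = rng.choice(n_users, 3, replace=False)
for u in active:                          # e.g. [SNR, loop-interference] per user
    x[u * block:(u + 1) * block] = rng.uniform(1.0, 5.0, block)

A = rng.standard_normal((m, n_users * block)) / np.sqrt(m)
y = A @ x + 0.05 * rng.standard_normal(m)
x_hat = block_omp(A, y, block, 3)
print("active users (true):     ", np.sort(active))
print("active users (recovered):", np.unique(np.flatnonzero(x_hat) // block))
```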

  9. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed axisymmetric configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m^3 and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  10. Compressed multi-block local binary pattern for object tracking

    Science.gov (United States)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirement, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature: the feature vector extracted from the multi-block local binary pattern is compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
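
    The compression step described above, projecting a high-dimensional feature vector through a sparse random Gaussian measurement matrix, can be sketched as follows. The dimensions, sparsity level, and the stand-in feature vector are illustrative assumptions; this is not the authors' tracker.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4096, 64                    # raw feature length, compressed length

# Sparse random Gaussian-style measurement matrix: most entries are zero,
# the few non-zeros are standard normal (illustrative density 1/16).
mask = rng.random((m, d)) < 1.0 / 16
R = np.where(mask, rng.standard_normal((m, d)), 0.0)

def compress_feature(v):
    """Project a high-dimensional feature vector into the compressive domain."""
    return R @ v

feature = rng.random(d)            # stand-in for a multi-block LBP feature vector
z = compress_feature(feature)
print(z.shape)                     # (64,)
```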

  11. Single-photon compressive imaging with some performance benefits over raster scanning

    International Nuclear Information System (INIS)

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Guang-Jie; Zhao, Qing

    2014-01-01

    A single-photon imaging system based on compressed sensing has been developed to image objects under ultra-low illumination. With this system, we have successfully realized imaging at the single-photon level with a single-pixel avalanche photodiode without point-by-point raster scanning. From analysis of the signal-to-noise ratio in the measurement we find that our system has much higher sensitivity than conventional ones based on point-by-point raster scanning, while the measurement time is also reduced. - Highlights: • We design a single photon imaging system with compressed sensing. • A single point avalanche photodiode is used without raster scanning. • The Poisson shot noise in the measurement is analyzed. • The sensitivity of our system is proved to be higher than that of raster scanning

  12. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    Science.gov (United States)

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sensed signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned through a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained to have a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary-based data gathering methods.
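
    A minimal sketch of the two-step alternation at the core of dictionary-learning-based gathering (sparse coding, then dictionary update) is shown below, using ISTA for the coding step and a least-squares update with column renormalization; the self-coherence penalty and sparse-structure constraint of ODL-CDG are omitted, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, T = 32, 48, 200                  # signal length, dictionary atoms, training signals
Y = rng.standard_normal((n, T))        # stand-in for gathered sensor readings
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)

lam = 0.1
for outer in range(20):
    # --- sparse coding step: ISTA on all columns at once ---
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the data-fit gradient
    X = np.zeros((k, T))
    for _ in range(50):
        X = X - (D.T @ (D @ X - Y)) / L
        X = np.sign(X) * np.maximum(np.abs(X) - lam / L, 0.0)   # soft threshold
    # --- dictionary update step: least squares, then renormalize atoms ---
    D = Y @ np.linalg.pinv(X)
    D /= np.linalg.norm(D, axis=0) + 1e-12

print("relative residual:", np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
```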

  13. On the effects of quantization on mismatched pulse compression filters designed using L-p norm minimization techniques

    CSIR Research Space (South Africa)

    Cilliers, Jacques E

    2007-10-01

    Full Text Available In [1] the authors introduced a technique for generating mismatched pulse compression filters for linear frequency chirp signals. The technique minimizes the sum of the pulse compression sidelobes in an L_p-norm sense. It was shown that extremely...

  14. Pressure mapping with textile sensors for compression therapy monitoring.

    Science.gov (United States)

    Baldoli, Ilaria; Mazzocchi, Tommaso; Paoletti, Clara; Ricotti, Leonardo; Salvo, Pietro; Dini, Valentina; Laschi, Cecilia; Francesco, Fabio Di; Menciassi, Arianna

    2016-08-01

    Compression therapy is the cornerstone of treatment in the case of venous leg ulcers. The therapy outcome is strictly dependent on the pressure distribution produced by bandages along the lower limb length. To date, pressure monitoring has been carried out using sensors that present considerable drawbacks, such as single point instead of distributed sensing, no shape conformability, bulkiness and constraints on patient's movements. In this work, matrix textile sensing technologies were explored in terms of their ability to measure the sub-bandage pressure with a suitable temporal and spatial resolution. A multilayered textile matrix based on a piezoresistive sensing principle was developed, calibrated and tested with human subjects, with the aim of assessing real-time distributed pressure sensing at the skin/bandage interface. Experimental tests were carried out on three healthy volunteers, using two different bandage types, from among those most commonly used. Such tests allowed the trends of pressure distribution to be evaluated over time, both at rest and during daily life activities. Results revealed that the proposed device enables the dynamic assessment of compression mapping, with a suitable spatial and temporal resolution (20 mm and 10 Hz, respectively). In addition, the sensor is flexible and conformable, thus well accepted by the patient. Overall, this study demonstrates the adequacy of the proposed piezoresistive textile sensor for the real-time monitoring of bandage-based therapeutic treatments. © IMechE 2016.

  15. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach for the bit allocation that is better grounded in theory. Then we discuss some implementation aspects, in particular the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
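
    The transform-then-quantize idea behind WSQ can be sketched on a toy tile with a single-level Haar transform and uniform scalar quantization; the real standard uses 9/7 analysis filters, a 64-subband decomposition, and Huffman entropy coding, none of which are reproduced here, and all step sizes below are illustrative.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

rng = np.random.default_rng(3)
tile = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in for a fingerprint tile

subbands = haar2d(tile)
steps = (4.0, 8.0, 8.0, 16.0)          # coarser quantization for high-frequency subbands
quantized = [np.round(b / q).astype(int) for b, q in zip(subbands, steps)]

nonzero = sum(int(np.count_nonzero(q)) for q in quantized)
print(f"non-zero quantized coefficients: {nonzero} of {tile.size}")
```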

  16. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.
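
    The exact-DMD core that the compressive extensions build on can be sketched in a few lines of numpy: form the shifted snapshot matrices, project onto the leading POD modes via the SVD, and eigendecompose the reduced operator. The synthetic linear system and truncation rank below are illustrative, and the compression and actuation extensions are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic snapshots of a linear system x_{k+1} = A x_k with unit-modulus eigenvalues.
n, steps = 20, 60
A_true = np.diag(np.exp(1j * rng.uniform(0.1, 0.5, n)))
X = np.empty((n, steps), dtype=complex)
X[:, 0] = rng.standard_normal(n)
for kk in range(1, steps):
    X[:, kk] = A_true @ X[:, kk - 1]

X1, X2 = X[:, :-1], X[:, 1:]

r = 10                                           # truncation rank (illustrative)
U, S, Vh = np.linalg.svd(X1, full_matrices=False)
U, S, Vh = U[:, :r], S[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W  # (exact) DMD modes

print("DMD mode matrix:", modes.shape)
print("leading eigenvalue magnitudes:", np.round(np.abs(eigvals)[:3], 3))
```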

  17. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.

  18. Simple motion correction strategy reduces respiratory-induced motion artifacts for k-t accelerated and compressed-sensing cardiovascular magnetic resonance perfusion imaging.

    Science.gov (United States)

    Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael

    2018-02-01

    Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution, and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer from significant degradation of image quality. A number of techniques based on non-rigid registration have been developed. However, to first approximation, breathing motion predominantly results in rigid motion of the heart. To this end, a simple robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A simple respiratory motion compensation (MC) strategy for k-t accelerated and compressed-sensing CMR perfusion imaging to selectively correct respiratory motion of the heart was implemented based on linear k-space phase shifts derived from rigid motion registration of a region-of-interest (ROI) encompassing the heart. A variable density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists. In the phantom experiments, images reconstructed with the MC strategy had higher
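
    The linear k-space phase-shift correction rests on the Fourier shift theorem: a rigid in-plane translation of the image is equivalent to multiplying its k-space by a linear phase ramp. A minimal sketch, with an assumed (not measured) displacement:

```python
import numpy as np

def kspace_translate(kspace, dy, dx):
    """Translate the image by (dy, dx) pixels by applying a linear phase ramp in k-space."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    return kspace * np.exp(-2j * np.pi * (ky * dy + kx * dx))

rng = np.random.default_rng(5)
frame = rng.random((128, 128))            # stand-in for one perfusion frame
k = np.fft.fft2(frame)

# Undo an estimated respiratory displacement of (3, -1) pixels (illustrative values)
# by applying the opposite shift.
k_corrected = kspace_translate(k, dy=-3, dx=1)
frame_corrected = np.fft.ifft2(k_corrected).real
print(frame_corrected.shape)
```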

  19. Lagrangian investigations of vorticity dynamics in compressible turbulence

    Science.gov (United States)

    Parashar, Nishant; Sinha, Sawan Suman; Danish, Mohammad; Srinivasan, Balaji

    2017-10-01

    In this work, we investigate the influence of compressibility on vorticity-strain rate dynamics. Well-resolved direct numerical simulations of compressible homogeneous isotropic turbulence performed over a cubical domain of 1024³ are employed for this study. To clearly identify the influence of compressibility on the time-dependent dynamics (rather than on the one-time flow field), we employ a well-validated Lagrangian particle tracker. The tracker is used to obtain time correlations between the instantaneous vorticity vector and the strain-rate eigenvector system of an appropriately chosen reference time. In this work, compressibility is parameterized in terms of both global (turbulent Mach number) and local parameters (normalized dilatation-rate and flow field topology). Our investigations reveal that the local dilatation rate significantly influences these statistics. In turn, this observed influence of the dilatation rate is predominantly associated with rotation dominated topologies (unstable-focus-compressing, stable-focus-stretching). We find that an enhanced dilatation rate (in both contracting and expanding fluid elements) significantly enhances the tendency of the vorticity vector to align with the largest eigenvector of the strain-rate. Further, in fluid particles where the vorticity vector is maximally misaligned (perpendicular) at the reference time, vorticity does show a substantial tendency to align with the intermediate eigenvector as well. The authors make an attempt to provide physical explanations of these observations (in terms of moment of inertia and angular momentum) by performing detailed calculations following the tetrad approach of Chertkov et al. ["Lagrangian tetrad dynamics and the phenomenology of turbulence," Phys. Fluids 11(8), 2394-2410 (1999)] and Xu et al. ["The pirouette effect in turbulent flows," Nat. Phys. 7(9), 709-712 (2011)] in a compressible flow field.
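
    The alignment diagnostic used in such studies, the cosine between the vorticity vector and the strain-rate eigenvectors, can be computed directly from a velocity-gradient tensor; the tensor below is a random stand-in rather than DNS data.

```python
import numpy as np

rng = np.random.default_rng(6)

grad_u = rng.standard_normal((3, 3))              # stand-in velocity-gradient tensor at a tracer
S = 0.5 * (grad_u + grad_u.T)                     # strain-rate tensor
omega = np.array([grad_u[2, 1] - grad_u[1, 2],    # vorticity vector (curl of u)
                  grad_u[0, 2] - grad_u[2, 0],
                  grad_u[1, 0] - grad_u[0, 1]])

eigvals, eigvecs = np.linalg.eigh(S)              # ascending: compressive, intermediate, extensive
cosines = np.abs(eigvecs.T @ omega) / np.linalg.norm(omega)

for name, c in zip(("smallest", "intermediate", "largest"), cosines):
    print(f"|cos(omega, e_{name})| = {c:.3f}")
```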

  20. Advanced metal artifact reduction MRI of metal-on-metal hip resurfacing arthroplasty implants: compressed sensing acceleration enables the time-neutral use of SEMAC

    International Nuclear Information System (INIS)

    Fritz, Jan; Thawait, Gaurav K.; Fritz, Benjamin; Raithel, Esther; Nittka, Mathias; Gilson, Wesley D.; Mont, Michael A.

    2016-01-01

    Compressed sensing (CS) acceleration has been theorized for slice encoding for metal artifact correction (SEMAC), but has not been shown to be feasible. Therefore, we tested the hypothesis that CS-SEMAC is feasible for MRI of metal-on-metal hip resurfacing implants. Following prospective institutional review board approval, 22 subjects with metal-on-metal hip resurfacing implants underwent 1.5 T MRI. We compared CS-SEMAC prototype, high-bandwidth TSE, and SEMAC sequences with acquisition times of 4-5, 4-5 and 10-12 min, respectively. Outcome measures included bone-implant interfaces, image quality, periprosthetic structures, artifact size, and signal- and contrast-to-noise ratios (SNR and CNR). Using Friedman, repeated measures analysis of variance, and Cohen's weighted kappa tests, Bonferroni-corrected p-values of 0.005 and less were considered statistically significant. There was no statistical difference in outcome measures between SEMAC and CS-SEMAC images. Visibility of implant-bone interfaces and pseudocapsule as well as fat suppression and metal reduction were "adequate" to "good" on CS-SEMAC and "non-diagnostic" to "adequate" on high-BW TSE (p < 0.001, respectively). SEMAC and CS-SEMAC showed mild blur and ripple artifacts. The metal artifact size was 63 % larger for high-BW TSE as compared to SEMAC and CS-SEMAC (p < 0.0001, respectively). CNRs were sufficiently high and statistically similar, with the exception of CNR of fluid and muscle and CNR of fluid and tendon, which were higher on intermediate-weighted high-BW TSE (p < 0.005, respectively). Compressed sensing acceleration enables the time-neutral use of SEMAC for MRI of metal-on-metal hip resurfacing implants when compared to high-BW TSE, with image quality similar to conventional SEMAC. (orig.)

  1. Advanced metal artifact reduction MRI of metal-on-metal hip resurfacing arthroplasty implants: compressed sensing acceleration enables the time-neutral use of SEMAC

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Jan; Thawait, Gaurav K. [Johns Hopkins University School of Medicine, Russell H. Morgan Department of Radiology and Radiological Science, Section of Musculoskeletal Radiology, Baltimore, MD (United States); Fritz, Benjamin [University of Freiburg, Department of Radiology, Freiburg im Breisgau (Germany); Raithel, Esther; Nittka, Mathias [Siemens Healthcare GmbH, Erlangen (Germany); Gilson, Wesley D. [Siemens Healthcare USA, Inc., Baltimore, MD (United States); Mont, Michael A. [Cleveland Clinic Foundation, Department of Orthopedic Surgery, Cleveland, OH (United States)

    2016-10-15

    Compressed sensing (CS) acceleration has been theorized for slice encoding for metal artifact correction (SEMAC), but has not been shown to be feasible. Therefore, we tested the hypothesis that CS-SEMAC is feasible for MRI of metal-on-metal hip resurfacing implants. Following prospective institutional review board approval, 22 subjects with metal-on-metal hip resurfacing implants underwent 1.5 T MRI. We compared CS-SEMAC prototype, high-bandwidth TSE, and SEMAC sequences with acquisition times of 4-5, 4-5 and 10-12 min, respectively. Outcome measures included bone-implant interfaces, image quality, periprosthetic structures, artifact size, and signal- and contrast-to-noise ratios (SNR and CNR). Using Friedman, repeated measures analysis of variance, and Cohen's weighted kappa tests, Bonferroni-corrected p-values of 0.005 and less were considered statistically significant. There was no statistical difference in outcome measures between SEMAC and CS-SEMAC images. Visibility of implant-bone interfaces and pseudocapsule as well as fat suppression and metal reduction were "adequate" to "good" on CS-SEMAC and "non-diagnostic" to "adequate" on high-BW TSE (p < 0.001, respectively). SEMAC and CS-SEMAC showed mild blur and ripple artifacts. The metal artifact size was 63 % larger for high-BW TSE as compared to SEMAC and CS-SEMAC (p < 0.0001, respectively). CNRs were sufficiently high and statistically similar, with the exception of CNR of fluid and muscle and CNR of fluid and tendon, which were higher on intermediate-weighted high-BW TSE (p < 0.005, respectively). Compressed sensing acceleration enables the time-neutral use of SEMAC for MRI of metal-on-metal hip resurfacing implants when compared to high-BW TSE, with image quality similar to conventional SEMAC. (orig.)

  2. Compressed optimization of device architectures

    Energy Technology Data Exchange (ETDEWEB)

    Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2014-09-01

    Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  3. Pattern-based compression of multi-band image data for landscape analysis

    CERN Document Server

    Myers, Wayne L; Patil, Ganapati P

    2006-01-01

    This book describes an integrated approach to using remotely sensed data in conjunction with geographic information systems for landscape analysis. Remotely sensed data are compressed into an analytical image-map that is compatible with the most popular geographic information systems as well as freeware viewers. The approach is most effective for landscapes that exhibit a pronounced mosaic pattern of land cover. The image maps are much more compact than the original remotely sensed data, which enhances utility on the internet. As value-added products, distribution of image-maps is not affected by copyrights on original multi-band image data.

  4. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted a direct numerical simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely the statistics computed following the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities are also discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
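
    The Helmholtz decomposition used to separate the solenoidal and compressive parts can be sketched for a periodic field via the FFT; a 2D illustration (the study uses 3D fields) with a random stand-in velocity field is given below.

```python
import numpy as np

def helmholtz_2d(u, v):
    """Split a periodic 2D velocity field into solenoidal and compressive parts via the FFT."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                # avoid dividing by zero on the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * uh + ky * vh                       # k . u_hat (common factors of i cancel)
    uc, vc = kx * div / k2, ky * div / k2         # compressive (curl-free) projection
    us, vs = uh - uc, vh - vc                     # solenoidal (divergence-free) remainder
    back = lambda f: np.fft.ifft2(f).real
    return (back(us), back(vs)), (back(uc), back(vc))

rng = np.random.default_rng(7)
u, v = rng.standard_normal((2, 64, 64))
(sol_u, sol_v), (comp_u, comp_v) = helmholtz_2d(u, v)
print(np.allclose(u, sol_u + comp_u), np.allclose(v, sol_v + comp_v))   # True True
```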

  5. Accelerated barrier optimization compressed sensing (ABOCS) for CT reconstruction with improved convergence

    International Nuclear Information System (INIS)

    Niu, Tianye; Fruhauf, Quentin; Petrongolo, Michael; Zhu, Lei; Ye, Xiaojing

    2014-01-01

    Recently, we proposed a new algorithm of accelerated barrier optimization compressed sensing (ABOCS) for iterative CT reconstruction. The previous implementation of ABOCS uses gradient projection (GP) with a Barzilai–Borwein (BB) step-size selection scheme (GP-BB) to search for the optimal solution. The algorithm does not converge stably due to its non-monotonic behavior. In this paper, we further improve the convergence of ABOCS using the unknown-parameter Nesterov (UPN) method and investigate the ABOCS reconstruction performance on clinical patient data. Comparison studies are carried out on reconstructions of computer simulation, a physical phantom and a head-and-neck patient. In all of these studies, the ABOCS results using UPN show more stable and faster convergence than those of the GP-BB method and a state-of-the-art Bregman-type method. As shown in the simulation study of the Shepp–Logan phantom, UPN achieves the same image quality as those of GP-BB and the Bregman-type methods, but reduces the iteration numbers by up to 50% and 90%, respectively. In the Catphan©600 phantom study, a high-quality image with relative reconstruction error (RRE) less than 3% compared to the full-view result is obtained using UPN with 17% projections (60 views). In the conventional filtered-backprojection reconstruction, the corresponding RRE is more than 15% on the same projection data. The superior performance of ABOCS with the UPN implementation is further demonstrated on the head-and-neck patient. Using 25% projections (91 views), the proposed method reduces the RRE from 21% as in the filtered backprojection (FBP) results to 7.3%. In conclusion, we propose UPN for ABOCS implementation. As compared to GP-BB and the Bregman-type methods, the new method significantly improves the convergence with higher stability and fewer iterations. (paper)

  6. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources, such as power, memory, and processing capacity, are limited. For multispectral images, the compression algorithms based on 3D transforms (like 3D DWT, 3D DCT) are too complex to be implemented in space missions. In this paper, we proposed a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  7. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources, such as power, memory, and processing capacity, are limited. For multispectral images, the compression algorithms based on 3D transforms (like 3D DWT, 3D DCT) are too complex to be implemented in space missions. In this paper, we proposed a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  8. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base. PMID:21383923
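
    The exact bit-code tables of DNABIT Compress are not reproduced here; the sketch below only illustrates the general idea of replacing character bases with short binary codes, using a fixed 2-bits-per-base packing plus a small length header.

```python
# Fixed 2-bits-per-base packing: an illustration of binary coding of DNA bases,
# not the variable bit codes assigned by DNABIT Compress.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    n = len(seq)
    return n.to_bytes(4, "big") + bits.to_bytes((2 * n + 7) // 8, "big")

def unpack(blob: bytes) -> str:
    n = int.from_bytes(blob[:4], "big")
    bits = int.from_bytes(blob[4:], "big")
    return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

seq = "ACGTACGGTTACGATC"
packed = pack(seq)
assert unpack(packed) == seq
print(f"{len(seq)} bases -> {len(packed)} bytes "
      f"({8 * len(packed) / len(seq):.2f} bits/base including the length header)")
```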

  9. Vacancy behavior in a compressed fcc Lennard-Jones crystal

    International Nuclear Information System (INIS)

    Beeler, J.R. Jr.

    1981-12-01

    This computer experiment study concerns the determination of the stable vacancy configuration in a compressed fcc Lennard-Jones crystal and the migration of this defect in a compressed crystal. Isotropic and uniaxial compression stress conditions were studied. The isotropic and uniaxial compression magnitudes employed were 0.94 ≤ eta ≤ 1.5 and 1.0 ≤ eta ≤ 1.5, respectively. The site-centered vacancy (SCV) was the stable vacancy configuration whenever cubic symmetry was present. This includes all of the isotropic compression cases and the particular uniaxial compression case (eta = √2) that gives a bcc structure. In addition, the SCV was the stable configuration for uniaxial compression eta [...]. For eta > 1.20, the SV-OP is an extended defect and, therefore, a saddle point for SV-OP migration could not be determined. The mechanism for the transformation from the SCV to the SV-OP as the stable form at eta = 1.29 appears to be an alternating sign [101] and/or [011] shear process

  10. Data Structure Lower Bounds on Random Access to Grammar-Compressed Strings

    DEFF Research Database (Denmark)

    Chen, Shiteng; Verbin, Elad; Yu, Wei

    2012-01-01

    ). The proof works by reduction to communication complexity, namely to the LSD problem, recently employed by Patrascu and others. We prove lower bounds also for the case of LZ-compression and Burrows-Wheeler (BWT) compression. All of our lower bounds hold even when the strings are over an alphabet of size 2...

  11. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  12. Application of the compress sensing theory for improvement of the TOF resolution in a novel J-PET instrument

    Directory of Open Access Journals (Sweden)

    Raczyński Lech

    2016-03-01

    Full Text Available Nowadays, in positron emission tomography (PET) systems, time of flight (TOF) information is used to improve the image reconstruction process. In TOF-PET, fast detectors are able to measure the difference in the arrival time of the two gamma rays with a precision that significantly shortens the range along the line-of-response (LOR) where the annihilation occurred. In the new concept, called the J-PET scanner, gamma rays are detected in plastic scintillators. In a single strip of the J-PET system, time values are obtained by probing signals in the amplitude domain. Owing to compressive sensing (CS) theory, information about the shape and amplitude of the signals is recovered. In this paper, we demonstrate that, based on the acquired signal parameters, a better signal normalization may be provided in order to improve the TOF resolution. The procedure was tested using a large sample of data registered by a dedicated detection setup enabling sampling of signals with 50-ps intervals. The experimental setup provided irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta.

  13. Quantitative Evaluation of Temporal Regularizers in Compressed Sensing Dynamic Contrast Enhanced MRI of the Breast

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2017-01-01

    Full Text Available Purpose. Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) is used in cancer imaging to probe tumor vascular properties. Compressed sensing (CS) theory makes it possible to recover MR images from randomly undersampled k-space data using nonlinear recovery schemes. The purpose of this paper is to quantitatively evaluate common temporal sparsity-promoting regularizers for CS DCE-MRI of the breast. Methods. We considered five ubiquitous temporal regularizers on 4.5x retrospectively undersampled Cartesian in vivo breast DCE-MRI data: Fourier transform (FT), Haar wavelet transform (WT), total variation (TV), second-order total generalized variation (TGVα2), and nuclear norm (NN). We measured the signal-to-error ratio (SER) of the reconstructed images, the error in tumor mean, and concordance correlation coefficients (CCCs) of the derived pharmacokinetic parameters Ktrans (volume transfer constant) and ve (extravascular-extracellular volume fraction) across a population of random sampling schemes. Results. NN produced the lowest image error (SER: 29.1), while TV/TGVα2 produced the most accurate Ktrans (CCC: 0.974/0.974) and ve (CCC: 0.916/0.917). WT produced the highest image error (SER: 21.8), while FT produced the least accurate Ktrans (CCC: 0.842) and ve (CCC: 0.799). Conclusion. TV/TGVα2 should be used as temporal constraints for CS DCE-MRI of the breast.
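
    The temporal penalties compared in the study can be written down directly for a space-time (Casorati) matrix; the sketch below evaluates the Fourier-sparsity, temporal total-variation, and nuclear-norm terms on a synthetic dynamic series (the TGV term is omitted for brevity, and all sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(8)
nt, nvox = 32, 500
X = np.cumsum(rng.standard_normal((nt, nvox)), axis=0)   # stand-in dynamic series (time x voxels)

# Temporal Fourier sparsity: l1 norm of the series along time in the Fourier domain.
ft_l1 = np.abs(np.fft.fft(X, axis=0)).sum()

# Temporal total variation: l1 norm of finite differences along time.
tv = np.abs(np.diff(X, axis=0)).sum()

# Nuclear norm: sum of singular values of the space-time (Casorati) matrix.
nuc = np.linalg.svd(X, compute_uv=False).sum()

print(f"FT l1 = {ft_l1:.1f}, temporal TV = {tv:.1f}, nuclear norm = {nuc:.1f}")
```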

  14. Comparison of conventional DCE-MRI and a novel golden-angle radial multicoil compressed sensing method for the evaluation of breast lesion conspicuity.

    Science.gov (United States)

    Heacock, Laura; Gao, Yiming; Heller, Samantha L; Melsaether, Amy N; Babb, James S; Block, Tobias K; Otazo, Ricardo; Kim, Sungheon G; Moy, Linda

    2017-06-01

    To compare a novel multicoil compressed sensing technique with flexible temporal resolution, golden-angle radial sparse parallel (GRASP), to conventional fat-suppressed spoiled three-dimensional (3D) gradient-echo (volumetric interpolated breath-hold examination, VIBE) MRI in evaluating the conspicuity of benign and malignant breast lesions. Between March and August 2015, 121 women (24-84 years; mean, 49.7 years) with 180 biopsy-proven benign and malignant lesions were imaged consecutively at 3.0 Tesla in a dynamic contrast-enhanced (DCE) MRI exam using sagittal T1-weighted fat-suppressed 3D VIBE in this Health Insurance Portability and Accountability Act-compliant, retrospective study. Subjects underwent MRI-guided breast biopsy (mean, 13 days [1-95 days]) using GRASP DCE-MRI, a fat-suppressed radial "stack-of-stars" 3D FLASH sequence with golden-angle ordering. Three readers independently evaluated breast lesions on both sequences. Statistical analysis included mixed models with generalized estimating equations, kappa-weighted coefficients and Fisher's exact test. All lesions demonstrated good conspicuity on VIBE and GRASP sequences (4.28 ± 0.81 versus 3.65 ± 1.22), with no significant difference in lesion detection (P = 0.248). VIBE had slightly higher lesion conspicuity than GRASP for all lesions, with VIBE 12.6% (0.63/5.0) more conspicuous (P < 0.001). Masses and nonmass enhancement (NME) were more conspicuous on VIBE (P < 0.001), with a larger difference for NME (14.2% versus 9.4% more conspicuous). Malignant lesions were more conspicuous than benign lesions (P < 0.001) on both sequences. GRASP DCE-MRI, a multicoil compressed sensing technique with high spatial resolution and flexible temporal resolution, has near-comparable performance to conventional VIBE imaging for breast lesion evaluation. Level of Evidence: 3. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2017;45:1746-1752. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Assessment of Left Ventricular Function and Mass on Free-Breathing Compressed Sensing Real-Time Cine Imaging.

    Science.gov (United States)

    Kido, Tomoyuki; Kido, Teruhito; Nakamura, Masashi; Watanabe, Kouki; Schmidt, Michaela; Forman, Christoph; Mochizuki, Teruhito

    2017-09-25

    Compressed sensing (CS) cine magnetic resonance imaging (MRI) has the advantage of being inherently insensitive to respiratory motion. This study compared the accuracy of free-breathing (FB) CS and breath-hold (BH) standard cine MRI for left ventricular (LV) volume assessment. Methods and Results: Sixty-three patients underwent cine MRI with both techniques. Both types of images were acquired in stacks of 8 short-axis slices (temporal/spatial resolution, 41 ms / 1.7×1.7×6 mm³) and compared for ejection fraction, end-diastolic and systolic volumes, stroke volume, and LV mass. Both BH standard and FB CS cine MRI provided acceptable image quality for LV volumetric analysis (score ≥3) in all patients (4.7±0.5 and 3.7±0.5, respectively; P [...] cine MRI (median, IQR: BH standard, 83.8 mL, 64.7-102.7 mL; FB CS, 79.0 mL, 66.0-101.0 mL; P=0.0006). The total acquisition times for BH standard and FB CS cine MRI were 113±7 s and 24±4 s, respectively (P [...]). FB CS cine MRI is a clinically useful alternative to BH standard cine MRI in patients with impaired BH capacity.

  16. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
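
    The computational ghost-imaging measurement model, and the classical correlation reconstruction that CS then improves upon, can be sketched as follows; the object, pattern count, and pattern statistics are illustrative, and the QR coding and CS recovery steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

n = 32                                    # object is n x n
obj = np.zeros((n, n))
obj[8:24, 12:20] = 1.0                    # stand-in for a QR-coded image

m = 4000                                  # number of computational speckle patterns (the key)
patterns = rng.random((m, n, n))
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel (bucket) measurements

# Classical GI reconstruction: correlation between bucket values and the patterns.
recon = (bucket[:, None, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)

print("correlation with the object:",
      round(float(np.corrcoef(recon.ravel(), obj.ravel())[0, 1]), 2))
```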

  17. Quorum Sensing of Periodontal Pathogens

    Directory of Open Access Journals (Sweden)

    Darije Plančak

    2015-01-01

    Full Text Available The term 'quorum sensing' describes intercellular bacterial communication which regulates bacterial gene expression according to population cell density. Bacteria produce and secrete small molecules, named autoinducers, into the intercellular space. The concentration of these molecules increases as a function of population cell density. Once the stimulatory threshold concentration is reached, alteration in gene expression occurs. Gram-positive and Gram-negative bacteria possess different types of quorum sensing systems. The canonical LuxI/R-type acyl homoserine lactone-mediated quorum sensing system is the best studied quorum sensing circuit and is described in Gram-negative bacteria, which employ it mostly for inter-species communication. Gram-positive bacteria possess a peptide-mediated quorum sensing system. Bacteria can communicate within their own species (intra-species) but also between species (inter-species), for which they employ an autoinducer-2 quorum sensing system, which is called the universal language of the bacteria. Periodontal pathogenic bacteria possess AI-2 quorum sensing systems. It is known that they use it for regulation of biofilm formation, iron uptake, stress response and virulence factor expression. A better understanding of bacterial communication mechanisms will allow the targeting of quorum sensing with quorum sensing inhibitors to prevent and control disease.

  18. Accelerated three-dimensional cine phase contrast imaging using randomly undersampled echo planar imaging with compressed sensing reconstruction.

    Science.gov (United States)

    Basha, Tamer A; Akçakaya, Mehmet; Goddu, Beth; Berg, Sophie; Nezafat, Reza

    2015-01-01

    The aim of this study was to implement and evaluate an accelerated three-dimensional (3D) cine phase contrast MRI sequence by combining a randomly sampled 3D k-space acquisition sequence with an echo planar imaging (EPI) readout. An accelerated 3D cine phase contrast MRI sequence was implemented by combining EPI readout with randomly undersampled 3D k-space data suitable for compressed sensing (CS) reconstruction. The undersampled data were then reconstructed using low-dimensional structural self-learning and thresholding (LOST). 3D phase contrast MRI was acquired in 11 healthy adults using an overall acceleration of 7 (EPI factor of 3 and CS rate of 3). For comparison, a single two-dimensional (2D) cine phase contrast scan was also performed with sensitivity encoding (SENSE) rate 2 and approximately at the level of the pulmonary artery bifurcation. The stroke volume and mean velocity in both the ascending and descending aorta were measured and compared between two sequences using Bland-Altman plots. An average scan time of 3 min and 30 s, corresponding to an acceleration rate of 7, was achieved for 3D cine phase contrast scan with one direction flow encoding, voxel size of 2 × 2 × 3 mm³, foot-head coverage of 6 cm and temporal resolution of 30 ms. The mean velocity and stroke volume in both the ascending and descending aorta were statistically equivalent between the proposed 3D sequence and the standard 2D cine phase contrast sequence. The combination of EPI with a randomly undersampled 3D k-space sampling sequence using LOST reconstruction allows a seven-fold reduction in scan time of 3D cine phase contrast MRI without compromising blood flow quantification. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX) at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  20. Multiband CCD Image Compression for Space Camera with Large Field of View

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available A space multiband CCD camera compression encoder requires low complexity, high robustness, and high performance because the image information it captures is very precious and also because it usually works on a satellite where resources, such as power, memory, and processing capacity, are limited. However, traditional compression approaches, such as JPEG2000, 3D transforms, and PCA, have high complexity. The Consultative Committee for Space Data Systems-Image Data Compression (CCSDS-IDC) algorithm decreases the average PSNR by 2 dB compared with JPEG2000. In this paper, we proposed a low-complexity compression algorithm based on a deep coupling among post-transform in the wavelet domain, compressive sensing, and distributed source coding. In our algorithm, we integrate three low-complexity and high-performance approaches in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit information redundancy. Experimental results on multiband CCD images show that the proposed algorithm significantly outperforms the traditional approaches.

  1. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.

  2. Microfluidic pressure sensing using trapped air compression.

    Science.gov (United States)

    Srivastava, Nimisha; Burns, Mark A

    2007-05-01

    We have developed a microfluidic method for measuring the fluid pressure head experienced at any location inside a microchannel. The principal component is a microfabricated sealed chamber with a single inlet and no exit; the entrance to the single inlet is positioned at the location where pressure is to be measured. The pressure measurement is then based on monitoring the movement of a liquid-air interface as it compresses air trapped inside the microfabricated sealed chamber and calculating the pressure using the ideal gas law. The method has been used to measure the pressure of the air stream and continuous liquid flow inside microfluidic channels (d approximately 50 microm). Further, a pressure drop has also been measured using multiple microfabricated sealed chambers. For air pressure, a resolution of 700 Pa within a full-scale range of 700 Pa-100 kPa was obtained. For liquids, pressure drops as low as 70 Pa were obtained in an operating range from 70 Pa to 10 kPa. Since the method primarily uses a microfluidic sealed chamber, it does not require additional fabrication steps and may easily be incorporated in several lab-on-a-chip fluidic applications for laminar as well as turbulent flow conditions.
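
    The ideal-gas calculation behind the method reduces, for an isothermal dead-end channel of uniform cross-section, to Boyle's law: the trapped air column shortens from L0 to L0 - dx, so the absolute pressure rises by the ratio L0/(L0 - dx). A worked example with illustrative numbers, not values from the paper:

```python
# Isothermal ideal-gas (Boyle's law) estimate for a dead-end channel of uniform
# cross-section: P0 * L0 = P * (L0 - dx).  All values are illustrative.
P0 = 101_325.0     # absolute pressure of the trapped air when sealed, Pa
L0 = 2_000e-6      # initial trapped-air column length, m
dx = 40e-6         # measured advance of the liquid-air interface, m

P = P0 * L0 / (L0 - dx)      # absolute pressure of the compressed air
gauge = P - P0               # pressure head applied by the liquid
print(f"absolute pressure = {P:.0f} Pa, gauge pressure = {gauge:.0f} Pa")
```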

  3. Electrical and Self-Sensing Properties of Ultra-High-Performance Fiber-Reinforced Concrete with Carbon Nanotubes

    Directory of Open Access Journals (Sweden)

    Ilhwan You

    2017-10-01

    Full Text Available This study examined the electrical and self-sensing capacities of ultra-high-performance fiber-reinforced concrete (UHPFRC) with and without carbon nanotubes (CNTs). For this, the effects of steel fiber content, orientation, and pore water content on the electrical and piezoresistive properties of UHPFRC without CNTs were first evaluated. Then, the effect of CNT content on the self-sensing capacities of UHPFRC under compression and flexure was investigated. Test results indicated that higher steel fiber content, better fiber orientation, and higher amount of pore water led to higher electrical conductivity of UHPFRC. The effects of fiber orientation and drying condition on the electrical conductivity became minor as sufficiently high amount of steel fibers, 3% by volume, was added. Including only steel fibers did not impart UHPFRC with piezoresistive properties. Addition of CNTs substantially improved the electrical conductivity of UHPFRC. Under compression, UHPFRC with a CNT content of 0.3% or greater had a self-sensing ability that was activated by the formation of cracks, and better sensing capacity was achieved by including greater amount of CNTs. Furthermore, the pre-peak flexural behavior of UHPFRC was precisely simulated with a fractional change in resistivity when 0.3% CNTs were incorporated. The pre-cracking self-sensing capacity of UHPFRC with CNTs was more effective under tensile stress state than under compressive stress state.

  4. Electrical and Self-Sensing Properties of Ultra-High-Performance Fiber-Reinforced Concrete with Carbon Nanotubes.

    Science.gov (United States)

    You, Ilhwan; Yoo, Doo-Yeol; Kim, Sooho; Kim, Min-Jae; Zi, Goangseup

    2017-10-29

    This study examined the electrical and self-sensing capacities of ultra-high-performance fiber-reinforced concrete (UHPFRC) with and without carbon nanotubes (CNTs). For this, the effects of steel fiber content, orientation, and pore water content on the electrical and piezoresistive properties of UHPFRC without CNTs were first evaluated. Then, the effect of CNT content on the self-sensing capacities of UHPFRC under compression and flexure was investigated. Test results indicated that higher steel fiber content, better fiber orientation, and higher amount of pore water led to higher electrical conductivity of UHPFRC. The effects of fiber orientation and drying condition on the electrical conductivity became minor as sufficiently high amount of steel fibers, 3% by volume, was added. Including only steel fibers did not impart UHPFRC with piezoresistive properties. Addition of CNTs substantially improved the electrical conductivity of UHPFRC. Under compression, UHPFRC with a CNT content of 0.3% or greater had a self-sensing ability that was activated by the formation of cracks, and better sensing capacity was achieved by including greater amount of CNTs. Furthermore, the pre-peak flexural behavior of UHPFRC was precisely simulated with a fractional change in resistivity when 0.3% CNTs were incorporated. The pre-cracking self-sensing capacity of UHPFRC with CNTs was more effective under tensile stress state than under compressive stress state.

  5. Compressed sensing along physically plausible sampling trajectories in MRI

    International Nuclear Information System (INIS)

    Chauffert, Nicolas

    2015-01-01

    Magnetic Resonance Imaging (MRI) is a non-invasive and non-ionizing imaging technique that provides images of body tissues, using the contrast sensitivity coming from the magnetic parameters (T_1, T_2 and proton density). Data are acquired in the κ-space, corresponding to spatial Fourier frequencies. Because of physical constraints, the displacement in the κ-space is subject to kinematic constraints. Indeed, magnetic field gradients and their temporal derivative are upper bounded. Hence, the scanning time increases with the image resolution. Decreasing scanning time is crucial to improve patient comfort, decrease exam costs, limit the image distortions (eg, created by the patient movement), or decrease temporal resolution in functional MRI. Reducing scanning time can be addressed by Compressed Sensing (CS) theory. The latter is a technique that guarantees the perfect recovery of an image from under sampled data in κ-space, by assuming that the image is sparse in a wavelet basis. Unfortunately, CS theory cannot be directly cast to the MRI setting. The reasons are: i) acquisition (Fourier) and representation (wavelets) bases are coherent and ii) sampling schemes obtained using CS theorems are composed of isolated measurements and cannot be realistically implemented by magnetic field gradients: the sampling is usually performed along continuous or more regular curves. However, heuristic application of CS in MRI has provided promising results. In this thesis, we aim to develop theoretical tools to apply CS to MRI and other modalities. On the one hand, we propose a variable density sampling theory to answer the first impediment. The more the sample contains information, the more it is likely to be drawn. On the other hand, we propose sampling schemes and design sampling trajectories that fulfill acquisition constraints, while traversing the κ-space with the sampling density advocated by the theory. The second point is complex and is thus addressed step by step
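
    The variable density sampling idea can be sketched by drawing Cartesian phase-encode lines with a probability that decays away from the k-space centre; the density law and acceleration factor below are illustrative assumptions, not the sampling schemes designed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(10)

n_pe, accel = 256, 4                             # phase-encode lines, target acceleration
k = np.arange(n_pe) - n_pe // 2
density = 1.0 / (1.0 + (np.abs(k) / 8.0) ** 2)   # illustrative decay away from the k-space centre
density *= (n_pe / accel) / density.sum()        # scale so the expected line count is n_pe/accel
mask = rng.random(n_pe) < np.clip(density, 0.0, 1.0)

print(f"sampled {mask.sum()} of {n_pe} phase-encode lines (target {n_pe // accel})")
```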

  6. A self-sensing carbon nanotube/cement composite for traffic monitoring

    International Nuclear Information System (INIS)

    Han Baoguo; Yu Xun; Kwon, Eil

    2009-01-01

    In this paper, a self-sensing carbon nanotube (CNT)/cement composite is investigated for traffic monitoring. The cement composite is filled with multi-walled carbon nanotubes whose piezoresistive properties enable the detection of mechanical stresses induced by traffic flow. The sensing capability of the self-sensing CNT/cement composite is explored in laboratory tests and road tests. Experimental results show that the fabricated self-sensing CNT/cement composite presents sensitive and stable responses to repeated compressive loadings and impulsive loadings, and has remarkable responses to vehicular loadings. These findings indicate that the self-sensing CNT/cement composite has great potential for traffic monitoring use, such as in traffic flow detection, weigh-in-motion measurement and vehicle speed detection.

  7. Mixed raster content segmentation, compression, transmission

    CERN Document Server

    Pavlidis, George

    2017-01-01

    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  8. Neutralized drift compression experiments with a high-intensity ion beam

    International Nuclear Information System (INIS)

    Roy, P.K.; Yu, S.S.; Waldron, W.L.; Anders, A.; Baca, D.; Barnard, J.J.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Eylon, S.; Friedman, A.; Gilson, E.P.; Greenway, W.G.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Sefkow, A.B.; Seidl, P.A.; Sharp, W.M.; Thoma, C.; Welch, D.R.

    2007-01-01

    To create high-energy density matter and fusion conditions, high-power drivers, such as lasers, ion beams, and X-ray drivers, may be employed to heat targets with pulses that are short compared to the hydrodynamic motion. Both high-energy density physics and ion-driven inertial fusion require the simultaneous transverse and longitudinal compression of an ion beam to achieve high intensities. We have previously studied the effects of plasma neutralization for transverse beam compression. The scaled experiment, the Neutralized Transport Experiment (NTX), demonstrated that an initially un-neutralized beam can be compressed transversely to ∼1 mm radius when charge neutralization by background plasma electrons is provided. Here, we report longitudinal compression of a velocity-tailored, intense, neutralized 25 mA K+ beam at 300 keV. The compression takes place in a 1-2 m drift section filled with plasma to provide space-charge neutralization. An induction cell produces a head-to-tail velocity ramp that longitudinally compresses the neutralized beam, enhancing the beam peak current by a factor of 50 and producing a pulse duration of about 3 ns. The physics of longitudinal compression, the experimental procedure, and the results of the compression experiments are presented

  9. Supercharged two-cycle engines employing novel single element reciprocating shuttle inlet valve mechanisms and with a variable compression ratio

    Science.gov (United States)

    Wiesen, Bernard (Inventor)

    2008-01-01

    This invention relates to novel reciprocating shuttle inlet valves that are effective with every type of two-cycle engine, from small high-speed single-cylinder model engines to large low-speed multiple-cylinder engines employing spark or compression ignition. They also permit the elimination of out-of-phase piston arrangements to control scavenging and supercharging of opposed-piston engines. The reciprocating shuttle inlet valve (32) and its operating mechanism (34) are constructed as a single, simple, uncomplicated member, in combination with the lost-motion abutments (46) and (48) formed in a piston skirt, obviating the need for any complex mechanisms or auxiliary drives and remaining unaffected by heat, friction, wear or inertial forces. The reciprocating shuttle inlet valve retains the simplicity and advantages of two-cycle engines while permitting an increase in volumetric efficiency and performance, thereby extending the range of usefulness of two-cycle engines into many areas that are now dominated by the four-cycle engine.

  10. Compressing a spinodal surface at fixed area: bijels in a centrifuge.

    Science.gov (United States)

    Rumble, Katherine A; Thijssen, Job H J; Schofield, Andrew B; Clegg, Paul S

    2016-05-11

    Bicontinuous interfacially jammed emulsion gels (bijels) are solid-stabilised emulsions with two inter-penetrating continuous phases. Employing the method of centrifugal compression we find that macroscopically the bijel yields at relatively low angular acceleration. Both continuous phases escape from the top of the structure, making any compression immediately irreversible. Microscopically, the bijel becomes anisotropic with the domains aligned perpendicular to the compression direction which inhibits further liquid expulsion; this contrasts strongly with the sedimentation behaviour of colloidal gels. The original structure can, however, be preserved close to the top of the sample and thus the change to an anisotropic structure suggests internal yielding. Any air bubbles trapped in the bijel are found to aid compression by forming channels aligned parallel to the compression direction which provide a route for liquid to escape.

  11. Free-breathing volumetric fat/water separation by combining radial sampling, compressed sensing, and parallel imaging.

    Science.gov (United States)

    Benkert, Thomas; Feng, Li; Sodickson, Daniel K; Chandarana, Hersh; Block, Kai Tobias

    2017-08-01

    Conventional fat/water separation techniques require that patients hold breath during abdominal acquisitions, which often fails and limits the achievable spatial resolution and anatomic coverage. This work presents a novel approach for free-breathing volumetric fat/water separation. Multiecho data are acquired using a motion-robust radial stack-of-stars three-dimensional GRE sequence with bipolar readout. To obtain fat/water maps, a model-based reconstruction is used that accounts for the off-resonant blurring of fat and integrates both compressed sensing and parallel imaging. The approach additionally enables generation of respiration-resolved fat/water maps by detecting motion from k-space data and reconstructing different respiration states. Furthermore, an extension is described for dynamic contrast-enhanced fat-water-separated measurements. Uniform and robust fat/water separation is demonstrated in several clinical applications, including free-breathing noncontrast abdominal examination of adults and a pediatric subject with both motion-averaged and motion-resolved reconstructions, as well as in a noncontrast breast exam. Furthermore, dynamic contrast-enhanced fat/water imaging with high temporal resolution is demonstrated in the abdomen and breast. The described framework provides a viable approach for motion-robust fat/water separation and promises particular value for clinical applications that are currently limited by the breath-holding capacity or cooperation of patients. Magn Reson Med 78:565-576, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods

    International Nuclear Information System (INIS)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-01-01

    As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques. (paper)

  13. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    Science.gov (United States)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
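
    As a rough illustration of the solver named in the abstract above, the following Python sketch shows a generic fast iterative shrinkage-thresholding algorithm (FISTA) with backtracking line search. It is not the authors' CT code: the Fourier-weighted data term and TV regularizer of the paper are replaced by a plain least-squares term and an l1 penalty, whose soft-thresholding proximal step stands in for the TV proximal step, and all variable names and problem sizes are placeholders.

        # Minimal FISTA-with-backtracking sketch (illustrative; not the authors' CT code).
        # Solves min_x f(x) + g(x) with f smooth (here 0.5*||A x - b||^2) and g = lam*||x||_1,
        # whose proximal operator (soft-thresholding) stands in for the paper's TV proximal step.
        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def fista(A, b, lam, n_iter=100, L0=1.0, eta=2.0):
            f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
            grad = lambda x: A.T @ (A @ x - b)
            x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0; L = L0
            for _ in range(n_iter):
                g = grad(y)
                # Backtracking line search: shrink the step 1/L until the quadratic model majorizes f.
                while True:
                    x_new = soft_threshold(y - g / L, lam / L)
                    diff = x_new - y
                    if f(x_new) <= f(y) + g @ diff + 0.5 * L * diff @ diff:
                        break
                    L *= eta
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
                x, t = x_new, t_new
            return x

        # Tiny usage example on a random system (placeholder for the CT forward model).
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 20)); b = A @ rng.standard_normal(20)
        x_hat = fista(A, b, lam=0.1)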

  14. Local sparsity enhanced compressed sensing magnetic resonance imaging in uniform discrete curvelet domain

    International Nuclear Information System (INIS)

    Yang, Bingxin; Yuan, Min; Ma, Yide; Zhang, Jiuwen; Zhan, Kun

    2015-01-01

    Compressed sensing (CS) has been widely applied to speed up imaging by exploiting image sparsity over predefined basis functions or a learnt dictionary. Firstly, the sparse representation is generally obtained in a single transform domain by using wavelet-like methods, which cannot produce an optimal trade-off among sparsity, data adaptivity and computational complexity. Secondly, most state-of-the-art reconstruction models seldom consider composite regularization upon the various structural features of images and transform coefficient sub-bands. These two points therefore lead to high sampling rates for reconstructing high-quality images. In this paper, an efficient composite sparsity structure is proposed. It learns an adaptive dictionary from patches of the lowpass uniform discrete curvelet transform sub-band coefficients. Consistent with the sparsity structure, a novel composite regularization reconstruction model is developed to improve reconstruction results from highly undersampled k-space data. It is established by minimizing total variation regularization of the spatial image and the lowpass sub-band coefficients, l1 sparse regularization of the transform sub-band coefficients, and a fidelity constraint on the k-space measurements. A new augmented Lagrangian method is then introduced to optimize the reconstruction model. It updates the representation coefficients of the lowpass sub-band coefficients over the dictionary, the transform sub-band coefficients and the k-space measurements following the ideas of the constrained split augmented Lagrangian shrinkage algorithm. Experimental results on in vivo data show that the proposed method obtains high-quality reconstructed images. The reconstructed images exhibit the least aliasing artifacts and reconstruction error among current CS MRI methods. The proposed sparsity structure can fit and provide hierarchical sparsity for magnetic resonance images simultaneously, bridging the gap between predefined sparse representation methods and explicit dictionary. The new augmented

  15. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at the gas path components of gas turbines, sparsely-distributed, charged debris is generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a piece of larger debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of the fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately using the electrostatic detection method is still a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within a theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. This vector is then reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
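
    The reconstruction step described above (recovering a sparse vector by constraining its l1-norm in an underdetermined system) can be illustrated with a small basis-pursuit sketch. This is not the paper's HSESCA code: the random sensing matrix, mesh size and sparsity level below are arbitrary stand-ins for the discretized electrostatic measurement model, and exact recovery is typical but not guaranteed for these sizes.

        # Illustrative sketch: recover a sparse "debris charge" vector q from underdetermined
        # sensor readings y = A q by l1-minimization (basis pursuit), posed as a linear program.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        n_cells, n_sensors = 60, 20          # mesh cells vs. sensor readings (assumed sizes)
        A = rng.standard_normal((n_sensors, n_cells))   # stand-in for the HSESCA model
        q_true = np.zeros(n_cells)
        q_true[rng.choice(n_cells, 3, replace=False)] = rng.uniform(1.0, 2.0, 3)  # few charged debris
        y = A @ q_true

        # Basis pursuit: min ||q||_1 subject to A q = y, with the split q = u - v, u, v >= 0.
        c = np.ones(2 * n_cells)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        q_hat = res.x[:n_cells] - res.x[n_cells:]
        print("max reconstruction error:", np.max(np.abs(q_hat - q_true)))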

  16. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction

    NARCIS (Netherlands)

    Motaal, Abdallah G.; Coolen, Bram F.; Abdurrachim, Desiree; Castro, Rui M.; Prompers, Jeanine J.; Florack, Luc M. J.; Nicolay, Klaas; Strijkers, Gustav J.

    2013-01-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our

  17. Support for Implications of Compressive Sensing Concepts to Imaging Systems

    Science.gov (United States)

    2015-08-02

    Investigators: Justin Romberg (Georgia Tech, jrom@ece.gatech.edu); Emil Sidky (University of Chicago, sidky@uchicago.edu); Michael Stenner (MITRE, mstenner@mitre.org); Lei Tian...assessment of image quality. Michael Stenner has broad interests in optical imaging, sensing, and communications, and is published in such

  18. Quantum autoencoders for efficient compression of quantum data

    Science.gov (United States)

    Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan

    2017-12-01

    Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.

  19. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  20. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Full Text Available Aimed at the low energy consumption of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
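
    The two ingredients of the scheme above, a block-wise compressive measurement and a linear projection-matrix decoder learned by the MMSE criterion, can be sketched as follows. This is not the paper's implementation (the adaptive, gradient-field-driven measurement allocation is omitted), and the block size, sampling ratio, noise variance and training data are assumptions.

        # Minimal sketch of block-based compressive measurement with a linear MMSE decoder.
        import numpy as np

        B = 8                                  # block size (B x B pixels), assumed
        m = 16                                 # measurements per block (sampling ratio 16/64), assumed
        rng = np.random.default_rng(0)
        Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)   # block measurement matrix

        # "Training" blocks used to estimate the block covariance for the MMSE projection matrix.
        train = rng.standard_normal((1000, B * B))           # stand-in for natural-image blocks
        R = np.cov(train, rowvar=False)                      # empirical block covariance
        sigma2 = 1e-3                                        # assumed measurement-noise variance

        # Linear MMSE decoder: x_hat = W y, with W = R Phi^T (Phi R Phi^T + sigma2 I)^{-1}.
        W = R @ Phi.T @ np.linalg.inv(Phi @ R @ Phi.T + sigma2 * np.eye(m))

        # Encoding and real-time (single matrix-vector product) decoding of one block.
        x_block = rng.standard_normal(B * B)
        y = Phi @ x_block                      # compressive encoder
        x_hat = W @ y                          # real-time linear decoder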

  1. Optimization of compressive 4D-spatio-spectral snapshot imaging

    Science.gov (United States)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

    In this paper, a modified 3D computational reconstruction method for the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scene. These elemental images, with one-dimensional spectral information and different perspectives, are then captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of the 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.

  2. Sub-bandage sensing system for remote monitoring of chronic wounds in healthcare

    Science.gov (United States)

    Hariz, Alex; Mehmood, Nasir; Voelcker, Nico

    2015-12-01

    Chronic wounds, such as venous leg ulcers, can be monitored non-invasively by using modern sensing devices and wireless technologies. The development of such wireless diagnostic tools may improve chronic wound management by providing evidence on efficacy of treatments being provided. In this paper we present a low-power portable telemetric system for wound condition sensing and monitoring. The system aims at measuring and transmitting real-time information of wound-site temperature, sub-bandage pressure and moisture level from within the wound dressing. The system comprises commercially available non-invasive temperature, moisture, and pressure sensors, which are interfaced with a telemetry device on a flexible 0.15 mm thick printed circuit material, making up a lightweight biocompatible sensing device. The real-time data obtained is transmitted wirelessly to a portable receiver which displays the measured values. The performance of the whole telemetric sensing system is validated on a mannequin leg using commercial compression bandages and dressings. A number of trials on a healthy human volunteer are performed where treatment conditions were emulated using various compression bandage configurations. A reliable and repeatable performance of the system is achieved under compression bandage and with minimal discomfort to the volunteer. The system is capable of reporting instantaneous changes in bandage pressure, moisture level and local temperature at wound site with average measurement resolutions of 0.5 mmHg, 3.0 %RH, and 0.2 °C respectively. Effective range of data transmission is 4-5 m in an open environment.

  3. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the earth's surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increasing data rates of the new-generation sensors make higher compression ratios more critical, which in turn makes it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed for achieving high compression ratios with a good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  4. Large breast compressions: Observations and evaluation of simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J. [Centre of Medical Image Computing, UCL, London WC1E 6BT, United Kingdom and Computer Vision Laboratory, ETH Zuerich, 8092 Zuerich (Switzerland); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Department of Imaging, UCL Hospital, London NW1 2BU (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom)

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  5. Large breast compressions: observations and evaluation of simulations.

    Science.gov (United States)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A; Douek, Michael; Hawkes, David J

    2011-02-01

    Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs. 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast shapes than when using

  6. Large breast compressions: Observations and evaluation of simulations

    International Nuclear Information System (INIS)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J.

    2011-01-01

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  7. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop, Daniel A.

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  8. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
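
    For readers who want to reproduce the kind of rate-controlled JPEG 2000 compression discussed above, a minimal sketch using Pillow is given below. It assumes Pillow is built with OpenJPEG support and that its JPEG 2000 plugin exposes the quality_mode/quality_layers/irreversible save options; the input file name is a placeholder, and the 75:1 rate is chosen only to land near the 4 kB smart-card budget mentioned in the abstract.

        # Illustrative JPEG 2000 rate-controlled compression sketch (not the paper's pipeline).
        from PIL import Image
        import os

        iris = Image.open("iris_vga.bmp").convert("L")   # hypothetical 640x480 grayscale iris image
        # 640*480 = 307200 bytes uncompressed; a ~75:1 rate targets roughly the 4 kB budget.
        iris.save("iris.jp2", quality_mode="rates", quality_layers=[75], irreversible=True)
        print("compressed size (bytes):", os.path.getsize("iris.jp2"))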

  9. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a shorter window in which motion artifacts can occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
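
    One of the two building blocks named above, the randomized Kaczmarz iteration, is simple enough to sketch on its own. The following is an illustrative implementation for a consistent linear system and does not reproduce the paper's Douglas-Rachford/total-variation framework or its CT system matrix; the problem sizes are arbitrary.

        # Minimal randomized Kaczmarz sketch for a consistent system A x = b.
        # Rows are sampled with probability proportional to ||a_i||^2 (Strohmer-Vershynin rule).
        import numpy as np

        def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
            rng = np.random.default_rng(seed)
            row_norms2 = np.sum(A ** 2, axis=1)
            probs = row_norms2 / row_norms2.sum()
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                i = rng.choice(A.shape[0], p=probs)
                x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]   # project x onto the i-th hyperplane
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 50))
        x_true = rng.standard_normal(50)
        x_hat = randomized_kaczmarz(A, A @ x_true)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))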

  10. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  11. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    Science.gov (United States)

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRI datasets, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  12. Optimal design of compressed air energy storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, F. W.; Sharma, A.; Ragsdell, K. M.

    1979-01-01

    Compressed air energy storage (CAES) power systems are currently being considered by various electric utilities for load-leveling applications. Models of CAES systems which employ natural underground aquifer formations are developed, and an optimal design methodology which demonstrates their economic viability is presented. This approach is based upon a decomposition of the CAES plant and utility grid system into three partially-decoupled subsystems. Numerical results are given for a plant employing the Media, Illinois Galesville aquifer formation.

  13. Compressive Sound Speed Profile Inversion Using Beamforming Results

    OpenAIRE

    Youngmin Choo; Woojae Seong

    2018-01-01

    Sound speed profile (SSP) significantly affects acoustic propagation in the ocean. In this work, the SSP is inverted using compressive sensing (CS) combined with beamforming to indicate the direction of arrivals (DOAs). The travel times and the positions of the arrivals can be approximately linearized using their Taylor expansion with the shape function coefficients that parameterize the SSP. The linear relation between the travel times/positions and the shape function coefficients enables CS...

  14. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
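
    The MSB/LSB mapping described above is straightforward to sketch. The following illustrative snippet splits a synthetic 16-bit frame into two 8-bit images and verifies the lossless round trip; the 8-bit codecs themselves are not invoked, and the frame contents and size are placeholders.

        # Split a 16-bit infrared frame into MSB and LSB 8-bit images for 8-bit codecs.
        import numpy as np

        frame16 = np.random.default_rng(0).integers(0, 2 ** 16, size=(480, 640), dtype=np.uint16)

        msb = (frame16 >> 8).astype(np.uint8)        # most significant bytes -> first 8-bit image
        lsb = (frame16 & 0xFF).astype(np.uint8)      # least significant bytes -> second 8-bit image

        # Lossless round trip (with lossy codecs, the MSB layer should be compressed more gently,
        # since an error of 1 in the MSB image corresponds to 256 counts in the 16-bit frame).
        restored = (msb.astype(np.uint16) << 8) | lsb
        assert np.array_equal(restored, frame16)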

  15. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    Science.gov (United States)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  16. Compressive Detection Using Sub-Nyquist Radars for Sparse Signals

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2016-01-01

    Full Text Available This paper investigates the compression detection problem using sub-Nyquist radars, which is well suited to the scenario of high bandwidths in real-time processing because it would significantly reduce the computational burden and save power consumption and computation time. A compressive generalized likelihood ratio test (GLRT detector for sparse signals is proposed for sub-Nyquist radars without ever reconstructing the signal involved. The performance of the compressive GLRT detector is analyzed and the theoretical bounds are presented. The compressive GLRT detection performance of sub-Nyquist radars is also compared to the traditional GLRT detection performance of conventional radars, which employ traditional analog-to-digital conversion (ADC at Nyquist sampling rates. Simulation results demonstrate that the former can perform almost as well as the latter with a very small fraction of the number of measurements required by traditional detection in relatively high signal-to-noise ratio (SNR cases.

  17. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
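
    As a generic illustration of packing DNA bases into binary bits (not the actual DNABIT Compress codebook, which assigns bit patterns to multi-base segments and treats repeats specially), the following sketch stores each base in two bits.

        # Generic 2-bits-per-base packing sketch (illustrative only; not the DNABIT scheme).
        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASE = {v: k for k, v in CODE.items()}

        def pack(seq):
            bits = 0
            for base in seq:
                bits = (bits << 2) | CODE[base]
            return bits, len(seq)                     # keep the length to undo left-padding

        def unpack(bits, n):
            bases = []
            for _ in range(n):
                bases.append(BASE[bits & 0b11])
                bits >>= 2
            return "".join(reversed(bases))

        packed, n = pack("ACGTTGCA")
        assert unpack(packed, n) == "ACGTTGCA"        # 8 bases stored in 16 bits instead of 64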

  18. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    Science.gov (United States)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  19. A high capacity text steganography scheme based on LZW compression and color coding

    Directory of Open Access Journals (Sweden)

    Aruna Malik

    2017-02-01

    Full Text Available In this paper, the capacity and security issues of text steganography have been addressed by employing the LZW compression technique and a color-coding-based approach. The proposed technique uses the forward mail platform to hide the secret data. This algorithm first compresses the secret data and then hides the compressed secret data into the email addresses and also in the cover message of the email. The secret data bits are embedded in the message (or cover text) by making it colored using a color coding table. Experimental results show that the proposed method not only produces a high embedding capacity but also reduces computational complexity. Moreover, the security of the proposed method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparing it with recently developed existing techniques.
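
    The first stage of the scheme above is LZW compression of the secret data; a minimal dictionary-based LZW encoder is sketched below for illustration. The later stages (embedding the compressed bits into email addresses and color-coded cover text) are not reproduced, and the sample secret string is a placeholder.

        # Minimal LZW compression sketch over text (illustrative; embedding stage omitted).
        def lzw_compress(text):
            # Start with single-character codes 0-255, then grow the dictionary with new phrases.
            dictionary = {chr(i): i for i in range(256)}
            next_code = 256
            w, codes = "", []
            for ch in text:
                wc = w + ch
                if wc in dictionary:
                    w = wc
                else:
                    codes.append(dictionary[w])
                    dictionary[wc] = next_code
                    next_code += 1
                    w = ch
            if w:
                codes.append(dictionary[w])
            return codes

        secret = "meet at noon, meet at noon"
        print(lzw_compress(secret))   # repeated phrases map to single dictionary codes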

  20. A measurement method for piezoelectric material properties under longitudinal compressive stress - a compression test method for thin piezoelectric materials

    International Nuclear Information System (INIS)

    Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung

    2011-01-01

    We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under the compressive stress condition. Until now, compression tests of piezoelectric materials have been generally conducted using bulky piezoelectric ceramics and pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties, which govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficients and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and the compression in the longitudinal direction decreases the piezoelectric strain coefficient d31 but does not affect the elastic modulus. We also found that the re-poling of the piezoelectric material increases the elastic modulus, but the piezoelectric strain coefficient d31 is not changed much (slightly increased) by re-poling

  1. COMPASS: an Interoperable Personal Health System to Monitor and Compress Signals in Chronic Obstructive Pulmonary Disease

    Directory of Open Access Journals (Sweden)

    Thomas Hofer

    2015-11-01

    Full Text Available In the past years, progress in the mobile market has made possible advances in telemedicine systems and the definition of systems for monitoring chronic illnesses. The penetration of mobile devices in developed countries is increasing; many of these devices are equipped with wireless standards, including Bluetooth, and the number of smartphones sold is constantly increasing. Our approach is oriented towards this market, using existing devices to enable in-home patient monitoring and, further, ubiquitous monitoring. The idea is to increase the quality of care, reduce costs and gather medical-grade data, especially vital signs, with a resolution of minutes or even less, which is nowadays only possible in an ICU (Intensive Care Unit). In this paper we present the COMPASS personal health system (PHS) platform, and how this platform enables Android devices to collect, analyze and send sensor data to an observation storage by means of interoperability standards. Furthermore, we present how these data can be compressed using advanced compressed sensing techniques and how to optimize these techniques with genetic algorithms to improve the RMSE of the reconstructed signal after compression. We also present a preliminary evaluation of the algorithm against state-of-the-art algorithms for compressed sensing.

  2. SRComp: short read sequence compression using burstsort and Elias omega coding.

    Directory of Open Access Journals (Sweden)

    Jeremy John Selva

    Full Text Available Next-generation sequencing (NGS) technologies permit the rapid production of vast amounts of data at low cost. Economical data storage and transmission hence become an increasingly important challenge for NGS experiments. In this paper, we introduce a new non-reference-based read sequence compression tool called SRComp. It works by first employing a fast string-sorting algorithm called burstsort to sort read sequences in lexicographical order and then Elias omega-based integer coding to encode the sorted read sequences. SRComp has been benchmarked on four large NGS datasets, where experimental results show that it can run 5-35 times faster than current state-of-the-art read sequence compression tools such as BEETL and SCALCE, while retaining comparable compression efficiency for large collections of short read sequences. SRComp is a read sequence compression tool that is particularly valuable in certain applications where compression time is of major concern.
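
    The integer code named above, Elias omega coding, can be sketched in a few lines. This illustrative encoder emits a bit string for a single positive integer and does not reproduce SRComp's burstsort stage or its bit-level packing.

        # Minimal Elias omega encoder sketch (illustrative only).
        def elias_omega(n):
            # Encode a positive integer n: recursively prepend binary length groups, end with '0'.
            assert n >= 1
            code = "0"
            while n > 1:
                binary = bin(n)[2:]
                code = binary + code
                n = len(binary) - 1
            return code

        for n in (1, 2, 3, 10, 100):
            print(n, elias_omega(n))   # e.g. 2 -> '100', 10 -> '1110100'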

  3. Pressure and compressibility factor of bidisperse magnetic fluids

    Science.gov (United States)

    Minina, Elena S.; Blaak, Ronald; Kantorovich, Sofia S.

    2018-04-01

    In this work, we investigate the pressure and compressibility factors of bidisperse magnetic fluids with relatively weak dipolar interactions and different granulometric compositions. In order to study these properties, we employ the method of diagram expansion, taking into account two possible scenarios: (1) dipolar particles repel each other as hard spheres; (2) the polymer shell on the surface of the particles is modelled through a soft-sphere approximation. The theoretical predictions of the pressure and compressibility factors of bidisperse ferrofluids at different granulometric compositions are supported by data obtained by means of molecular dynamics computer simulations, which we also carried out for these systems. Both theory and simulations reveal that the pressure and compressibility factors decrease with growing dipolar correlations in the system, namely with an increasing fraction of large particles. We also demonstrate that even if dipolar interactions are too weak for any self-assembly to take place, the interparticle correlations lead to a qualitative change in the behaviour of the compressibility factors when compared to that of non-dipolar spheres, making the dependence monotonic.

  4. Freeing Space for NASA: Incorporating a Lossless Compression Algorithm into NASA's FOSS System

    Science.gov (United States)

    Fiechtner, Kaitlyn; Parker, Allen

    2011-01-01

    NASA's Fiber Optic Strain Sensing (FOSS) system can gather and store up to 1,536,000 bytes (1.46 megabytes) per second. Since the FOSS system typically acquires hours - or even days - of data, the system can gather hundreds of gigabytes of data for a given test event. To store such large quantities of data more effectively, NASA is modifying a Lempel-Ziv-Oberhumer (LZO) lossless data compression program to compress data as it is being acquired in real time. Once the algorithm has been proven capable of compressing the data from the FOSS system, the LZO program will be modified and incorporated into the FOSS system. Implementing an LZO compression algorithm will instantly free up memory space without compromising any of the data obtained. With the availability of memory space, the FOSS system can be used more efficiently on test specimens, such as Unmanned Aerial Vehicles (UAVs) that can be in flight for days. By integrating the compression algorithm, the FOSS system can continue gathering data, even on longer flights.
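
    A back-of-the-envelope sketch of why compression matters here, using the 1,536,000 bytes-per-second figure quoted above; the 24-hour duration and the 2:1 compression ratio are illustrative assumptions, since actual LZO ratios depend on the data.

        # Storage estimate for the FOSS acquisition rate quoted in the abstract.
        rate_bytes_per_s = 1_536_000
        hours = 24                                     # e.g. a day-long flight (assumed duration)
        raw_bytes = rate_bytes_per_s * hours * 3600
        ratio = 2.0                                    # assumed lossless compression ratio
        print(f"raw: {raw_bytes / 1e9:.1f} GB, compressed (assumed {ratio}:1): {raw_bytes / ratio / 1e9:.1f} GB")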

  5. Loss less real-time data compression based on LZO for steady-state Tokamak DAS

    International Nuclear Information System (INIS)

    Pujara, H.D.; Sharma, Manika

    2008-01-01

    The evolution of data acquisition systems (DAS) for steady-state operation of Tokamaks has been technology driven. A steady-state Tokamak demands a data acquisition system capable of acquiring data losslessly from its diagnostics. The need for lossless continuous acquisition has a significant effect on data storage and accounts for a large portion of any data acquisition system. The steady-state nature of operation also demands online viewing of data, which loads the LAN significantly. There is therefore a strong demand to limit the growth of both by employing a compression technique in real time. This paper presents a data acquisition system employing a real-time data compression technique based on LZO, a data compression library suitable for compression and decompression in real time whose algorithm favours speed over compression ratio. The system has been rigged up on a PXI bus, and a dual-buffer-mode architecture is implemented for lossless acquisition: the acquired buffer is compressed in real time and streamed to the network and to hard disk for storage. The observed performance on various data types (binary, integer, float and different waveform types), as well as the compression timing overheads, is presented in the paper. Software modules for real-time acquisition and online viewing of data on network nodes have been developed in LabWindows/CVI based on a client-server architecture.
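
    A minimal sketch of the dual-buffer acquire-and-compress loop described above is given below. LZO has no binding in the Python standard library, so zlib at its fastest setting stands in for it (likewise favouring speed over compression ratio); the digitizer read is a stub and the buffer size is a placeholder, not a detail from the paper.

```python
import queue
import threading
import zlib

BUFFER_BYTES = 1 << 16   # placeholder buffer size, not from the paper
N_BUFFERS = 16           # finite run for illustration

def read_digitizer(n_bytes):
    """Stub for the PXI digitizer read; returns a repeating ramp as stand-in waveform data."""
    return bytes(range(256)) * (n_bytes // 256)

def acquire(buffers):
    """Producer: fills one buffer while the consumer compresses the previous one."""
    for _ in range(N_BUFFERS):
        buffers.put(read_digitizer(BUFFER_BYTES))
    buffers.put(None)  # sentinel: acquisition finished

def compress_and_store(buffers, path):
    """Consumer: compresses each buffer in real time and streams it to disk."""
    with open(path, "wb") as fh:
        while (raw := buffers.get()) is not None:
            packed = zlib.compress(raw, level=1)         # speed over ratio, as with LZO
            fh.write(len(packed).to_bytes(4, "little"))  # frame header for later replay
            fh.write(packed)

buffers = queue.Queue(maxsize=2)  # dual-buffer behaviour: one buffer filling, one draining
threading.Thread(target=acquire, args=(buffers,), daemon=True).start()
compress_and_store(buffers, "shot_data.bin")
```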

  6. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
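
    For reference, the two figures of merit quoted above (CR and PRD) can be computed as in the short sketch below. The signal is synthetic and the formulas are the standard definitions, not code from the paper.

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR: size of the original record divided by size of the compressed record."""
    return original_bits / compressed_bits

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percentage root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

# Toy usage with a synthetic signal (placeholder, not an MIT-BIH record)
x = np.sin(np.linspace(0, 8 * np.pi, 2000))
x_hat = x + 0.01 * np.random.default_rng(0).standard_normal(x.size)
print(compression_ratio(2000 * 11, 620), prd(x, x_hat))
```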

  7. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed, with Poisson noise considered for the low-illumination condition.
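
    The measurement model implied above can be sketched as follows: a sparse full-waveform return is modulated by random binary patterns and observed under Poisson noise, which is the setting the SPIRAL solver is designed for. The dimensions and waveform are placeholders, and SPIRAL itself is not reimplemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 512   # length of the range-resolved return waveform (placeholder)
m = 128   # number of compressive measurements
k = 5     # number of significant returns (sparsity of the scene)

# Sparse, non-negative full-waveform return
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(50, 200, k)

# Random binary patterns play the role of the temporally modulated source
A = rng.integers(0, 2, size=(m, n)).astype(float)

# Photon-counting measurements: Poisson noise on the modulated intensities
y = rng.poisson(A @ x)
print(y.shape, y[:5])
```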

  8. On the compressive behavior of an FDM Steward Platform part

    Directory of Open Access Journals (Sweden)

    Nectarios Vidakis

    2017-10-01

    Full Text Available Acrylonitrile–butadiene–styrene (ABS) is a commonly used material in the fused deposition modeling (FDM) process. In this work, ABS and ABS plus parts were built with different building parameters and tested according to the ASTM D695 standard. Compression strength results were compared to stock ABS material values. The fracture surfaces of selected specimens were examined under a Scanning Electron Microscope (SEM) to determine the failure mode of the filament strands. Following this, a Steward Platform part was tested under compression in a tensile testing machine. The experimental results were employed to develop a finite element model of the Steward Platform part in order to determine the maximum force the part can withstand. The finite element model results were in good agreement with the values measured in the Steward Platform part compressive tests, demonstrating that the model developed is reliable. In these experiments, it was found that ABS parts built with a larger layer thickness showed lower compressive strength, an effect that ABS plus did not show. ABS specimens on average developed about half the compressive strength of the ABS plus specimens, while the ABS plus specimens showed lower compressive strength values than stock ABS material.

  9. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
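
    The iterative soft-thresholding update at the heart of the reconstruction can be sketched generically as below. This is a plain, single-channel ISTA on a generic linear model with sparsity in the canonical basis; it is not the multi-channel, wavelet-domain ℓ1-SPIRiT implementation, and all dimensions are illustrative.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 norm: shrink magnitudes toward zero by tau."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, y, lam=0.05, n_iter=200):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

# Toy usage: recover a sparse vector from undersampled linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.3))   # estimated support of the sparse signal
```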

  10. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach †

    Science.gov (United States)

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-01-01

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches. PMID:27011187
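
    A sparsity-exploiting estimate of source locations and levels of the kind described above can be sketched with a generic greedy solver; orthogonal matching pursuit from scikit-learn is used here only as a representative algorithm, the propagation matrix is a random placeholder rather than a sound-propagation model, and all dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)

n_candidates = 200   # candidate source positions along the shipping lane (placeholder)
n_sensors = 25       # hydrophones in the monitoring network (placeholder)
n_ships = 4          # active sources: the noise map is sparse in the sources

# Placeholder propagation matrix: column j maps source j's level to the sensors.
G = rng.standard_normal((n_sensors, n_candidates))

levels = np.zeros(n_candidates)
levels[rng.choice(n_candidates, n_ships, replace=False)] = rng.uniform(1.0, 3.0, n_ships)
measurements = G @ levels + 0.01 * rng.standard_normal(n_sensors)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_ships)
omp.fit(G, measurements)
print(np.flatnonzero(omp.coef_))   # indices of the estimated source locations
```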

  11. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  12. Characterization of Human Dental Pulp Tissue Under Oscillatory Shear and Compression.

    Science.gov (United States)

    Ozcan, Burak; Bayrak, Ece; Erisken, Cevat

    2016-06-01

    Availability of material as well as biological properties of native tissues is critical for biomaterial design and synthesis for regenerative engineering. Until recently, selection of biomaterials and biomolecule carriers for dental pulp regeneration has been done randomly or based on experience mainly due to the absence of benchmark data for dental pulp tissue. This study, for the first time, characterizes the linear viscoelastic material functions and compressive properties of human dental pulp tissue harvested from wisdom teeth, under oscillatory shear and compression. The results revealed a gel-like behavior of the pulp tissue over the frequency range of 0.1-100 rps. Uniaxial compression tests generated peak normal stress and compressive modulus values of 39.1 ± 20.4 kPa and 5.5 ± 2.8 kPa, respectively. Taken collectively, the linear viscoelastic and uniaxial compressive properties of the human dental pulp tissue reported here should enable the better tailoring of biomaterials or biomolecule carriers to be employed in dental pulp regeneration.

  13. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is often not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
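
    The very sparse measurement matrix referred to above can be sketched with the common s-sparse random-projection construction (entries in {-1, 0, +1}); the density, scaling and dimensions below are illustrative choices, not the exact values used by the tracker.

```python
import numpy as np
from scipy import sparse

def sparse_measurement_matrix(m, n, s=None, seed=0):
    """Very sparse random projection: entries are +/-1 with probability 1/(2s), else 0."""
    rng = np.random.default_rng(seed)
    s = s if s is not None else n // 4     # illustrative sparsity parameter
    p = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    values = rng.choice([-1.0, 0.0, 1.0], size=(m, n), p=p)
    return sparse.csr_matrix(np.sqrt(s) * values)

R = sparse_measurement_matrix(m=50, n=4096)     # 4096-dim multiscale features -> 50 dims
features = np.random.default_rng(1).standard_normal(4096)
compressed = R @ features                       # compressed features fed to the classifier
print(R.nnz / (50 * 4096), compressed.shape)    # fraction of nonzeros, output size
```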

  14. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase

  15. Prediction of crack growth direction by Sih's strain energy theory on SEN specimens under tension-compression biaxial loading employing genetic algorithms

    International Nuclear Information System (INIS)

    Rodriguez-MartInez R; Lugo-Gonzalez E; Urriolagoitia-Calderon G; Urriolagoitia-Sosa G; Hernandez-Gomez L H; Romero-Angeles B; Torres-San Miguel Ch

    2011-01-01

    Crack growth direction has been studied in many ways. In particular, Sih's strain energy theory predicts that a fracture under a three-dimensional state of stress spreads in the direction of minimum strain energy density. In this work, the angle of fracture growth was studied considering a biaxial stress state at the crack tip of SEN specimens. The stress state applied on a tension-compression SEN specimen is biaxial at the crack tip, as can be observed in figure 1. A solution method is proposed to obtain a mathematical model using genetic algorithms, which have demonstrated great capacity for the solution of many engineering problems. From the model given by Sih, the strain energy density stored per unit volume at the crack tip can be deduced as dW = [(σx² + σy²)/(2E) − (ν/E)σxσy] dV (1). From equation (1), a mathematical deduction to solve this case in terms of θ was developed employing genetic algorithms, where θ is the crack propagation direction in the x-y plane. The mechanical properties of steel and aluminium were employed to model the specimens, since they are two of the materials most used in engineering design. The results obtained show stable zones of fracture propagation, but only within a range of applied loading.
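
    A tiny genetic-algorithm search of the kind described above is sketched below: it minimises a strain-energy-density function S(θ) over the propagation angle. The S(θ) used here is a placeholder, not the expression derived from equation (1) and the biaxial crack-tip stress field in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def strain_energy_density(theta):
    """Placeholder S(theta); in the paper it follows from Eq. (1) and the crack-tip stresses."""
    return 1.0 + 0.5 * np.cos(theta) + 0.3 * np.cos(2.0 * theta)

def ga_minimize(objective, bounds=(-np.pi, np.pi), pop_size=40, generations=100):
    """Tiny real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(generations):
        idx = rng.integers(0, pop_size, (pop_size, 2))              # tournament pairs
        parents = np.where(objective(pop[idx[:, 0]]) < objective(pop[idx[:, 1]]),
                           pop[idx[:, 0]], pop[idx[:, 1]])
        partners = rng.permutation(parents)                         # random mating partners
        alpha = rng.uniform(0.0, 1.0, pop_size)
        children = alpha * parents + (1.0 - alpha) * partners       # blend crossover
        children += rng.normal(0.0, 0.05, pop_size)                 # Gaussian mutation
        pop = np.clip(children, lo, hi)
    return pop[np.argmin(objective(pop))]

theta_star = ga_minimize(strain_energy_density)
print(np.degrees(theta_star))   # predicted propagation angle for the placeholder S(theta)
```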

  16. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    Science.gov (United States)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
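
    A generic vector-quantization stage of the kind described above can be sketched with a k-means codebook, treating each pixel's 7-channel spectrum as one vector; the data cube below is synthetic and scikit-learn's KMeans is used only as a stand-in codebook trainer.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Synthetic 7-channel multispectral cube (placeholder for CAMS data)
channels, height, width = 7, 64, 64
cube = rng.random((channels, height, width)).astype(np.float32)

# Each vector is the stack of co-located pixels from all channels
vectors = cube.reshape(channels, -1).T            # shape: (H*W, 7)

# Train a 256-entry codebook; each pixel is then stored as a single byte index
codebook_size = 256
kmeans = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vectors)
indices = kmeans.predict(vectors).astype(np.uint8)

# Reconstruction and RMS error of the quantised cube
reconstructed = kmeans.cluster_centers_[indices].T.reshape(cube.shape)
rms = np.sqrt(np.mean((cube - reconstructed) ** 2))
print(f"codebook size {codebook_size}, RMS error {rms:.4f}")
```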

  17. Near-infrared Compressive Line Sensing Imaging System using Individually Addressable Laser Diode Array

    Science.gov (United States)

    2015-05-11

    A Digital Micromirror Device (DMD) is a microelectromechanical systems (MEMS) device. A DMD consists of millions of electrostatically actuated micro-mirrors (or pixels).

  18. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings; in communication we always want to transmit data efficiently and without noise. This paper provides several techniques for lossless compression of text-type data, together with comparative results for multiple and single compression, which help to identify the better compression output and to develop compression algorithms.

  19. Compressed air energy storage system

    Science.gov (United States)

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  20. Effect of JPEG2000 mammogram compression on microcalcifications segmentation

    International Nuclear Information System (INIS)

    Georgiev, V.; Arikidis, N.; Karahaliou, A.; Skiadopoulos, S.; Costaridou, L.

    2012-01-01

    The purpose of this study is to investigate the effect of mammographic image compression on the automated segmentation of individual microcalcifications. The dataset consisted of individual microcalcifications of 105 clusters originating from mammograms of the Digital Database for Screening Mammography. A JPEG2000 wavelet-based compression algorithm was used for compressing mammograms at 7 compression ratios (CRs): 10:1, 20:1, 30:1, 40:1, 50:1, 70:1 and 100:1. A gradient-based active contours segmentation algorithm was employed for segmentation of microcalcifications as depicted on original and compressed mammograms. The performance of the microcalcification segmentation algorithm on original and compressed mammograms was evaluated by means of the area overlap measure (AOM) and distance differentiation metrics (d_mean and d_max) by comparing automatically derived microcalcification borders to manually defined ones by an expert radiologist. The AOM monotonically decreased as CR increased, while the d_mean and d_max metrics monotonically increased with CR increase. The performance of the segmentation algorithm on original mammograms was (mean±standard deviation): AOM=0.91±0.08, d_mean=0.06±0.05 and d_max=0.45±0.20, while on 40:1 compressed images the algorithm's performance was: AOM=0.69±0.15, d_mean=0.23±0.13 and d_max=0.92±0.39. Mammographic image compression deteriorates the performance of the segmentation algorithm, influencing the quantification of individual microcalcification morphological properties and subsequently affecting computer aided diagnosis of microcalcification clusters. (authors)

  1. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
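
    The kind of wavelet comparison described above can be sketched with PyWavelets by measuring reconstruction error after discarding all but the largest coefficients; the image is synthetic, the keep-fraction thresholding is a simplification of the paper's compression scheme, and 'db2' stands for a tap-4 Daubechies wavelet.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
image = rng.random((128, 128))   # placeholder image (e.g. a mammogram patch)

def lossy_mse(img, wavelet, keep=0.05, level=3):
    """Reconstruction MSE after keeping only the largest `keep` fraction of coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), wavelet)
    return np.mean((img - rec[:img.shape[0], :img.shape[1]]) ** 2)

for name in ('haar', 'db2'):     # db2 is a tap-4 Daubechies wavelet
    print(name, lossy_mse(image, name))
```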

  2. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty dealing with factors such as occlusion, appearance changes and pose variation. The reasons are that, first, even though the naive Bayes classifier is fast to train, it is not robust to noise, and second, the parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory regarding the weighted random projection to exploit both local and discriminative information of the object. Second, the online random forest classifier used for online tracking is demonstrated to be more robust to noise and to have high computational efficiency. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes and pose variation than the fast compressive tracking algorithm.

  3. Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Mohammed Alawad

    2015-03-01

    Full Text Available This paper considers the problem of how to efficiently measure a large and complex information field with optimally few observations. Specifically, we investigate how to stochastically estimate modular criticality values in a large-scale digital circuit with a very limited number of measurements in order to minimize the total measurement efforts and time. We prove that, through sparsity-promoting transform domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of measurements as required in a conventional approach that uses exhaustive measurements. Furthermore, we illustrate that the obtained criticality results can be utilized to selectively fortify large-scale digital circuits for operation with narrow voltage headrooms and in the presence of soft-errors rising at near threshold voltage levels, without excessive hardware overheads. Our numerical simulation results have shown that, by optimally allocating only 10% circuit redundancy, for some large-scale benchmark circuits, we can achieve more than a three-times reduction in its overall error probability, whereas if randomly distributing such 10% hardware resource, less than 2% improvements in the target circuit’s overall robustness will be observed. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to designing and analyzing them, such as the observability measure in reliability analysis and the path delay estimation in stochastic timing analysis. The only key requirement of our proposed methodology is that these global information fields exhibit a certain degree of smoothness, which is universally true for almost any physical phenomenon.

  4. Real-time network traffic classification technique for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    Network traffic (data traffic) in a Wireless Local Area Network (WLAN) is the amount of network packets moving across the wireless network from each wireless node to another, which determines the sampling load in the network. A WLAN's network traffic is the main component of network traffic measurement, network traffic control and simulation. Traffic classification is an essential tool for improving the Quality of Service (QoS) in different wireless networks and in complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks and wide area networks. Network traffic classification is also an essential component of products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN allows us to see what kinds of traffic are present in each part of the network, to organize the various kinds of traffic on each path into different classes, and to generate a network traffic matrix in order to identify and organize network traffic, which is an important key to improving QoS. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture reduces the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. It also increases the accuracy of wireless transmission by 10%, which provides a good basis for establishing high-quality wireless local area networks.

  5. Dynamic curvature sensing employing ionic-polymer–metal composite sensors

    International Nuclear Information System (INIS)

    Bahramzadeh, Yousef; Shahinpoor, Mohsen

    2011-01-01

    A dynamic curvature sensor based on an ionic-polymer–metal composite (IPMC) is presented for curvature monitoring of deployable/inflatable dynamic space structures. Monitoring the curvature variation is of high importance in various engineering structures, including shape monitoring of deployable/inflatable space structures in which the structural boundaries undergo a dynamic deployment process. The high sensitivity of IPMCs to applied deformations as well as their flexibility make IPMCs a promising candidate for sensing dynamic curvature changes. Herein, we explore the dynamic response of an IPMC sensor strip with respect to controlled curvature deformations subjected to different forms of input functions. Using a specially designed experimental setup, the voltage recovery effect, phase delay, and rate dependency of the output voltage signal of an IPMC curvature sensor are analyzed. Experimental results show that the IPMC sensor maintains the linearity, sensitivity, and repeatability required for curvature sensing. In addition, in order to describe dynamic phenomena such as the rate dependency of the IPMC sensor, a chemo-electro-mechanical model based on the Poisson–Nernst–Planck (PNP) equation for the kinetics of ion diffusion is presented. By solving the governing partial differential equations, the frequency response of the IPMC sensor is derived. The physical model is able to describe the dynamic properties of the IPMC sensor and the dependence of the signal on the rate of excitation.

  6. Hyperspectral remote sensing for light pollution monitoring

    Directory of Open Access Journals (Sweden)

    P. Marcoionni

    2006-06-01

    Full Text Available industries. In this paper we introduce the results from a remote sensing campaign performed in September 2001 at night time. For the first time nocturnal light pollution was measured at high spatial and spectral resolution using two airborne hyperspectral sensors, namely the Multispectral Infrared and Visible Imaging Spectrometer (MIVIS and the Visible InfraRed Scanner (VIRS-200. These imagers, generally employed for day-time Earth remote sensing, were flown over the Tuscany coast (Italy on board of a Casa 212/200 airplane from an altitude of 1.5-2.0 km. We describe the experimental activities which preceded the remote sensing campaign, the optimization of sensor configuration, and the images as far acquired. The obtained results point out the novelty of the performed measurements and highlight the need to employ advanced remote sensing techniques as a spectroscopic tool for light pollution monitoring.

  7. High speed fluorescence imaging with compressed ultrafast photography

    Science.gov (United States)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescent lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited lifetime of a fluorophore is sensitive to its local microenvironment,1, 2 measurement of fluorescent lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state of the art fluorescent lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescent lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera.3 These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x,y,t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescent lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to do single-shot fluorescent lifetime imaging of cells and microspheres.

  8. Employment of security personnel

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    If a company or institution hires personnel of a security service company to protect its premises, this kind of employment does not mean the company carries on temporary employment business. Within the purview of section 99, sub-section 1 of the BetrVG (Works Constitution Act), the security service personnel is not 'employed' in the proper sense even if the security tasks fulfilled by them are done at other times by regular employees of the company or institution. The court decision also decided that the Works Council need not give consent to employment of foreign security personnel. The court decision was taken for settlement of court proceedings commenced by Institute of Plasma Physics in Garching. In his comments, W. Hunold accedes to the court's decision and discusses the underlying reasons of this decision and of a previous ruling in the same matter by putting emphasis on the difference between a contract for services and a contract for work, and a contract for temporary employment. The author also discusses the basic features of an employment contract. (orig./HP) [de

  9. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  10. Shock compression of diamond crystal

    OpenAIRE

    Kondo, Ken-ichi; Ahrens, Thomas J.

    1983-01-01

    Two shock wave experiments employing inclined mirrors have been carried out to determine the Hugoniot elastic limit (HEL), final shock state at 191 and 217 GPa, and the post-shock state of diamond crystal, which is shock-compressed along the intermediate direction between the and crystallographic axes. The HEL wave has a velocity of 19.9 ± 0.3 mm/µsec and an amplitude of 63 ± 28 GPa. An alternate interpretation of the inclined wedge mirror streak record suggests a ramp precursor wave and th...

  11. Incorporation of hydrogel as a sensing medium for recycle of sensing material in chemical sensors

    Science.gov (United States)

    Hwang, Yunjung; Park, Jeong Yong; Kwon, Oh Seok; Joo, Seokwon; Lee, Chang-Soo; Bae, Joonwon

    2018-01-01

    A hydrogel, produced with agarose extracted from seaweed, was introduced as a reusable medium in ultrasensitive sensors employing conducting polymer nanomaterials and aptamers. A basic dopamine (DA) sensor was constructed by placing a hydrogel, containing a sensing material composed of aptamer-linked carboxylated polypyrrole nanotubes (PPy-COOH NTs), onto a micropatterned gold electrode. The hydrogel provided a benign electrochemical environment, facilitated specific interactions between DA and the PPy-COOH NT sensing material, and simplified the retrieval of PPy-COOH NTs after detection. It was demonstrated that the agarose hydrogel was successfully employed as a sensing medium for detection of DA, providing a benign environment for the electrode type sensor. PPy-COOH NTs were recovered by simply heating the hydrogel in water. The hydrogel also afforded stable signal intensity after repeated use with a limit of detection of 1 nmol and a clear, stable signal up to 100 nmol DA. This work provides relevant information for future research on reusable or recyclable sensors.

  12. Multiparametric amplitude analysis with on-line compression using adaptive orthogonal transform

    Energy Technology Data Exchange (ETDEWEB)

    Morhac, M; Matousek, V; Turzo, I

    1996-12-31

    A new method of multiparameter amplitude analysis with on-line compression has been developed. The proposed method decreases the memory needed to store multidimensional histograms. Examples of employing the algorithms for three-dimensional spectra are presented. 5 refs.

  13. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    Czech Academy of Sciences Publication Activity Database

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-01-01

    Roč. 7, č. 1 (2017), č. článku 15309. ISSN 2045-2322 R&D Projects: GA MŠk(CZ) LO1206; GA ČR(CZ) GJ17-26284Y Institutional support: RVO:61389021 Keywords : compressed sensing * photoluminescence imaging * laser speckles * single-pixel camera Subject RIV: BH - Optics, Masers, Lasers OBOR OECD: Optics (including laser optics and quantum optics) Impact factor: 4.259, year: 2016 https://www.nature.com/articles/s41598-017-14443-4

  14. Medical image compression by using three-dimensional wavelet transformation

    International Nuclear Information System (INIS)

    Wang, J.; Huang, H.K.

    1996-01-01

    This paper proposes a three-dimensional (3-D) medical image compression method for computed tomography (CT) and magnetic resonance (MR) that uses a separable nonuniform 3-D wavelet transform. The separable wavelet transform employs one filter bank within two-dimensional (2-D) slices and then a second filter bank in the slice direction. CT and MR image sets normally have different resolutions within a slice and between slices. The pixel distances within a slice are normally less than 1 mm, and the distance between slices can vary from 1 mm to 10 mm. To find the best filter bank in the slice direction, the authors use various filter banks in the slice direction and compare the compression results. The results from the 12 selected MR and CT image sets at various slice thicknesses show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set which has a 1 mm slice distance. Compared with 2-D wavelet compression, compression ratios of the 3-D method are about 70% higher for CT and 35% higher for MR image sets at a peak signal to noise ratio (PSNR) of 50 dB. In general, the smaller the slice distance, the better the 3-D compression performance.
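
    The separable transform described above can be sketched with PyWavelets for the in-slice 2-D step and an explicit single-level Haar step along the slice direction; the volume is synthetic and the in-slice 'db2' wavelet is an illustrative choice, not the paper's filter bank.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)
volume = rng.random((16, 128, 128))   # placeholder CT/MR volume: (slices, rows, cols)

# Stage 1: 2-D wavelet transform within each slice
in_slice = [pywt.dwt2(s, 'db2') for s in volume]        # (cA, (cH, cV, cD)) per slice
approx = np.stack([cA for cA, _ in in_slice])           # in-slice approximation bands

# Stage 2: single-level Haar transform along the slice direction
even, odd = approx[0::2], approx[1::2]
low = (even + odd) / np.sqrt(2.0)     # Haar low-pass between neighbouring slices
high = (even - odd) / np.sqrt(2.0)    # Haar high-pass between neighbouring slices

print(volume.shape, approx.shape, low.shape, high.shape)
```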

  15. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using Neural Networks (NNs) have been developed. Particularly, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference 2000, we proposed to use a constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise for network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or other specified frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.

  16. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    Science.gov (United States)

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. I applied it to the compression of multichannel ECG data and also present a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding compression of multichannel ECG data. Furthermore, in order to compress a signal which is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  17. Dual-wavelength OR-PAM with compressed sensing for cell tracking in a 3D cell culture system

    Science.gov (United States)

    Huang, Rou-Xuan; Fu, Ying; Liu, Wang; Ma, Yu-Ting; Hsieh, Bao-Yu; Chen, Shu-Ching; Sun, Mingjian; Li, Pai-Chi

    2018-02-01

    Monitoring the dynamic interactions of T cells migrating toward a tumor is beneficial for understanding how cancer immunotherapy works. Optical-resolution photoacoustic microscopy (OR-PAM) can provide not only high spatial resolution but also deeper penetration than conventional optical microscopy. With the aid of exogenous contrast agents, dual-wavelength OR-PAM can be applied to map the distribution of CD8+ cytotoxic T lymphocytes (CTLs) labelled with gold nanospheres (AuNS) under 523 nm laser irradiation and Hepta1-6 tumor spheres labelled with indocyanine green (ICG) under 800 nm irradiation. However, at a 1 kHz laser pulse repetition frequency (PRF), it takes approximately 20 minutes to obtain a full sample volume of 160 × 160 × 150 μm³. To increase the imaging rate, we propose a random non-uniform sparse sampling mechanism to achieve fast sparse photoacoustic data acquisition. The image recovery process is formulated as a low-rank matrix recovery (LRMR) problem based on compressed sensing (CS) theory. We show that the image can be stably recovered from significantly fewer measurements via a nuclear-norm minimization problem while maintaining image quality. In this study, we use dual-wavelength OR-PAM with CS to visualize T cell trafficking in a 3D culture system with higher temporal resolution. Data acquisition time is reduced by 40% for this sample volume at a sampling density of 0.5. The imaging system shows potential for understanding dynamic cellular processes in the preclinical screening of anti-cancer drugs.
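
    The nuclear-norm recovery step mentioned above can be sketched with generic singular value thresholding for low-rank matrix completion; the matrix, sampling mask and parameters below are placeholders and do not reproduce the OR-PAM reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(7)

# Low-rank ground truth (placeholder for the photoacoustic image matrix)
m, n, rank = 60, 80, 3
M = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))

# Random non-uniform sparse sampling: observe roughly half of the entries
mask = rng.random((m, n)) < 0.5
Y = np.where(mask, M, 0.0)

def svt_complete(Y, mask, tau=5.0, step=1.2, n_iter=300):
    """Singular value thresholding for nuclear-norm-regularised matrix completion."""
    Z = np.zeros_like(Y)
    X = Z
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink the singular values
        Z = Z + step * mask * (Y - X)                # enforce agreement on sampled entries
    return X

X_hat = svt_complete(Y, mask)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))   # relative recovery error
```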

  18. Pan-sharpening via compressed superresolution reconstruction and multidictionary learning

    Science.gov (United States)

    Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang

    2018-01-01

    In recent compressed sensing (CS)-based pan-sharpening algorithms, performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in loss of spatial and spectral information. The other is that the dictionary construction process depends on non-truth training samples. These problems have limited the application of CS-based pan-sharpening algorithms. To solve these two problems, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linearly weighted HRM images. Meanwhile, a multidictionary with ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets can better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments are performed on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive compared with recent CS-based pan-sharpening methods and other well-known methods.

  19. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    Science.gov (United States)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior image quality to conventional filtered-backprojection (FBP)-based methods. However, they require an enormous computational cost in the iterative process, which is still an obstacle to practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the target diagnosis are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method seems to be effective in considerably reducing the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the no-binning case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal quality index (UQI).

  20. Efficient transmission of compressed data for remote volume visualization.

    Science.gov (United States)

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  1. Wellhead gas compression extends life of beam-pumped wells

    International Nuclear Information System (INIS)

    Sherry, M.J.; Fairchild, P.W.

    1992-01-01

    This paper reports that operators of marginal oil and gas wells often can avoid having to shut them in by compressing gas from the back side of the casing at the well head and delivering it into the flowline. This process can reduce the back pressure at the face of the producing formation, which allows additional oil and gas to be produced and extends the economical reserves. Small, low-horsepower stationary compressors or a walking beam compressor (WBC) may be used for this purpose. A portable compressor test unit recently has been employed to evaluate wells that are possible candidates for wellhead compression as another cost cutting measure

  2. Experimental testing of a self-sensing FRP-concrete composite beam using FBG sensors

    Science.gov (United States)

    Wang, Yanlei; Hao, Qingduo; Ou, Jinping

    2009-03-01

    A new kind of self-sensing fiber reinforced polymer (FRP)-concrete composite beam, which consists of an FRP box beam combined with a thin layer of concrete in the compression zone, was developed by embedding two FBG sensors in the top and bottom flanges of the FRP box beam at the mid-span section along the longitudinal direction. The flexural behavior of the proposed self-sensing FRP-concrete composite beam was experimentally studied in four-point bending. The longitudinal strains of the composite beam were recorded using the embedded FBG sensors as well as surface-bonded electric resistance strain gauges. Test results indicate that the FBG sensors can faithfully record the longitudinal strain of the composite beam in tension at the bottom flange of the FRP box beam and in compression at the top flange over the entire load range, as compared with the surface-bonded strain gauges. The proposed self-sensing FRP-concrete composite beam can monitor its longitudinal strains in the serviceability limit state as well as in the strength limit state, and will have wide applications for long-term monitoring in civil engineering.

  3. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    Science.gov (United States)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI) [1]. This system consists of a single Liquid Crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with the Compressive Sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers; it is therefore prone to imperfections and spatial nonuniformity. In this work, we present a study of this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.

  4. Extending the frontiers of employment regulation:

    African Journals Online (AJOL)

    UWC

    role, as it were, by the coincidence of their class, race and gender. In this sense ..... especially women) to be employed in the formal economy, domestic work remains ..... greatest barrier to more effective regulation in a sector such as this.

  5. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may exceed the available storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Theory of compressive modeling and simulation

    Science.gov (United States)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) data-rich approach suffering the curse of dimensionality and (ii) equation-rich approach suffering computing power and turnaround time. We suggest a third approach. We call it (iii) compressive M&S (CM&S); because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural network (LCNN) algorithm. CM&S based MFE can generalize LCNN to 2nd order as Nonlinear augmented LCNN. For example, during the sunset, we can avoid a reddish bias of sunlight illumination due to a long-range Rayleigh scattering over the horizon. With CM&S we can take instead of day camera, a night vision camera. We decomposed long wave infrared (LWIR) band with filter into 2 vector components (8~10μm and 10~12μm) and used LCNN to find pixel by pixel the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to de-mixed sources map, to the sub-micron RGB color image. Moreover, the night vision imaging can also be down-shifted at Passive Millimeter Wave (PMMW) imaging, suffering less blur owing to dusty smokes scattering and enjoying apparent smoothness of surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitudes in the spatial Rayleigh resolution; but gains two orders of magnitude in the reflectivity, and gains another two orders in the propagation without obscuring smog. Since CM&S can generate missing data and hard to get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computing in the sense of super-saving CS: measuring one & getting one's neighborhood free.

  7. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI.

    Science.gov (United States)

    Ramskill, N P; Bush, I; Sederman, A J; Mantle, M D; Benning, M; Anger, B C; Appel, M; Gladden, L F

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day⁻¹, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock, in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution that has
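
    For readers unfamiliar with the reconstruction side, the toy sketch below shows the general shape of a CS reconstruction with a Total Variation penalty from undersampled Fourier (k-space) data, using a plain gradient scheme on a smoothed TV term. It is a 2D illustration under invented sampling and regularisation parameters, not the 3D CS-RARE pipeline used in the paper.

    ```python
    import numpy as np

    def tv_cs_reconstruct(y, mask, lam=0.005, eps=1e-3, iters=400, step=0.5):
        """Toy CS reconstruction: min_x 0.5*||M F x - y||^2 + lam * smoothed TV(x).

        y    : undersampled (masked) k-space data, zeros where mask is False
        mask : boolean k-space sampling pattern
        F is the orthonormal 2D FFT and M the sampling mask.
        """
        x = np.real(np.fft.ifft2(y, norm="ortho"))        # zero-filled starting image
        for _ in range(iters):
            # gradient of the data-fidelity term: F^H M (M F x - y)
            resid = mask * np.fft.fft2(x, norm="ortho") - y
            grad_data = np.real(np.fft.ifft2(resid, norm="ortho"))
            # gradient of a smoothed isotropic total-variation penalty (-divergence)
            dx = np.diff(x, axis=1, append=x[:, -1:])
            dy = np.diff(x, axis=0, append=x[-1:, :])
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
            px, py = dx / mag, dy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            x -= step * (grad_data - lam * div)
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        truth = np.zeros((64, 64))
        truth[20:44, 20:44] = 1.0                          # piecewise-constant phantom
        mask = rng.random(truth.shape) < 0.3               # keep ~30% of k-space
        mask[0, 0] = True                                  # always keep the DC sample
        y = mask * np.fft.fft2(truth, norm="ortho")
        recon = tv_cs_reconstruct(y, mask)
        zero_fill = np.real(np.fft.ifft2(y, norm="ortho"))
        print("zero-filled RMSE:", np.sqrt(np.mean((zero_fill - truth) ** 2)))
        print("TV-CS RMSE:      ", np.sqrt(np.mean((recon - truth) ** 2)))
    ```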

  8. Sensing of RNA viruses

    DEFF Research Database (Denmark)

    Jensen, Søren; Thomsen, Allan Randrup

    2012-01-01

    Our knowledge regarding the contribution of the innate immune system in recognizing and subsequently initiating a host response to an invasion of RNA virus has been rapidly growing over the last decade. Descriptions of the receptors involved and the molecular mechanisms they employ to sense viral pathogen-associated molecular patterns have emerged in great detail. This review presents an overview of our current knowledge regarding the receptors used to detect RNA virus invasion, the molecular structures these receptors sense, and the involved downstream signaling pathways.

  9. Parallel compression of data chunks of a shared data object using a log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File System techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
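
    The sketch below illustrates the client-side pattern described above in the simplest possible form: compress a chunk, then hand the compressed bytes (plus enough metadata to decompress on read) to whatever stores the shared object. The in-memory store and its put/get calls are stand-ins for the burst-buffer and storage-node interfaces, not the patented Log-Structured File System implementation.

    ```python
    import zlib

    class InMemoryStore:
        """Stand-in for the storage node holding the shared data object."""
        def __init__(self):
            self._objects = {}
        def put(self, object_id, chunk_index, record):
            self._objects[(object_id, chunk_index)] = record
        def get(self, object_id, chunk_index):
            return self._objects[(object_id, chunk_index)]

    def write_chunk(store, object_id, chunk_index, data: bytes) -> None:
        """Compress a data chunk on the client before it is written to the shared object."""
        compressed = zlib.compress(data, level=6)
        # Record both lengths so the reader can decompress and validate the chunk.
        store.put(object_id, chunk_index, {
            "raw_len": len(data),
            "comp_len": len(compressed),
            "payload": compressed,
        })

    def read_chunk(store, object_id, chunk_index) -> bytes:
        """Decompress a chunk on the client when the shared object is read back."""
        record = store.get(object_id, chunk_index)
        data = zlib.decompress(record["payload"])
        assert len(data) == record["raw_len"]
        return data

    if __name__ == "__main__":
        store = InMemoryStore()
        write_chunk(store, "shared.dat", 0, b"simulation output " * 1000)
        print(read_chunk(store, "shared.dat", 0)[:18])
    ```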

  10. Smartphones for distributed multimode sensing: biological and environmental sensing and analysis

    Science.gov (United States)

    Feitshans, Tyler; Williams, Robert

    2013-05-01

    Active and agile environmental and biological sensing is becoming obligatory to generate prompt warnings for troops and law enforcement personnel conducting missions in hostile environments. Traditional static sensing mesh networks, which provide a coarse-grained (far-field) measurement of environmental conditions such as air quality, radiation and CO2, cannot serve the dynamic and localized changes in the environment, which require fine-grained (near-field) sensing solutions. Further, sensing the biological condition of (healthy and injured) personnel in a contaminated environment and providing a personalized analysis of life-threatening conditions in real time would greatly aid the success of the mission. In this vein, under the SATE and YATE programs, the research team at the AFRL Tec^Edge Discovery labs demonstrated the feasibility of developing smartphone applications that employ a suite of external environmental and biological sensors to provide fine-grained and customized sensing in real time. In their current state, these smartphone applications leverage a custom-designed modular standalone embedded platform (with external sensors) that can be integrated seamlessly with smartphones for sensing, and further provide connectivity to a back-end data architecture for archiving, analysis and dissemination of real-time alerts. The developed smartphone applications have been successfully tested in the field with varied environmental sensors to sense humidity, CO2/CO, wind and other quantities, and with varied biological sensors to sense body temperature and pulse, with apt real-time analysis.
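
    As a minimal illustration of the data path described above (sensor reading → smartphone app → back-end for archiving and alerting), the sketch below packages a reading as JSON and prepares an HTTP POST to a placeholder endpoint; the URL and field names are invented for the example and are not part of the SATE/YATE platform.

    ```python
    import json
    import time
    import urllib.request

    BACKEND_URL = "http://backend.example/api/readings"   # hypothetical endpoint

    def make_reading(sensor_id: str, kind: str, value: float, unit: str) -> dict:
        """Package one environmental or biological sensor reading for the back-end."""
        return {
            "sensor_id": sensor_id,
            "kind": kind,                # e.g. "co2", "humidity", "pulse"
            "value": value,
            "unit": unit,
            "timestamp": time.time(),
        }

    def post_reading(reading: dict) -> None:
        """POST the reading as JSON; the back-end would archive it and evaluate alert rules."""
        req = urllib.request.Request(
            BACKEND_URL,
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req, timeout=5).close()

    if __name__ == "__main__":
        reading = make_reading("env-01", "co2", 612.0, "ppm")
        print(json.dumps(reading))   # call post_reading(reading) against a real endpoint
    ```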

  11. Lightweight, compressible and electrically conductive polyurethane sponges coated with synergistic multiwalled carbon nanotubes and graphene for piezoresistive sensors.

    Science.gov (United States)

    Ma, Zhonglei; Wei, Ajing; Ma, Jianzhong; Shao, Liang; Jiang, Huie; Dong, Diandian; Ji, Zhanyou; Wang, Qian; Kang, Songlei

    2018-04-19

    Lightweight, compressible and highly sensitive pressure/strain sensing materials are highly desirable for the development of health monitoring, wearable devices and artificial intelligence. Herein, a very simple, low-cost and solution-based approach is presented to fabricate versatile piezoresistive sensors based on conductive polyurethane (PU) sponges coated with synergistic multiwalled carbon nanotubes (MWCNTs) and graphene. These sensor materials are fabricated by convenient dip-coating layer-by-layer (LBL) electrostatic assembly followed by in situ reduction without using any complicated microfabrication processes. The resultant conductive MWCNT/reduced graphene oxide (RGO)@PU sponges exhibit very low densities (0.027-0.064 g cm⁻³), outstanding compressibility (up to 75%) and high electrical conductivity benefiting from the porous PU sponges and synergistic conductive MWCNT/RGO structures. In addition, the MWCNT/RGO@PU sponges present larger relative resistance changes and superior sensing performances under external applied pressures (0-5.6 kPa) and a wide range of strains (0-75%) compared with the RGO@PU and MWCNT@PU sponges, due to the synergistic effect of multiple mechanisms: "disconnect-connect" transition of nanogaps, microcracks and fractured skeletons at low compression strain and compressive contact of the conductive skeletons at high compression strain. The electrical and piezoresistive properties of MWCNT/RGO@PU sponges are strongly associated with the dip-coating cycle, suspension concentration, and the applied pressure and strain. Fully functional applications of MWCNT/RGO@PU sponge-based piezoresistive sensors in lighting LED lamps and detecting human body movements are demonstrated, indicating their excellent potential for emerging applications such as health monitoring, wearable devices and artificial intelligence.

  12. Factors that influence the tribocharging of pulverulent materials in compressed-air devices

    Energy Technology Data Exchange (ETDEWEB)

    Das, S; Medles, K; Mihalcioiu, A; Beleca, R; Dragan, C; Dascalescu, L [Laboratory of Aerodynamic Studies, University of Poitiers, University Institute of Technology, Angouleme, 16021 (France)], E-mail: ldascalescu@iutang.univ-poitiers.fr

    2008-12-01

    Tribocharging of pulverulent materials in compressed-air devices is a typical multi-factorial process. This paper aims at demonstrating the value of using the design of experiments methodology in association with virtual instrumentation for quantifying the effects of various process variables and of their interactions, as a prerequisite for the development of new tribocharging devices for industrial applications. The study is focused on the tribocharging of PVC powders in compressed-air devices similar to those employed in electrostatic painting. A classical 2³ full-factorial design (three factors at two levels) was employed for conducting the experiments. The response function was the charge/mass ratio of the material collected in a modified Faraday cage at the exit of the tribocharging device. The charge/mass ratio was found to increase with the injection pressure and the vortex pressure in the tribocharging device, and to decrease with increasing feed rate. In the present study an in-house design-of-experiments software package was employed for statistical analysis of the experimental data and validation of the experimental model.
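
    To make the experimental-design vocabulary concrete, the sketch below builds a 2³ full-factorial design and estimates the main effects and two-factor interactions from a response vector. The factor names follow the abstract, but the charge/mass values are invented placeholders chosen only to mimic the reported trends, not measured data.

    ```python
    import itertools
    import numpy as np

    # Coded levels (-1, +1) for the three factors from the study:
    # injection pressure, vortex pressure, feed rate.
    factors = ["injection_pressure", "vortex_pressure", "feed_rate"]
    design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs

    # Placeholder charge/mass responses (nC/g) for the 8 runs -- invented values.
    y = np.array([0.57, 0.06, 1.13, 0.66, 1.36, 0.84, 1.94, 1.47])

    def effect(column):
        """Average response at the +1 level minus average response at the -1 level."""
        return y[column == 1].mean() - y[column == -1].mean()

    for k, name in enumerate(factors):
        print(f"main effect of {name:18s}: {effect(design[:, k]):+.3f}")

    for (i, a), (j, b) in itertools.combinations(enumerate(factors), 2):
        print(f"interaction {a} x {b}: {effect(design[:, i] * design[:, j]):+.3f}")
    ```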

  13. Factors that influence the tribocharging of pulverulent materials in compressed-air devices

    International Nuclear Information System (INIS)

    Das, S; Medles, K; Mihalcioiu, A; Beleca, R; Dragan, C; Dascalescu, L

    2008-01-01

    Tribocharging of pulverulent materials in compressed-air devices is a typical multi-factorial process. This paper aims at demonstrating the value of using the design of experiments methodology in association with virtual instrumentation for quantifying the effects of various process variables and of their interactions, as a prerequisite for the development of new tribocharging devices for industrial applications. The study is focused on the tribocharging of PVC powders in compressed-air devices similar to those employed in electrostatic painting. A classical 2³ full-factorial design (three factors at two levels) was employed for conducting the experiments. The response function was the charge/mass ratio of the material collected in a modified Faraday cage at the exit of the tribocharging device. The charge/mass ratio was found to increase with the injection pressure and the vortex pressure in the tribocharging device, and to decrease with increasing feed rate. In the present study an in-house design-of-experiments software package was employed for statistical analysis of the experimental data and validation of the experimental model.

  14. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Directory of Open Access Journals (Sweden)

    Yin Zhanping

    2005-01-01

    This paper presents a novel proxy architecture for the Wireless Application Protocol (WAP) employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delay (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic and thus significantly reduces the wireless access time. By applying content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem of WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme substantially reduces the wireless access time of WAP 2.0 in CDMA2000 1XRTT channels, and in low-speed IS-95 channels it reduces access time enough to give performance comparable to WAP 1.x. The performance enhancement is contributed mainly by the reply content compression, with ROHC offering further gains.

  15. Self-sensing piezoresistive cement composite loaded with carbon black particles

    KAUST Repository

    Monteiro, André O.

    2017-04-27

    Strain sensors can be embedded in civil engineering infrastructures to perform real-time service life monitoring. Here, the sensing capability of piezoresistive cement-based composites loaded with carbon black (CB) particles is investigated. Several composite mixtures, with a CB filler loading up to 10% of binder mass, were mechanically tested under cyclic uniaxial compression, registering variations in electrical resistance as a function of deformation. The results show a reversible piezoresistive behaviour and a quasi-linear relation between the fractional change in resistivity and the compressive strain, in particular for those compositions with higher amount of CB. Gage factors of 30 and 24 were found for compositions containing 7 and 10% of binder mass, respectively. These findings suggest that the CB-cement composites may be a promising active material to monitor compressive strain in civil infrastructures such as concrete bridges and roadways.
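
    For reference, the gage factor quoted above is simply the slope of the fractional resistance (or resistivity) change versus strain. The sketch below estimates it by a least-squares fit; the resistance-versus-strain record is synthetic, standing in for the cyclic compression measurements reported in the study.

    ```python
    import numpy as np

    def gauge_factor(strain, resistance):
        """Gage factor GF = d(dR/R0)/d(strain), estimated by a linear least-squares fit."""
        r0 = resistance[0]
        fractional_change = (resistance - r0) / r0
        gf, _intercept = np.polyfit(strain, fractional_change, 1)
        return gf

    if __name__ == "__main__":
        # Synthetic loading branch: 0 -> 1000 microstrain, underlying GF ~ 30 plus noise.
        strain = np.linspace(0, 1e-3, 50)
        rng = np.random.default_rng(2)
        resistance = 1200.0 * (1 + 30 * strain) + rng.normal(0, 0.2, strain.size)
        print(f"estimated gage factor: {gauge_factor(strain, resistance):.1f}")
    ```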

  16. Real-time multimodal sensing in nano/bio environment

    Science.gov (United States)

    Song, Bo

    As a sensing device at the nanoscale, scanning probe microscopy (SPM) is a powerful tool for exploring the nano world. Nevertheless, two fundamental problems hamper the development and application of SPM-based imaging and measurement: slow imaging/measurement speed and inaccuracy of motion or position control. SPM imaging and property-measurement speeds are usually too slow to capture dynamic changes on the sample surface. In addition, both SPM imaging and property measurement suffer from positioning inaccuracy caused by hysteresis and creep of the piezo scanner. This dissertation addresses these issues and proposes an SPM-based real-time multimodal sensing system for nano/bio environments. First, a compressive sensing based video-rate SPM imaging system is shown to be an efficient method for dynamically capturing changes on the sample surface, with an imaging speed of 1.5 frames/s at a scan size of 500 nm × 500 nm. Besides topography imaging, an additional SPM mode, vibration mode, is introduced; it was developed to investigate the subsurface mechanical properties of elastic samples such as cells and bacteria. A follow-up study of enzymatic hydrolysis demonstrates the ability to observe single-molecule events in situ using video-rate SPM. After that, another mode of this SPM sensing system is introduced: accurate electrical property measurement. In this mode, a compressive-feedback-based non-vector-space control approach is proposed to improve the accuracy of SPM-based nanomanipulation. Instead of sensor signals, local images are used as both the input and the feedback of a non-vector-space closed-loop controller. A further study shows the important role of non-vector-space control in the study of the conductivity distribution of multi-wall carbon nanotubes. At the end of this dissertation, some future work is also proposed.

  17. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body scans and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
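
    Since the dissertation uses the normalized mean-square error of the difference image as its global quality measure, a small sketch of that metric is given below. The normalization convention (by the energy of the original image) is an assumption on our part, as the abstract does not spell it out, and the images are synthetic.

    ```python
    import numpy as np

    def nmse(original, reconstructed):
        """Normalized mean-square error of the difference image.

        Normalization by the energy of the original image is assumed here;
        other conventions (e.g. by the image variance) appear in the literature.
        """
        original = original.astype(np.float64)
        reconstructed = reconstructed.astype(np.float64)
        diff = original - reconstructed
        return np.sum(diff ** 2) / np.sum(original ** 2)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        img = rng.integers(0, 4096, size=(512, 512))       # 12-bit CT-like image
        lossy = img + rng.normal(0, 20, img.shape)         # stand-in for reconstruction error
        print(f"NMSE = {nmse(img, lossy):.2e}")
    ```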

  18. Single-particle dispersion in compressible turbulence

    Science.gov (United States)

    Zhang, Qingqing; Xiao, Zuoli

    2018-04-01

    Single-particle dispersion statistics in compressible box turbulence are studied using direct numerical simulation. Focus is placed on a detailed discussion of the effects of the particle Stokes number and the turbulent Mach number, as well as the forcing type. When solenoidal forcing is adopted, the single-particle dispersion is found to undergo a transition from the ballistic regime at short times to the diffusive regime at long times, in agreement with Taylor's particle dispersion argument. The strongest dispersion of heavy particles is observed when the Stokes number is of order 1, similar to the scenario in incompressible turbulence. The dispersion tends to be suppressed as the Mach number increases. When hybrid solenoidal and compressive forcing at a ratio of 1/2 is employed, the flow field shows apparent anisotropy, characterized by the appearance of large shock wave structures. Accordingly, the single-particle dispersion shows markedly different behavior from the solenoidal forcing case.
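
    Taylor's argument referenced above predicts a mean-squared displacement growing as t² at times short compared with the velocity correlation time and as t at long times. The sketch below reproduces that transition for tracer-like particles whose velocity follows an Ornstein-Uhlenbeck process, which is a stand-in stochastic model rather than the paper's compressible DNS; all parameters are arbitrary.

    ```python
    import numpy as np

    def single_particle_msd(n_particles=2000, n_steps=4000, dt=0.01, tau=1.0, sigma_u=1.0):
        """MSD of particles whose velocity follows an Ornstein-Uhlenbeck process."""
        rng = np.random.default_rng(4)
        u = rng.normal(0, sigma_u, n_particles)       # initial velocities
        x = np.zeros(n_particles)
        msd = np.empty(n_steps)
        for n in range(n_steps):
            x += u * dt
            # OU update: correlation time tau, stationary standard deviation sigma_u
            u += -u * dt / tau + sigma_u * np.sqrt(2 * dt / tau) * rng.normal(size=n_particles)
            msd[n] = np.mean(x ** 2)
        return np.arange(1, n_steps + 1) * dt, msd

    if __name__ == "__main__":
        t, msd = single_particle_msd()
        short, long_ = t < 0.1, t > 10
        # Log-log slopes: ~2 in the ballistic regime, ~1 in the diffusive regime.
        print("short-time slope:", np.polyfit(np.log(t[short]), np.log(msd[short]), 1)[0])
        print("long-time slope: ", np.polyfit(np.log(t[long_]), np.log(msd[long_]), 1)[0])
    ```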

  19. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to relax the requirement for a high-resolution coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which operate directly on the compressively sampled images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image for foreground detection. Each moving target in the compressive sampling domain is sparsely represented in a compressive feature dictionary spanned by target templates and noise templates. An l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection using a random binary phase mask yields better detection results, whereas the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
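
    The l1 step mentioned above amounts to a sparse coding problem, min_c 0.5||y − Dc||² + λ||c||₁, over a dictionary of target and noise templates. The sketch below solves it with plain iterative soft-thresholding (ISTA) on synthetic data; ISTA is only one of several solvers that could play this role, and the dictionary sizes and λ are arbitrary.

    ```python
    import numpy as np

    def ista(D, y, lam=0.05, iters=500):
        """Iterative soft-thresholding for min_c 0.5*||y - D c||^2 + lam*||c||_1."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1/L with L = ||D||_2^2
        c = np.zeros(D.shape[1])
        for _ in range(iters):
            grad = D.T @ (D @ c - y)
            z = c - step * grad
            c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
        return c

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        # Dictionary: 20 compressive target templates + 40 noise templates (columns).
        D = rng.normal(size=(30, 60))
        D /= np.linalg.norm(D, axis=0)
        truth = np.zeros(60)
        truth[3], truth[7] = 1.0, -0.5                  # sparse ground-truth coefficients
        y = D @ truth + 0.01 * rng.normal(size=30)
        c = ista(D, y)
        print("largest coefficients at indices:", np.argsort(-np.abs(c))[:3])
    ```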

  20. Prediction of crack growth direction by Strain Energy Sih's Theory on specimens SEN under tension-compression biaxial loading employing Genetic Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-MartInez R; Lugo-Gonzalez E; Urriolagoitia-Calderon G; Urriolagoitia-Sosa G; Hernandez-Gomez L H; Romero-Angeles B; Torres-San Miguel Ch, E-mail: rrodriguezm@ipn.mx, E-mail: urrio332@hotmail.com, E-mail: guiurri@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: romerobeatriz98@hotmail.com, E-mail: napor@hotmail.com [INSTITUTO POLITECNICO NACIONAL Seccion de Estudios de Posgrado e Investigacion (SEPI), Escuela Superior de Ingenieria Mecanica y Electrica (ESIME), Edificio 5. 2do Piso, Unidad Profesional Adolfo Lopez Mateos ' Zacatenco' Col. Lindavista, C.P. 07738, Mexico, D.F. (Mexico)

    2011-07-19

    Crack growth direction has been studied in many ways. In particular, Sih's strain energy density theory predicts that a crack under a three-dimensional state of stress propagates in the direction of minimum strain energy density. In this work the fracture growth angle was studied for a biaxial stress state at the crack tip of SEN specimens. The stress state applied to a tension-compression SEN specimen is biaxial at the crack tip, as can be observed in figure 1. A solution method is proposed to obtain a mathematical model using genetic algorithms, which have demonstrated great capacity for solving many engineering problems. From the model given by Sih, the strain energy stored per unit volume at the crack tip can be written as dW = [(1/2E)(σx² + σy²) − (ν/E)(σxσy)] dV (1). From equation (1), a mathematical expression was derived and solved for θ, the crack propagation direction in the x-y plane, employing genetic algorithms. The mechanical properties of steel and aluminium were used for the modelled specimens, as these are two of the materials most widely used in engineering design. The results show stable zones of fracture propagation, but only within a range of applied loading.
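
    A rough sketch of the optimization step is given below: a small genetic algorithm searches for the angle θ that minimizes a strain energy density criterion. For concreteness the sketch uses the classical form of Sih's strain energy density factor S(θ) for mixed mode I/II under plane strain rather than the authors' simplified density of equation (1); the stress intensity factors, elastic constants and GA settings are invented example values, and the search is restricted to a forward cone of directions, as is usual when applying the criterion.

    ```python
    import numpy as np

    # Illustrative parameters (not taken from the paper): plane strain, steel-like.
    NU = 0.30                            # Poisson's ratio
    G = 80e9                             # shear modulus, Pa
    KAPPA = 3.0 - 4.0 * NU               # Kolosov constant for plane strain
    KI, KII = 1.0e6, 0.4e6               # assumed mode I / mode II stress intensity factors

    def sih_energy_density_factor(theta):
        """Classical Sih strain energy density factor S(theta) for mixed mode I/II."""
        c, s = np.cos(theta), np.sin(theta)
        a11 = (1 + c) * (KAPPA - c)
        a12 = s * (2 * c - (KAPPA - 1))
        a22 = (KAPPA + 1) * (1 - c) + (1 + c) * (3 * c - 1)
        return (a11 * KI**2 + 2 * a12 * KI * KII + a22 * KII**2) / (16 * np.pi * G)

    def ga_minimize(f, lo, hi, pop_size=40, generations=100, seed=6):
        """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(lo, hi, pop_size)
        for _ in range(generations):
            fit = f(pop)
            elite = pop[np.argmin(fit)]
            i, j = rng.integers(0, pop_size, (2, pop_size))
            parents = np.where(fit[i] < fit[j], pop[i], pop[j])    # tournament selection
            mates = rng.permutation(parents)
            alpha = rng.random(pop_size)
            children = alpha * parents + (1 - alpha) * mates        # blend crossover
            children += rng.normal(0.0, 0.05, pop_size)             # Gaussian mutation
            pop = np.clip(children, lo, hi)
            pop[0] = elite                                          # elitism
        return pop[np.argmin(f(pop))]

    if __name__ == "__main__":
        # Search restricted to a forward cone so the minimum is not driven toward the
        # crack faces (theta -> +/-180 deg), where the criterion is not applied.
        theta = ga_minimize(sih_energy_density_factor, -np.radians(70), np.radians(70))
        print(f"predicted crack growth direction: {np.degrees(theta):+.1f} deg")
    ```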