WorldWideScience

Sample records for template-based signaling compression

  1. Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases

    DEFF Research Database (Denmark)

    Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.

    2017-01-01

The assumption that the signal is sparse is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead, there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal, obtained using different sparsifying bases, in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple estimates of a 2-D signal (image) from compressive measurements, utilizing multiple sparsifying bases as well as the fact that images usually have a sparse gradient.
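
As a rough illustration of the multi-basis idea, the following numpy sketch reconstructs one estimate per sparsifying basis (orthonormal DCT and Haar) and simply averages them; the averaging, the ISTA solver, and all sizes are assumptions standing in for the authors' joint interior-point method.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II analysis matrix: rows are basis vectors.
    j, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.cos(np.pi * (i + 0.5) * j / n) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def haar_matrix(n):
    # Orthonormal Haar analysis matrix; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.vstack([np.kron(H, [1.0, 1.0]),
                       np.kron(np.eye(H.shape[0]), [1.0, -1.0])]) / np.sqrt(2.0)
    return H

def ista(y, A, Psi, lam=0.02, iters=500):
    # Solve min_c 0.5*||y - A Psi^T c||^2 + lam*||c||_1; return x = Psi^T c.
    M = A @ Psi.T
    L = np.linalg.norm(M, 2) ** 2               # Lipschitz constant of the gradient
    c = np.zeros(M.shape[1])
    for _ in range(iters):
        c = c + M.T @ (y - M @ c) / L           # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft threshold
    return Psi.T @ c

n, m = 64, 32
rng = np.random.default_rng(0)
coeffs = np.zeros(n)
coeffs[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
x = dct_matrix(n).T @ coeffs                    # test signal, sparse in the DCT basis
A = rng.normal(size=(m, n)) / np.sqrt(m)        # compressive measurement matrix
y = A @ x

# One estimate per sparsifying basis, then a crude average of the estimates.
x_hat = 0.5 * (ista(y, A, dct_matrix(n)) + ista(y, A, haar_matrix(n)))
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```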

  2. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

Full Text Available To address the low compression ratio and high communication energy consumption of wireless network microseismic monitoring, this paper proposes a subsection (segmented) compression algorithm based on the characteristics of microseismic signals and compressive sensing (CS) theory, applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment, it improves the accuracy of signal reconstruction, while exploiting compressive sensing to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction, when the signal sparsity is higher than 40 and the compression ratio is above 0.4, the mean square error is less than 0.01 and the network lifetime is prolonged by a factor of two.

  3. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that needs to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio while retaining the bearing defect information.

  4. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
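
The percent residual difference used as the distortion measure here has a standard definition; the helper below is a minimal sketch of that usual formula, not code from the paper.

```python
import numpy as np

def prd(x, x_hat):
    """Percent residual difference between a signal and its reconstruction."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

# Toy usage: a reconstruction with small additive error has a small PRD.
x = np.sin(np.linspace(0, 8 * np.pi, 1000))
print(prd(x, x + 0.01 * np.random.default_rng(0).normal(size=x.size)))
```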

  5. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI), which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
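
The core of FRI sampling is the annihilating filter: K Diracs are pinned down from 2K+1 Fourier coefficients. The toy numpy sketch below recovers Dirac locations only, for synthetic noiseless data; the paper's EEG modeling is considerably more involved.

```python
import numpy as np

K = 3
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, K))        # unknown Dirac locations on [0, 1)
a = rng.uniform(1, 2, K)                 # unknown amplitudes

m = np.arange(0, 2 * K + 1)              # 2K+1 Fourier coefficients suffice
X = (a[None, :] * np.exp(-2j * np.pi * m[:, None] * t[None, :])).sum(axis=1)

# Annihilating filter h: (h * X)[m] = 0; build the Toeplitz system, take its null space.
T = np.array([[X[i + K - j] for j in range(K + 1)] for i in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                        # 1-dim null space vector

u = np.roots(h)                          # roots encode exp(-2j*pi*t_k)
t_hat = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))
print(t, t_hat)                          # estimated locations match the true ones
```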

  6. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  7. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in parentheses): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
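
For reference, mean frequency and spectral width are moments of the Doppler power spectrum; the estimator below is an assumed textbook formulation (mean frequency maps to mean velocity via the Doppler equation), not the paper's exact code.

```python
import numpy as np

def doppler_spectral_params(segment, fs):
    """Mean frequency and RMS spectral width of one windowed Doppler segment."""
    f = np.fft.rfftfreq(len(segment), 1.0 / fs)
    p = np.abs(np.fft.rfft(segment * np.hanning(len(segment)))) ** 2
    p /= p.sum()                                     # normalize to unit total power
    mean_f = np.sum(f * p)                           # first moment: mean frequency
    width = np.sqrt(np.sum((f - mean_f) ** 2 * p))   # RMS width about the mean
    return mean_f, width
```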

  8. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    Science.gov (United States)

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based bio-signal lossless data compressor.

  9. Signal compression in radar using FPGA

    OpenAIRE

Escamilla Hernández, Enrique; Kravchenko, Víctor; Ponomaryov, Volodymyr; Duchen Sánchez, Gonzalo; Hernández Sánchez, David

    2010-01-01

We present the hardware implementation of radar real-time processing procedures using a simple, fast technique based on an FPGA (Field Programmable Gate Array) architecture. This processing includes different window procedures during pulse compression in synthetic aperture radar (SAR). The radar signal compression processing is realized using a matched filter and classical and novel window functions, focusing on the best solution for minimum sidelobe values. The proposed architecture expl...

  10. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. The MR images showed four patterns on T1-weighted images: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the body (8), and diffuse low signal throughout the vertebral body (1). The low-signal portions changed to high signal intensity on T2-weighted images. In 7 of 15 patients (11 compression fractures) there was a history of trauma; the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine reveal variable signal intensities on MR images. These patterns of benign compression fractures may be useful in the interpretation of MR images of the spine.

  11. Blind Compressed Sensing Parameter Estimation of Non-cooperative Frequency Hopping Signal

    Directory of Open Access Journals (Sweden)

    Chen Ying

    2016-10-01

Full Text Available To overcome the disadvantages of a non-cooperative frequency hopping communication system, such as a high sampling rate and inadequate prior information, parameter estimation based on Blind Compressed Sensing (BCS) is proposed. The signal is precisely reconstructed by the alternating iteration of sparse coding and basis updating, and the hopping frequencies are directly estimated from the results. Compared with conventional compressive sensing, blind compressed sensing does not require prior information about the frequency hopping signals; hence, it offers an effective solution to the inadequate-prior-information problem. In the proposed method, the signal is first modeled and then reconstructed by Orthonormal Block Diagonal Blind Compressed Sensing (OBD-BCS), and the hopping frequencies and hop period are finally estimated. The simulation results suggest that the proposed method can reconstruct and estimate the parameters of non-cooperative frequency hopping signals at a low signal-to-noise ratio.

  12. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2014-02-01

Full Text Available In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach integrating electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible.

  13. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
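
The gist of the template comparison can be sketched in a few lines: hash fixed-size blocks of the checkpoint and keep only the blocks whose checksums differ from the template's. In the actual rsync-style protocol only checksums cross the network; in this sketch both buffers are local and the block size is an arbitrary choice.

```python
import hashlib

def checkpoint_delta(state: bytes, template: bytes, block: int = 4096):
    """Return (block_index, data) pairs where state differs from the template."""
    delta = []
    for i in range(0, len(state), block):
        chunk = state[i:i + block]
        ref = template[i:i + block]
        if hashlib.sha1(chunk).digest() != hashlib.sha1(ref).digest():
            delta.append((i // block, chunk))   # only changed blocks are kept
    return delta
```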

  14. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    Science.gov (United States)

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, with present research on ECG-based biometric techniques, compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task. This becomes an obvious burden on a system if it needs to be done for a trillion compressed ECGs per hour by a hospital. Even though a hospital might be able to build an expensive infrastructure to tame the exuberant processing, for small intermediate nodes in a multihop network, identification preceded by decompression is daunting. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometric templates such as face, finger, and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  15. Efficient ECG Signal Compression Using Adaptive Heart Model

    National Research Council Canada - National Science Library

    Szilagyi, S

    2001-01-01

This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering, the waves in the signal are localized and the model's parameters are determined...

  16. Energy Analysis of Decoders for Rakeness-Based Compressed Sensing of ECG Signals.

    Science.gov (United States)

    Pareschi, Fabio; Mangia, Mauro; Bortolotti, Daniele; Bartolini, Andrea; Benini, Luca; Rovatti, Riccardo; Setti, Gianluca

    2017-12-01

In recent years, compressed sensing (CS) has proved to be effective in lowering the power consumption of sensing nodes in biomedical signal processing devices. This is due to the fact that CS is capable of reducing the amount of data to be transmitted to ensure correct reconstruction of the acquired waveforms. Rakeness-based CS has been introduced to further reduce the amount of transmitted data by exploiting the uneven distribution of the sensed signal energy. Yet, so far no thorough analysis exists of the impact of its adoption on CS decoder performance. The latter point is of great importance, since body-area sensor network architectures may include intermediate gateway nodes that receive and reconstruct signals to provide local services before relaying data to a remote server. In this paper, we fill this gap by showing that rakeness-based design also improves reconstruction performance. We quantify these findings in the case of ECG signals and when a variety of reconstruction algorithms are used either in a low-power microcontroller or a heterogeneous mobile computing platform.

  17. Compressed Sensing with Linear Correlation Between Signal and Measurement Noise

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Larsen, Torben

    2014-01-01

Existing convex relaxation-based approaches to reconstruction in compressed sensing assume that noise in the measurements is independent of the signal of interest. We consider the case of noise being linearly correlated with the signal and introduce a simple technique for improving compressed sensing reconstruction from such measurements. The technique is based on a linear model of the correlation of additive noise with the signal. The modification of the reconstruction algorithm based on this model is very simple and has negligible additional computational cost compared to standard reconstruction algorithms, but is not known in existing literature. The proposed technique reduces reconstruction error considerably in the case of linearly correlated measurements and noise. Numerical experiments confirm the efficacy of the technique. The technique is demonstrated with application to low...

  18. Searching for gravitational-wave signals emitted by eccentric compact binaries using a non-eccentric template bank: implications for ground-based detectors

    Energy Technology Data Exchange (ETDEWEB)

    Cokelaer, T; Pathak, D, E-mail: Thomas.Cokelaer@astro.cf.ac.u, E-mail: Devanka.Pathak@astro.cf.ac.u [School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom)

    2009-02-21

Most of the inspiralling compact binaries are expected to be circularized by the time their gravitational-wave signals enter the frequency band of ground-based detectors such as LIGO or VIRGO. However, it is not excluded that some of these binaries might still possess a significant eccentricity at a few tens of hertz. Despite this possibility, current search pipelines, based on matched filtering techniques, consider only non-eccentric templates. The effect of such an approximation on the loss of signal-to-noise ratio (SNR) has been investigated by Martel and Poisson (1999 Phys. Rev. D 60 124008) in the context of the initial LIGO detector. They ascertained that non-eccentric templates will be successful at detecting eccentric signals. We revisit their work by incorporating current and future ground-based detectors and precisely quantify the exact loss of SNR. In order to be more faithful to an actual search, we maximized the SNR over a template bank whose minimal match is set to 95%. For the initial LIGO detector, we claim that the initial eccentricity does not need to be taken into account in our searches for any system with total mass M ∈ [2-45] M⊙ if e₀ ≲ 0.05, because the loss of SNR (about 5%) is consistent with the discreteness of the template bank. Similarly, this statement is also true for systems with M ∈ [6-35] M⊙ and e₀ ≲ 0.10. However, by neglecting the eccentricity in our searches, a significant loss of detection (larger than 10%) may arise as soon as e₀ ≥ 0.05 for neutron-star binaries. We also provide exhaustive results for the VIRGO, Advanced LIGO and Einstein Telescope detectors. It is worth noting that for the Einstein Telescope, neutron star binaries with e₀ ≥ 0.02 lead to a 10% loss of detection.

  19. Compressive Sensing: Analysis of Signals in Radio Astronomy

    Directory of Open Access Journals (Sweden)

    Gaigals G.

    2013-12-01

Full Text Available The compressive sensing (CS) theory says that for some kinds of signals there is no need to keep or transfer all the data acquired according to the Nyquist criterion. In this work we investigate whether the CS approach is applicable to the recording and analysis of radio astronomy (RA) signals. Since CS methods are applicable to signals with sparse (and compressible) representations, the compressibility of RA signals is verified. As a result, we identify which RA signals can be processed using CS, find the parameters which can improve or degrade the application of CS to RA, and describe the optimum way to perform signal filtering in CS applications. Also, a range of LabVIEW virtual instruments was created for signal analysis with the CS theory.
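
Verifying compressibility amounts to checking how quickly the sorted transform coefficients decay; the minimal check below uses an assumed DCT domain (any sparsifying transform could be substituted).

```python
import numpy as np
from scipy.fft import dct

def energy_captured(x, fractions=(0.01, 0.05, 0.10)):
    """Fraction of signal energy held by the largest DCT coefficients."""
    c = np.sort(np.abs(dct(np.asarray(x, float), norm="ortho")))[::-1]
    cum = np.cumsum(c ** 2) / np.sum(c ** 2)
    # e.g. {0.01: 0.97, ...} means 1% of coefficients carry 97% of the energy
    return {f: cum[max(int(f * len(c)) - 1, 0)] for f in fractions}
```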

  20. Compressing Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT)-based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes in the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate ℓ0-norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.

  1. [A wavelet neural network algorithm of EEG signals data compression and spikes recognition].

    Science.gov (United States)

    Zhang, Y; Liu, A; Yu, K

    1999-06-01

A novel method of compressed EEG signal representation and epileptiform spike recognition based on a wavelet neural network, together with its algorithm, is presented. The wavelet network not only compresses data effectively but can also recover the original signal. In addition, the characteristics of spikes and spike-slow-wave rhythms are automatically detected from the time-frequency isolines of the EEG signal. This method is well worth using in the field of electrophysiological signal processing and time-frequency analysis.

  2. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Liantao Wu

    2015-08-01

Full Text Available Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links.
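
The "packet loss as random sampling" idea can be demonstrated end to end in a few lines: drop samples at random, then recover the DCT-sparse signal from the surviving rows with orthogonal matching pursuit. All sizes, the DCT basis, and the OMP solver are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.fft import idct

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of y ≈ A c."""
    r, S, cS = y.copy(), [], None
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))       # most correlated atom
        cS, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ cS                            # update the residual
    c = np.zeros(A.shape[1])
    c[S] = cS
    return c

n, k = 256, 8
rng = np.random.default_rng(1)
Psi = idct(np.eye(n), axis=0, norm="ortho")             # DCT synthesis basis
c0 = np.zeros(n)
c0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x = Psi @ c0                                            # transmitted signal

received = np.sort(rng.choice(n, 160, replace=False))   # samples surviving loss
c_hat = omp(Psi[received, :], x[received], k)
print(np.linalg.norm(Psi @ c_hat - x))                  # near-zero error
```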

  3. Magnetic Particle-Based Immunoassay of Phosphorylated p53 Using Protein-Cage Templated Lead Phosphate and Carbon Nanospheres for Signal Amplification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Aiqiong; Bao, Yuanwu; Ge, Xiaoxiao; Shin, Yongsoon; Du, Dan; Lin, Yuehe

    2012-11-20

Phosphorylated p53 at serine 15 (phospho-p53-15) is a potential biomarker of gamma-radiation exposure. In this paper, we describe a new magnetic particle (MP)-based electrochemical immunoassay of human phospho-p53-15 using carbon nanospheres (CNS) and protein-cage templated lead phosphate nanoparticles for signal amplification. Greatly enhanced sensitivity was achieved in three ways: 1) the protein-cage nanoparticle (PCN) and the p53-15 signal antibody (p53-15 Ab2) are linked to the CNS; 2) lead phosphate is templated within the cavity of each apoferritin; 3) the MPs capture a large amount of primary antibodies. Using apoferritin-templated metallic phosphate instead of an enzyme as the label has the advantage of eliminating the addition of mediators or immunoreagents and thus makes the immunoassay system simpler. The released lead ions were detected by subsequent stripping voltammetric analysis on a disposable screen-printed electrode. The response current was proportional to the phospho-p53-15 concentration in the range of 0.02 to 20 ng mL−1, with a detection limit of 0.01 ng mL−1. This method shows good stability, reproducibility and recovery.

  4. The mathematical theory of signal processing and compression-designs

    Science.gov (United States)

    Feria, Erlan H.

    2006-05-01

The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory-space compression, while processor coding deals with signal processor computational-time compression. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  5. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of specifying the sparse basis in compressed sensing; it suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing model has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.

  6. Wireless Sensor Networks Data Processing Summary Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Caiyun Huang

    2014-07-01

Full Text Available As a newly proposed theory, compressive sensing (CS) is commonly used in the signal processing area. This paper surveys the applications of CS in wireless sensor networks (WSNs). First, the development and research status of compressive sensing technology and wireless sensor networks are described; then a detailed investigation of CS-based WSN research is conducted from the aspects of data fusion, signal acquisition, signal routing transmission, and signal reconstruction. At the end of the paper, we conclude our survey and point out possible future research directions.

  7. Efficiency of template banks for binary black-hole detection

    International Nuclear Information System (INIS)

    Cokelaer, Thomas; Babak, Stas; Sathyaprakash, B S

    2004-01-01

In the framework of matched filtering theory, which is the most promising method for the detection of gravitational waves emitted by coalescing binaries, we report on the ability of a template bank to catch a simulated binary black-hole gravitational wave signal. If we suppose that the incoming signal waveform is known a priori, then both the (simulated) signal and the templates can be based on the same physical model, and therefore the template bank can be optimal in the sense of Wiener filtering. This turns out to be true for the case of neutron star binaries, but not necessarily for the black-hole case. When the templates and the signal are based on different physical models, the detection bank may still remain efficient. Nonetheless, it might be a judicious choice to use a phenomenological template family, such as the so-called BCV templates, to catch all the different physical models. In the first part of this report, we illustrate in a non-exhaustive study, using Monte Carlo simulations, the efficiency of a template bank based on the stationary phase approximation and show how it catches simulated signals based on the same physical model but fails to catch signals built using other models (Padé, EOB, ...), especially in the case of high-mass binaries. In the second part, we construct a BCV template bank and test its validity by injecting simulated signals based on different physical models such as the PN approximants, Padé approximant and the effective one-body method. We show that it is suitable for a search pipeline since it gives a match higher than 95% for all the different physical models. The range of individual masses used is [3-20] M⊙.

  8. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne platform from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable for this framework, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; therefore, the original signal can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth, in X-band with 2 m resolution, was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and, from them, the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to provide efficient compression and recovery quality appropriate for InSAR applications.

  9. Compressive Detection Using Sub-Nyquist Radars for Sparse Signals

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2016-01-01

Full Text Available This paper investigates the compressive detection problem using sub-Nyquist radars, which is well suited to the scenario of high bandwidths in real-time processing because it significantly reduces the computational burden and saves power consumption and computation time. A compressive generalized likelihood ratio test (GLRT) detector for sparse signals is proposed for sub-Nyquist radars without ever reconstructing the signal involved. The performance of the compressive GLRT detector is analyzed and theoretical bounds are presented. The compressive GLRT detection performance of sub-Nyquist radars is also compared to the traditional GLRT detection performance of conventional radars, which employ traditional analog-to-digital conversion (ADC) at Nyquist sampling rates. Simulation results demonstrate that the former can perform almost as well as the latter with a very small fraction of the number of measurements required by traditional detection in relatively high signal-to-noise ratio (SNR) cases.

  10. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary serving as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that the sparse linear prediction dictionary can improve the performance of speech signal reconstruction compared with the discrete cosine transform (DCT) matrix.

  11. Compressed sensing of ECG signal for wireless system with new fast iterative method.

    Science.gov (United States)

    Tawfic, Israa; Kayhan, Sema

    2015-12-01

Recent experiments in wireless body area networks (WBAN) show that compressive sensing (CS) is a promising tool to compress the electrocardiogram (ECG) signal. The performance of CS depends on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and presence of noise: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring sparsity knowledge. We derive improved restricted isometry property (RIP)-based conditions over the best known results. The basic procedures are carried out by observation and analysis of different ECG signals downloaded from PhysioBank ATM. Experimental results show that significant performance in terms of reconstruction quality and compression rate can be obtained by these two new proposed algorithms, helping the specialist gather the necessary information from the patient in less time when Magnetic Resonance Imaging (MRI) applications are used, or reconstruct the patient data after sending it through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Seismic Signal Compression Using Nonparametric Bayesian Dictionary Learning via Clustering

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2017-06-01

Full Text Available We introduce a seismic signal compression method based on a nonparametric Bayesian dictionary learning method via clustering. The seismic data is compressed patch by patch, and the dictionary is learned online. Clustering is introduced for dictionary learning: a set of dictionaries is generated, and each dictionary is used for one cluster's sparse coding. In this way, the signals in one cluster can be well represented by their corresponding dictionary. A nonparametric Bayesian dictionary learning method is used to learn the dictionaries, which naturally infers an appropriate dictionary size for each cluster. A uniform quantizer and an adaptive arithmetic coding algorithm are adopted to code the sparse coefficients. With comparisons to other state-of-the-art approaches, the effectiveness of the proposed method is validated in the experiments.

  13. Compression and channel-coding algorithms for high-definition television signals

    Science.gov (United States)

    Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.

    1990-09-01

In this paper, results of investigations into the effects of channel errors on the transmission of images compressed by means of techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more severely for VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission. Hamming codes (7,4), (15,11) and (31,26) have been used for DCT-compressed images, and more powerful codes, such as the Golay (23,12) code, for VQ-compressed images. The performance attainable with soft-decoding techniques is also evaluated; better quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out simulating the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
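
As a concrete instance of the protection scheme, here is a generic textbook Hamming(7,4) encoder/decoder that corrects any single-bit error per codeword; it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# Systematic Hamming(7,4): 4 data bits followed by 3 parity bits.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # generator matrix [I | P]
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])          # parity-check matrix [P^T | I]

def encode(d):
    return (d @ G) % 2                          # 4 data bits -> 7-bit codeword

def decode(r):
    s = (H @ r) % 2                             # syndrome of the received word
    if s.any():                                 # nonzero syndrome matches one column of H
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        r = r.copy()
        r[err] ^= 1                             # flip the erroneous bit
    return r[:4]                                # systematic code: data bits first

d = np.array([1, 0, 1, 1])
c = encode(d)
c[2] ^= 1                                       # inject a single-bit channel error
assert (decode(c) == d).all()
```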

  14. Proximity hybridization-regulated catalytic DNA hairpin assembly for electrochemical immunoassay based on in situ DNA template-synthesized Pd nanoparticles

    International Nuclear Information System (INIS)

    Zhou, Fuyi; Yao, Yao; Luo, Jianjun; Zhang, Xing; Zhang, Yu; Yin, Dengyang; Gao, Fenglei; Wang, Po

    2017-01-01

A novel hybridization proximity-regulated catalytic DNA hairpin assembly strategy has been proposed for electrochemical immunoassay based on in situ DNA template-synthesized Pd nanoparticles as the signal label. The DNA template-synthesized Pd nanoparticles were characterized with atomic force microscopy and X-ray photoelectron spectroscopy. The highly efficient electrocatalysis of NaBH₄ oxidation by the DNA template-synthesized Pd nanoparticles produced an intense detection signal. The label-free electrochemical method achieved the detection of carcinoembryonic antigen (CEA) with a linear range from 10⁻¹⁵ to 10⁻¹¹ g mL⁻¹ and a detection limit of 0.43 × 10⁻¹⁵ g mL⁻¹. By introducing a supersandwich reaction to increase the DNA length, the electrochemical signal was further amplified, leading to a detection limit of 0.52 × 10⁻¹⁶ g mL⁻¹. The method rendered satisfactory analytical performance for the determination of CEA in serum samples. Furthermore, it exhibited good reproducibility and stability; meanwhile, it also showed excellent specificity due to the specific recognition of the antigen by the antibody. Therefore, the DNA template-synthesized Pd nanoparticle based signal amplification approach has great potential in clinical applications and is also suitable for quantification of biomarkers at ultralow levels. - Graphical abstract: A novel label-free and enzyme-free electrochemical immunoassay based on proximity hybridization-regulated catalytic DNA hairpin assemblies for recycling of the CEA. - Highlights: • A novel enzyme-free electrochemical immunosensor was developed for detection of CEA. • The signal amplification was based on catalytic DNA hairpin assembly and DNA-template-synthesized Pd nanoparticles. • The biosensor could detect CEA down to the 0.52 × 10⁻¹⁶ g mL⁻¹ level with a dynamic range spanning 5 orders of magnitude.

  15. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    Science.gov (United States)

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The "real-time EEG analysis for event detection" automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.

  16. iTemplate: A template-based eye movement data analysis approach.

    Science.gov (United States)

    Xiao, Naiqi G; Lee, Kang

    2018-02-08

    Current eye movement data analysis methods rely on defining areas of interest (AOIs). Due to the fact that AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
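
iTemplate's registration is described only as a linear transformation; the least-squares affine fit below, over hypothetical landmark coordinates, illustrates one plausible instance of the idea (all coordinates and landmark choices are invented for the example).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src landmarks onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])    # rows are [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)     # 3x2 transform matrix
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical landmarks on one face stimulus vs. the template (eyes, nose, mouth).
stim = [(102, 88), (198, 90), (150, 170), (150, 230)]
tmpl = [(100, 80), (200, 80), (150, 160), (150, 220)]
M = fit_affine(stim, tmpl)
fixations = apply_affine(M, [(120, 95), (160, 180)])  # register raw fixations
```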

  17. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.

  18. Toward Wireless Health Monitoring via an Analog Signal Compression-Based Biosensing Platform.

    Science.gov (United States)

    Zhao, Xueyuan; Sadhu, Vidyasagar; Le, Tuan; Pompili, Dario; Javanmard, Mehdi

    2018-06-01

Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this paper. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by analog joint source-channel coding (AJSCC). Two circuit designs are discussed, including the stacked voltage-controlled voltage source (VCVS) design with a fixed number of levels, and an improved design which supports a flexible number of AJSCC levels. Experimental results are presented on the wireless biosensor prototype, composed of printed-circuit-board realizations of the stacked-VCVS design. Furthermore, circuit simulation and wireless link simulation results are presented for the improved design. Results indicate that the proposed wireless biosensor is well suited for sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power and low-cost wireless continuous health monitoring applications.
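
The essence of AJSCC is a space-filling (Shannon-style) mapping of two values onto one. The discrete-time numpy sketch below assumes unit-range inputs and L = 8 levels; the papers' circuits realize this mapping in analog hardware, so everything here is a software caricature of the idea.

```python
import numpy as np

L = 8  # number of AJSCC levels (assumed)

def ajscc_encode(x, y):
    """Map two unit-range arrays to one value per sample via a serpentine mapping."""
    lev = np.minimum((y * L).astype(int), L - 1)    # quantize y into L levels
    xx = np.where(lev % 2 == 0, x, 1.0 - x)         # alternate sweep direction per level
    return lev + xx                                  # single analog value in [0, L)

def ajscc_decode(z):
    lev = np.minimum(z.astype(int), L - 1)
    frac = z - lev
    x = np.where(lev % 2 == 0, frac, 1.0 - frac)    # x recovered almost exactly
    y = (lev + 0.5) / L                             # y recovered at level resolution
    return x, y

rng = np.random.default_rng(3)
x, y = rng.uniform(size=1000), rng.uniform(size=1000)
x_hat, y_hat = ajscc_decode(ajscc_encode(x, y))     # one channel carried both signals
```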

  19. Bayesian signal reconstruction for 1-bit compressed sensing

    International Nuclear Information System (INIS)

    Xu, Yingying; Kabashima, Yoshiyuki; Zdeborová, Lenka

    2014-01-01

The 1-bit compressed sensing framework enables the recovery of a sparse vector x from the sign information of each entry of its linear transformation. Discarding the amplitude information can significantly reduce the amount of data, which is highly beneficial in practical applications. In this paper, we present a Bayesian approach to signal reconstruction for 1-bit compressed sensing and analyze its typical performance using statistical mechanics. As a basic setup, we consider the case where the measuring matrix Φ has i.i.d. entries and the measurements y are noiseless. Utilizing the replica method, we show that the Bayesian approach enables better reconstruction than the ℓ1-norm minimization approach, asymptotically saturating the performance obtained when the non-zero entry positions of the signal are known, for signals whose non-zero entries follow zero-mean Gaussian distributions. We also test a message passing algorithm for signal reconstruction on the basis of belief propagation. The results of numerical experiments are consistent with those of the theoretical analysis.

  20. Compressive sensing scalp EEG signals: implementations and practical performance.

    Science.gov (United States)

    Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther

    2012-11-01

Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low-power, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI, and is potentially suitable for use in wearable computing systems, as the compression process required in the wearable device has low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed, so the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.

  2. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    International Nuclear Information System (INIS)

    Chouakri, S A; Djaafri, O; Taleb-Ahmed, A

    2013-01-01

We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via a telecommunication channel. Basically, the proposed ECG compression algorithm is built on the use of the wavelet transform, leading to low/high frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and then a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. This latter signal is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, with the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as the normal ECG signal. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
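
The Huffman stage at the end of the chain can be sketched generically; this is the standard heap-based construction (in the paper's pipeline, the symbols would be the quantized wavelet-domain residuals, which is an assumption here).

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a symbol sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:                          # degenerate single-symbol alphabet
        return {next(iter(freq)): "0"}
    # Heap entries: [weight, tiebreak, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]             # prefix 0 on the lighter subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]             # prefix 1 on the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0], count] + lo[2:] + hi[2:])
        count += 1
    return {s: b for s, b in heap[0][2:]}

code = huffman_code("abracadabra")
bits = "".join(code[s] for s in "abracadabra")  # frequent symbols get short codes
```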

  3. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
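
    The "lossy plus residual" principle behind the error guarantee is easy to demonstrate: a decomposition-based layer approximates the record, and a uniform quantizer on the residual caps the absolute reconstruction error. A minimal sketch, with random data standing in for MC-EEG, truncated SVD standing in for the paper's decomposition models, and the arithmetic coder omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((32, 1024))   # channels x samples, stand-in for MC-EEG
    eps = 0.05                            # user-specified maximum absolute error

    # Lossy layer: a rank-k truncated SVD exploits spatial/temporal correlation.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 8
    X_lossy = (U[:, :k] * s[:k]) @ Vt[:k]

    # Residual layer: uniform quantization with step 2*eps bounds the error by eps.
    # The integer indices q are what an arithmetic coder would then compress.
    q = np.round((X - X_lossy) / (2 * eps)).astype(int)
    X_hat = X_lossy + q * (2 * eps)

    print("max abs error:", np.abs(X - X_hat).max())   # guaranteed <= eps
    ```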

  4. Biometric template revocation

    Science.gov (United States)

    Arndt, Craig M.

    2004-08-01

    Biometrics are a powerful technology for identifying humans both locally and at a distance. In order to perform identification or verification, biometric systems capture an image of some biometric of a user or subject. The image is then converted mathematically into a representation of the person called a template. Since every human in the world is different, each human will have different biometric images (different fingerprints, faces, etc.). This is what makes biometrics useful for identification. However, unlike a credit card number or a password, which can be given to a person and later revoked if it is compromised, a biometric stays with the person for life. The problem then is to develop biometric templates which can be easily revoked and reissued, and which are also unique to the user and can be easily used for identification and verification. In this paper we develop and present a method to generate a set of templates which are fully unique to the individual and also revocable. By using basis-set compression algorithms in an n-dimensional orthogonal space, we can represent a given biometric image in an infinite number of equally valid and unique ways. The verification and biometric matching system is presented with a given template and a revocation code. The code then indicates where in the sequence of n-dimensional vectors to start the recognition.

  5. Are post-Newtonian templates faithful and effectual in detecting gravitational signals from neutron star binaries?

    International Nuclear Information System (INIS)

    Berti, E.; Pons, J. A.; Miniutti, G.; Gualtieri, L.; Ferrari, V.

    2002-01-01

    We compute the overlap function between post-Newtonian (PN) templates and gravitational signals emitted by binary systems composed of one neutron star and one point mass, obtained by a perturbative approach. The calculations are performed for different stellar models and for different detectors, to estimate how effectual and faithful the PN templates are, and to establish whether effects related to the internal structure of neutron stars may possibly be extracted by the matched filtering technique

  6. Compressive Sensing of Roller Bearing Faults via Harmonic Detection from Under-Sampled Vibration Signals.

    Science.gov (United States)

    Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei

    2015-10-09

    The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals, and this cumbersome data monitoring poses practical challenges; a novel method focused on compressed vibration signals for detecting roller bearing faults is therefore developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix, with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued, while the characteristic harmonics are detected from the sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which are typically detected directly from the compressed data well before reconstruction is complete. Sampling and detection may then be performed simultaneously, without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments.
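
    The central trick, detecting a characteristic harmonic directly from compressed data, amounts to correlating the measurements with compressed versions of candidate Fourier atoms. A minimal sketch with an assumed fault frequency and a Gaussian sensing matrix; this shows one matching-pursuit-style correlation step, not the authors' full compressive matching pursuit:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 2048, 256                      # Nyquist-rate length vs. compressed samples
    fs, f_fault = 10_000.0, 157.0         # assumed sampling rate and fault frequency
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_fault * t) + 0.3 * rng.standard_normal(n)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix
    y = Phi @ x                                      # compressed vibration signal

    # Correlate y with compressed candidate harmonics and pick the strongest:
    # random projections approximately preserve inner products, so the fault
    # frequency stands out without reconstructing the full signal.
    cands = np.arange(50.0, 500.0, 1.0)
    atoms = np.exp(2j * np.pi * np.outer(cands, t))   # candidate harmonics
    scores = np.abs((Phi @ atoms.T).conj().T @ y)
    print("detected fault frequency:", cands[scores.argmax()], "Hz")
    ```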

  7. Harmonic analysis in integrated energy system based on compressed sensing

    International Nuclear Information System (INIS)

    Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia

    2016-01-01

    Highlights: • We propose a harmonic/inter-harmonic analysis scheme based on compressed sensing theory. • The sparseness of harmonic signals in electrical power systems is proved. • The ratio formula for the sparsity of fundamental and harmonic components is presented. • A Spectral Projected Gradient with Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of Integrated Energy Systems has enabled various distributed energy sources to access the system through different power electronic devices, which has made the harmonic environment more complex. Harmonic detection and analysis methods of low complexity and high precision are needed to improve power quality. To overcome the large data storage capacities and high compression complexity of sampling under the Nyquist framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the Discrete Fourier Transform (DFT) basis is numerically calculated first, followed by a proof that the necessary conditions for compressed sensing are satisfied. Binary sparse measurement is then leveraged to reduce the storage space in the sampling unit. In the recovery process, a novel reconstruction algorithm called the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good

  8. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing is of great significance to wireless monitoring and remote diagnosis of the fans and pumps widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal at various compression ratios, and propose a method to precisely select a wavelet packet basis. From an actual signal, we conclude that an orthogonal wavelet packet basis with low vanishing moments should be used to compress the vibration signal of a rolling bearing, in order to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' wavelet packet basis obtains the best signal-to-noise ratio at the same compression ratio, owing to its superior symmetry.
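
    The effect of the basis choice can be explored directly with PyWavelets. A minimal sketch on a synthetic bearing-like signal; the bases compared, the decomposition level and the 10% coefficient-retention rule are illustrative assumptions:

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 4096)
    x = np.sin(2 * np.pi * 80 * t) * (1 + 0.5 * np.sign(np.sin(2 * np.pi * 7 * t)))
    x += 0.1 * rng.standard_normal(t.size)   # stand-in for a bearing vibration signal

    def wp_compress(x, wavelet, level=4, keep=0.1):
        """Keep only the largest `keep` fraction of wavelet packet coefficients."""
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode='periodization',
                                maxlevel=level)
        nodes = wp.get_level(level, order='natural')
        thr = np.quantile(np.abs(np.concatenate([n.data for n in nodes])), 1 - keep)
        for n in nodes:
            n.data = np.where(np.abs(n.data) >= thr, n.data, 0.0)
        return wp.reconstruct(update=False)[: x.size]

    for w in ('coif2', 'db2', 'sym2'):       # low-vanishing-moment orthogonal bases
        xr = wp_compress(x, w)
        snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - xr) ** 2))
        print(f"{w}: SNR = {snr:.1f} dB at 10:1 coefficient retention")
    ```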

  9. Quality-on-Demand Compression of EEG Signals for Telemedicine Applications Using Neural Network Predictors

    Directory of Open Access Journals (Sweden)

    N. Sriraam

    2011-01-01

    Full Text Available A telemedicine system using communication and information technology to deliver medical signals such as the ECG and EEG for long-distance medical services has become reality. In both urgent treatment and ordinary healthcare, it is necessary to compress these signals for efficient use of bandwidth. This paper discusses quality-on-demand compression of EEG signals using neural network predictors for telemedicine applications. The objective is to obtain greater compression gains at a low bit rate while preserving the clinical information content. A two-stage compression scheme with a predictor and an entropy encoder is used. The residue signals obtained after prediction are first thresholded using various levels of thresholds, then quantized and encoded using an arithmetic encoder. Three neural network models, single-layer and multi-layer perceptrons and the Elman network, are used, and the results are compared with linear predictors such as FIR filters and AR modeling. The fidelity of the reconstructed EEG signal is assessed quantitatively using parameters such as PRD, SNR, cross correlation and power spectral density. The results show that the quality of the reconstructed signal is preserved at a low PRD, thereby yielding better compression results than those obtained using a lossless scheme.
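
    The two-stage predictor-plus-entropy-coder scheme can be prototyped with any regressor standing in for the paper's perceptron and Elman networks. A minimal sketch using scikit-learn's MLPRegressor on a synthetic record; the predictor order, threshold and quantizer step are assumptions:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    eeg = 0.1 * np.cumsum(rng.standard_normal(5000))   # random-walk stand-in for EEG

    p = 4                                              # predictor order
    X = np.column_stack([eeg[i:len(eeg) - p + i] for i in range(p)])
    y = eeg[p:]

    # Stage 1: neural predictor (a small MLP in place of the paper's networks).
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    residual = y - net.fit(X, y).predict(X)

    # Stage 2: quality-on-demand threshold, quantization, entropy-rate estimate.
    residual[np.abs(residual) < 0.01] = 0.0
    q = np.round(residual / 0.02).astype(int)
    _, counts = np.unique(q, return_counts=True)
    prob = counts / counts.sum()
    print(f"approx. rate: {-(prob * np.log2(prob)).sum():.2f} bits/sample")
    ```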

  10. Biometric and Emotion Identification: An ECG Compression Based Method.

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, makes it possible to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.
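
    The compression-based matching in steps (2)-(3) can be approximated with an off-the-shelf compressor through the normalized compression distance (NCD). A minimal sketch, assuming zlib as the compressor and 8-level quantization; it illustrates the idea of classifying by conditional compressibility, not the authors' exact model:

    ```python
    import zlib
    import numpy as np

    def symbolize(sig, levels=8):
        """Step 1: quantize a real-valued record into a byte string."""
        lo, hi = sig.min(), sig.max()
        idx = ((sig - lo) / (hi - lo) * levels).astype(np.uint8)
        return np.clip(idx, 0, levels - 1).tobytes()

    def ncd(a, b):
        """Compression-based dissimilarity between two symbolic records."""
        ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
        return (len(zlib.compress(a + b)) - min(ca, cb)) / max(ca, cb)

    def classify(query, database):
        """Step 3: 1-NN over the compression distance."""
        qs = symbolize(query)
        return min(database, key=lambda label: ncd(qs, symbolize(database[label])))

    rng = np.random.default_rng(5)
    db = {'person_A': np.sin(np.linspace(0, 60, 3000)),          # toy "ECG" records
          'person_B': np.sign(np.sin(np.linspace(0, 60, 3000)))}
    probe = db['person_A'] + 0.05 * rng.standard_normal(3000)
    print("identified as:", classify(probe, db))
    ```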

  11. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali

    2013-09-22

    Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration based on L2-norm minimization of the misfit function tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied the compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  12. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2013-01-01

    Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration based on L2-norm minimization of the misfit function tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied the compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  13. Signal Compression in Automatic Ultrasonic testing of Rails

    Directory of Open Access Journals (Sweden)

    Tomasz Ciszewski

    2007-01-01

    Full Text Available Full recording of the most important information carried by ultrasonic signals makes statistical analysis of the measurement data possible. Statistical analysis of the results gathered during automatic ultrasonic tests, together with the features of the measuring method, differential lossy coding and traditional lossless data compression methods (Huffman coding, dictionary coding), leads to a comprehensive, efficient data compression algorithm. The subject of this article is to present the algorithm and the benefits gained by using it in comparison to alternative compression methods. Storage of large amounts of data allows the creation of an electronic catalogue of ultrasonic defects. Once such a catalogue is created, it will be possible to train future qualification systems on the new solutions of the automatic rail-testing equipment.

  14. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only is the storage space reduced, but the demand for detector resolution is also greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, to verify the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that the reconstruction algorithm has great advantages over traditional classical TV-based algorithms: at low measurement rates the target image can be recovered quickly and accurately.
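
    The TV-regularized recovery described here can be prototyped with a proximal-gradient loop, using an off-the-shelf TV denoiser as the proximal step. A minimal sketch assuming scikit-image's denoise_tv_chambolle and a Gaussian sensing matrix; the step size and TV weight are illustrative, and this plug-and-play loop is a stand-in for the paper's augmented-Lagrangian/alternating-direction solver:

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(6)
    n = 32
    img = np.zeros((n, n)); img[8:24, 8:24] = 1.0   # piecewise-constant test image
    x_true = img.ravel()

    m = 600                                         # measurements << n*n = 1024
    A = rng.standard_normal((m, n * n)) / np.sqrt(m)
    y = A @ x_true

    # Proximal gradient: a gradient step on ||Ax - y||^2, then TV denoising
    # acting as the proximal operator of the total-variation penalty.
    x = np.zeros(n * n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(100):
        x = x - step * (A.T @ (A @ x - y))
        x = denoise_tv_chambolle(x.reshape(n, n), weight=0.02).ravel()

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```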

  15. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

    Full Text Available In this paper a simple and fast image compression scheme is proposed. It is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is highly synthetic: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. As the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the output of the previous stage. The test results indicate that the proposed system can produce promising compression performance while preserving the image quality.

  16. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data including the biological signals (ECG, BP, respiration, S{sub p}O{sub 2}) of the patient. In order to transmit these data in real time over communication channels of limited transmission capacity, it is necessary to compress the biological data in addition to the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for the biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed a lossless compression and reconstruction program for the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment. (author). 15 refs., 17 figs., 7 tabs.
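
    The DPCM-plus-Huffman lossless chain can be shown end to end in a few lines. A minimal Python sketch (the original codec was written in MSVC++ 6.0 with the JPEG Huffman table; here a Huffman code is built from the data itself and the signal is synthetic):

    ```python
    import heapq
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(7)
    ecg = np.round(200 * np.sin(np.linspace(0, 50, 4000))
                   + rng.normal(0, 2, 4000)).astype(int)

    # DPCM: code differences between consecutive samples (losslessly invertible).
    diff = np.diff(ecg, prepend=ecg[0])

    def huffman_lengths(symbols):
        """Build a Huffman code; return the code length of each symbol."""
        heap = [[w, [s, 0]] for s, w in Counter(symbols).items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[1:] + hi[1:]:
                pair[1] += 1                       # one more bit after each merge
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heap[0][1:])

    lengths = huffman_lengths(diff.tolist())
    bits = sum(lengths[s] for s in diff.tolist())
    print(f"compression ratio vs. 16-bit raw samples: {16 * len(ecg) / bits:.2f}:1")
    ```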

  17. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at gas path components of gas turbines, sparsely distributed, charged debris is generated and released into the exhaust gas; this is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a larger piece of debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, accurately measuring debris charge with the electrostatic detection method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described by a sparse vector, which is reconstructed by constraining the l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.

  18. Detecting double compression of audio signal

    Science.gov (United States)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays in our daily life; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Audio recordings in digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this work is the first to detect double compression of audio signals.
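
    The record's core feature, the first-digit (Benford-style) distribution of quantized MDCT coefficients, shifts when a signal is quantized twice, which is what the SVM learns. A minimal sketch of the feature extraction and classifier on simulated coefficient sets; real MDCT extraction from MP3 bitstreams is omitted, and the Laplacian model and quantizer steps are assumptions:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def first_digit_hist(coeffs):
        """9-bin histogram of the leading digits of nonzero coefficients."""
        c = np.abs(coeffs[coeffs != 0])
        digits = (c / 10 ** np.floor(np.log10(c))).astype(int)
        return np.bincount(digits, minlength=10)[1:10] / len(digits)

    rng = np.random.default_rng(8)

    def fake_mdct(double):
        """Surrogate quantized MDCT data; requantize once more if `double`."""
        q = np.round(rng.laplace(scale=50, size=4000) / 4)
        return np.round(q * 4 / 7) if double else q

    X = np.array([first_digit_hist(fake_mdct(d)) for d in [0, 1] * 200])
    y = np.array([0, 1] * 200)
    clf = SVC().fit(X[:300], y[:300])
    print("held-out accuracy:", clf.score(X[300:], y[300:]))
    ```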

  19. Compressive sensing based wireless sensor for structural health monitoring

    Science.gov (United States)

    Bao, Yuequan; Zou, Zilong; Li, Hui

    2014-03-01

    Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling data loss. An alternative approach tolerates some data loss and seeks to recover the lost data algorithmically. Compressive sensing (CS) provides such a data-loss recovery technique. The technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to CS theory, the raw signal can be effectively reconstructed from the received incomplete transformed signal, provided that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as the efficacy of CS-based data loss recovery for real wireless SHM systems.
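
    The data-loss recovery idea can be demonstrated end to end: project the raw signal onto a random matrix, drop a fraction of the transmitted entries, and reconstruct under a DCT sparsity basis with a greedy solver. A minimal sketch using scikit-learn's orthogonal matching pursuit; the Imote2/random-demodulator implementation details are omitted and the 30% loss ratio is illustrative:

    ```python
    import numpy as np
    from scipy.fftpack import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(9)
    n = 512
    Psi = idct(np.eye(n), norm='ortho', axis=0)        # DCT sparsity basis
    s = np.zeros(n); s[[3, 17, 40]] = [2.0, 1.0, 0.5]  # sparse modal response
    x = Psi @ s                                        # raw acceleration signal

    Phi = rng.standard_normal((n, n))                  # projection at the sensor
    y = Phi @ x                                        # transmitted samples

    keep = rng.random(n) > 0.3                         # 30% of packets lost
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
    omp.fit(Phi[keep] @ Psi, y[keep])                  # recover sparse coefficients
    x_hat = Psi @ omp.coef_

    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```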

  20. Biometric and Emotion Identification: An ECG Compression Based Method

    Directory of Open Access Journals (Sweden)

    Susana Brás

    2018-04-01

    Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, makes it possible to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.

  1. Biometric and Emotion Identification: An ECG Compression Based Method

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H. T.; Soares, Sandra C.; Pinho, Armando J.

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, makes it possible to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model. PMID:29670564

  2. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction

    Directory of Open Access Journals (Sweden)

    Michael M. Abdel-Sayed

    2016-11-01

    Full Text Available Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than the optimal ℓ1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
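
    The abstract's two ingredients, a reduced-set selection of correlation values and a pruning step, suggest a CoSaMP-like greedy loop. The sketch below is our reading of that description, not the authors' reference implementation; the per-iteration selection size is an assumption:

    ```python
    import numpy as np

    def rmp_like(A, y, k, n_iter=10):
        """Greedy recovery: per iteration, add a reduced set of the strongest
        correlations, least-squares fit, then prune the estimate back to k terms."""
        support, r = set(), y.copy()
        for _ in range(n_iter):
            corr = np.abs(A.T @ r)
            support |= set(np.argsort(corr)[-max(k // 2, 1):])   # reduced selection
            idx = sorted(support)
            coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            keep = np.argsort(np.abs(coef))[-k:]                 # prune to k terms
            support = {idx[i] for i in keep}
            idx = sorted(support)
            coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            x = np.zeros(A.shape[1]); x[idx] = coef
            r = y - A @ x
        return x

    rng = np.random.default_rng(10)
    A = rng.standard_normal((128, 512)) / np.sqrt(128)
    x_true = np.zeros(512)
    x_true[rng.choice(512, 8, replace=False)] = rng.standard_normal(8)
    x_hat = rmp_like(A, A @ x_true, k=8)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```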

  3. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    Science.gov (United States)

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted [Formula: see text] minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than the optimal [Formula: see text] minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to [Formula: see text] minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.

  4. Cancelable ECG biometrics using GLRT and performance improvement using guided filter with irreversible guide signal.

    Science.gov (United States)

    Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun

    2017-07-01

    Biometrics such as the ECG provide a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable; in other words, a biometric cannot practically be re-used once it is compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose cancelable ECG biometrics by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis test in a randomly projected domain. Since performance degradation is commonly observed for cancelable biometrics, we also propose guided filtering (GF) with an irreversible guide signal, a non-invertibly transformed version of the ECG authentication template. We evaluated our proposed method using the ECG-ID database with 89 subjects. A conventional Euclidean detector with the original ECG template yielded 93.9% PD1 (detection probability at 1% FAR), while the Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than the Euclidean detector on the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than the Euclidean detector with the original ECG. Lastly, we showed that our proposed cancelable ECG biometrics practically meet cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.
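
    The cancelable property rests on matching in a randomly projected domain: if a stored template leaks, the projection is revoked and a new one issued. A minimal sketch of enrollment and verification with a seeded random projection; the GLRT detector and guided filtering are not reproduced, and a simple correlation score stands in:

    ```python
    import numpy as np

    def new_projection(n, m, seed):
        """Revocable transform: re-issue by changing the seed."""
        return np.random.default_rng(seed).standard_normal((m, n)) / np.sqrt(m)

    rng = np.random.default_rng(14)
    n, m = 500, 50                                    # heartbeat length, 10% size
    template = np.sin(np.linspace(0, 3 * np.pi, n))   # enrolled ECG heartbeat
    probe = template + 0.1 * rng.standard_normal(n)   # genuine presentation

    P = new_projection(n, m, seed=42)                 # user's current transform
    a, b = P @ template, P @ probe
    score = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"match score: {score:.3f}")                # threshold decides accept/reject

    # If P @ template is compromised, re-enroll with a fresh seed; the old
    # stored template no longer matches under the new transform.
    P_new = new_projection(n, m, seed=43)
    ```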

  5. Signal-on electrochemical assay for label-free detection of TdT and BamHI activity based on grown DNA nanowire-templated copper nanoclusters.

    Science.gov (United States)

    Hu, Yufang; Zhang, Qingqing; Xu, Lihua; Wang, Jiao; Rao, Jiajia; Guo, Zhiyong; Wang, Sui

    2017-11-01

    Electrochemical methods allow fast and inexpensive analysis of enzymatic activity. Here, a simple and yet efficient "signal-on" electrochemical assay for sensitive, label-free detection of DNA-related enzyme activity was established on the basis of terminal deoxynucleotidyl transferase (TdT)-mediated extension strategy. TdT, which is a template-independent DNA polymerase, can catalyze the sequential addition of deoxythymidine triphosphate (dTTP) at the 3'-OH terminus of single-stranded DNA (ssDNA); then, the TdT-yield T-rich DNA nanowires can be employed as the synthetic template of copper nanoclusters (CuNCs). Grown DNA nanowires-templated CuNCs (noted as DNA-CuNCs) were attached onto graphene oxide (GO) surface and exhibited unique electrocatalytic activity to H 2 O 2 reduction. Under optimal conditions, the proposed biosensor was utilized for quantitatively monitoring TdT activity, with the observed LOD of 0.1 U/mL. It also displayed high selectivity to TdT with excellent stability, and offered a facile, convenient electrochemical method for TdT-relevant inhibitors screening. Moreover, the proposed sensor was successfully used for BamHI activity detection, in which a new 3'-OH terminal was exposed by the digestion of a phosphate group. Ultimately, it has good prospects in DNA-related enzyme-based biochemical studies, disease diagnosis, and drug discovery. Graphical Abstract Extraordinary TdT-generated DNA-CuNCs are synthesized and act as a novel electrochemical sensing platform for sensitive detection of TdT and BamHI activity in biological environments.

  6. Fast Template-based Shape Analysis using Diffeomorphic Iterative Centroid

    OpenAIRE

    Cury , Claire; Glaunès , Joan Alexis; Chupin , Marie; Colliot , Olivier

    2014-01-01

    International audience; A common approach for the analysis of anatomical variability relies on the estimation of a representative template of the population, followed by the study of this population based on the parameters of the deformations going from the template to the population. The Large Deformation Diffeomorphic Metric Mapping framework is widely used for shape analysis of anatomical structures, but computing a template with such framework is computationally expensive. In this paper w...

  7. MATLAB simulation software used for the PhD thesis "Acquisition of Multi-Band Signals via Compressed Sensing

    DEFF Research Database (Denmark)

    2014-01-01

    MATLAB simulation software used for the PhD thesis "Acquisition of Multi-Band Signals via Compressed Sensing".

  8. Tools for signal compression applications to speech and audio coding

    CERN Document Server

    Moreau, Nicolas

    2013-01-01

    This book presents tools and algorithms required to compress/uncompress signals such as speech and music. These algorithms are largely used in mobile phones, DVD players, HDTV sets, etc. In a first rather theoretical part, this book presents the standard tools used in compression systems: scalar and vector quantization, predictive quantization, transform quantization, entropy coding. In particular we show the consistency between these different tools. The second part explains how these tools are used in the latest speech and audio coders. The third part gives Matlab programs simulating t
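
    Of the standard tools the book lists, the uniform scalar quantizer paired with an entropy-rate estimate is the simplest to demonstrate. A minimal sketch in Python (the book's own example programs are in MATLAB), with a Laplacian source standing in for speech:

    ```python
    import numpy as np

    def scalar_quantize(x, step):
        """Uniform scalar quantizer: integer indices plus reconstruction."""
        idx = np.round(x / step).astype(int)
        return idx, idx * step

    rng = np.random.default_rng(11)
    speech = rng.laplace(scale=0.2, size=50_000)   # Laplacian stand-in for speech

    for step in (0.01, 0.05, 0.2):
        idx, xq = scalar_quantize(speech, step)
        _, counts = np.unique(idx, return_counts=True)
        p = counts / counts.sum()
        rate = -(p * np.log2(p)).sum()             # entropy-coding rate estimate
        snr = 10 * np.log10(np.mean(speech ** 2) / np.mean((speech - xq) ** 2))
        print(f"step={step}: {rate:.2f} bits/sample, SNR={snr:.1f} dB")
    ```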

  9. Template based rodent brain extraction and atlas mapping.

    Science.gov (United States)

    Weimin Huang; Jiaqi Zhang; Zhiping Lin; Su Huang; Yuping Duan; Zhongkang Lu

    2016-08-01

    Accurate rodent brain extraction is a basic step for many translational studies using MR imaging. This paper presents a template-based approach with multi-expert refinement for automatic rodent brain extraction. We first build a brain appearance model from learning exemplars. Together with template matching, we encode the rodent brain position into the search space to reliably locate the rodent brain and estimate a rough segmentation. With the initial mask, level-set segmentation and mask-based template learning are further applied to the brain region, and multi-expert fusion is used to generate a new mask. We finally combine region growing, based on learning the histogram distribution, to delineate the final brain mask. A high-resolution rodent atlas is used to illustrate that the segmented low-resolution anatomic image can be well mapped to the atlas. Tested on a public data set, all brains were located reliably, and we achieve a mean Jaccard similarity score of 94.99% for brain segmentation, a statistically significant improvement over two other rodent brain extraction methods.
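
    The template-matching step that locates the brain can be illustrated with plain normalized cross-correlation. A minimal 2-D sketch on a toy image; the appearance model, multi-expert fusion and level-set refinement of the paper are not reproduced:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def ncc_map(image, tmpl):
        """Normalized cross-correlation map for template matching."""
        t = tmpl - tmpl.mean()
        ones = np.ones_like(tmpl)
        num = fftconvolve(image, t[::-1, ::-1], mode='valid')
        win_sum = fftconvolve(image, ones, mode='valid')
        win_sq = fftconvolve(image ** 2, ones, mode='valid')
        win_var = win_sq - win_sum ** 2 / t.size
        return num / np.sqrt(np.maximum(win_var, 1e-12) * (t ** 2).sum())

    rng = np.random.default_rng(15)
    brain = np.zeros((64, 64)); brain[20:36, 24:44] = 1.0    # toy "brain" blob
    image = brain + 0.2 * rng.standard_normal((64, 64))      # noisy scan
    tmpl = brain[20:36, 24:44]                               # learned template

    score = ncc_map(image, tmpl)
    print("template located at:", np.unravel_index(score.argmax(), score.shape))
    ```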

  10. Gravitational waves from inspiralling compact binaries: Hexagonal template placement and its efficiency in detecting physical signals

    International Nuclear Information System (INIS)

    Cokelaer, T.

    2007-01-01

    Matched filtering is used to search for gravitational waves emitted by inspiralling compact binaries in data from the ground-based interferometers. One of the key aspects of the detection process is the design of a template bank that covers the astrophysically pertinent parameter space. In an earlier paper, we described a template bank that is based on a square lattice. Although robust, we showed that the square placement is overefficient, with the implication that it is computationally more demanding than required. In this paper, we present a template bank based on a hexagonal lattice, whose size is reduced by 40% with respect to the proposed square placement. We describe the practical aspects of the hexagonal template bank implementation, its size, and its computational cost. We have also performed exhaustive simulations to characterize its efficiency and safeness. We show that the bank is adequate for searching for a wide variety of binary systems (primordial black holes, neutron stars, and stellar-mass black holes) in data from both current detectors (initial LIGO, Virgo and GEO600) and future detectors (advanced LIGO and EGO). Remarkably, although our template bank placement uses a metric arising from a particular template family, namely the stationary phase approximation, we show that it can be used successfully with other template families (e.g., Pade resummation and the effective one-body approximation). This quality of being effective for different template families makes the proposed bank suitable for a search that would use several of them in parallel (e.g., in a binary black hole search). The hexagonal template bank described in this paper is currently used to search for nonspinning inspiralling compact binaries in data from the Laser Interferometer Gravitational-Wave Observatory (LIGO)
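
    The geometric intuition is that, for the same covering radius, a hexagonal lattice needs noticeably fewer points than a square lattice (the paper's 40% figure also reflects its minimal-match conventions). A minimal sketch that generates both placements over an abstract rectangular patch of parameter space:

    ```python
    import numpy as np

    def square_bank(w, h, r):
        """Square lattice with covering radius r: point spacing r*sqrt(2)."""
        d = r * np.sqrt(2)
        return [(x, y) for x in np.arange(0, w, d) for y in np.arange(0, h, d)]

    def hex_bank(w, h, r):
        """Hexagonal lattice with covering radius r: spacing r*sqrt(3),
        alternate rows offset by half a spacing."""
        dx, dy = r * np.sqrt(3), 1.5 * r
        pts = []
        for j, y in enumerate(np.arange(0, h, dy)):
            off = 0.0 if j % 2 == 0 else dx / 2
            pts += [(x + off, y) for x in np.arange(0, w, dx)]
        return pts

    w = h = 100.0; r = 1.0   # abstract parameter patch and allowed mismatch radius
    sq, hx = square_bank(w, h, r), hex_bank(w, h, r)
    print(f"square: {len(sq)} templates, hexagonal: {len(hx)} "
          f"({1 - len(hx) / len(sq):.0%} fewer)")
    ```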

  11. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network that connects different wireless nodes in a local area. WLANs suffer from significant problems such as network load balancing, large energy consumption, and sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, a good record for WLANs, and increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good foundation for establishing high-quality local area networks. The architecture enables continuous data acquisition and compression of WLAN signals, which is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate a compressed version of the input signal. At the receiver side, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) by 15% at each wireless node.

  12. A template bank to search for gravitational waves from inspiralling compact binaries: II. Phenomenological model

    International Nuclear Information System (INIS)

    Cokelaer, T

    2007-01-01

    Matched filtering is used to search for gravitational waves emitted by inspiralling compact binaries in data from ground-based interferometers. One of the key aspects of the detection process is the deployment of a set of templates, also called a template bank, to cover the astrophysically interesting region of the parameter space. In a companion paper, we described the template bank algorithm used in the analysis of Laser Interferometer Gravitational-Wave Observatory (LIGO) data to search for signals from non-spinning binaries made of neutron star and/or stellar-mass black holes; this template bank is based upon physical template families. In this paper, we describe the phenomenological template bank that was used to search for gravitational waves from non-spinning black hole binaries (from stellar mass formation) in the second, third and fourth LIGO science runs. We briefly explain the design of the bank, whose templates are based on a phenomenological detection template family. We show that this template bank gives matches greater than 95% with the physical template families that are expected to be captured by the phenomenological templates

  13. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  14. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Template-based protein-protein docking exploiting pairwise interfacial residue restraints

    NARCIS (Netherlands)

    Xue, Li C; Garcia Lopes Maia Rodrigues, João; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J

    2016-01-01

    Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, given that a reliable template is available. However, there are many different ways to

  16. Compressive sensing for sparse time-frequency representation of nonstationary signals in the presence of impulsive noise

    Science.gov (United States)

    Orović, Irena; Stanković, Srdjan; Amin, Moeness

    2013-05-01

    A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.

  17. EP-based wavelet coefficient quantization for linear distortion ECG data compression.

    Science.gov (United States)

    Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan

    2014-07-01

    Reconstruction quality maintenance is of the essence for ECG data compression due to the desire for diagnostic use. Quantization schemes with non-linear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of multi-resolution levels. The stationary property implies that multi-level quantization scales can be controlled with a single variable. This hypothesis leads to a simple design of linear distortion control with 3-D curve fitting technology. In addition, a competitive strategy is applied to alleviate the data dependency effect. Using the ECG signals saved in the MIT and PTB databases, many experiments were undertaken to evaluate compression performance, quality control efficiency, and the influence of data dependency. The experimental results show that the new EP-based quantization scheme can obtain high compression performance and maintain linear distortion behavior efficiently. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. Code Generation with Templates

    CERN Document Server

    Arnoldus, Jeroen; Serebrenik, A

    2012-01-01

    Templates are used to generate all kinds of text, including computer code. In the last decade, the use of templates gained a lot of popularity due to the increase of dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most often based on practical experience rather than on a theoretical background. This book reveals the mathematical background of templates and shows interesting findings for improving the practical use of templates. First, a framework to determine the necessary computational power for the template metalanguage is presen

  19. Double displacement: An improved bioorthogonal reaction strategy for templated nucleic acid detection.

    Science.gov (United States)

    Kleinbaum, Daniel J; Miller, Gregory P; Kool, Eric T

    2010-06-16

    Quenched autoligation probes have been employed previously in a target-templated nonenzymatic ligation strategy for detecting nucleic acids in cells by fluorescence. A common source of background signal in such probes is the undesired reaction with water and other cellular nucleophiles. Here, we describe a new class of self-ligating probes, double displacement (DD) probes, that rely on two displacement reactions to fully unquench a nearby fluorophore. Three potential double displacement architectures, all possessing two fluorescence quencher/leaving groups (dabsylate groups), were synthesized and evaluated for templated reaction with nucleophile (phosphorothioate) probes both in vitro and in intact bacterial cells. All three DD probe designs provided substantially better initial quenching than a single-Dabsyl control. In isothermal templated reactions in vitro, double displacement probes yielded considerably lower background signal than previous single displacement probes; investigation into the mechanism revealed that one dabsylate acts as a sacrificial leaving group, reacting nonspecifically with water, but yielding little signal because another quencher group remains. Templated reaction with the specific nucleophile probe is required to activate a signal. The double displacement probes provided a ca. 80-fold turn-on signal and yielded a 2-4-fold improvement in signal/background over single Dabsyl probes. The best-performing probe architecture was demonstrated in a two-color, FRET-based two-allele discrimination system in vitro and was shown to be capable of discriminating between two closely related species of bacteria differing by a single nucleotide at an rRNA target site.

  20. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    Directory of Open Access Journals (Sweden)

    Yu Zheng

    2017-06-01

    Full Text Available In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Spectrum equalization is then applied to the compressed result to suppress side lobes, yielding the final range-compressed signal. Both theoretical analysis and simulation results demonstrate that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
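
    Range compression here is a correlation of the reflected channel against the synchronized direct signal, followed by spectrum equalization. A minimal sketch of the correlation stage with a surrogate PRN-like code; the equalization line is only a rough stand-in for the paper's side-lobe suppression step:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    code = rng.choice([-1.0, 1.0], size=1023)   # surrogate PRN-like ranging code
    direct = np.tile(code, 4)                   # synchronized direct GNSS signal

    delay = 300                                 # target range gate (samples)
    reflected = 0.2 * np.roll(direct, delay) + 0.5 * rng.standard_normal(direct.size)

    # Range compression: circular cross-correlation of the two channels via FFT.
    D, R = np.fft.fft(direct), np.fft.fft(reflected)
    compressed = np.fft.ifft(R * np.conj(D))
    print("range gate (correlation):", np.abs(compressed).argmax())

    # Rough spectral whitening as an equalization stand-in (suppresses side lobes).
    equalized = np.fft.ifft(R * np.conj(D) / (np.abs(D) ** 2 + 1e-3))
    print("range gate (equalized):  ", np.abs(equalized).argmax())
    ```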

  1. Multi-template tensor-based morphometry: application to analysis of Alzheimer's disease.

    Science.gov (United States)

    Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka

    2011-06-01

    In this paper methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Template-based automatic breast segmentation on MRI by excluding the chest region

    OpenAIRE

    Lin, M; Chen, JH; Wang, X; Chan, S; Chen, S; Su, MY

    2013-01-01

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the c...

  3. Investigation of non-uniform airflow signal oscillation during high frequency chest compression

    Directory of Open Access Journals (Sweden)

    Lee Jongwon

    2005-05-01

    Full Text Available Abstract Background High frequency chest compression (HFCC) is a useful and popular therapy for clearing bronchial airways of excessive or thick mucus. Our observation of the respiratory airflow of a subject during use of HFCC showed that the airflow oscillation induced by HFCC was strongly influenced by the nonlinearity of the respiratory system. We used a computational model-based approach to analyse the respiratory airflow during use of HFCC. Methods The computational model, which is based on previous physiological studies and represented by an electrical circuit analogue, was used to simulate the in vivo protocol that shows the nonlinearity of the respiratory system. In addition, airflow was measured during use of HFCC. We compared the simulation results to the measured data and to previous research, to understand and explain the observations. Results and discussion We observed two important phenomena during respiration pertaining to the airflow signal oscillation generated by HFCC. The amplitudes of the HFCC airflow signals varied depending on the spontaneous airflow signals. We used the simulation results to investigate how the nonlinearity of airway resistance, lung capacitance, and the inertance of air characterized the respiratory airflow. The simulation results indicated that neither lung capacitance nor the inertance of air is a factor in the non-uniformity of HFCC airflow signals. Although not perfect, our circuit analogue model allows us to effectively simulate the nonlinear characteristics of the respiratory system. Conclusion We found that the amplitudes of HFCC airflow signals behave as a function of spontaneous airflow signals. This is due to the nonlinearity of the respiratory system, particularly variations in airway resistance.

  4. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node

    Directory of Open Access Journals (Sweden)

    Kan Luo

    2018-01-01

    Full Text Available Energy efficiency is still a major obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node’s specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
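
    A toy end-to-end sketch of the compression chain described above: a sparse binary sensing matrix and a DCT basis, with a generic orthogonal matching pursuit solver standing in for the BSBL recovery used in the paper. The signal, matrix sizes and sparsity level are all illustrative.

    ```python
    import numpy as np
    from scipy.fft import idct

    def omp(A, y, k):
        """Basic orthogonal matching pursuit (stand-in for BSBL)."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        a = np.zeros(A.shape[1])
        a[support] = coef
        return a

    N, M, K = 256, 96, 12                        # frame, measurements, sparsity
    rng = np.random.default_rng(1)
    t = np.arange(N) / 360.0                     # e.g. 360 Hz sampling
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)
    Psi = idct(np.eye(N), axis=0, norm="ortho")  # DCT synthesis basis: x = Psi @ a
    Phi = np.zeros((M, N))                       # sparse binary sensing matrix,
    for col in range(N):                         # 4 ones per column
        Phi[rng.choice(M, 4, replace=False), col] = 1.0
    y = Phi @ ecg                                # compressed measurements
    ecg_hat = Psi @ omp(Phi @ Psi, y, K)         # recovery
    prd = 100 * np.linalg.norm(ecg - ecg_hat) / np.linalg.norm(ecg)
    ```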

  5. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.

    Science.gov (United States)

    Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing

    2018-01-01

    Energy efficiency is still a major obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.

  6. From scores to face templates: a model-based approach.

    Science.gov (United States)

    Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar

    2007-12-01

    Regeneration of templates from match scores has security and privacy implications for any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With ...
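
    The embedding-and-inversion step can be illustrated with a small least-squares sketch: if match scores behave like Euclidean distances in the modeled affine space, the target's coordinates follow from linearizing the distance equations against one reference point. This is only the geometric core; the paper's full attack first learns the affine model of the recognizer and then inverts it, and all sizes below are illustrative.

    ```python
    import numpy as np

    def embed_from_scores(points, dists):
        """Locate a target in the modeled space from its distances to known
        break-in points, by linearizing ||x - p_i||^2 = d_i^2 against p_0."""
        p, d = np.asarray(points), np.asarray(dists)
        A = 2.0 * (p[1:] - p[0])
        b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    rng = np.random.default_rng(2)
    break_in = rng.normal(size=(60, 20))                 # 60 break-in templates
    target = rng.normal(size=20)                         # enrolled template
    scores = np.linalg.norm(break_in - target, axis=1)   # one match score each
    reconstructed = embed_from_scores(break_in, scores)  # ~= target
    ```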

  7. Photon signature analysis using template matching

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.A., E-mail: d.a.bradley@surrey.ac.uk [Department of Physics, University of Surrey, Guildford GU2 7XH (United Kingdom); Hashim, S., E-mail: suhairul@utm.my [Department of Physics, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia); Saripan, M.I. [Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor (Malaysia); Wells, K. [Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH (United Kingdom); Dunn, W.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, 3002 Rathbone Hall, Manhattan, KS 66506 (United States)

    2011-10-01

    We describe an approach to detect improvised explosive devices (IEDs) by using a template matching procedure. This approach relies on the signature due to backstreaming γ photons from various targets. In this work we have simulated cylindrical targets of aluminum, iron, copper, water and ammonium nitrate (nitrogen-rich fertilizer). We simulate 3.5 MeV source photons distributed on a plane inside a shielded area using the Monte Carlo N-Particle (MCNP™) code version 5 (V5). The 3.5 MeV source gamma rays yield 511 keV peaks due to pair production and scattered gamma rays. In this work, we simulate capture of those photons that backstream, after impinging on the target element, toward a NaI detector. The captured backstreamed photons are expected to produce a unique spectrum that will become part of a simple signal processing recognition system based on the template matching method. Different elements were simulated using different sets of random numbers in the Monte Carlo simulation. To date, the sum of absolute differences (SAD) method has been used to match the template. In the examples investigated, template matching was found to detect all elements correctly.
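
    The SAD matching step itself is simple; a sketch with toy spectra (real ones would come from the MCNP runs) follows:

    ```python
    import numpy as np

    def classify_spectrum(spectrum, library):
        """Return the library entry with the smallest sum of absolute
        differences (SAD) from the measured spectrum."""
        sads = {name: np.abs(spectrum - ref).sum() for name, ref in library.items()}
        return min(sads, key=sads.get)

    rng = np.random.default_rng(3)
    library = {m: rng.poisson(100, size=512).astype(float)
               for m in ("aluminum", "iron", "copper", "water", "ammonium nitrate")}
    measured = library["iron"] + rng.normal(0, 5, size=512)  # noisy repeat run
    print(classify_spectrum(measured, library))              # -> "iron"
    ```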

  8. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345. Final report on the AFOSR project (FA9550-11-1-0111) entitled "Physics based modeling of compressible turbulence", Parviz Moin, Leland Stanford Junior University, CA; report date 09/13/2016. The period of performance began June 15, 2011.

  9. Evaluation of template-based models in CASP8 with standard measures

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The strategy for evaluating template-based models submitted to CASP has continuously evolved from CASP1 to CASP5, leading to a standard procedure that has been used in all subsequent editions. The established approach includes methods for calculating the quality of each individual model, for assigning scores based on the distribution of the results for each target and for computing the statistical significance of the differences in scores between prediction methods. These data are made available to the assessor of the template-based modeling category, who uses them as a starting point for further evaluations and analyses. This article describes the detailed workflow of the procedure, provides justifications for a number of choices that are customarily made for CASP data evaluation, and reports the results of the analysis of template-based predictions at CASP8.

  10. Iris Template Protection Based on Local Ranking

    Directory of Open Access Journals (Sweden)

    Dongdong Zhao

    2018-01-01

    Full Text Available Biometrics have been widely studied in recent years, and they are increasingly employed in real-world applications. Meanwhile, a number of potential threats to the privacy of biometric data arise. Iris template protection demands that the privacy of iris data should be protected when performing iris recognition. According to the international standard ISO/IEC 24745, iris template protection should satisfy irreversibility, revocability, and unlinkability. However, existing works on iris template protection demonstrate that it is difficult to satisfy the three privacy requirements simultaneously while supporting effective iris recognition. In this paper, we propose an iris template protection method based on local ranking. Specifically, the iris data are first XORed (exclusive OR operation) with an application-specific string; next, we divide the results into blocks and then partition the blocks into groups. The blocks in each group are ranked according to their decimal values, and the original blocks are transformed to their rank values for storage. We also extend the basic method to support the shifting strategy and masking strategy, which are two important strategies for iris recognition. We demonstrate that the proposed method satisfies irreversibility, revocability, and unlinkability. Experimental results on typical iris datasets (i.e., CASIA-IrisV3-Interval, CASIA-IrisV4-Lamp, UBIRIS-V1-S1, and MMU-V1) show that the proposed method maintains the recognition performance while protecting the privacy of iris data.
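
    A minimal sketch of the core transform (XOR, blocking, decimal conversion, local ranking); the block length, group size and data sizes are illustrative, and the shifting/masking extensions are omitted:

    ```python
    import numpy as np

    def rank_protect(iris_bits, app_bits, block_len=4, group_size=4):
        """XOR with an application-specific string, split into blocks, convert
        each block to a decimal, and store only each block's rank within its
        group (so original values are not recoverable)."""
        mixed = np.bitwise_xor(iris_bits, app_bits)
        blocks = mixed.reshape(-1, block_len)
        weights = 2 ** np.arange(block_len - 1, -1, -1)
        decimals = blocks @ weights
        groups = decimals.reshape(-1, group_size)
        ranks = np.argsort(np.argsort(groups, axis=1), axis=1)  # rank in group
        return ranks.ravel()

    rng = np.random.default_rng(4)
    iris = rng.integers(0, 2, size=2048)   # toy iris code
    app = rng.integers(0, 2, size=2048)    # application-specific string
    template = rank_protect(iris, app)     # protected template to store
    ```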

  11. A Novel Approach Based on PCNNs Template for Fingerprint Image Thinning

    NARCIS (Netherlands)

    Dacheng, X.; Bailiang, L.; Nijholt, Antinus; Kacprzyk, J.

    2009-01-01

    A PCNNs-based square-and-triangle-template method for binary fingerprint image thinning is proposed. The algorithm is iterative, in which combined sequential and parallel processing is employed to accelerate execution. When a neuron satisfies the square template, the pixel corresponding to this ...

  12. Highly tailorable thiol-ene based emulsion-templated monoliths

    DEFF Research Database (Denmark)

    Lafleur, J. P.; Kutter, J. P.

    2014-01-01

    The attractive surface properties of thiol-ene polymers combined with their ease of processing make them ideal substrates in many bioanalytical applications. We report the synthesis of highly tailorable emulsion-templated porous polymers and beads in microfluidic devices based on off-stoichiometry thiol-ene chemistry. The method allows monolith synthesis and anchoring inside thiol-ene microchannels in a single step. Variations in the monomer stoichiometric ratios and/or the amount of porogen used allow for the creation of extremely varied polymer morphologies, from foam-like materials to dense networks ...

  13. Welding template

    International Nuclear Information System (INIS)

    Ben Venue, R.J. of.

    1976-01-01

    A welding template is described which is used to weld strip material into a cellular grid structure for the accommodation of fuel elements in a nuclear reactor. On a base plate the template carries a multitude of cylindrical pins whose upper halves are narrower than the bottom halves; only one of the pins is attached to the base plate. The others are held in a hexagonal array by oblong webs clamped together by chuck jaws which can be secured by means of screws. The parts are ground very accurately. The template according to the invention is very easy to make. (UWI) [de]

  14. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object, so three composite-technique-based color image compression schemes are implemented to achieve images with high compression, no loss to the original image, better performance, and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T), and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  15. Dynamic Range Enhancement of High-Speed Electrical Signal Data via Non-Linear Compression

    Science.gov (United States)

    Laun, Matthew C. (Inventor)

    2016-01-01

    Systems and methods for high-speed compression of dynamic electrical signal waveforms to extend the measuring capabilities of conventional measuring devices such as oscilloscopes and high-speed data acquisition systems are discussed. Transfer function components and algorithmic transfer functions can be used to accurately measure signals that are within the frequency bandwidth but beyond the voltage range and voltage resolution capabilities of the measuring device.

  16. Julius – a template based supplementary electronic health record system

    Directory of Open Access Journals (Sweden)

    Klein Gunnar O

    2007-05-01

    Full Text Available Abstract Background EHR systems are widely used in hospitals and primary care centres but it is usually difficult to share information and to collect patient data for clinical research. This is partly due to the different proprietary information models and inconsistent data quality. Our objective was to provide a more flexible solution enabling the clinicians to define which data are to be recorded and shared for both routine documentation and clinical studies. It should be possible to reuse the data through a common set of variable definitions providing a consistent nomenclature and validation of data. Another objective was that the templates used for data entry and presentation should be usable in combination with the existing EHR systems. Methods We have designed and developed a template-based system (called Julius) that was integrated with existing EHR systems. The system is driven by the medical domain knowledge defined by clinicians in the form of templates and variable definitions stored in a common data repository. The system architecture consists of three layers. The presentation layer is purely web-based, which facilitates integration with existing EHR products. The domain layer consists of the template design system, a variable/clinical concept definition system, and the transformation and validation logic, all implemented in Java. The data source layer utilizes an object-relational mapping tool and a relational database. Results The Julius system has been implemented, tested and deployed to three health care units in Stockholm, Sweden. The initial responses from the pilot users were positive. The template system facilitates patient data collection in many ways. The experience of using the template system suggests that enabling the clinicians to be in control of the system is a good way to add supplementary functionality to present EHR systems. Conclusion The approach of the template system in combination with various local EHR ...

  17. Research on Remote Sensing Image Template Processing Based on Global Subdivision Theory

    OpenAIRE

    Xiong Delan; Du Genyuan

    2013-01-01

    Aiming at the problems of vast data, complex operations, and time-consuming processing of remote sensing images, a subdivision template is proposed based on global subdivision theory, which can establish a high level of abstraction and generalization for remote sensing images. The paper emphatically discusses the model and structure of the subdivision template, and puts forward some new ideas for remote sensing image template processing, key technologies, and a quickly applied demonstration. The research has ...

  18. Nanopolyaniline as immobilization template for signal enhancement of surface plasmon resonance biosensor - A preliminary study

    Science.gov (United States)

    Kamarun, Dzaraini; Abdul Azem, Nor Hazirah Kamel; Sarijo, Siti Halimah; Mohd, Ahmad Faiza; Abdullah @ Mohd Noor, Mashita

    2012-07-01

    A technique for the enhancement of the Surface Plasmon Resonance (SPR) signal for sensing biomolecular interactions is described. Polyaniline (PANI) with particle sizes in the range of 1 to 15 nm was synthesized and used as the template for the immobilization of protein molecules. Biomolecular interactions of unbound and PANI-bound proteins with antibody molecules were SPR-monitored using a model system comprising Bovine Serum Albumin (BSA) and anti-BSA. A 7-fold increase in the signal was recorded from interactions of the PANI-bound BSA with anti-BSA compared to the interactions of its unbound counterpart. This preliminary observation provides a new avenue in immunosensor technology for improving the detection sensitivity of SPR biosensors, and thereby increasing the lower detection limit of biomolecules.

  19. Template-based education toolkit for mobile platforms

    Science.gov (United States)

    Golagani, Santosh Chandana; Esfahanian, Moosa; Akopian, David

    2012-02-01

    Nowadays mobile phones are the most widely used portable devices, evolving very fast by adding new features and improving user experiences. The latest generation of hand-held devices, called smartphones, is equipped with superior memory, cameras and rich multimedia features, empowering people to use their mobile phones not only as a communication tool but also for entertainment purposes. With many young students showing interest in learning mobile application development, one should introduce novel learning methods which may adapt to fast technology changes and introduce students to application development. Mobile phones have become common devices, and the engineering community incorporates phones in various solutions. Overcoming the limitations of conventional undergraduate electrical engineering (EE) education, this paper explores the concept of template-based education in mobile phone programming. The concept is based on developing small exercise templates which students can manipulate and revise for a quick hands-on introduction to application development and integration. The Android platform is used as a popular open-source environment for application development. The exercises relate to image processing topics typically studied by many students. The goal is to enable conventional course enhancements by incorporating in them short hands-on learning modules.

  20. [An Algorithm to Eliminate Power Frequency Interference in ECG Using Template].

    Science.gov (United States)

    Shi, Guohua; Li, Jiang; Xu, Yan; Feng, Liang

    2017-01-01

    This paper researches an algorithm to eliminate power frequency interference in ECG. The algorithm first creates a power frequency interference template and then subtracts the template from the original ECG signals; finally, the algorithm obtains the ECG signals without interference. Experiments show the algorithm can eliminate the interference effectively and has no side effect on the normal signal. It is efficient and suitable for practice.
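
    One common way to build such a template, sketched below, is to least-squares fit a sine/cosine pair at the mains frequency and subtract the fit; whether the paper builds its template exactly this way is not stated, so treat this as a generic illustration.

    ```python
    import numpy as np

    def remove_powerline(ecg, fs, f0=50.0):
        """Fit a sine/cosine pair at the mains frequency by least squares,
        use the fit as the interference template, and subtract it.
        (Use f0=60.0 in 60 Hz regions; real interference drifts, so a
        practical version would fit short windows.)"""
        t = np.arange(len(ecg)) / fs
        basis = np.column_stack([np.sin(2 * np.pi * f0 * t),
                                 np.cos(2 * np.pi * f0 * t)])
        coef, *_ = np.linalg.lstsq(basis, ecg, rcond=None)
        template = basis @ coef
        return ecg - template, template

    fs = 500.0
    t = np.arange(0, 4, 1 / fs)
    clean = np.sin(2 * np.pi * 1.0 * t)                    # toy ECG-like waveform
    noisy = clean + 0.4 * np.sin(2 * np.pi * 50 * t + 0.7) # mains interference
    filtered, template = remove_powerline(noisy, fs)       # filtered ~= clean
    ```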

  1. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    Directory of Open Access Journals (Sweden)

    Lei Yu

    2016-02-01

    Full Text Available Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. It also indicated that the proposed system can not only reduce the amount of data during the sampling and transmission processes, but also, the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information.

  2. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    Science.gov (United States)

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-01-01

    Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. It also indicated that the proposed system can not only reduce the amount of data during the sampling and transmission processes, but also, the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information. PMID:26861337

  3. Signal amplification of microRNAs with modified strand displacement-based cycling probe technology.

    Science.gov (United States)

    Jia, Huning; Bu, Ying; Zou, Bingjie; Wang, Jianping; Kumar, Shalen; Pitman, Janet L; Zhou, Guohua; Song, Qinxin

    2016-10-24

    Micro ribonucleic acids (miRNAs) play an important role in biological processes such as cell differentiation, proliferation and apoptosis. Therefore, miRNAs are potentially powerful markers for cancer monitoring and diagnosis. Here, we present sensitive signal amplification for miRNAs based on modified cycling probe technology with strand displacement amplification (SDA). miRNA was captured by the template coupled with beads, and then the first cycle based on SDA was repeatedly extended to the nicking end, which was produced by the extension reaction of miRNA. The products generated by SDA are captured by a molecular beacon (MB), which is designed to initiate the second amplification cycle, on a principle similar to cycling probe technology (CPT), which is based on repeated digestion of the DNA-RNA hybrid by RNase H. After one sample enrichment and two steps of signal amplification, 0.1 pM of let-7a can be detected. The miRNA assay exhibits a dynamic range spanning more than two orders of magnitude and high specificity, clearly discriminating a single base difference in miRNA sequences. This isothermal amplification does not require any special temperature control instrument. The assay also relies on signal amplification rather than template amplification, thereby minimising contamination issues. In addition, there is no need for a reverse transcription (RT) step. Thus the amplification is suitable for miRNA detection.

  4. 3-Dimensional printing guide template assisted percutaneous vertebroplasty: Technical note.

    Science.gov (United States)

    Li, Jian; Lin, JiSheng; Yang, Yong; Xu, JunChuan; Fei, Qi

    2018-06-01

    Percutaneous vertebroplasty (PVP) is currently considered an effective treatment for pain caused by acute osteoporotic vertebral compression fracture. Recently, puncture-related complications have been increasingly reported. It is important to find a precise technique to reduce puncture-related complications. We report a case and discuss the novel surgical technique with step-by-step operating procedures, to introduce precise PVP assisted by a 3-dimensional printing guide template. Based on preoperative CT scan and infrared scan data, a well-designed individual guide template could be created in 3-dimensional reconstruction software and printed on a 3-dimensional printer. In the operation itself, by matching the guide template to the patient's back skin, the cement needles' insertion orientation and depth were easily established. Only 14 C-arm fluoroscopy exposures in HDF mode (total exposure dose: 4.5 mSv) were required during the procedure. The operation took only 17 min. Cement distribution in the vertebral body was very good, without any puncture-related complications. Pain was significantly relieved after surgery. In conclusion, the novel precise 3-dimensional printing guide template system may allow (1) comprehensive visualization of the fractured vertebral body and individual surgical planning, (2) a perfect fit between skin and guide template to ensure puncture stability and accuracy, and (3) increased puncture precision and decreased puncture-related complications, surgical time and radiation exposure. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Fluid discrimination based on rock physics templates

    International Nuclear Information System (INIS)

    Liu, Qian; Yin, Xingyao; Li, Chao

    2015-01-01

    Reservoir fluid discrimination is an indispensable part of seismic exploration. Reliable fluid discrimination helps to decrease the risk of exploration and to increase the success ratio of drilling. There are many kinds of fluid indicators used in fluid discrimination, most of which are single indicators. But single indicators do not always work well under complicated reservoir conditions. Therefore, combined fluid indicators are needed to increase the accuracy of discrimination. In this paper, we have proposed an alternative strategy for the combination of fluid indicators. An alternative fluid indicator, the rock physics template-based indicator (RPTI), has been derived to combine the advantages of two single indicators. The RPTI is more sensitive to the contents of fluid than traditional indicators. The combination is implemented based on the characteristics of the fluid trend in the rock physics template, which means few subjective factors are involved. We also propose an inversion method to assure the accuracy of the RPTI input data. The RPTI profile is an intuitive interpretation of fluid content. Real data tests demonstrate the applicability and validity. (paper)

  6. Template security analysis of multimodal biometric frameworks based on fingerprint and hand geometry

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Full Text Available Biometric systems are automatic tools used to provide authentication during various applications of modern computing. In this work, three different design frameworks for multimodal biometric systems based on fingerprint and hand geometry modalities are proposed. An analysis is also presented to diagnose various types of template security issues in the proposed system. Fuzzy analytic hierarchy process (FAHP is applied with five decision parameters on all the designs and framework 1 is found to be better in terms of template data security, templates fusion and computational efficiency. It is noticed that template data security before storage in database is a challenging task. An important observation is that a template may be secured at feature fusion level and an indexing technique may be used to improve the size of secured templates.

  7. Compressed Sensing-Based Direct Conversion Receiver

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas; Larsen, Torben

    2012-01-01

    Due to the continuously increasing computational power of modern data receivers it is possible to move more and more processing from the analog to the digital domain. This paper presents a compressed sensing approach to relaxing the analog filtering requirements prior to the ADCs in a direct conversion receiver ... down-converted radio signals. As shown in an experiment presented in the article, when the proposed method is used, it is possible to relax the requirements for the quadrature down-converter filters. A random sampling device and an additional digital signal processing module is the price to pay for these relaxed ...

  8. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.

    Science.gov (United States)

    Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab

    2018-01-16

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B / K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance ( CR = 6 and PRD = 1.88 ) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
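
    A toy sketch of the two ingredients named above, an adaptive (LMS) linear predictor and plain decimation, together with the PRD metric; the paper's actual predictor order, decimation factor B/K and reconstruction filter are not given here, so all values are placeholders.

    ```python
    import numpy as np

    def lms_residual(x, order=4, mu=0.01):
        """Adaptive linear predictor (LMS): predict each sample from the
        previous `order` samples; the small residual is what gets encoded."""
        w, resid = np.zeros(order), np.zeros_like(x)
        for n in range(order, len(x)):
            ctx = x[n - order:n][::-1]
            e = x[n] - w @ ctx
            resid[n] = e
            w += mu * e * ctx                    # LMS weight update
        return resid

    def prd(x, xhat):
        return 100 * np.linalg.norm(x - xhat) / np.linalg.norm(x)

    fs = 360
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.1 * t) + 0.2 * np.sin(2 * np.pi * 8 * t)
    resid = lms_residual(ecg)                    # predictor stage output
    decimated = ecg[::3]                         # decimation stage, factor 3
    restored = np.interp(t, t[::3], decimated)   # naive reconstruction
    print(prd(ecg, restored))
    ```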

  9. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-01-01

    Full Text Available Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B / K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.

  10. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.
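
    The warping idea can be imitated in a few lines: sample densely where a simple information proxy (the local gradient) is large, sparsely elsewhere, and decode by interpolation alone, with no phase recovery. This is an analogy to, not an implementation of, the photonic scheme; the gradient proxy and sample counts are assumptions.

    ```python
    import numpy as np

    def warped_downsample(signal, n_out):
        """Place samples densely where the local gradient is large and
        sparsely elsewhere, then decode by plain interpolation of the kept
        samples back to a uniform grid."""
        grad = np.abs(np.gradient(signal)) + 1e-3
        cdf = np.cumsum(grad)
        cdf /= cdf[-1]                               # warp map
        idx = np.searchsorted(cdf, np.linspace(0, 1, n_out))
        idx = np.unique(idx.clip(0, len(signal) - 1))
        return idx, signal[idx]

    t = np.linspace(0, 1, 2000)
    sig = np.tanh(50 * (t - 0.5))                    # sharp edge = info-rich
    idx, samples = warped_downsample(sig, 200)       # 10x fewer samples
    decoded = np.interp(np.arange(len(sig)), idx, samples)  # no phase recovery
    ```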

  11. Design of Cancelable Palmprint Templates Based on Look Up Table

    Science.gov (United States)

    Qiu, Jian; Li, Hengjian; Dong, Jiwen

    2018-03-01

    A novel cancelable palmprint template generation scheme is proposed in this paper. Firstly, a Gabor filter and a chaotic matrix are used to extract palmprint features. These are then arranged into a row vector and divided into equal-size blocks. The blocks are converted to corresponding decimals and mapped to look-up tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative representation based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates can achieve very high performance and security levels. Meanwhile, it can also satisfy the needs of real-time applications.

  12. A PBOM configuration and management method based on templates

    Science.gov (United States)

    Guo, Kai; Qiao, Lihong; Qie, Yifan

    2018-03-01

    The design of the Process Bill of Materials (PBOM) holds a pivotal position in the process of product development. The requirements of PBOM configuration design and management for complex products are analysed in this paper, including the reuse of configuration procedures and the urgent need to manage huge quantities of product family PBOM data. Based on the analysis, a functional framework for PBOM configuration and management has been established. Configuration templates and modules are defined in the framework to support the customization and reuse of the configuration process. The configuration process of a detection sensor PBOM is shown as an illustrative case at the end. Rapid and agile PBOM configuration and management can be achieved using the template-based method, which is of vital significance for improving development efficiency for complex products.

  13. Pilotless recovery of clipped OFDM signals by compressive sensing over reliable data carriers

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-06-01

    In this paper we propose a novel method of clipping mitigation in OFDM using compressive sensing that completely avoids using reserved tones or channel-estimation pilots. The method builds on selecting the most reliable perturbations from the constellation lattice upon decoding at the receiver (in the frequency domain), and performs compressive sensing over these observations in order to completely recover the sparse nonlinear distortion in the time domain. As such, the method provides a practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, and the ability to recover the distorted signal in one shot. © 2012 IEEE.
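
    A noise-free toy sketch of the recovery chain: clip an OFDM symbol, take hard constellation decisions at the receiver, keep the perturbation estimates on the carriers that look most reliable, and solve for the sparse time-domain clipping distortion with a generic OMP solver (standing in for the paper's sparse-recovery step). Sizes, clip level and sparsity are illustrative.

    ```python
    import numpy as np

    def omp_complex(A, y, k):
        """Orthogonal matching pursuit for complex data (a generic stand-in
        for the paper's sparse-recovery solver)."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        c = np.zeros(A.shape[1], dtype=complex)
        c[support] = coef
        return c

    rng = np.random.default_rng(5)
    N, clip_level, n_reliable = 256, 1.7, 160
    X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
    F = np.fft.fft(np.eye(N), axis=0, norm="ortho")    # unitary DFT matrix
    x = F.conj().T @ X                                 # time-domain symbol
    mag = np.abs(x)
    x_clip = np.where(mag > clip_level, clip_level * x / mag, x)

    Y = F @ x_clip                                     # received, noise-free
    X_dec = (np.sign(Y.real) + 1j * np.sign(Y.imag)) / np.sqrt(2)  # hard QPSK
    P = Y - X_dec                                      # perturbation estimates
    reliable = np.argsort(np.abs(P))[:n_reliable]      # most reliable carriers
    c_hat = omp_complex(F[reliable], P[reliable], k=16)  # sparse distortion
    X_hat = Y - F @ c_hat                              # distortion removed
    ```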

  14. Pilotless recovery of clipped OFDM signals by compressive sensing over reliable data carriers

    KAUST Repository

    Al-Safadi, Ebrahim B.; Al-Naffouri, Tareq Y.

    2012-01-01

    In this paper we propose a novel method of clipping mitigation in OFDM using compressive sensing that completely avoids using reserved tones or channel-estimation pilots. The method builds on selecting the most reliable perturbations from the constellation lattice upon decoding at the receiver (in the frequency domain), and performs compressive sensing over these observations in order to completely recover the sparse nonlinear distortion in the time domain. As such, the method provides a practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, and the ability to recover the distorted signal in one shot. © 2012 IEEE.

  15. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
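
    For readers who want to experiment, a minimal wavelet compression sketch using the PyWavelets library: keep only the largest fraction of coefficients and zero the rest (a real codec would entropy-code the survivors). The wavelet, decomposition level and keep-fraction are arbitrary choices, not those evaluated in the paper.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_compress(image, wavelet="bior4.4", level=4, keep=0.05):
        """Keep the largest `keep` fraction of wavelet coefficients and zero
        the rest, then reconstruct."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1 - keep)
        arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)

    img = np.random.rand(256, 256)        # stand-in for a pathology image tile
    rec = wavelet_compress(img)
    psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))
    ```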

  16. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the high-resolution coded mask requirements and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms directly using the compressive sampling images are developed. A mixture of Gaussians distribution is applied in the compressive image space to model the background image and for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask can yield better detection results. However, using random Gaussian and Toeplitz phase masks can achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  17. Novel active signal compression in low-noise analog readout at future X-ray FEL facilities

    Science.gov (United States)

    Manghisoni, M.; Comotti, D.; Gaioni, L.; Lodola, L.; Ratti, L.; Re, V.; Traversi, G.; Vacchi, C.

    2015-04-01

    This work presents the design of a low-noise front-end implementing a novel active signal compression technique. This feature can be exploited in the design of analog readout channels for application to next generation free electron laser (FEL) experiments. The readout architecture includes a low-noise charge sensitive amplifier (CSA) with dynamic signal compression, a time variant shaper used to process the signal at the preamplifier output, and a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC). The channel will be operated in such a way as to cope with the high frame rate (exceeding 1 MHz) foreseen for future XFEL machines. The choice of a 65 nm CMOS technology has been made in order to include all the building blocks in the target pixel pitch of 100 μm. This work has been carried out in the frame of the PixFEL Project funded by the Istituto Nazionale di Fisica Nucleare (INFN), Italy.

  18. Protective Effect of Unacylated Ghrelin on Compression-Induced Skeletal Muscle Injury Mediated by SIRT1-Signaling

    Directory of Open Access Journals (Sweden)

    Felix N. Ugwu

    2017-11-01

    Full Text Available Unacylated ghrelin, the predominant form of circulating ghrelin, protects myotubes from cell death, which is a known attribute of pressure ulcers. In this study, we investigated whether unacylated ghrelin protects skeletal muscle from pressure-induced deep tissue injury by abolishing necroptosis and apoptosis signaling and whether these effects were mediated by the SIRT1 pathway. Fifteen adult Sprague Dawley rats were assigned to receive saline or unacylated ghrelin with or without EX527 (a SIRT1 inhibitor). Animals underwent two 6-h compression cycles with 100 mmHg static pressure applied over the mid-tibialis region of the right limb whereas the left uncompressed limb served as the intra-animal control. Muscle tissues underneath the compression region, and at the similar region of the opposite uncompressed limb, were collected for analysis. Unacylated ghrelin attenuated the compression-induced muscle pathohistological alterations including rounding contour of myofibers, extensive nucleus accumulation in the interstitial space, and increased interstitial space. Unacylated ghrelin abolished the increase in necroptosis proteins including RIP1 and RIP3 and attenuated the elevation of apoptotic proteins including p53, Bax, and AIF in the compressed muscle. Furthermore, unacylated ghrelin opposed the compression-induced phosphorylation and acetylation of the p65 subunit of NF-kB. The anti-apoptotic effect of unacylated ghrelin was shown by a decrease in apoptotic DNA fragmentation and terminal dUTP nick-end labeling index in the compressed muscle. The protective effects of unacylated ghrelin vanished when co-treated with EX527. Our findings demonstrated that unacylated ghrelin protected skeletal muscle from compression-induced injury. The myoprotective effects of unacylated ghrelin on pressure-induced tissue injury were associated with SIRT1 signaling.

  19. A template-based approach for responsibility management in executable business processes

    Science.gov (United States)

    Cabanillas, Cristina; Resinas, Manuel; Ruiz-Cortés, Antonio

    2018-05-01

    Process-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others, like accountability or consultation, must be implemented by manually adding activities to the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities, but new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.

  20. A LabVIEW based template for user created experiment automation.

    Science.gov (United States)

    Kim, D J; Fisk, Z

    2012-12-01

    We have developed an expandable software template to automate user-created experiments. The LabVIEW-based template is easily modifiable to combine user-created measurements, controls, and data logging with virtually any type of laboratory equipment. We use reentrant sequential selection to implement the sequence script, making it possible to wrap a long series of user-created experiments and execute them in sequence. Details of the software structure and application examples for a scanning probe microscope and automated transport experiments using custom-built laboratory electronics and a cryostat are described.

  1. Ferritin-Templated Quantum-Dots for Quantum Logic Gates

    Science.gov (United States)

    Choi, Sang H.; Kim, Jae-Woo; Chu, Sang-Hyon; Park, Yeonjoon; King, Glen C.; Lillehei, Peter T.; Kim, Seon-Jeong; Elliott, James R.

    2005-01-01

    Quantum logic gates (QLGs) or other logic systems are based on quantum-dots (QD) with a stringent requirement of size uniformity. The QD are widely known building units for QLGs. The size control of QD is a critical issue in quantum-dot fabrication. The work presented here offers a new method to develop quantum-dots using a bio-template, called ferritin, that ensures QD production in uniform size at nano-scale proportions. The bio-template for uniform yield of QD is based on a ferritin protein that allows reconstitution of core material through reduction and chelation processes. One of the biggest challenges for developing QLG is the requirement of ordered and uniform size of QD for arrays on a substrate with nanometer precision. The QD development by bio-template includes the electrochemical/chemical reconstitution of ferritins with different core materials, such as iron, cobalt, manganese, platinum, and nickel. The other bio-template method used in our laboratory is dendrimers, precisely defined chemical structures. With ferritin-templated QD, we fabricated a heptagon-shaped patterned array via direct nano-manipulation of the ferritin molecules with the tip of an atomic force microscope (AFM). We also designed various nanofabrication methods for QD arrays using a wide range of manipulation techniques. Precise control of the ferritin-templated QD for a patterned arrangement is offered by various methods, such as site-specific immobilization of thiolated ferritins through local oxidation using the AFM tip, ferritin arrays induced by gold nanoparticle manipulation, thiolated ferritin positioning by the shaving method, etc. In the signal measurements, the current-voltage curve is obtained by measuring the current through the ferritin, between the tip and the substrate, for potential sweeping or at constant potential. The measured resistance near zero bias was 1.8 teraohm for single holoferritin and 5.7 teraohm for single apoferritin, respectively.

  2. Ab initio and template-based prediction of multi-class distance maps by two-dimensional recursive neural networks

    Directory of Open Access Journals (Sweden)

    Martin Alberto JM

    2009-01-01

    Full Text Available Abstract Background Prediction of protein structures from their sequences is still one of the open grand challenges of computational biology. Some approaches to protein structure prediction, especially ab initio ones, rely to some extent on the prediction of residue contact maps. Residue contact map predictions have been assessed at the CASP competition for several years now. Although it has been shown that exact contact maps generally yield correct three-dimensional structures, this is true only at a relatively low resolution (3–4 Å from the native structure). Another known weakness of contact maps is that they are generally predicted ab initio, that is, not exploiting information about potential homologues of known structure. Results We introduce a new class of distance restraints for protein structures: multi-class distance maps. We show that Cα trace reconstructions based on 4-class native maps are significantly better than those from residue contact maps. We then build two predictors of 4-class maps based on recursive neural networks: one ab initio, relying on the sequence and on evolutionary information; one template-based, in which homology information to known structures is provided as a further input. We show that virtually any level of sequence similarity to structural templates (down to less than 10%) yields more accurate 4-class maps than the ab initio predictor. We show that template-based predictions by recursive neural networks are consistently better than the best template and than a number of combinations of the best available templates. We also extract binary residue contact maps at an 8 Å threshold (as per CASP assessment) from the 4-class predictors and show that the template-based version is also more accurate than the best template and consistently better than the ab initio one, down to very low levels of sequence identity to structural templates. Furthermore, we test both ab initio and template-based 8 ...
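
    A multi-class distance map is just a binned inter-residue distance matrix; a sketch with illustrative class boundaries (only the 8 Å contact threshold is taken from the abstract) follows.

    ```python
    import numpy as np

    # Class boundaries are illustrative; the paper's binary contact maps use
    # the first (8 angstrom) threshold.
    def multiclass_map(dist, thresholds=(8.0, 13.0, 19.0)):
        return np.digitize(dist, bins=np.asarray(thresholds))  # labels 0..3

    def contact_map(dist, cutoff=8.0):
        return (dist < cutoff).astype(int)   # binary map = class 0 of the above

    rng = np.random.default_rng(6)
    coords = np.cumsum(rng.normal(0, 2.5, size=(120, 3)), axis=0)  # toy Ca trace
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    classes = multiclass_map(d)              # (120, 120) multi-class distance map
    contacts = contact_map(d)
    ```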

  3. Speed-up Template Matching through Integral Image based Weak Classifiers

    NARCIS (Netherlands)

    Wu, t.; Toet, A.

    2014-01-01

    Template matching is a widely used pattern recognition method, especially in industrial inspection. However, the computational cost of traditional template matching increases dramatically with both template and scene image size. This makes traditional template matching less useful for many (e.g. ...
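
    The integral-image trick that makes such weak classifiers cheap is a two-line precomputation giving constant-time box sums:

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table with ii[y, x] = img[:y, :x].sum()."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = img.cumsum(0).cumsum(1)
        return ii

    def box_sum(ii, y0, x0, y1, x1):
        """Sum of img[y0:y1, x0:x1] from four lookups, in O(1); this is what
        makes box-filter (Haar-like) weak classifiers cheap to evaluate."""
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    img = np.arange(16).reshape(4, 4)
    ii = integral_image(img)
    assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
    ```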

  4. Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Xin Meng

    2014-02-01

    Full Text Available Compressed sensing is a novel signal sampling theory under the condition that the signal is sparse or compressible. Existing recovery algorithms based on gradient projection either need prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, referred to as Quasi Gradient Projection. The algorithm presents a quasi gradient direction and two step-size schemes along this direction. The algorithm does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm can recover the signal more accurately than GPSR, which also does not need prior knowledge. Meanwhile, the algorithm has a lower computational complexity.

  5. Cross-catalytic peptide nucleic acid (PNA) replication based on templated ligation

    DEFF Research Database (Denmark)

    Singhal, Abhishek; Nielsen, Peter E

    2014-01-01

    We report the first PNA self-replicating system based on template directed cross-catalytic ligation, a process analogous to biological replication. Using two template PNAs and four pentameric precursor PNAs, all four possible carbodiimide assisted amide ligation products were detected ... precursors. Cross-catalytic product formation followed product inhibited kinetics, but approximately two replication rounds were observed. Analogous but less efficient replication was found for a similar tetrameric system. These results demonstrate that simpler nucleobase replication systems than natural ...

  6. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  7. Study on a digital pulse processing algorithm based on template-matching for high-throughput spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wen, Xianfei; Yang, Haori

    2015-06-01

    A major challenge in utilizing spectroscopy techniques for nuclear safeguards is to perform high-resolution measurements at an ultra-high throughput rate. Traditionally, piled-up pulses are rejected to ensure good energy resolution. To improve throughput rate, high-pass filters are normally implemented to shorten pulses. However, this reduces the signal-to-noise ratio and causes degradation in energy resolution. In this work, a pulse pile-up recovery algorithm based on template-matching was shown to be an effective approach to achieve high-throughput gamma ray spectroscopy. First, a discussion of the algorithm is given in detail. Second, the algorithm was successfully utilized to process simulated piled-up pulses from a scintillator detector. Third, the algorithm was implemented to analyze high rate data from a NaI detector, a silicon drift detector and a HPGe detector. The promising results demonstrated the capability of this algorithm to achieve a high throughput rate without significant sacrifice in energy resolution. The performance of the template-matching algorithm was also compared with traditional shaping methods. - Highlights: • A detailed discussion on the template-matching algorithm was given. • The algorithm was tested on data from a NaI and a Si detector. • The algorithm was successfully implemented on high rate data from a HPGe detector. • The performance of the algorithm was compared with traditional shaping methods. • The advantage of the algorithm in active interrogation was discussed.
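
    The essence of template-matching pile-up recovery can be sketched as a linear least-squares fit of shifted copies of a known pulse template; in practice the candidate arrival times would come from a trigger stage, and the paper's algorithm may differ in detail. The pulse shape below is a toy assumption.

    ```python
    import numpy as np

    def fit_piled_up(trace, template, offsets):
        """Model the trace as a sum of shifted copies of a known pulse
        template and solve for all amplitudes at once by least squares."""
        A = np.zeros((len(trace), len(offsets)))
        for j, t0 in enumerate(offsets):
            m = min(len(template), len(trace) - t0)
            A[t0:t0 + m, j] = template[:m]
        amps, *_ = np.linalg.lstsq(A, trace, rcond=None)
        return amps

    t = np.arange(200)
    template = np.exp(-t / 30.0) * (1 - np.exp(-t / 3.0))  # toy detector pulse
    trace = 1.0 * np.pad(template, (0, 100)) \
          + 0.6 * np.pad(template, (40, 60))               # two piled-up pulses
    amps = fit_piled_up(trace, template, offsets=[0, 40])  # ~ [1.0, 0.6]
    ```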

  8. Linear chemically sensitive electron tomography using DualEELS and dictionary-based compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    AlAfeef, Ala, E-mail: a.al-afeef.1@research.gla.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Bobynko, Joanna [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Cockshott, W. Paul. [School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Craven, Alan J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Zuazo, Ian; Barges, Patrick [ArcelorMittal Maizières Research, Maizières-lès-Metz 57283 (France); MacLaren, Ian, E-mail: ian.maclaren@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)

    2016-11-15

    We have investigated the use of DualEELS in elementally sensitive tilt series tomography in the scanning transmission electron microscope. A procedure is implemented using deconvolution to remove the effects of multiple scattering, followed by normalisation by the zero loss peak intensity. This is performed to produce a signal that is linearly dependent on the projected density of the element in each pixel. This method is compared with one that does not include deconvolution (although normalisation by the zero loss peak intensity is still performed). Additionally, we compare the 3D reconstruction using a new compressed sensing algorithm, DLET, with the well-established SIRT algorithm. VC precipitates, which are extracted from a steel on a carbon replica, are used in this study. It is found that the use of this linear signal results in a very even density throughout the precipitates. However, when deconvolution is omitted, a slight density reduction is observed in the cores of the precipitates (a so-called cupping artefact). Additionally, it is clearly demonstrated that the 3D morphology is much better reproduced using the DLET algorithm, with very little elongation in the missing wedge direction. It is therefore concluded that reliable elementally sensitive tilt tomography using EELS requires the appropriate use of DualEELS together with a suitable reconstruction algorithm, such as the compressed sensing based reconstruction algorithm used here, to make the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS. - Highlights: • DualEELS is essential for chemically sensitive electron tomography using EELS. • A new compressed sensing based algorithm (DLET) gives high fidelity reconstruction. • This combination of DualEELS and DLET will give reliable results from few projections.

  9. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and for using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (groups of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The latter task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  10. Synchrotron x-ray observations of a monolayer template for mineralization

    International Nuclear Information System (INIS)

    Dimasi, E.; Gower, L.B.

    2000-01-01

    Mineral nucleation at a Langmuir film interface has been studied by synchrotron x-ray scattering. Diluted calcium bicarbonate solutions were used as subphases for arachidic and stearic acid monolayers, compressed in a Langmuir trough. Self-assembly of the monolayer template is observed directly, and subsequent crystal growth is monitored in situ.

  11. Influence of chest compression artefact on capnogram-based ventilation detection during out-of-hospital cardiopulmonary resuscitation.

    Science.gov (United States)

    Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud

    2018-03-01

    Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression induced oscillations has been previously reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. Presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection, and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Nucleic Acid Templated Reactions for Chemical Biology.

    Science.gov (United States)

    Di Pisa, Margherita; Seitz, Oliver

    2017-06-21

    Nucleic acid directed bioorthogonal reactions offer the fascinating opportunity to unveil and redirect a plethora of intracellular mechanisms. Nano- to picomolar amounts of specific RNA molecules serve as templates and catalyze the selective formation of molecules that 1) exert biological effects, or 2) provide measurable signals for RNA detection. Turnover of reactants on the template is a valuable asset when concentrations of RNA templates are low. The idea is to use RNA-templated reactions to fully control the biodistribution of drugs and to push the detection limits of DNA or RNA analytes to extraordinary sensitivities. Herein we review recent and instructive examples of conditional synthesis or release of compounds for in cellulo protein interference and intracellular nucleic acid imaging. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  13. Micro-Doppler Ambiguity Resolution Based on Short-Time Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Jing-bo Zhuang

    2015-01-01

    When using a long range radar (LRR) to track a target with micromotion, the micro-Doppler embodied in the radar echoes may suffer from an ambiguity problem. In this paper, we propose a novel method based on compressed sensing (CS) to resolve micro-Doppler ambiguity. In accordance with the RIP requirement, a sparse probing pulse train with random transmission times is designed. After matched filtering, the slow-time echo signals of the micromotion target can be viewed as a randomly sparse sampling of the Doppler spectrum. Several successive pulses are selected to form a short-time window, and the CS sensing matrix is built from the time stamps of these pulses. Orthogonal Matching Pursuit (OMP) then yields the unambiguous micro-Doppler spectrum. The proposed algorithm is verified using echo signals generated according to the theoretical model and signals with a micro-Doppler signature produced using the commercial electromagnetic simulation software FEKO.
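
    A minimal sketch of the short-time CS step follows: the sensing matrix is a partial Fourier matrix built from the random pulse time stamps, and OMP picks out the Doppler bin. The grid size, pulse count and single-component scene are invented for the example.

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick k atoms."""
    residual, support = y.astype(complex), []
    x = np.zeros(A.shape[1], complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x[support] = sol
    return x

# Randomly timed pulses sample the slow-time signal below Nyquist.
rng = np.random.default_rng(0)
N, M, f_true = 256, 48, 31                     # grid size, pulses, Doppler bin
t = np.sort(rng.choice(N, M, replace=False))   # random pulse time stamps
A = np.exp(2j * np.pi * np.outer(t, np.arange(N)) / N) / np.sqrt(M)
y = A[:, f_true] * 5.0                         # echo of one micro-Doppler line
x_hat = omp(A, y, k=1)
print(np.argmax(np.abs(x_hat)))                # recovers bin 31 without ambiguity
```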

  14. Integration of EEG lead placement templates into traditional technologist-based staffing models reduces costs in continuous video-EEG monitoring service.

    Science.gov (United States)

    Kolls, Brad J; Lai, Amy H; Srinivas, Anang A; Reid, Robert R

    2014-06-01

    The purpose of this study was to determine the relative cost reductions within different staffing models for continuous video-electroencephalography (cvEEG) service by introducing a template system for 10/20 lead application. We compared six staffing models using decision tree modeling based on historical service line utilization data from the cvEEG service at our center. Templates were integrated into technologist-based service lines in six different ways. The six models studied were templates for all studies, templates for intensive care unit (ICU) studies, templates for on-call studies, templates for studies of ≤ 24-hour duration, technologists for on-call studies, and technologists for all studies. Cost was linearly related to the study volume for all models with the "templates for all" model incurring the lowest cost. The "technologists for all" model carried the greatest cost. Direct cost comparison shows that any introduction of templates results in cost savings, with the templates being used for patients located in the ICU being the second most cost efficient and the most practical of the combined models to implement. Cost difference between the highest and lowest cost models under the base case produced an annual estimated savings of $267,574. Implementation of the ICU template model at our institution under base case conditions would result in a $205,230 savings over our current "technologist for all" model. Any implementation of templates into a technologist-based cvEEG service line results in cost savings, with the most significant annual savings coming from using the templates for all studies, but the most practical implementation approach with the second highest cost reduction being the template used in the ICU. The lowered costs determined in this work suggest that a template-based cvEEG service could be supported at smaller centers with significantly reduced costs and could allow for broader use of cvEEG patient monitoring.

  15. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
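
    The abstract does not spell out the unique bit codes assigned to repeat fragments, so the sketch below shows only the plain 2-bits-per-base packing that such schemes start from; the repeat and reverse-repeat coding would then push the rate below 2 bits/base.

```python
# Plain 2-bits-per-base packing (the baseline DNABIT Compress improves on).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    """Pack a DNA string into bytes, prefixed with its length."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    n = len(seq)
    return n.to_bytes(4, "big") + bits.to_bytes((2 * n + 7) // 8, "big")

def unpack(blob: bytes) -> str:
    """Inverse of pack: recover the original base string."""
    n = int.from_bytes(blob[:4], "big")
    bits = int.from_bytes(blob[4:], "big")
    bases = "ACGT"
    return "".join(bases[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

seq = "ACGTACGGTTAAC"
assert unpack(pack(seq)) == seq   # 2 bits/base round trip
```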

  16. A new S-adenosylhomocysteine hydrolase-linked method for adenosine detection based on DNA-templated fluorescent Cu/Ag nanoclusters.

    Science.gov (United States)

    Ahn, Jun Ki; Kim, Hyo Yong; Baek, Songyi; Park, Hyun Gyu

    2017-07-15

    We herein describe a novel fluorescent method for the rapid and selective detection of adenosine by utilizing DNA-templated Cu/Ag nanoclusters (NCs) and employing S-adenosylhomocysteine hydrolase (SAHH). SAHH promotes the hydrolysis of S-adenosylhomocysteine (SAH) and consequently produces homocysteine, which quenches the fluorescence signal from the DNA-templated Cu/Ag nanoclusters employed as the signaling probe in this study. Adenosine, on the other hand, significantly inhibits the hydrolysis reaction and prevents the formation of homocysteine. Consequently, the highly enhanced fluorescence signal from the DNA-Cu/Ag NCs is retained, which can be used to identify the presence of adenosine. By employing this design principle, adenosine was sensitively detected down to 19 nM with high specificity over other adenosine analogs such as AMP, ADP, ATP, cAMP, guanosine, cytidine, and uridine. Finally, the diagnostic capability of this method was successfully verified by reliably detecting adenosine present in a real human serum sample. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  18. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity, and to some extent revolutionised signal processing, is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  19. Hardware-efficient robust biometric identification from 0.58 second template and 12 features of limb (Lead I) ECG signal using logistic regression classifier.

    Science.gov (United States)

    Sahadat, Md Nazmus; Jacobs, Eddie L; Morshed, Bashir I

    2014-01-01

    The electrocardiogram (ECG), widely known as a cardiac diagnostic signal, has recently been proposed for biometric identification of individuals; however, reliability and reproducibility remain of research interest. In this paper, we propose a template matching technique with 12 features using a logistic regression classifier that achieved high reliability and identification accuracy. Non-invasive ECG signals were captured using our custom-built ambulatory EEG/ECG embedded device (NeuroMonitor). ECG data were collected from 10 healthy subjects, aged 25-35 years, for 10 seconds per trial, with 10 trials per subject. From each trial, only 0.58 seconds of Lead I ECG data were used as the template. A hardware-efficient fiducial point detection technique was implemented for feature extraction. To obtain repeated random sub-sampling validation, data were randomly separated into training and testing sets at a ratio of 80:20. Test data were used to find the classification accuracy. ECG template data with 12 extracted features provided the best performance in terms of accuracy (up to 100%) and processing complexity (computation time of 1.2 ms). This work shows that a single limb (Lead I) ECG can robustly identify an individual quickly and reliably with minimal contact and data processing using the proposed algorithm.
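
    A hedged sketch of the classification stage follows, with synthetic stand-in features (the paper's 12 fiducial-point features are not reproduced) and scikit-learn's logistic regression in place of the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: 10 subjects x 10 trials, 12 features per trial.
rng = np.random.default_rng(1)
centers = rng.normal(size=(10, 12))                  # one "identity" each
X = np.vstack([c + 0.1 * rng.normal(size=(10, 12)) for c in centers])
y = np.repeat(np.arange(10), 10)

# 80:20 random sub-sampling split, as in the paper's validation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.2f}")
```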

  20. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures

    Directory of Open Access Journals (Sweden)

    Gwanghee Heo

    2017-07-01

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (the El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) was applied to implement the diverse logic needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was proven valid in acquiring usable signals in a compressed size.

  1. Chaotic secure content-based hidden transmission of biometric templates

    International Nuclear Information System (INIS)

    Khan, Muhammad Khurram; Zhang Jiashu; Tian Lei

    2007-01-01

    The large-scale proliferation of biometric verification systems creates a demand for effective and reliable security and privacy of their data. Like passwords and PIN codes, biometric data is also not secret and, if it is compromised, the integrity of the whole verification system could be at high risk. To address these issues, this paper presents a novel chaotic secure content-based hidden transmission scheme for biometric data. Encryption and data hiding techniques are used to improve the security and secrecy of the transmitted templates. Secret keys are generated from the biometric image and used as the parameter value and initial condition of the chaotic map, and each transaction session has different secret keys to protect from attacks. Two chaotic maps are incorporated for the encryption to resolve the finite word length effect and to improve the system's resistance against attacks. Encryption is applied to the biometric templates before hiding them into the cover/host images to make them secure, and then the templates are hidden into the cover image. Experimental results show that the security, performance, and accuracy of the presented scheme are encouraging and comparable with other methods found in the current literature.

  2. Chaotic secure content-based hidden transmission of biometric templates

    Energy Technology Data Exchange (ETDEWEB)

    Khan, Muhammad Khurram [Research Group for Biometrics and Security, Sichuan Province Key Lab of Signal and Information Processing, Southwest Jiaotong University, Chengdu 610031, Sichuan (China)]. E-mail: khurram.khan@scientist.com; Zhang Jiashu [Research Group for Biometrics and Security, Sichuan Province Key Lab of Signal and Information Processing, Southwest Jiaotong University, Chengdu 610031, Sichuan (China); Tian Lei [Research Group for Biometrics and Security, Sichuan Province Key Lab of Signal and Information Processing, Southwest Jiaotong University, Chengdu 610031, Sichuan (China)

    2007-06-15

    The large-scale proliferation of biometric verification systems creates a demand for effective and reliable security and privacy of their data. Like passwords and PIN codes, biometric data is also not secret and, if it is compromised, the integrity of the whole verification system could be at high risk. To address these issues, this paper presents a novel chaotic secure content-based hidden transmission scheme for biometric data. Encryption and data hiding techniques are used to improve the security and secrecy of the transmitted templates. Secret keys are generated from the biometric image and used as the parameter value and initial condition of the chaotic map, and each transaction session has different secret keys to protect from attacks. Two chaotic maps are incorporated for the encryption to resolve the finite word length effect and to improve the system's resistance against attacks. Encryption is applied to the biometric templates before hiding them into the cover/host images to make them secure, and then the templates are hidden into the cover image. Experimental results show that the security, performance, and accuracy of the presented scheme are encouraging and comparable with other methods found in the current literature.
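
    The core of such a scheme, stripped of the content-based hiding step and the second chaotic map, can be sketched as a logistic-map keystream XORed with the template bytes. The map parameters below are supplied by hand for illustration, whereas the scheme described above derives them from the biometric image itself.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn=100):
    """Byte keystream from a logistic map x <- r*x*(1-x). A single map
    with externally supplied parameters; the scheme above chains two
    maps keyed by the biometric image."""
    x, out = x0, np.empty(n, np.uint8)
    for _ in range(burn):                 # discard transient iterations
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF
    return out

template = np.frombuffer(b"toy biometric template", dtype=np.uint8)
ks = logistic_keystream(x0=0.3141, r=3.9999, n=template.size)
cipher = template ^ ks                    # encrypt before hiding in a cover image
assert bytes(cipher ^ ks) == template.tobytes()   # decryption restores it
```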

  3. Less accurate but more efficient family of search templates for detection of gravitational waves from inspiraling compact binaries

    International Nuclear Information System (INIS)

    Chronopoulos, Andreas E.; Apostolatos, Theocharis A.

    2001-01-01

    The network of interferometric detectors that is under construction at various locations on Earth is expected to start searching for gravitational waves in a few years. The number of search templates that need to be cross correlated with the noisy output of the detectors is a major issue since computing power capabilities are restricted. By choosing higher and higher post-Newtonian order expansions for the family of search templates we make sure that our filters are more accurate copies of the real waves that hit our detectors. However, this is not the only criterion for choosing a family of search templates. To make the process of detection as efficient as possible, one needs a family of templates with a relatively small number of members that manages to pick up any detectable signal with only a tiny reduction in signal-to-noise ratio. Evidently, one family is better than another if it accomplishes its goal with a smaller number of templates. Following the geometric language of Owen, we have studied the performance of the post-1.5-Newtonian family of templates on detecting post-2-Newtonian signals for binaries. Several technical issues arise from the fact that the two types of waveforms cannot be made to coincide by a suitable choice of parameters. In general, the parameter space of the signals is not identical with the parameter space of the templates, although in our case they are of the same dimension, and one has to take into account all such peculiarities before drawing any conclusion. An interesting result we have obtained is that the post-1.5-Newtonian family of templates happens to be more economical for detecting post-2-Newtonian signals than the perfectly accurate post-2-Newtonian family of templates itself. The number of templates is reduced by 20-30%, depending on the acceptable level of reduction in signal-to-noise ratio due to discretization of the family of templates. This makes the post-1.5-Newtonian family of templates more favorable.

  4. Template-based automatic breast segmentation on MRI by excluding the chest region

    International Nuclear Information System (INIS)

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-01-01

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  5. Template-based automatic breast segmentation on MRI by excluding the chest region

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Muqing [Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, California 92697-5020 and National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, 518060 China (China); Chen, Jeon-Hor [Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, California 92697-5020 and Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung 82445, Taiwan (China); Wang, Xiaoyong; Su, Min-Ying, E-mail: msu@uci.edu [Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, California 92697-5020 (United States); Chan, Siwa [Department of Radiology, Taichung Veterans General Hospital, Taichung 40407, Taiwan (China); Chen, Siping [National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, 518060 China (China)

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1

  6. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708

  7. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods.
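
    A toy version of the dictionary coder is sketched below: a sliding window greedily matches the longest n-gram present in per-order dictionaries and emits (dictionary, index) pairs. The tiny dictionaries and the literal fallback are invented for illustration; the real system maps each index to two to four bytes.

```python
# Toy n-gram dictionary coder: greedily match the longest n-gram (up to
# five words) found in the dictionaries and emit its index. Real
# dictionaries are built from a large corpus; these are illustrative.
DICTS = {
    3: {("in", "order", "to"): 0},
    2: {("experimental", "results"): 0, ("compression", "ratio"): 1},
    1: {w: i for i, w in enumerate(
        "the indicate that our method achieves a high".split())},
}

def encode(words):
    i, out = 0, []
    while i < len(words):
        for n in (5, 4, 3, 2, 1):                  # prefer the longest match
            gram = tuple(words[i:i + n])
            if n <= len(words) - i and gram in DICTS.get(n, {}):
                out.append((n, DICTS[n][gram]))    # (dictionary, index) pair
                i += n
                break
        else:
            out.append((0, words[i]))              # literal fallback
            i += 1
    return out

print(encode("experimental results indicate that our method achieves"
             " a high compression ratio".split()))
```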

  8. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    This paper deals with compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as the source of test signals. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, which was designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC) based on adaptive median regression.
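
    The KLT-based irrelevancy reduction can be sketched with a block PCA: the decorrelating basis is learned from the image's own blocks and only the leading coefficients are kept (the uniform quantizer used in the paper is omitted here, and the block and coefficient counts are illustrative).

```python
import numpy as np

def klt_compress(image, block=8, keep=12):
    """Karhunen-Loeve transform coding sketch: learn the optimal
    decorrelating basis from the image's own 8x8 blocks and keep only
    the leading coefficients."""
    h, w = (s - s % block for s in image.shape)
    blocks = (image[:h, :w].reshape(h // block, block, w // block, block)
              .transpose(0, 2, 1, 3).reshape(-1, block * block).astype(float))
    mean = blocks.mean(axis=0)
    _, _, Vt = np.linalg.svd(blocks - mean, full_matrices=False)
    coeffs = (blocks - mean) @ Vt[:keep].T        # KLT analysis
    recon = coeffs @ Vt[:keep] + mean             # KLT synthesis
    err = np.sqrt(np.mean((blocks - recon) ** 2))
    return coeffs, Vt[:keep], err

img = np.add.outer(np.arange(64), np.arange(64)).astype(float)  # smooth test image
_, _, rmse = klt_compress(img)
print(f"RMSE with 12 of 64 coefficients: {rmse:.3f}")
```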

  9. Securing Iris Templates using Combined User and Soft Biometric based Password Hardened Fuzzy Vault

    OpenAIRE

    Meenakshi, V. S.; Padmavathi, G.

    2010-01-01

    Personal identification and authentication is very crucial in the current scenario. Biometrics plays an important role in this area. Biometric-based authentication has proved superior to traditional password-based authentication. However, a biometric is a permanent feature of a person and cannot be reissued when compromised, as passwords can. To overcome this problem, instead of storing the original biometric templates, transformed templates can be stored. Whenever the transformation function ...

  10. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to quickly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
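
    A minimal sketch of the stream-splitting front end follows, with the standard-library LZMA coder standing in for the paper's incremental, run-length-limited and mapping-based encoders:

```python
import lzma

def split_streams(fastq_text):
    """Parse FASTQ records into the three streams LW-FQZip treats
    separately; here each stream is simply LZMA-packed. The '+' separator
    lines carry no information and are dropped."""
    lines = fastq_text.strip().split("\n")
    meta  = "\n".join(lines[0::4])    # @ headers (metadata)
    reads = "\n".join(lines[1::4])    # bases (short reads)
    quals = "\n".join(lines[3::4])    # quality score strings
    return [lzma.compress(s.encode()) for s in (meta, reads, quals)]

fastq = "@r1\nACGTACGT\n+\nIIIIHHHH\n@r2\nTTGGCCAA\n+\nGGGGFFFF\n"
packed = split_streams(fastq)
print([len(p) for p in packed])       # one compressed blob per stream
```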

  11. DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS

    Directory of Open Access Journals (Sweden)

    J. Augustin Jacob

    2012-11-01

    This paper proposes a new three-dimensional discrete cosine transform (3D-DCT) based video compression algorithm that selects the optimal cube size based on the motion content of the video sequence. The cube size is determined by computing normalized pixel difference (NPD) values and categorizing each cube as a "low" or "high" motion cube; a suitable cube size of dimension either [16×16×8] or [8×8×8] is then chosen, instead of using a fixed-cube algorithm. To evaluate the performance of the proposed algorithm, test sequences with different motion levels were chosen. A rate vs. distortion analysis determines the level of compression that can be achieved and the quality of the reconstructed video sequence, which are compared against the fixed-cube-size algorithm. Peak signal to noise ratio (PSNR) is taken as the measure of video quality. Experimental results show that varying the cube size with reference to the motion content of video frames gives better performance in terms of compression ratio and video quality.
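
    The cube-size decision can be sketched as below. The exact NPD formula, the threshold, and the assumption that low-motion content gets the larger cube are not given in the abstract, so a mean absolute frame difference and an arbitrary threshold are used for illustration:

```python
import numpy as np

def choose_cube(frames, threshold=8.0):
    """Pick the 3D-DCT cube size from motion content: a normalized pixel
    difference (NPD) over a group of frames classifies it as low or high
    motion. Illustrative definition and threshold, not the paper's."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    npd = diffs.mean()                       # mean |frame-to-frame| difference
    return (8, 8, 8) if npd > threshold else (16, 16, 8)

still  = np.zeros((8, 64, 64))
moving = np.cumsum(np.random.randn(8, 64, 64) * 20, axis=0)
print(choose_cube(still), choose_cube(moving))   # (16,16,8) vs (8,8,8)
```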

  12. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.

    2014-04-01

    In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique involves an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performances of all the proposed techniques are evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions. © 2013 Elsevier B.V.

  13. Random template placement and prior information

    International Nuclear Information System (INIS)

    Roever, Christian

    2010-01-01

    In signal detection problems, one is usually faced with the task of searching a parameter space for peaks in the likelihood function which indicate the presence of a signal. Random searches have proven to be very efficient as well as easy to implement, compared, e.g., to searches along regular grids in parameter space. Knowledge of the parameterised shape of the signal searched for adds structure to the parameter space: there are usually regions that require a dense search while in other regions a coarser search is sufficient. On the other hand, prior information identifies the regions in which a search will actually be promising or will likely be in vain. Defining specific figures of merit allows one to combine both template metric and prior distribution and devise optimal sampling schemes over the parameter space. We show an example related to the gravitational wave signal from a binary inspiral event. Here the template metric and prior information are particularly contradictory, since signals from low-mass systems tolerate the least mismatch in parameter space while high-mass systems are far more likely, as they imply a greater signal-to-noise ratio (SNR) and hence are detectable to greater distances. The derived sampling strategy is implemented in a Markov chain Monte Carlo (MCMC) algorithm where it improves convergence.

  14. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the key distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.
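
    The 2D measurement step at the heart of such a scheme is a two-sided matrix product, sketched below with Gaussian matrices acting as key-dependent measurement matrices. The hyper-chaotic generation of the matrices and the cycle-shift re-encryption are omitted, and all sizes are illustrative.

```python
import numpy as np

# 2D compressive sensing core: measure the image along both directions,
# Y = Phi1 @ X @ Phi2.T, so compression and (partial) encryption happen
# in a single step.
rng = np.random.default_rng(7)
X = rng.random((64, 64))                             # stand-in plaintext image
Phi1 = rng.standard_normal((32, 64)) / np.sqrt(32)   # row-direction measurement
Phi2 = rng.standard_normal((32, 64)) / np.sqrt(32)   # column-direction measurement
Y = Phi1 @ X @ Phi2.T                                # 64x64 image -> 32x32 data
print(X.size / Y.size)                               # 4x reduction in volume
```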

  15. Template-based electrophoretic deposition of perovskite PZT nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Nourmohammadi, A. [Solid Surfaces Analysis and Electron Microscopy Group, Institute of Physics, Chemnitz University of Technology, D-09126 Chemnitz (Germany); Semiconductors Department, Materials and Energy Research Center (MERC), 31779-83634 Karaj (Iran, Islamic Republic of); Bahrevar, M.A. [Semiconductors Department, Materials and Energy Research Center (MERC), 31779-83634 Karaj (Iran, Islamic Republic of)], E-mail: ma.bahrevar@yahoo.com; Hietschold, M. [Solid Surfaces Analysis and Electron Microscopy Group, Institute of Physics, Chemnitz University of Technology, D-09126 Chemnitz (Germany)

    2009-04-03

    Template-based electrophoretic deposition of perovskite lead zirconate titanate (PZT) nanotubes was achieved using anodic alumina (AA) membranes and sols containing lead, zirconium and titanium precursors. The effect of various anodizing voltages on the size of the channels in the anodic alumina template was investigated. The prepared sol was driven into the channels under the influence of various electric fields and subsequently sintered at about 700 °C. The effects of the initial heating rates and the burn-out temperature on the phase evolution of the samples were studied and a modified firing process was employed. The effects of the electrophoretic voltage and the deposition time on the average wall thickness of the tubes were investigated. Scanning and transmission electron microscopy (SEM and TEM) revealed the efficiency of electrophoresis in the growth of lead zirconate titanate nanotubes in a close-packed array. The X-ray diffraction analyses indicated the presence of perovskite as the principal phase after a modified firing schedule.

  16. Real versus template-based Natural Language Generation: a false opposition?

    NARCIS (Netherlands)

    van Deemter, Kees; Krahmer, Emiel; Theune, Mariet

    2005-01-01

    This paper challenges the received wisdom that template-based approaches to the generation of language are necessarily inferior to other approaches as regards their maintainability, linguistic well-foundedness and quality of output. Some recent NLG systems that call themselves 'template-based' will

  17. A template bank to search for gravitational waves from inspiralling compact binaries: I. Physical models

    International Nuclear Information System (INIS)

    Babak, S; Balasubramanian, R; Churches, D; Cokelaer, T; Sathyaprakash, B S

    2006-01-01

    Gravitational waves from coalescing compact binaries are searched for using the matched filtering technique. As the model waveform depends on a number of parameters, it is necessary to filter the data through a template bank covering the astrophysically interesting region of the parameter space. The choice of templates is defined by the maximum allowed drop in signal-to-noise ratio due to the discreteness of the template bank. In this paper we describe the template-bank algorithm that was used in the analysis of data from the Laser Interferometer Gravitational Wave Observatory (LIGO) and GEO 600 detectors to search for signals from binaries consisting of non-spinning compact objects. Using Monte Carlo simulations, we study the efficiency of the bank and show that its performance is satisfactory for the design sensitivity curves of the ground-based interferometric gravitational wave detectors GEO 600, initial LIGO, advanced LIGO and Virgo. The bank is efficient in searching for various compact binaries such as binary primordial black holes, binary neutron stars, binary black holes, as well as a mixed binary consisting of a non-spinning black hole and a neutron star.

  18. Muscle Performance Investigated With a Novel Smart Compression Garment Based on Pressure Sensor Force Myography and Its Validation Against EMG

    Directory of Open Access Journals (Sweden)

    Aaron Belbasis

    2018-04-01

    Muscle activity and fatigue performance parameters were obtained and compared between a smart compression garment and the gold standard, a surface electromyography (EMG) system, during high-speed cycling in seven participants. The smart compression garment, based on force myography (FMG), comprised integrated pressure sensors that were sandwiched between skin and garment, located on five thigh muscles. The muscle activity was assessed by means of crank cycle diagrams (polar plots) that displayed the muscle activity relative to the crank cycle. The fatigue was assessed by means of the median frequency of the power spectrum of the EMG signal; the fractal dimension (FD) of the EMG signal; and the FD of the pressure signal. The smart compression garment returned performance parameters (muscle activity and fatigue) comparable to the surface EMG. The major differences were that the EMG measured the electrical activity, whereas the pressure sensor measured the mechanical activity. As such, there was a phase shift between electrical and mechanical signals, with the electrical signals preceding the mechanical counterparts in most cases. This is specifically pronounced in high-speed cycling. The fatigue trend over the duration of the cycling exercise was clearly reflected in the fatigue parameters (FDs and median frequency) obtained from pressure and EMG signals. The fatigue parameter of the pressure signal (FD) showed a higher time dependency (R2 = 0.84) compared to the EMG signal. This reflects that the pressure signal puts more emphasis on the fatigue as a function of time rather than on the origin of fatigue (e.g., peripheral or central fatigue). In light of the high-speed activity results, caution should be exerted when using data obtained from EMG for biomechanical models. In contrast to EMG data, activity data obtained from FMG are considered more appropriate and accurate as an input for biomechanical modeling as they truly reflect the mechanical

  19. Muscle Performance Investigated With a Novel Smart Compression Garment Based on Pressure Sensor Force Myography and Its Validation Against EMG.

    Science.gov (United States)

    Belbasis, Aaron; Fuss, Franz Konstantin

    2018-01-01

    Muscle activity and fatigue performance parameters were obtained and compared between a smart compression garment and the gold standard, a surface electromyography (EMG) system, during high-speed cycling in seven participants. The smart compression garment, based on force myography (FMG), comprised integrated pressure sensors that were sandwiched between skin and garment, located on five thigh muscles. The muscle activity was assessed by means of crank cycle diagrams (polar plots) that displayed the muscle activity relative to the crank cycle. The fatigue was assessed by means of the median frequency of the power spectrum of the EMG signal; the fractal dimension (FD) of the EMG signal; and the FD of the pressure signal. The smart compression garment returned performance parameters (muscle activity and fatigue) comparable to the surface EMG. The major differences were that the EMG measured the electrical activity, whereas the pressure sensor measured the mechanical activity. As such, there was a phase shift between electrical and mechanical signals, with the electrical signals preceding the mechanical counterparts in most cases. This is specifically pronounced in high-speed cycling. The fatigue trend over the duration of the cycling exercise was clearly reflected in the fatigue parameters (FDs and median frequency) obtained from pressure and EMG signals. The fatigue parameter of the pressure signal (FD) showed a higher time dependency (R2 = 0.84) compared to the EMG signal. This reflects that the pressure signal puts more emphasis on the fatigue as a function of time rather than on the origin of fatigue (e.g., peripheral or central fatigue). In light of the high-speed activity results, caution should be exerted when using data obtained from EMG for biomechanical models. In contrast to EMG data, activity data obtained from FMG are considered more appropriate and accurate as an input for biomechanical modeling as they truly reflect the mechanical muscle

  20. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  1. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    Science.gov (United States)

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sensed signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned through a two-stage iterative procedure that alternates between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained to have a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary-based data gathering methods.
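
    As a sketch of the alternating sparse-coding/dictionary-update idea, scikit-learn's online dictionary learner is used below in place of ODL-CDG's update rule; the self-coherence penalty and the sparse dictionary constraint from the paper are not reproduced.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in sensor data: noisy sinusoids of varying frequency.
rng = np.random.default_rng(3)
signals = np.sin(np.outer(rng.uniform(1, 5, 200),
                          np.linspace(0, 2 * np.pi, 64)))
signals += 0.05 * rng.standard_normal(signals.shape)   # ambient noise

# Online learning alternates sparse coding and dictionary updates
# over mini-batches of the sensed signals.
dl = MiniBatchDictionaryLearning(n_components=16, batch_size=20,
                                 transform_algorithm="omp",
                                 transform_n_nonzero_coefs=3,
                                 random_state=0)
codes = dl.fit(signals).transform(signals)              # sparse coding step
recon = codes @ dl.components_
print(f"mean sparse-coding error: {np.abs(signals - recon).mean():.3f}")
```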

  2. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot acquire satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting step approach is used for wavelet decomposition. The physical and anatomic structure of human vision is considered, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of multi-resolution analysis with the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the inner correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments were done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm. But the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in the aspects of compression ratio and coding/decoding time.

  3. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot acquire satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting step approach is used for wavelet decomposition. The physical and anatomic structure of human vision is considered, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of multi-resolution analysis with the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the inner correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments were done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm. But the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in the aspects of compression ratio and coding/decoding time.

  4. Refinement of protein termini in template-based modeling using conformational space annealing.

    Science.gov (United States)

    Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung

    2011-09-01

    The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.

  5. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    . The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal...... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  6. Template characterization and correlation algorithm created from segmentation for the iris biometric authentication based on analysis of textures implemented on a FPGA

    International Nuclear Information System (INIS)

    Giacometto, F J; Vilardy, J M; Torres, C O; Mattos, L

    2011-01-01

    Among the biometric signals most widely used to establish personal security permissions, iris recognition based on texture and blood-vessel images has become increasingly important, because these two characteristics are rich in information and unique to each individual. This paper presents an implementation of a template characterization and correlation algorithm for biometric authentication based on iris texture analysis, programmed on an FPGA (Field Programmable Gate Array). Authentication relies on characterization methods based on frequency analysis of the sample, followed by frequency-domain correlation to obtain the expected authentication results.

  7. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...
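
    A toy numpy sketch of the row-selection idea in this record: when the signal lies in the column space of a rank-deficient dictionary, drawing the measurement-matrix rows from that column space wastes no measurement energy on directions the dictionary cannot represent, which raises the SNR of the compressed samples. The dimensions and noise level below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n, r, m = 64, 16, 8                    # signal dim, dictionary rank, measurements
        D = rng.standard_normal((n, r))        # rank-deficient dictionary: rank r < n
        x = D @ rng.standard_normal(r)         # the signal lives in range(D)
        U = np.linalg.svd(D, full_matrices=False)[0]   # orthonormal basis of range(D)

        def unit_rows(M):
            return M / np.linalg.norm(M, axis=1, keepdims=True)

        Phi_generic = unit_rows(rng.standard_normal((m, n)))        # rows anywhere
        Phi_aligned = unit_rows(rng.standard_normal((m, r))) @ U.T  # rows in range(D)

        sigma = 0.1                            # per-sample measurement noise level
        for name, Phi in [("generic", Phi_generic), ("aligned", Phi_aligned)]:
            snr = 10 * np.log10(np.sum((Phi @ x) ** 2) / (m * sigma ** 2))
            print(name, f"{snr:.1f} dB")       # aligned rows capture more signal energy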

  8. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation for estimating the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared with other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard that utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  9. Biomedical sensor design using analog compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathic surface Electromyography (sEMG) signals. The proposed algorithm achieves a good Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.

  10. Multi-atlas labeling with population-specific template and non-local patch-based label fusion

    DEFF Research Database (Denmark)

    Fonov, Vladimir; Coupé, Pierrick; Eskildsen, Simon Fristed

    We propose a new method combining a population-specific nonlinear template atlas approach with non-local patch-based structure segmentation for whole brain segmentation into individual structures. This way, we benefit from the efficient intensity-driven segmentation of the non-local means framework...... and from the global shape constraints imposed by the nonlinear template matching....

  11. Understanding the general packing rearrangements required for successful template based modeling of protein structure from a CASP experiment.

    Science.gov (United States)

    Day, Ryan; Joo, Hyun; Chavan, Archana C; Lennox, Kristin P; Chen, Y Ann; Dahl, David B; Vannucci, Marina; Tsai, Jerry W

    2013-02-01

    As an alternative to the common template based protein structure prediction methods based on main-chain position, a novel side-chain centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template based modeling targets. Although the method did not generate perturbations to the template structures as large as necessary, analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is traced to two- and three-body packing interactions. Setting aside the inherent restrictions of the packing representation in the PDB, the first steps in correctly defining the regions of variable packing have been mapped primarily to local interactions, as the packing at the secondary and tertiary structure levels is largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain centered approach to template based modeling. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, which improves the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
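
    The record's pattern-splitting and re-encoding steps are hardware-specific, but the table construction underneath them is classic Huffman coding. A minimal sketch, with toy bit patterns standing in for instruction fragments:

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            # Build a Huffman code table {symbol: bitstring} from symbol counts.
            heap = [(w, i, sym) for i, (sym, w) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            i = len(heap)
            while len(heap) > 1:
                w1, _, t1 = heapq.heappop(heap)
                w2, _, t2 = heapq.heappop(heap)
                heapq.heappush(heap, (w1 + w2, i, (t1, t2)))  # merge two rarest trees
                i += 1
            codes = {}
            def walk(tree, prefix=""):
                if isinstance(tree, tuple):                   # internal node
                    walk(tree[0], prefix + "0")
                    walk(tree[1], prefix + "1")
                else:                                         # leaf symbol
                    codes[tree] = prefix or "0"
            walk(heap[0][2])
            return codes

        patterns = ["1010", "1010", "0001", "1111", "1010", "0001"]  # toy fragments
        table = huffman_code(patterns)
        total_bits = sum(len(table[p]) for p in patterns)     # compressed size in bits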

  13. A new template matching method based on contour information

    Science.gov (United States)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be applied to many real-time applications. Closed contour matching is a popular kind of template matching. This paper presents a new closed contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour information extracted from the template image. The rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point in the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. To speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple additions and multiplications. In the fine searching process
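
    The verification step described above is easy to reproduce: precompute a distance image whose pixels hold the distance to the nearest template-contour point, then score a candidate contour by the mean of the values it lands on. A sketch under assumed names, using SciPy's Euclidean distance transform:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def build_distance_image(contour_pts, shape):
            # Each pixel stores the distance to the nearest template-contour point.
            mask = np.ones(shape, dtype=bool)
            rows, cols = zip(*contour_pts)
            mask[list(rows), list(cols)] = False      # contour pixels become zeros
            return distance_transform_edt(mask)       # EDT: distance to nearest zero

        def mean_contour_distance(dist_img, candidate_pts):
            # Verification score for an RST-mapped candidate contour.
            pts = np.round(candidate_pts).astype(int)
            return dist_img[pts[:, 0], pts[:, 1]].mean()

        # Toy template: two horizontal edges of a square inside a 64x64 image.
        square = [(20, c) for c in range(20, 40)] + [(40, c) for c in range(20, 40)]
        dist_img = build_distance_image(square, (64, 64))
        score = mean_contour_distance(dist_img, np.array(square) + 2)  # shifted candidate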

  14. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that “DNABIT Compress” is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
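
    The abstract does not spell out the bit-code assignment for repeat fragments, but the baseline such schemes must beat is the naive packing of DNA at 2 bits/base. A minimal sketch of that baseline:

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

        def pack_2bit(seq):
            # Naive 2 bits/base packing; repeat-aware coders must beat this bound.
            out, acc, nbits = bytearray(), 0, 0
            for base in seq:
                acc = (acc << 2) | CODE[base]
                nbits += 2
                if nbits == 8:
                    out.append(acc)
                    acc, nbits = 0, 0
            if nbits:
                out.append(acc << (8 - nbits))   # pad the final partial byte
            return bytes(out)

        packed = pack_2bit("ACGTACGTAC")         # 10 bases -> 3 bytes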

  15. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To adapt to sense signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned through a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained to have a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the number of measurements necessary for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.

  16. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Match Pursuit CS algorithm reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  17. Compression of a Deep Competitive Network Based on Mutual Information for Underwater Acoustic Targets Recognition

    Directory of Open Access Journals (Sweden)

    Sheng Shen

    2018-04-01

    Full Text Available The accuracy of underwater acoustic target recognition via limited ship radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by the deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship radiated noise. The core ideas of the algorithm are: (1) competitive learning: by integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units share the weights in each predefined group; (2) network pruning: pruning based on mutual information is deployed to remove redundant parameters and further compress the network. Experiments based on real ship radiated noise show that the network can increase recognition accuracy with fewer informative features. The compressed deep competitive network achieves a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than state-of-the-art signal processing feature extraction methods.

  18. Priming and the guidance by visual and categorical templates in visual search

    Directory of Open Access Journals (Sweden)

    Anna eWilschut

    2014-02-01

    Full Text Available Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity towards the target feature, i.e. the extent to which observers searched selectively among items of the cued versus uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  19. Priming and the guidance by visual and categorical templates in visual search.

    Science.gov (United States)

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  20. Phoneme Compression: processing of the speech signal and effects on speech intelligibility in hearing-Impaired listeners

    NARCIS (Netherlands)

    A. Goedegebure (Andre)

    2005-01-01

    Hearing-aid users often continue to have problems with poor speech understanding in difficult acoustic conditions. Another commonly reported problem is that certain sounds become too loud whereas other sounds are still not audible. Dynamic range compression is a signal processing

  1. Evaluation of template-based models in CASP8 with standard measures

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Fidelis, Krzysztof; Moult, John; Rost, Burkhard; Tramontano, Anna

    2009-01-01

    The strategy for evaluating template-based models submitted to CASP has continuously evolved from CASP1 to CASP5, leading to a standard procedure that has been used in all subsequent editions. The established approach includes methods

  2. Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.

    Science.gov (United States)

    Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L

    2015-07-01

    The current development of cloud computing is completely changing the paradigm of knowledge extraction in huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM) is created for low computational burden, the so-called weighted fast compression distance, which provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in all cases improving on the classification provided by the majority class. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge on patient diagnosis.
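
    The weighted fast compression distance itself is not reproduced in the abstract; the classic compression-based similarity measure it refines is the normalized compression distance, sketched below with zlib as the compressor:

        import math
        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            # Normalized compression distance: ~0 for identical inputs,
            # values near 1 for unrelated inputs.
            c = lambda s: len(zlib.compress(s, 9))
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Toy electrogram-like byte strings; a small phase shift keeps them similar.
        a = bytes(128 + int(100 * math.sin(i / 5)) for i in range(500))
        b = bytes(128 + int(100 * math.sin(i / 5 + 0.1)) for i in range(500))
        print(ncd(a, a), ncd(a, b))              # same record ~0, similar record small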

  3. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Science.gov (United States)

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  4. Nanolithography based contacting method for electrical measurements on single template synthesized nanowires

    DEFF Research Database (Denmark)

    Fusil, S.; Piraux, L.; Mátéfi-Tempfli, Stefan

    2005-01-01

    A reliable method enabling electrical measurements on single nanowires prepared by electrodeposition in an alumina template is described. This technique is based on electrically controlled nanoindentation of a thin insulating resist deposited on the top face of the template filled by the nanowires....... We show that this method is very flexible, allowing us to electrically address single nanowires of controlled length down to 100 nm and of desired composition. Using this approach, current densities as large as 10 A cm were successfully injected through a point contact on a single magnetic...

  5. A diversity compression and combining technique based on channel shortening for cooperative networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2012-02-01

    The cooperative relaying process with multiple relays needs proper coordination among the communicating and relaying nodes. This coordination and the required capabilities may not be available in some wireless systems where the nodes are equipped with very basic communication hardware. We consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination captures the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than the number of relaying nodes L. In this case, the receiver can combine the N strongest signals out of L. The remaining signals are lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening principles. This technique introduces a processing block before Rake reception which compresses the energy of the L signal components onto N branches while keeping the noise level at its minimum. The proposed scheme saves system resources and makes the received signal compatible with the available hardware. Simulation results show that it outperforms the selection combining scheme. © 2012 IEEE.

  6. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    Science.gov (United States)

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSNs) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced by the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
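
    The first phase described above is standard orthogonal matching pursuit. A compact sketch of OMP (the multi-matching second phase and the matrix-inversion-free QR machinery are not reproduced here):

        import numpy as np

        def omp(A, y, k):
            # Standard OMP: greedily grow the support, refit by least squares.
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
                if j not in support:
                    support.append(j)
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ x_s
            x = np.zeros(A.shape[1])
            x[support] = x_s
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((64, 256))
        A /= np.linalg.norm(A, axis=0)                      # unit-norm atoms
        x_true = np.zeros(256)
        x_true[[5, 80, 200]] = [1.0, -0.7, 0.5]             # 3-sparse signal
        x_hat = omp(A, A @ x_true, k=3)                     # recovers the support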

  7. Elliptical tiling method to generate a 2-dimensional set of templates for gravitational wave search

    International Nuclear Information System (INIS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-01-01

    Searching for a signal depending on unknown parameters in a noisy background with matched filtering techniques always requires an analysis of the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method that allows one to cover the corresponding physical parameter space by a set of ellipses, each of them being associated with a given template. After the description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search of damped sine signals. Such waveforms are expected to be produced during the deexcitation phase of black holes - the so-called 'ringdown' signals - and are also encountered in some numerically computed supernova signals. First, the number of templates N computed by the method is similar to its analytical estimation, despite the overlaps between neighbor templates and the border effects. Moreover, N is small enough to test for the first time the performances of the set of templates for different choices of the minimal match MM, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than MM, which dramatically decreases the mean percentage of false dismissals. Indeed, it goes well below its estimated value of 1-MM³ used as input of the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to reduce the constraint on the value of MM - and indeed the number of templates and the computing power - without losing as many events as expected on average. This should be of great

  8. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM), are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.

  9. A new chest compression depth feedback algorithm for high-quality CPR based on smartphone.

    Science.gov (United States)

    Song, Yeongtak; Oh, Jaehoon; Chee, Youngjoon

    2015-01-01

    Although many smartphone application (app) programs provide education and guidance for basic life support, they do not commonly provide feedback on the chest compression depth (CCD) and rate. The validation of its accuracy has not been reported to date. This study was a feasibility assessment of use of the smartphone as a CCD feedback device. In this study, we proposed the concept of a new real-time CCD estimation algorithm using a smartphone and evaluated the accuracy of the algorithm. Using the double integration of the acceleration signal, which was obtained from the accelerometer in the smartphone, we estimated the CCD in real time. Based on its periodicity, we removed the bias error from the accelerometer. To evaluate this instrument's accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four types of grasping orientations with various compression directions. We used the difference between the reference measurement and the estimated depth as the error. The error was calculated for each compression. When chest compressions were performed with adequate depth for the patient who was lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm. When the patient was lying on an oblique floor, the mean (standard deviation) of the errors was 3.13 (1.88) mm. The error of the CCD estimation was tolerable for the algorithm to be used in the smartphone-based CCD feedback app to compress more than 51 mm, which is the 2010 American Heart Association guideline.
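
    The core estimate is a double integration of the vertical acceleration. In the sketch below, simple mean removal stands in for the paper's periodicity-based bias removal, and the signal parameters are illustrative:

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        def depth_from_accel(acc, fs):
            # Double-integrate vertical acceleration to displacement; mean
            # removal is a crude stand-in for periodicity-based bias removal.
            acc = acc - acc.mean()
            vel = cumulative_trapezoid(acc, dx=1 / fs, initial=0.0)
            vel = vel - vel.mean()                       # suppress integration drift
            disp = cumulative_trapezoid(vel, dx=1 / fs, initial=0.0)
            return disp.max() - disp.min()               # peak-to-peak depth (m)

        fs = 100.0
        t = np.arange(0, 2, 1 / fs)                      # 2 Hz compressions for 2 s
        acc = -(2 * np.pi * 2) ** 2 * 0.025 * np.sin(2 * np.pi * 2 * t)
        print(depth_from_accel(acc, fs))                 # close to 0.05 m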

  10. Disk-based compression of data from genome sequencing.

    Science.gov (United States)

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions for this problem are disk based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  12. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely expressed. The designs of the sparse matrix and the measurement matrix are accomplished by expressing the echo signal sparsely, and subsequently, the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) on the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.

  13. Templated Chemistry for Sequence-Specific Fluorogenic Detection of Duplex DNA

    Science.gov (United States)

    Li, Hao; Franzini, Raphael M.; Bruner, Christopher; Kool, Eric T.

    2015-01-01

    We describe the development of templated fluorogenic chemistry for detection of specific sequences of duplex DNA in solution. In this approach, two modified homopyrimidine oligodeoxynucleotide probes are designed to bind by triple helix formation at adjacent positions on a specific purine-rich target sequence of duplex DNA. One fluorescein-labeled probe contains an α-azidoether linker to a fluorescence quencher; the second (trigger) probe carries a triarylphosphine, designed to reduce the azide and cleave the linker. The data showed that at pH 5.6 these probes yielded a strong fluorescence signal within minutes on addition to a complementary homopurine duplex DNA target. The signal increased by a factor of ca. 60, and was completely dependent on the presence of the target DNA. Replacement of cytosine in the probes with pseudoisocytosine allowed the templated chemistry to proceed readily at pH 7. Single nucleotide mismatches in the target oligonucleotide slowed the templated reaction considerably, demonstrating high sequence selectivity. The use of templated fluorogenic chemistry for detection of duplex DNAs has not been previously reported and may allow detection of double stranded DNA, at least for homopurine-homopyrimidine target sites, under native, non-disturbing conditions. PMID:20859985

  14. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  15. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  16. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictors, linear predictors, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with existing methods, and in some cases it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over state-of-the-art compression methods.

  17. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  18. Effective Low-Power Wearable Wireless Surface EMG Sensor Design Based on Analog-Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Mohammadreza Balouchestani

    2014-12-01

    Full Text Available Surface electromyography (sEMG) is a non-invasive measurement process that does not involve tools and instruments that break the skin or physically enter the body to investigate and evaluate the muscular activities produced by skeletal muscles. The main drawbacks of existing sEMG systems are: (1) they are not able to provide real-time monitoring; (2) they suffer from long processing times and low speed; (3) they are not effective for wireless healthcare systems because they consume considerable power. In this work, we present an analog-based Compressed Sensing (CS) architecture, which consists of three novel algorithms for the design and implementation of a wearable wireless sEMG bio-sensor. At the transmitter side, two new algorithms are presented in order to apply the analog-CS theory before the Analog to Digital Converter (ADC). At the receiver side, a robust reconstruction algorithm based on a combination of ℓ1-ℓ1-optimization and the Block Sparse Bayesian Learning (BSBL) framework is presented to reconstruct the original bio-signals from the compressed bio-signals. The proposed architecture allows reducing the sampling rate to 25% of the Nyquist Rate (NR). In addition, the proposed architecture reduces the power consumption to 40%, the Percentage Residual Difference (PRD) to 24%, the Root Mean Squared Error (RMSE) to 2%, and the computation time from 22 s to 9.01 s, which provides a good basis for establishing wearable wireless healthcare systems. The proposed architecture achieves robust performance at low Signal-to-Noise Ratio (SNR) for the reconstruction process.
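
    For reference, the quality figures quoted in records like this one usually follow the standard definitions below (conventions vary slightly; some authors subtract the mean of the reference signal inside the PRD):

        import numpy as np

        def prd(x, x_hat):
            # Percentage residual difference (one common convention).
            return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

        def rmse(x, x_hat):
            return np.sqrt(np.mean((x - x_hat) ** 2))

        def snr_db(x, x_hat):
            return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))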

  19. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received much attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
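
    A minimal sketch of the band-grouping idea: split the cube into contiguous subspaces wherever the correlation between adjacent bands drops below a threshold. The threshold and the grouping rule are illustrative stand-ins for the paper's adaptive criterion:

        import numpy as np

        def group_bands(cube, threshold=0.95):
            # cube: (rows, cols, bands). Start a new subspace whenever the
            # correlation between adjacent bands drops below the threshold.
            bands = cube.reshape(-1, cube.shape[2]).T
            groups, start = [], 0
            for b in range(1, bands.shape[0]):
                if np.corrcoef(bands[b - 1], bands[b])[0, 1] < threshold:
                    groups.append(list(range(start, b)))
                    start = b
            groups.append(list(range(start, bands.shape[0])))
            return groups

        rng = np.random.default_rng(0)
        base = rng.random((32, 32, 1))
        cube = base + 0.05 * rng.random((32, 32, 20))   # 20 highly correlated bands
        print(group_bands(cube))                        # one group covering all bands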

  20. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  1. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-02-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  2. Broadband infrared metamaterial absorber based on anodic aluminum oxide template

    Science.gov (United States)

    Yang, Jingfan; Qu, Shaobo; Ma, Hua; Wang, Jiafu; Yang, Shen; Pang, Yongqiang

    2018-05-01

    In this work, a broadband infrared metamaterial absorber is proposed based on a trapezoid-shaped anodic aluminum oxide (AAO) template. Unlike traditional metamaterial absorbers constructed from a metal-dielectric-metal sandwich structure, our proposed absorber is composed of a trapezoid-shaped AAO template with metallic nanowires inside. The infrared absorption efficiency is numerically calculated and a mechanism analysis is given in the paper. Owing to the superposition of multiple resonances produced by nanowires of different heights, the infrared metamaterial absorber maintains high absorption efficiency over a broad working wavelength band from 3.4 μm to 6.1 μm. In addition, the resonance wavelength is associated with the height of the nanowires, which indicates that the resonance wavelength can be tuned flexibly by changing the heights of the nanowires. This kind of design can also be adapted to other wavelength regions.

  3. ECG biometric identification: A compression based approach.

    Science.gov (United States)

    Bras, Susana; Pinho, Armando J

    2015-08-01

    Using the electrocardiogram (ECG) signal to identify and/or authenticate persons is a problem still lacking a satisfactory solution. Yet the ECG possesses characteristics that are unique or difficult to obtain from other signals used in biometrics: (1) it requires contact and liveliness for acquisition; (2) it changes under stress, rendering it potentially useless if acquired under threat. Our main objective is to present an innovative and robust solution to the above-mentioned problem. To successfully achieve this goal, we rely on information-theoretic data models for data compression and on similarity metrics related to the approximation of the Kolmogorov complexity. The proposed measure allows the comparison of two (or more) ECG segments, without having to follow traditional approaches that require heartbeat segmentation (described as highly influenced by external or internal interferences). As a first approach, the method was able to cluster the data into three groups: identical record, same participant, different participant, by the stratification of the proposed measure, with values near 0 for the same participant and closer to 1 for different participants. A leave-one-out strategy was implemented in order to identify each participant in the database based on his/her ECG. A 1NN classifier was implemented, using the method proposed in this work as the distance measure. The classifier was able to identify almost all participants correctly, with an accuracy of 99% in the database used.

  4. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  5. Preparation of Three-Dimensional Graphene Foams Using Powder Metallurgy Templates.

    Science.gov (United States)

    Sha, Junwei; Gao, Caitian; Lee, Seoung-Ki; Li, Yilun; Zhao, Naiqin; Tour, James M

    2016-01-26

    A simple and scalable method which combines traditional powder metallurgy and chemical vapor deposition is developed for the synthesis of mesoporous free-standing 3D graphene foams. The powder metallurgy templates for 3D graphene foams (PMT-GFs) consist of particle-like carbon shells which are connected by multilayered graphene that shows high specific surface area (1080 m² g⁻¹), good crystallization, good electrical conductivity (13.8 S cm⁻¹), and a mechanically robust structure. The PMT-GFs did not break under direct flushing with DI water, and they were able to recover after being compressed. These properties indicate promising applications of PMT-GFs for fields requiring 3D carbon frameworks such as in energy-based electrodes and mechanical dampening.

  6. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize the impact of compression on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
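
    As a rough illustration of the coefficient-zeroing step described above, the following sketch keeps only the largest-magnitude wavelet coefficients and reconstructs the signal. It assumes the PyWavelets package and a db4 wavelet, neither of which is specified in the report:

```python
import numpy as np
import pywt  # PyWavelets (assumed; the report does not name a toolkit)

def threshold_compress(signal, keep_fraction=0.2, wavelet="db4", level=4):
    """Zero all but the largest `keep_fraction` of wavelet coefficients,
    then reconstruct; small high-frequency (noise) terms are discarded."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    magnitudes = np.abs(np.concatenate(coeffs))
    cutoff = np.quantile(magnitudes, 1.0 - keep_fraction)
    thresholded = [pywt.threshold(c, cutoff, mode="hard") for c in coeffs]
    return pywt.waverec(thresholded, wavelet)
```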

  7. Object Detection Based on Template Matching through Use of Best-So-Far ABC

    Directory of Open Access Journals (Sweden)

    Anan Banharnsakun

    2014-01-01

    Full Text Available Best-so-far ABC is a modified version of the artificial bee colony (ABC algorithm used for optimization tasks. This algorithm is one of the swarm intelligence (SI algorithms proposed in recent literature, in which the results demonstrated that the best-so-far ABC can produce higher quality solutions with faster convergence than either the ordinary ABC or the current state-of-the-art ABC-based algorithm. In this work, we aim to apply the best-so-far ABC-based approach for object detection based on template matching by using the difference between the RGB level histograms corresponding to the target object and the template object as the objective function. Results confirm that the proposed method was successful in both detecting objects and optimizing the time used to reach the solution.
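
    The objective function described above — the difference between the RGB histograms of a candidate region and the template — can be sketched as follows; the bin count and the distance norm are assumptions, not taken from the paper. The best-so-far ABC search then minimizes this objective over candidate positions in the image:

```python
import numpy as np

def histogram_objective(candidate, template, bins=32):
    """Sum over R, G, B of absolute differences between normalized
    intensity histograms; smaller values mean a better match."""
    total = 0.0
    for ch in range(3):
        h1, _ = np.histogram(candidate[..., ch], bins=bins, range=(0, 256), density=True)
        h2, _ = np.histogram(template[..., ch], bins=bins, range=(0, 256), density=True)
        total += float(np.abs(h1 - h2).sum())
    return total
```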

  8. Binaural model-based dynamic-range compression.

    Science.gov (United States)

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners were aided individually with the algorithms for the two experimental parts, respectively. A small preference towards the proposed model-based algorithm was found in the direct quality comparison. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
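
    For readers unfamiliar with dynamic-range compression, the basic static input-output rule that such algorithms elaborate on is sketched below; the threshold and ratio values are illustrative assumptions, and this is not the binaural model-based algorithm itself:

```python
import numpy as np

def static_gain_db(input_level_db, threshold_db=-40.0, ratio=3.0):
    """Above threshold, output level rises only 1/ratio dB per input dB;
    the (negative) return value is the gain applied by the compressor."""
    excess = np.maximum(input_level_db - threshold_db, 0.0)
    return -excess * (1.0 - 1.0 / ratio)
```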

  9. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  10. A reliable spatially normalized template of the human spinal cord--Applications to automated white matter/gray matter segmentation and tensor-based morphometry (TBM) mapping of gray matter alterations occurring with age.

    Science.gov (United States)

    Taso, Manuel; Le Troter, Arnaud; Sdika, Michaël; Cohen-Adad, Julien; Arnoux, Pierre-Jean; Guye, Maxime; Ranjeva, Jean-Philippe; Callot, Virginie

    2015-08-15

    Recently, a T2*-weighted template and probabilistic atlas of the white and gray matter (WM, GM) of the spinal cord (SC) have been reported. Such a template can be used as tissue prior for automated WM/GM segmentation but can also provide a common reference and normalized space for group studies. Here, a new template has been created (AMU40), and the accuracy of automatic template-based WM/GM segmentation was quantified. The feasibility of tensor-based morphometry (TBM) for studying voxel-wise morphological differences of the SC between young and elderly healthy volunteers was also investigated. Sixty-five healthy subjects were divided into young (n = 40, age < 50 years) and elderly (age > 50 years, mean age 57 ± 5 years) groups and scanned at 3T using an axial high-resolution T2*-weighted sequence. Inhomogeneity correction and affine intensity normalization of the SC and cerebrospinal fluid (CSF) signal intensities across slices were performed prior to both construction of the AMU40 template and WM/GM template-based segmentation. The segmentation was achieved using non-linear spatial normalization of T2*-w MR images to the AMU40 template. Validation of WM/GM segmentations was performed with a leave-one-out procedure by calculating DICE similarity coefficients between manual and automated WM/GM masks. SC morphological differences between young and elderly healthy volunteers were assessed using the same non-linear spatial normalization of the subjects' MRI to a common template, derivation of the Jacobian determinant maps from the warping fields, and a TBM analysis. Results demonstrated robust WM/GM automated segmentation, with mean DICE values greater than 0.8. Concerning the TBM analysis, an anterior GM atrophy was highlighted in elderly volunteers, thereby demonstrating, for the first time, the feasibility of studying local structural alterations in the SC using tensor-based morphometry. This holds great promise for studies of morphological impairment occurring in several central nervous system
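
    The DICE similarity coefficient used for validation above is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DICE = 2|A ∩ B| / (|A| + |B|); 1.0 for identical masks, 0.0 for disjoint."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```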

  11. Perl Template Toolkit

    CERN Document Server

    Chamberlain, Darren; Cross, David; Torkington, Nathan; Diaz, tatiana Apandi

    2004-01-01

    Among the many different approaches to "templating" with Perl--such as Embperl, Mason, HTML::Template, and hundreds of other lesser known systems--the Template Toolkit is widely recognized as one of the most versatile. Like other templating systems, the Template Toolkit allows programmers to embed Perl code and custom macros into HTML documents in order to create customized documents on the fly. But unlike the others, the Template Toolkit is as facile at producing HTML as it is at producing XML, PDF, or any other output format. And because it has its own simple templating language, templates

  12. Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF

    Directory of Open Access Journals (Sweden)

    Yu Ding

    2018-01-01

    Full Text Available Hydraulic servo systems play an important role in electromechanical systems and are crucial to machinery such as engineering machinery, metallurgical machinery, ships, and other equipment. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for hydraulic servo systems using the compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain and substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method is able to enhance the robustness of the feature information against interference while selecting a feature combination with balanced information-expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model is constructed in HyPneu and Simulink, and three fault modes are injected to generate the validation data.

  13. Statistical mechanics approach to 1-bit compressed sensing

    International Nuclear Information System (INIS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2013-01-01

    Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. We analyse an l1-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica-symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good consistency with experimental results of an approximate recovery algorithm developed earlier. This suggests that the l1-based recovery problem typically has many local optima of similar recovery accuracy, which can be achieved by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large, the new algorithm offers better performance than the above-mentioned scheme and does so with a lower computational cost. (paper)

  14. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
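
    The optimisation objective sketched above — the reconstruction error when a bounded selection of signal samples is interpolated — can be evaluated as below for any candidate subset of kept samples; the dynamic-programming search for the optimal subset is not reproduced here:

```python
import numpy as np

def reconstruction_error(signal, kept_indices):
    """Squared error when only samples at kept_indices are stored and the
    remainder are recovered by linear interpolation (endpoints must be kept)."""
    signal = np.asarray(signal, dtype=float)
    idx = np.unique(kept_indices)  # sorted, duplicate-free sample positions
    recon = np.interp(np.arange(len(signal)), idx, signal[idx])
    return float(np.sum((signal - recon) ** 2))
```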

  15. An experimental digital consumer recorder for MPEG-coded video signals

    NARCIS (Netherlands)

    Saeijs, R.W.J.J.; With, de P.H.N.; Rijckaert, A.M.A.; Wong, C.

    1995-01-01

    The concept and real-time implementation of an experimental home-use digital recorder is presented, capable of recording MPEG-compressed video signals. The system has small recording mechanics based on the DVC standard and it uses MPEG compression for trick-mode signals as well

  16. Facile fabrication of poly(L-lactic acid) microsphere-incorporated calcium alginate/hydroxyapatite porous scaffolds based on Pickering emulsion templates.

    Science.gov (United States)

    Hu, Yang; Ma, Shanshan; Yang, Zhuohong; Zhou, Wuyi; Du, Zhengshan; Huang, Jian; Yi, Huan; Wang, Chaoyang

    2016-04-01

    In this study, we develop a facile one-pot approach to the fabrication of poly(L-lactic acid) (PLLA) microsphere-incorporated calcium alginate (ALG-Ca)/hydroxyapatite (HAp) porous scaffolds based on HAp nanoparticle-stabilized oil-in-water Pickering emulsion templates, which contain alginate in the aqueous phase and PLLA in the oil phase. The emulsion aqueous phase is solidified by in situ gelation of alginate with Ca²⁺ released from HAp by decreasing pH with slow hydrolysis of D-gluconic acid δ-lactone (GDL) to produce emulsion droplet-incorporated gels, followed by freeze-drying to form porous scaffolds containing microspheres. The pore structure of porous scaffolds can be adjusted by varying the HAp or GDL concentration. The compressive tests show that the increase of HAp or GDL concentration is beneficial to improve the compressive property of porous scaffolds, while excessive HAp can lead to a decrease in compressive property. Moreover, the swelling behavior studies display that the swelling ratios of porous scaffolds reduce with increasing HAp or GDL concentration. Furthermore, hydrophobic drug ibuprofen (IBU) and hydrophilic drug bovine serum albumin (BSA) are loaded into the microspheres and scaffold matrix, respectively. In vitro drug release results indicate that BSA has a rapid release while IBU has a sustained release in the dual drug-loaded scaffolds. In vitro cell culture experiments verify that mouse bone mesenchymal stem cells can proliferate on the porous scaffolds well, indicating the good biocompatibility of porous scaffolds. All these results demonstrate that the PLLA microsphere-incorporated ALG-Ca/HAp porous scaffolds have a promising potential for tissue engineering and drug delivery applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Informational analysis for compressive sampling in radar imaging.

    Science.gov (United States)

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling, or compressed sensing (CS), works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.

  18. Light Weight Biomorphous Cellular Ceramics from Cellulose Templates

    Science.gov (United States)

    Singh, Mrityunjay; Yee, Bo-Moon; Gray, Hugh R. (Technical Monitor)

    2003-01-01

    Biomorphous ceramics are a new class of materials that can be fabricated from cellulose templates derived from natural biopolymers. These biopolymers are abundantly available in nature and are produced by the photosynthesis process. The wood-cellulose-derived carbon templates have three-dimensional interconnectivity. A wide variety of non-oxide and oxide based ceramics have been fabricated by template conversion using infiltration and reaction-based processes. The cellular anatomy of the cellulose templates plays a key role in determining the processing parameters (pyrolysis, infiltration conditions, etc.) and the resulting ceramic materials. The processing approach, microstructure, and mechanical properties of the biomorphous cellular ceramics (silicon carbide and oxide based) are discussed.

  19. Continuous diffusion signal, EAP and ODF estimation via Compressive Sensing in diffusion MRI.

    Science.gov (United States)

    Merlet, Sylvain L; Deriche, Rachid

    2013-07-01

    In this paper, we exploit the ability of compressed sensing (CS) to recover the whole 3D diffusion MRI (dMRI) signal from a limited number of samples while efficiently recovering important diffusion features such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF). Some attempts to use CS in estimating diffusion signals have been made recently. However, these were mainly experimental insights into CS capabilities in dMRI, and the CS theory has not been fully exploited. In this work, we also propose to study the impact of sparsity, incoherence and the RIP property on the reconstruction of diffusion signals. We show that an efficient use of the CS theory makes it possible to drastically reduce the number of measurements commonly used in dMRI acquisitions. Only 20-30 measurements, optimally spread over several b-value shells, are shown to be necessary, which is fewer than in previous attempts to recover the diffusion signal using CS. This opens an attractive perspective for measuring diffusion signals in white matter within a reduced acquisition time and shows that CS holds great promise and opens new and exciting perspectives in diffusion MRI (dMRI). Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Template model inspired leg force feedback based control can assist human walking.

    Science.gov (United States)

    Zhao, Guoping; Sharbafi, Maziar; Vlutters, Mark; van Asseldonk, Edwin; Seyfarth, Andre

    2017-07-01

    We present a novel control approach for assistive lower-extremity exoskeletons. In particular, we implement a virtual pivot point (VPP) template model inspired leg force feedback based controller on a lower-extremity powered exoskeleton (LOPES II) and demonstrate that it can effectively assist humans during walking. It has been shown that the VPP template model is capable of stabilizing the trunk and reproducing a human-like hip torque during the stance phase of walking. With leg force and joint angle feedback inspired by the VPP template model, our controller provides hip and knee torque assistance during the stance phase. A pilot experiment was conducted with four healthy subjects. Joint kinematics, leg muscle electromyography (EMG), and metabolic cost were measured during walking with and without assistance. Results show that, for 0.6 m/s walking, our controller can reduce leg muscle activations, especially for the medial gastrocnemius (about 16.0%), while hip and knee joint kinematics remain similar to the condition without the controller. The controller also reduces the net metabolic cost of walking by 10%. This paper demonstrates the walking assistance benefits of the VPP template model for the first time. The support of human walking is achieved by feedback of the leg force to the control of the hip and knee joints. This provides a framework for investigating walking assistance control in the future.

  1. An Agent-Based Modeling Template for a Cohort of Veterans with Diabetic Retinopathy.

    Directory of Open Access Journals (Sweden)

    Theodore Eugene Day

    Full Text Available Agent-based models are valuable for examining systems where large numbers of discrete individuals interact with each other, or with some environment. Diabetic Veterans seeking eye care at a Veterans Administration hospital represent one such cohort. The objective of this study was to develop an agent-based template to be used as a model for a patient with diabetic retinopathy (DR). This template may be replicated arbitrarily many times in order to generate a large cohort which is representative of a real-world population, upon which in-silico experimentation may be conducted. Agent-based template development was performed in the Java-based computer simulation suite AnyLogic Professional 6.6. The model was informed by medical data abstracted from 535 patient records representing a retrospective cohort of current patients of the VA St. Louis Healthcare System Eye clinic. Logistic regression was performed to determine the predictors associated with advancing stages of DR. Predicted probabilities obtained from logistic regression were used to generate the stage of DR in the simulated cohort. The simulated cohort of DR patients exhibited no significant deviation from the test population of real-world patients in proportion of stage of DR, duration of diabetes mellitus (DM), or the other abstracted predictors. Simulated patients after 10 years were significantly more likely to exhibit proliferative DR (P<0.001). Agent-based modeling is an emerging platform, capable of simulating large cohorts of individuals based on manageable data abstraction efforts. The modeling method described may be useful in simulating many different conditions where the course of disease is described in categorical stages.
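
    The template-generation step described above — turning logistic-regression predicted probabilities into a simulated patient's DR stage — amounts to a categorical draw. A minimal sketch with hypothetical stage probabilities (the study's actual predictors and coefficients are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_dr_stage(stage_probs):
    """Draw one agent's DR stage index from predicted stage probabilities."""
    return int(rng.choice(len(stage_probs), p=stage_probs))

# Hypothetical probabilities for [no DR, mild, moderate, severe, proliferative]
cohort = [sample_dr_stage([0.45, 0.25, 0.15, 0.10, 0.05]) for _ in range(535)]
```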

  2. One-pot synthesis of magnetic hybrid materials based on ovoid-like carboxymethyl-cellulose/cetyltrimethylammonium-bromide templates

    Energy Technology Data Exchange (ETDEWEB)

    Torres-Martínez, Nubia E. [Universidad Autónoma de Nuevo León, Facultad de Ingeniería Mecánica y Eléctrica, San Nicolás de los Garza, 66450 Nuevo León (Mexico); Garza-Navarro, M.A., E-mail: marco.garzanr@uanl.edu.mx [Universidad Autónoma de Nuevo León, Facultad de Ingeniería Mecánica y Eléctrica, San Nicolás de los Garza, 66450 Nuevo León (Mexico); Universidad Autónoma de Nuevo León, Centro de Innovación, Investigación y Desarrollo en Ingeniería y Tecnología, Apodaca, 66600 Nuevo León (Mexico); Lucio-Porto, Raúl [Université de Nantes, CNRS, Institut des Matériaux Jean Rouxel (IMN), 2 rue de la Houssinière, BP32229, 44322 Nantes Cedex 3 (France); and others

    2013-09-16

    A novel one-pot synthetic procedure to obtain magnetic hybrid nanostructured materials (HNM), based on magnetic spinel-metal-oxide (SMO) nanoparticles stabilized in ovoid-like carboxymethyl-cellulose (CMC)/cetyltrimethylammonium-bromide (CTAB) templates, is reported. The HNM were synthesized from the controlled hydrolysis of inorganic salts of Fe (II) and Fe (III) into aqueous dissolutions of CMC and CTAB. The synthesized HNM were characterized by transmission electron microscopy, Fourier transform infrared spectroscopy and static magnetic measurements. The experimental evidence suggests that, due to the competition between CTAB molecules and SMO nanoparticles to occupy CMC intermolecular sites near its carboxylate functional groups, the size of both the SMO nanoparticles and the ovoid-like CMC/CTAB templates can be tuned by varying the CTAB:SMO weight ratio. Moreover, it was found that the magnetic response of the HNM depends on the confinement degree of the SMO nanoparticles into the CMC/CTAB template. Hence, their magnetic characteristics can be adjusted by controlling the size of the template and the quantity, distribution and size of the SMO nanoparticles within the template. - Highlights: • The synthesis of magnetic hybrid materials is reported. • The hybrid materials were synthesized following a novel one-pot procedure. • The magnetic nanoparticles were stabilized in ovoid-like templates. • The size of the templates was tuned adjusting nanoparticles weight content. • The magnetic properties of hybrid materials depend on the size of the template.

  3. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
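
    The core idea — assigning short binary codes to DNA bases — can be illustrated with a naive fixed 2-bit encoding. Note this is a simplification for illustration only, not the segment-wise bit assignment used by DNABIT Compress:

```python
BASE_CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base (4 bases per byte),
    a 4x reduction versus 1 byte per base."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        value = 0
        for base in chunk:
            value = (value << 2) | BASE_CODE[base]
        value <<= 2 * (4 - len(chunk))  # left-align a trailing partial chunk
        out.append(value)
    return bytes(out)

packed = pack_dna("ACGTACGTAC")  # 10 bases -> 3 bytes
```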

  4. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

    Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal. These measurements are usually much smaller in number than the samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways. In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.

  5. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time base and compressive sampling. In particular, the high-resolution time base is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time base, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
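
    The acquisition strategy pairs a high-resolution time base with randomly chosen sampling instants; the sketch below generates such instants on a uniform high-resolution grid (the parameter values are illustrative, and the CS reconstruction stage is not shown):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def random_instants(time_base_hz, duration_s, n_samples):
    """Choose n_samples random points on a high-resolution time grid;
    a CS solver later recovers the signal from the samples taken there."""
    n_slots = int(time_base_hz * duration_s)
    slots = rng.choice(n_slots, size=n_samples, replace=False)
    return np.sort(slots) / time_base_hz  # sampling instants in seconds
```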

  6. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    Science.gov (United States)

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast

  7. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data like spectrum or average power and on removal of embedded watermarks. Furthermore we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psycho acoustic model has to be applied. Our model can be used for audio watermarking evaluation of numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  8. A color based face detection system using multiple templates

    Institute of Scientific and Technical Information of China (English)

    王涛; 卜佳俊; 陈纯

    2003-01-01

    A color based system using multiple templates was developed and implemented for detecting human faces in color images. The algorithm consists of three image processing steps. The first step is human skin color statistics. Then it separates skin regions from non-skin regions. After that, it locates the frontal human face(s) within the skin regions. In the first step, 250 skin samples from persons of different ethnicities are used to determine the color distribution of human skin in chromatic color space in order to get a chroma chart showing likelihoods of skin colors. This chroma chart is used to generate, from the original color image, a gray scale image whose gray value at a pixel shows its likelihood of representing the skin. The algorithm uses an adaptive thresholding process to achieve the optimal threshold value for dividing the gray scale image into separate skin regions from non skin regions. Finally, multiple face templates matching is used to determine if a given skin region represents a frontal human face or not. Test of the system with more than 400 color images showed that the resulting detection rate was 83%, which is better than most color-based face detection systems. The average speed for face detection is 0.8 second/image (400×300 pixels) on a Pentium 3 (800MHz) PC.

  10. Nanowires and nanostructures fabrication using template methods

    DEFF Research Database (Denmark)

    Mátéfi-Tempfli, Stefan; Mátéfi-Tempfli, M.; Vlad, A.

    2009-01-01

    One of the great challenges of today is to find reliable techniques for the fabrication of nanomaterials and nanostructures. Methods based on template synthesis and on self-organization are the most promising due to their ease and low cost. This paper focuses on the electrochemical synthesis of nanowires and nanostructures using nanoporous host materials such as supported anodic aluminum oxide, considering it as a key template for nanowire-based devices. New ways are opened for applications by combining such template synthesis methods with nanolithographic techniques.

  11. [Manufacture method and clinical application of minimally invasive dental implant guide template based on registration technology].

    Science.gov (United States)

    Lin, Zeming; He, Bingwei; Chen, Jiang; Du, Zhibin; Zheng, Jingyi; Li, Yanqin

    2012-08-01

    To guide doctors in precisely positioning surgical operations, a new production method for a minimally invasive implant guide template is presented. The mandible of the patient was scanned by a CT scanner, and a three-dimensional jaw bone model was constructed based on the CT image data. The professional dental implant software Simplant was used to simulate the implant placement on the three-dimensional CT model and to determine the location and depth of the implants. At the same time, the dental plaster models were scanned by a stereo vision system to build the oral mucosa model. Next, curvature registration technology was used to fuse the oral mucosa model and the CT model, so that the designed position of the implant relative to the oral mucosa could be determined. The minimally invasive implant guide template was designed in 3-Matic software according to the designed implant position and the oral mucosa model. Finally, the template was produced by rapid prototyping. The three-dimensional registration technology was effective for fusing the CT data and the dental plaster data, and the template was accurate enough to guide doctors during actual implantation without cutting off the mucosa. Guide templates fabricated by the combined use of three-dimensional registration, Simplant simulation and rapid prototyping are accurate, enable minimally invasive and precise implant surgery, and are worthy of clinical use.

  12. Template Assembly for Detailed Urban Reconstruction

    KAUST Repository

    Nan, Liangliang

    2015-05-04

    We propose a new framework to reconstruct building details by automatically assembling 3D templates on coarse textured building models. In a preprocessing step, we generate an initial coarse model to approximate a point cloud computed using Structure from Motion and Multi View Stereo, and we model a set of 3D templates of facade details. Next, we optimize the initial coarse model to enforce consistency between geometry and appearance (texture images). Then, building details are reconstructed by assembling templates on the textured faces of the coarse model. The 3D templates are automatically chosen and located by our optimization-based template assembly algorithm that balances image matching and structural regularity. In the results, we demonstrate how our framework can enrich the details of coarse models using various data sets.

  13. A Fully Integrated Wireless Compressed Sensing Neural Signal Acquisition System for Chronic Recording and Brain Machine Interface.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Xiong, Tao; Richardson, Andrew G; Lucas, Timothy H; Chin, Peter S; Etienne-Cummings, Ralph; Tran, Trac D; Van der Spiegel, Jan

    2016-07-18

    Reliable, multi-channel neural recording is critical to neuroscience research and clinical treatment. However, most hardware development of fully integrated, multi-channel wireless neural recorders to date is still in the proof-of-concept stage. To be ready for practical use, the trade-offs between performance, power consumption, device size, robustness, and compatibility need to be carefully taken into account. This paper presents an optimized wireless compressed sensing neural signal recording system. The system takes advantage of both custom integrated circuits and universally compatible wireless solutions. The proposed system includes an implantable wireless system-on-chip (SoC) and an external wireless relay. The SoC integrates 16-channel low-noise neural amplifiers, programmable filters and gain stages, a SAR ADC, a real-time compressed sensing module, and a near-field wireless power and data transmission link. The external relay integrates a 32-bit low-power microcontroller with a Bluetooth 4.0 wireless module, a programming interface, and an inductive charging unit. The SoC achieves high signal recording quality with minimized power consumption, while reducing the risk of infection from through-skin connectors. The external relay maximizes compatibility and programmability. The proposed compressed sensing module is highly configurable, featuring an SNDR of 9.78 dB with a compression ratio of 8×. The SoC has been fabricated in a 180 nm standard CMOS technology, occupying 2.1 mm × 0.6 mm of silicon area. A pre-implantable system has been assembled to demonstrate the proposed paradigm. The developed system has been successfully used for long-term wireless neural recording in freely behaving rhesus monkeys.
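
    Conceptually, an on-chip compressed sensing module like the one above projects each N-sample frame onto M = N/8 pseudo-random vectors. A software sketch of that measurement step follows; the ±1 Bernoulli matrix is an assumption for illustration, as the chip's actual measurement matrix is not described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def cs_measure(frame, compression_ratio=8):
    """Compress an N-sample frame to N // compression_ratio measurements
    via a random +/-1 projection; returns the measurements and the matrix
    needed later for sparse reconstruction."""
    n = len(frame)
    phi = rng.choice([-1.0, 1.0], size=(n // compression_ratio, n))
    return phi @ np.asarray(frame, dtype=float), phi
```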

  14. The effect of breast compression on mass conspicuity in digital mammography

    International Nuclear Information System (INIS)

    Saunders, Robert S. Jr; Samei, Ehsan

    2008-01-01

    This study analyzed how the inherent quality of diagnostic information in digital mammography could be affected by breast compression. A digital mammography system was modeled using a Monte Carlo algorithm based on the Penelope program, which has been successfully used to model several medical imaging systems. First, the Monte Carlo program was validated against previous measurements and simulations. Once validated, the Monte Carlo software modeled a digital mammography system by tracking photons through a voxelized software breast phantom, containing anatomical structures and breast masses, and following photons until they were absorbed by a selenium-based flat-panel detector. Simulations were performed for two compression conditions (standard compression and 12.5% reduced compression) and three photon flux conditions (constant flux, constant detector signal, and constant glandular dose). The results showed that reduced compression led to higher scatter fractions, as expected. For the constant photon flux condition, decreased compression also reduced glandular dose. For constant glandular dose, the SdNR for a 4 cm breast was 0.60±0.11 and 0.62±0.11 under standard and reduced compressions, respectively. For the 6 cm case with constant glandular dose, the SdNR was 0.50±0.11 and 0.49±0.10 under standard and reduced compressions, respectively. The results suggest that if a particular imaging system can handle an approximately 10% increase in total tube output and 10% decrease in detector signal, breast compression can be reduced by about 12% in terms of breast thickness with little impact on image quality or dose.

  15. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, named Full Search, to fast adaptive algorithms like Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
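
    The baseline Full Search algorithm mentioned above exhaustively tests every candidate displacement within a search window; a minimal sum-of-absolute-differences (SAD) sketch for grayscale frames:

```python
import numpy as np

def full_search(block, ref_frame, top, left, radius=7):
    """Return the motion vector (dy, dx) minimising SAD between `block`
    (located at (top, left) in the current frame) and the reference frame."""
    n, m = block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - n and 0 <= x <= ref_frame.shape[1] - m:
                # cast to int so uint8 pixel differences cannot wrap around
                sad = np.abs(block.astype(int) - ref_frame[y:y+n, x:x+m].astype(int)).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```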

  16. Templated sequence insertion polymorphisms in the human genome

    Science.gov (United States)

    Onozawa, Masahiro; Aplan, Peter

    2016-11-01

    Templated Sequence Insertion Polymorphism (TSIP) is a recently described form of polymorphism recognized in the human genome, in which a sequence that is templated from a distant genomic region is inserted into the genome, seemingly at random. TSIPs can be grouped into two classes based on nucleotide sequence features at the insertion junctions; Class 1 TSIPs show features of insertions that are mediated via the LINE-1 ORF2 protein, including 1) target-site duplication (TSD), 2) polyadenylation 10-30 nucleotides downstream of a “cryptic” polyadenylation signal, and 3) preference for insertion at a 5’-TTTT/A-3’ sequence. In contrast, class 2 TSIPs show features consistent with repair of a DNA double-strand break via insertion of a DNA “patch” that is derived from a distant genomic region. Survey of a large number of normal human volunteers demonstrates that most individuals have 25-30 TSIPs, and that these TSIPs track with specific geographic regions. Similar to other forms of human polymorphism, we suspect that these TSIPs may be important for the generation of human diversity and genetic diseases.

  17. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    Science.gov (United States)

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  18. Compressed Sensing Methods in Radio Receivers Exposed to Noise and Interference

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek

    , there is a problem of interference, which makes digitization of radio receivers even more difficult. High-order low-pass filters are needed to remove interfering signals and secure a high-quality reception. In the mid-2000s a new method of signal acquisition, called compressed sensing, emerged. Compressed sensing...... the downconverted baseband signal and interference, may be replaced by low-order filters. Additional digital signal processing is a price to pay for this feature. Hence, the signal processing is moved from the analog to the digital domain. Filtering compressed sensing, which is a new application of compressed sensing

  19. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of FIC, because it is time-consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio for the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy values of each range block and domain block; the results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the better results, i.e., reduced encoding time with acceptable values of both compression quality parameters: C.R (compression ratio) and PSNR (image quality). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
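
    The entropy value on which the domain-pool pruning above is based can be computed per block as the plain Shannon entropy of the intensity histogram; a sketch (the paper's exact matching rule between range- and domain-block entropies is not reproduced):

```python
import numpy as np

def block_entropy(block, levels=256):
    """Shannon entropy (bits) of a grayscale block's intensity distribution."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```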

  20. DESIGN AND IMPLEMENTATION OF A VHDL PROCESSOR FOR DCT BASED IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    Md. Shabiul Islam

    2017-11-01

    Full Text Available This paper describes the design and implementation of a VHDL processor meant for performing the 2D Discrete Cosine Transform (DCT) for use in image compression applications. The design flow starts from the system specification and proceeds to implementation on silicon, and the entire process is carried out using an advanced workstation-based design environment for digital signal processing. The software allows bit-true analysis to ensure that the designed VLSI processor satisfies the required specifications. The bit-true analysis is performed at all levels of abstraction (behavior, VHDL, etc.). The motivations behind the work are smaller chip area, faster processing, and reduced chip cost.

  1. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Full Text Available Many problems in signal processing and statistical inference involve finding a sparse solution to some underdetermined linear system of equations. This is also the setting of compressive sensing (CS), which can find the sparse solution from measurements far fewer than the original signal samples. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing the simple conjugate gradient algorithm, the new formulation provides an effective framework to deduce the solution of the original sparse signal reconstruction problem with the l0-norm regularization term. Secondly, an upper reconstruction error limit is presented for the proposed sparse signal reconstruction framework, and it is shown that a smaller reconstruction error than with l1-norm relaxation approaches can be realized by using the proposed scheme in most cases. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
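
    The paper solves the joint-penalty problem with a conjugate gradient method; for orientation, the same objective min ||Ax − y||² + λ1·||x||₁ + λ2·||x||₂² can also be approached with a simple proximal-gradient (ISTA-style) iteration, sketched below under assumed step-size and penalty values (this is not the paper's solver):

```python
import numpy as np

def ista_l1_l2(A, y, lam1=0.1, lam2=0.1, iters=500):
    """Proximal-gradient iteration for l1- and l2-jointly regularized
    least squares (illustrative alternative to the paper's CG approach)."""
    # Lipschitz constant of the smooth part ||Ax - y||^2 + lam2*||x||^2
    L = np.linalg.norm(A, 2) ** 2 + 2.0 * lam2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + 2.0 * lam2 * x
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft threshold
    return x
```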

  2. Emergency department documentation templates: variability in template selection and association with physical examination and test ordering in dizziness presentations

    Directory of Open Access Journals (Sweden)

    Meurer William J

    2011-03-01

    Full Text Available Abstract Background Clinical documentation systems, such as templates, have been associated with process utilization. The T-System emergency department (ED) templates are widely used, but analyses of the templates' association with care processes are lacking. This system is also unique because of the many different template options available, and thus the selection of the template may also be important. We aimed to describe the selection of templates in ED dizziness presentations and to investigate the association between items on templates and process utilization. Methods Dizziness visits were captured from a population-based study of EDs that use documentation templates. Two relevant process outcomes were assessed: head computerized tomography (CT) scan and nystagmus examination. Multivariable logistic regression was used to estimate the probability of each outcome for patients who did or did not receive a relevant-item template. Propensity scores were also used to adjust for selection effects. Results The final cohort was 1,485 visits. Thirty-one different templates were used. Use of a template with a head CT item was associated with an increase in the adjusted probability of head CT utilization from 12.2% (95% CI, 8.9%-16.6%) to 29.3% (95% CI, 26.0%-32.9%). The adjusted probability of documentation of a nystagmus assessment increased from 12.0% (95% CI, 8.8%-16.2%) when a nystagmus-item template was not used to 95.0% (95% CI, 92.8%-96.6%) when a nystagmus-item template was used. The associations remained significant after propensity score adjustments. Conclusions Providers use many different templates in dizziness presentations. Important differences exist in the various templates, and the template that is used likely impacts process utilization, even though selection may be arbitrary. The optimal design and selection of templates may offer a feasible and effective opportunity to improve care delivery.

  3. BPFlexTemplate: A Business Process template generation tool based on similarity and flexibility

    Directory of Open Access Journals (Sweden)

    Latifa Ilahi

    2017-01-01

    Full Text Available In large organizations with multiple organizational units, process variants emerge due to many aspects, including local management policies, resources or socio-technical limitations. Organizations then struggle to improve a business process which has no longer a single process model to redesign, implement and adjust. In this paper, we propose an approach to tackle these two challenges: decrease the proliferation of process variants in these organizations, and foresee, at the same time, the need of having flexible business processes that allow for a certain degree of adjustment. To validate our approach, we first conducted case studies where we collected six real-world business process variants from two organizational units of the same healthcare organization. We then proposed an algorithm to derive a template process model from all the variants, which includes common and flexible process elements. We implemented our approach in a software tool called BPFlexTemplate, and tested it with the elicited variants.

  4. Direction-of-Arrival Estimation for Coprime Array Using Compressive Sensing Based Array Interpolation

    Directory of Open Access Journals (Sweden)

    Aihua Liu

    2017-01-01

    Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are obviously not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.

  5. Compression-based inference on graph data

    NARCIS (Netherlands)

    Bloem, P.; van den Bosch, A.; Heskes, T.; van Leeuwen, D.

    2013-01-01

    We investigate the use of compression-based learning on graph data. General purpose compressors operate on bitstrings or other sequential representations. A single graph can be represented sequentially in many ways, which may influence the performance of sequential compressors. Using Normalized

  6. Template measurement for plutonium pit based on neural networks

    International Nuclear Information System (INIS)

    Zhang Changfan; Gong Jian; Liu Suping; Hu Guangchun; Xiang Yongchun

    2012-01-01

    Template measurement for a plutonium pit extracts characteristic data from the γ-ray spectrum and the neutron counts emitted by the plutonium. The characteristic data of a suspicious object are compared with the data of the declared plutonium pit to verify whether they are of the same type. In this paper, neural networks are employed as the comparison algorithm for template measurement of plutonium pits. Two kinds of neural networks are created, i.e. BP and LVQ neural networks. They are applied in different aspects of template measurement and identification. The BP neural network is used for classification of different types of plutonium pits, which is often needed for the management of nuclear materials. The LVQ neural network is used for comparison of an inspected object to the declared one, which is usually applied in the field of nuclear disarmament and verification. (authors)

  7. COMPASS: an Interoperable Personal Health System to Monitor and Compress Signals in Chronic Obstructive Pulmonary Disease

    Directory of Open Access Journals (Sweden)

    Thomas Hofer

    2015-11-01

    Full Text Available In recent years, progress in the mobile market has enabled advances in telemedicine systems and in the definition of systems for monitoring chronic illnesses. The penetration of mobile devices in developed countries keeps growing, many of these devices are equipped with wireless standards such as Bluetooth, and the number of smartphones sold is constantly increasing. Our approach is oriented towards this market, using existing devices to enable in-home patient monitoring and, beyond that, ubiquitous monitoring. The idea is to increase the quality of care, reduce costs and gather medical-grade data, especially vital signs, with a resolution of minutes or less, which is nowadays only possible in an ICU (intensive care unit). In this paper we present the COMPASS personal health system (PHS) platform and show how it enables Android devices to collect, analyze and send sensor data to an observation store by means of interoperability standards. Furthermore, we present how these data can be compressed using advanced compressed sensing techniques and how to optimize these techniques with genetic algorithms to improve the RMSE of the reconstructed signal after compression. We also present a preliminary evaluation of the algorithm against state-of-the-art compressed sensing algorithms.

  8. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    Science.gov (United States)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gaussian matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
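
    The sparse-recovery step named in the abstract is orthogonal matching pursuit (OMP). Below is a minimal, self-contained OMP sketch under the usual y = Ax model with unit-norm dictionary columns; it is a generic illustration, not the authors' solver.

        # Minimal orthogonal matching pursuit: greedily pick the dictionary
        # column most correlated with the residual, then refit by least squares.
        import numpy as np

        def omp(A, y, k):
            support, x = [], np.zeros(A.shape[1])
            residual = y.copy()
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x[support] = coef
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        A /= np.linalg.norm(A, axis=0)            # unit-norm columns
        x_true = np.zeros(100)
        x_true[[7, 33, 80]] = [1.5, -2.0, 0.7]    # 3-sparse ground truth
        x_hat = omp(A, A @ x_true, k=3)
        print(np.allclose(x_hat, x_true, atol=1e-6))   # expected: True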

  9. Dynamic control of a homogeneous charge compression ignition engine

    Science.gov (United States)

    Duffy, Kevin P [Metamora, IL; Mehresh, Parag [Peoria, IL; Schuh, David [Peoria, IL; Kieser, Andrew J [Morton, IL; Hergart, Carl-Anders [Peoria, IL; Hardy, William L [Peoria, IL; Rodman, Anthony [Chillicothe, IL; Liechty, Michael P [Chillicothe, IL

    2008-06-03

    A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter, in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.

  10. Template analysis for the MAGIC telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Menzel, Uta [Max-Planck-Institut fuer Physik, Muenchen (Germany); Collaboration: MAGIC-Collaboration

    2016-07-01

    The MAGIC telescopes are two 17-m-diameter Imaging Air Cherenkov Telescopes located on the Canary island of La Palma. They record the Cherenkov light from air showers induced by very-high-energy photons. The current data analysis uses a parametrization of the two shower images (including Hillas parameters) to determine the characteristics of the primary particle. I am implementing an advanced analysis method that compares shower images on a pixel basis with template images based on Monte Carlo simulations. To reduce the simulation effort, the templates contain only pure shower images, which are convolved with the telescope response later in the analysis. The primary particle parameters are reconstructed by maximizing the likelihood of the template. By using all the information available in the shower images, the performance of MAGIC is expected to improve. In this presentation I will explain the general idea of a template-based analysis and show the first results of the implementation.
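
    As a toy illustration of the pixel-wise template comparison described above, the sketch below scores an observed camera image against a set of simulated templates under independent Gaussian pixel noise and keeps the parameters of the best-scoring template. Array sizes, the noise model and the parameter names are assumptions made for illustration.

        # Pick the template (and its primary-particle parameters) that
        # maximizes a Gaussian pixel-wise log-likelihood.
        import numpy as np

        def log_likelihood(image, template, sigma=1.0):
            return -0.5 * np.sum(((image - template) / sigma) ** 2)

        rng = np.random.default_rng(1)
        templates = [rng.random((10, 10)) for _ in range(5)]  # stand-ins for MC shower templates
        params = [{"energy_TeV": 0.1 * (i + 1)} for i in range(5)]
        image = templates[3] + 0.05 * rng.standard_normal((10, 10))

        scores = [log_likelihood(image, t) for t in templates]
        best = int(np.argmax(scores))
        print(best, params[best])   # expected: template 3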

  11. Photonic compressive sensing with a micro-ring-resonator-based microwave photonic filter

    DEFF Research Database (Denmark)

    Chen, Ying; Ding, Yunhong; Zhu, Zhijing

    2015-01-01

    A novel approach to realizing photonic compressive sensing (CS) with a multi-tap microwave photonic filter is proposed and demonstrated. The system takes advantage of both CS and photonics to capture wideband sparse signals at a sub-Nyquist sampling rate. The low-pass filtering function required...

  12. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    Science.gov (United States)

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm so that it provides even better performance. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm and, in addition, reduces the number of bits in the bit stream to be stored or transmitted. The author applied it to the compression of multichannel ECG data and presents a specific procedure, based on the modified algorithm, for more efficient compression of such data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results in compressing multichannel ECG data. Furthermore, to compress a single signal that is stored for a long time, the proposed multichannel compression method can also be utilized efficiently.

  13. Reconstruction of ECG signals in presence of corruption.

    Science.gov (United States)

    Ganeshapillai, Gartheeban; Liu, Jessica F; Guttag, John

    2011-01-01

    We present an approach to identifying and reconstructing corrupted regions in a multi-parameter physiological signal. The method, which uses information in correlated signals, is specifically designed to preserve clinically significant aspects of the signals. We use template matching to jointly segment the multi-parameter signal, morphological dissimilarity to estimate the quality of the signal segment, similarity search using features on a database of templates to find the closest match, and time-warping to reconstruct the corrupted segment with the matching template. In experiments carried out on the MIT-BIH Arrhythmia Database, a two-parameter database with many clinically significant arrhythmias, our method improved the classification accuracy of the beat type by more than 7 times on a signal corrupted with white Gaussian noise, and increased the similarity to the original signal, as measured by the normalized residual distance, by more than 2.5 times.
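
    The time-warping step can be illustrated with plain dynamic time warping (DTW): align the matched template to the corrupted segment and read the warped template off the alignment path as the reconstruction. This is a generic sketch of the idea, not the authors' implementation.

        # Align a template to a segment with DTW, then use the warped
        # template as the reconstructed segment.
        import numpy as np

        def dtw_warp(template, segment):
            n, m = len(template), len(segment)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(template[i - 1] - segment[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            # Backtrack: for every segment index keep the matched template sample.
            i, j = n, m
            warped = np.zeros(m)
            while i > 0 and j > 0:
                warped[j - 1] = template[i - 1]
                step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return warped

        t = np.linspace(0, 1, 90)
        template = np.sin(2 * np.pi * t) ** 3                        # clean beat template
        segment = np.sin(2 * np.pi * np.linspace(0, 1, 110)) ** 3    # time-warped beat
        print(np.abs(dtw_warp(template, segment) - segment).mean())  # small error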

  14. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed with the JPEG compression codec and sound samples compressed with MPEG-1 Layer III. Images and sounds have various contents. The test simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e. uncompressed, signals.

  15. Use of a structured template to facilitate practice-based learning and improvement projects.

    Science.gov (United States)

    McClain, Elizabeth K; Babbott, Stewart F; Tsue, Terance T; Girod, Douglas A; Clements, Debora; Gilmer, Lisa; Persons, Diane; Unruh, Greg

    2012-06-01

    The Accreditation Council for Graduate Medical Education (ACGME) requires residency programs to meet and demonstrate outcomes across 6 competencies. Measuring residents' competency in practice-based learning and improvement (PBLI) is particularly challenging. We developed an educational tool to meet ACGME requirements for PBLI. The PBLI template helped programs document quality improvement (QI) projects and supported increased scholarly activity surrounding PBLI learning. We reviewed program requirements for 43 residency and fellowship programs and identified specific PBLI requirements for QI activities. We also examined ACGME Program Information Form responses on PBLI core competency questions surrounding QI projects for program sites visited in 2008-2009. Data were integrated by a multidisciplinary committee to develop a peer-protected PBLI template guiding programs through the process, documentation, and evaluation of QI projects. All steps were reviewed and approved through our GME Committee structure. An electronic template, companion checklist, and evaluation form were developed using identified project characteristics to guide programs through the PBLI process and to facilitate documentation and evaluation of the process. During a 24-month period, 27 programs completed PBLI projects, and 15 reviewed the template with their education committees but did not initiate projects using the template. The development of the tool generated program leaders' support because it enhanced their ability to meet program-specific objectives. The document's peer-protected status, which preserves confidentiality and protection from discovery, has been beneficial for program usage. The document aggregates data on PBLI and QI initiatives, offers opportunities to increase scholarship in QI, and meets the ACGME goal of linking measures to outcomes important to accreditation requirements at the program and institutional levels.

  16. System and method for detection of dispersed broadband signals

    Science.gov (United States)

    Qian, S.; Dunham, M.E.

    1999-06-08

    A system and method for detecting the presence of dispersed broadband signals in real time are disclosed. The present invention utilizes a bank of matched filters for detecting the received dispersed broadband signals. Each matched filter uses a respective robust time template that has been designed to approximate the dispersed broadband signals of interest, and each time template varies across a spectrum of possible dispersed broadband signal time templates. The received dispersed broadband signal x(t) is fed to each of the matched filters, and if one or more matches occurs, the received data are determined to contain signal data of interest. This signal data can then be analyzed and/or transmitted to Earth for analysis, as desired. The system and method of the present invention will prove extremely useful in many fields, including satellite communications, plasma physics, and interstellar research. The varying time templates used in the bank of matched filters are determined as follows. The robust time domain template is assumed to take the form w(t) = A(t)cos{2πφ(t)}. Since the instantaneous frequency f(t) is known to be equal to the derivative of the phase φ(t), the trajectory of a joint time-frequency representation of x(t) is used as an approximation of φ′(t). 10 figs.
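
    A matched-filter bank of this kind is easy to sketch: correlate the received signal against a spectrum of time templates of the stated form w(t) = A(t)cos{2πφ(t)} (here with constant amplitude and quadratic phase, i.e. chirps) and flag any template whose normalized correlation peak exceeds a threshold. The sampling rate, chirp parameters and threshold below are illustrative assumptions.

        # Bank of matched filters with chirp time templates; a template
        # "matches" when its normalized correlation peak is large.
        import numpy as np

        fs = 1000.0
        t = np.arange(0, 1.0, 1 / fs)

        def chirp_template(f0, f1):
            phi = f0 * t + 0.5 * (f1 - f0) * t**2   # phase in cycles: phi'(t) = f(t)
            return np.cos(2 * np.pi * phi)

        bank = [chirp_template(f0, f0 + 50) for f0 in (50, 100, 150, 200)]
        rng = np.random.default_rng(2)
        x = bank[2] + 0.5 * rng.standard_normal(t.size)   # noisy received signal

        for k, w in enumerate(bank):
            corr = np.correlate(x, w, mode="same")
            score = np.max(np.abs(corr)) / (np.linalg.norm(w) * np.linalg.norm(x))
            print(k, round(float(score), 3), score > 0.5)   # only k = 2 should fire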

  17. Optimizing preventive maintenance with maintenance templates

    International Nuclear Information System (INIS)

    Dozier, I.J.

    1996-01-01

    Rising operating costs have caused maintenance professionals to rethink their strategy for preventive maintenance (PM) programs. Maintenance templates are pre-engineered PM task recommendations for a component type based on the application of the component. Development of a maintenance template considers the dominant failure cause of the component and the type of preventive maintenance that can predict or prevent the failure from occurring. Maintenance template development also attempts to replace fixed-frequency tasks with condition monitoring tasks such as vibration analysis or thermography. For those components that have fixed-frequency PM intervals, consideration is given to maintenance drivers such as criticality, environment and usage. This helps to extend the PM intervals and maximize component availability. Maintenance templates have been used at PECO Energy's Limerick Generating Station during the Reliability Centered Maintenance (RCM) process to optimize their PM program. This paper describes the development and uses of the maintenance templates.

  18. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system, without compression, has been in clinical operation in the Pediatric Radiology department for more than half a year. The database in the system covers all pediatric inpatients and includes all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16-megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to begin in February 1988.

  19. Non-US data compression and coding research. FASAC Technical Assessment Report

    Energy Technology Data Exchange (ETDEWEB)

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  20. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    Science.gov (United States)

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. It is highly desirable to develop methods based only on amino acid sequences for predicting PPIs. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with the redundancy of sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space, taking the sparsity property of the original signal into account. What makes compressed sensing even more attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than are usually considered necessary in traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional protein discrete models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
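
    The compression step can be sketched with a random Gaussian measurement matrix: a sparse high-dimensional feature vector is projected into a much lower-dimensional space while approximately preserving distances, which is what lets a downstream classifier work on the compressed features. The dimensions and sparsity level below are illustrative assumptions.

        # Compress sparse sequence-derived feature vectors with a random
        # Gaussian projection; pairwise geometry is roughly preserved.
        import numpy as np

        rng = np.random.default_rng(3)
        d, m, s = 2000, 200, 30                      # ambient dim, compressed dim, sparsity
        Phi = rng.standard_normal((m, d)) / np.sqrt(m)

        idx = rng.choice(d, size=s, replace=False)
        x1 = np.zeros(d); x1[idx] = rng.standard_normal(s)   # feature vector, protein A
        x2 = x1.copy();   x2[idx] += 0.1                     # a nearby feature vector

        y1, y2 = Phi @ x1, Phi @ x2                  # compressed vectors fed to the classifier
        print(float(np.linalg.norm(x1 - x2)), float(np.linalg.norm(y1 - y2)))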

  1. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10.

    Science.gov (United States)

    Zhang, Yang

    2014-02-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. Copyright © 2013 Wiley Periodicals, Inc.

  2. Dual soft-template system based on colloidal chemistry for the synthesis of hollow mesoporous silica nanoparticles.

    Science.gov (United States)

    Li, Yunqi; Bastakoti, Bishnu Prasad; Imura, Masataka; Tang, Jing; Aldalbahi, Ali; Torad, Nagy L; Yamauchi, Yusuke

    2015-04-20

    A new dual soft-template system comprising the asymmetric triblock copolymer poly(styrene-b-2-vinyl pyridine-b-ethylene oxide) (PS-b-P2VP-b-PEO) and the cationic surfactant cetyltrimethylammonium bromide (CTAB) is used to synthesize hollow mesoporous silica (HMS) nanoparticles with a center void of around 17 nm. The stable PS-b-P2VP-b-PEO polymeric micelle serves as a template to form the hollow interior, while the CTAB surfactant serves as a template to form mesopores in the shells. The P2VP blocks on the polymeric micelles can interact with positively charged CTA(+) ions via negatively charged hydrolyzed silica species. Thus, dual soft-templates clearly have different roles for the preparation of the HMS nanoparticles. Interestingly, the thicknesses of the mesoporous shell are tunable by varying the amounts of TEOS and CTAB. This study provides new insight on the preparation of mesoporous materials based on colloidal chemistry. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Template-Based Estimation of Time-Varying Tempo

    Directory of Open Access Journals (Sweden)

    Peeters Geoffroy

    2007-01-01

    Full Text Available We present a novel approach to the automatic estimation of tempo over time. The method aims at detecting tempo at the tactus level for percussive and nonpercussive audio. The front-end of our system is based on a proposed reassigned spectral energy flux for the detection of musical events. The dominant periodicities of this flux are estimated by a proposed combination of the discrete Fourier transform and a frequency-mapped autocorrelation function. The most likely meter, beat, and tatum over time are then estimated jointly using proposed meter/beat subdivision templates and a Viterbi decoding algorithm. The performance of our system has been evaluated on four different test sets, three of which were used during the ISMIR 2004 tempo induction contest. The results obtained are close to the best results of that contest.
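
    The periodicity-estimation stage can be illustrated by autocorrelating an onset-strength envelope and mapping the strongest lag in a musically plausible range to beats per minute. The synthetic envelope below stands in for the paper's reassigned spectral energy flux; all parameter values are illustrative.

        # Estimate tempo from the autocorrelation of an onset envelope.
        import numpy as np

        fs = 100.0                                   # envelope sample rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        bpm_true = 120.0
        onset_env = np.maximum(0.0, np.cos(2 * np.pi * (bpm_true / 60.0) * t)) ** 8

        ac = np.correlate(onset_env, onset_env, mode="full")[onset_env.size - 1:]
        lags = np.arange(ac.size) / fs
        valid = (lags > 60.0 / 200.0) & (lags < 60.0 / 40.0)   # 40-200 BPM range
        best_lag = lags[valid][np.argmax(ac[valid])]
        print("estimated tempo: %.1f BPM" % (60.0 / best_lag))  # ~120 BPM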

  4. Improving your target-template alignment with MODalign

    OpenAIRE

    Barbato, Alessandro; Benkert, Pascal; Schwede, Torsten; Tramontano, Anna; Kosinski, Jan

    2012-01-01

    Summary: MODalign is an interactive web-based tool aimed at helping protein structure modelers to inspect and manually modify the alignment between the sequences of a target protein and of its template(s). It interactively computes, displays and, upon modification of the target-template alignment, updates the multiple sequence alignments of the two protein families, their conservation scores, secondary structure and solvent accessibility values, and local quality scores of the implied three-dimensional model.

  5. Constructing inverse V-type TiO{sub 2}-based photocatalyst via bio-template approach to enhance the photosynthetic water oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Jinghui; Zhou, Han; Ding, Jian; Zhang, Fan; Fan, Tongxiang, E-mail: txfan@sjtu.edu.cn; Zhang, Di

    2015-08-30

    Graphical abstract: An inverse V-type TiO{sub 2}-based photocatalyst was synthesized by using a cross-linked titanium precursor to duplicate a bio-template. - Highlights: • A cross-linked titanium precursor can facilitate an accurate duplication of templates. • In situ deposition of Ag{sup 0} from AgBr can maintain the completeness of the surface structure. • A perfect inverse V-type Ag{sup 0}/TiO{sub 2} can achieve efficient water oxidation. - Abstract: A bio-template approach was employed to construct an inverse V-type TiO{sub 2}-based photocatalyst with AgBr well distributed in the TiO{sub 2} matrix, using dead Troides Helena wings with inverse V-type scales as the template. A cross-linked titanium precursor with a homogeneous hydrolytic rate, good fluidity, and low viscosity was employed to facilitate a perfect duplication of the template and the dispersion of AgBr, based on appropriate pretreatment of the template with alkali and acid. The as-synthesized inverse V-type TiO{sub 2}/AgBr can be turned into inverse V-type TiO{sub 2}/Ag{sup 0} through AgBr photolysis during photocatalysis, achieving in situ deposition of Ag{sup 0} in the TiO{sub 2} matrix and thereby avoiding deformation of the surface microstructure inherited from the template. The results showed that the cooperation of the perfect inverse V-type structure and the well-distributed TiO{sub 2}/Ag{sup 0} microstructures can efficiently boost photosynthetic water oxidation compared to non-inverse V-type TiO{sub 2}/Ag{sup 0} and TiO{sub 2}/Ag{sup 0} prepared without a template. The anti-reflection function of the inverse V-type structure and the plasmonic effect of Ag{sup 0} might account for the enhanced photon capture and efficient photoelectric conversion.

  6. Accelerated whole-brain multi-parameter mapping using blind compressed sensing.

    Science.gov (United States)

    Bhave, Sampada; Lingala, Sajan Goud; Johnson, Casey P; Magnotta, Vincent A; Jacob, Mathews

    2016-03-01

    To introduce a blind compressed sensing (BCS) framework to accelerate multi-parameter MR mapping, and to demonstrate its feasibility in high-resolution, whole-brain T1ρ and T2 mapping. BCS models the evolution of the magnetization at every pixel as a sparse linear combination of bases in a dictionary. Unlike compressed sensing, the dictionary and the sparse coefficients are jointly estimated from undersampled data. The large number of non-orthogonal bases in BCS accounts for more complex signals than low-rank representations do. The low degrees of freedom of BCS, attributable to the sparse coefficients, translate to fewer artifacts at high acceleration factors (R). In 2D retrospective undersampling experiments, the mean square errors in the T1ρ and T2 maps were observed to be within 0.1% up to R = 10. BCS was observed to be more robust to patient-specific motion than other compressed sensing schemes and resulted in minimal degradation of the parameter maps in the presence of motion. Our results suggest that BCS can provide an acceleration factor of 8 in prospective 3D imaging with reasonable reconstructions. BCS considerably reduces the scan time for multi-parameter mapping of the whole brain with minimal artifacts, and it is more robust to motion-induced signal changes than current compressed sensing and principal component analysis-based techniques. © 2015 Wiley Periodicals, Inc.

  7. Template-based procedures for neural network interpretation.

    Science.gov (United States)

    Alexander, J A.; Mozer, M C.

    1999-04-01

    Although neural networks often achieve impressive learning and generalization performance, their internal workings are typically all but impossible to decipher. This characteristic of the networks, their opacity, is one of the disadvantages of connectionism compared to more traditional, rule-oriented approaches to artificial intelligence. Without a thorough understanding of the network behavior, confidence in a system's results is lowered, and the transfer of learned knowledge to other processing systems - including humans - is precluded. Methods that address the opacity problem by casting network weights in symbolic terms are commonly referred to as rule extraction techniques. This work describes a principled approach to symbolic rule extraction from standard multilayer feedforward networks based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to the actual weights of a unit. Depending on the requirements of the application domain, the approach can accommodate n-ary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k^2) complexity, or more general classes of recursive n-of-m expressions with O(k^(L+2)) complexity, where k is the number of inputs to a unit and L is the recursion level of the expression class. Compared to other approaches in the literature, our method of rule extraction offers benefits in simplicity, computational performance, and overall flexibility. Simulation results on a variety of problems demonstrate the application of our procedures as well as the strengths and weaknesses of our general approach.

  8. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    Science.gov (United States)

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
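
    The effect of a mode-dependent scan order can be seen on a toy residual block: with a scan matched to the prediction mode, the zero coefficients cluster into one long tail run, which suits run-length-style entropy coding. The two scan tables below are made-up illustrations, not the paper's MD-templates.

        # Reorder a 4x4 residual block with a per-mode scan so zeros
        # gather at the end of the scanned sequence.
        import numpy as np

        SCANS = {
            "vertical":   [(r, c) for c in range(4) for r in range(4)],  # column-major
            "horizontal": [(r, c) for r in range(4) for c in range(4)],  # row-major
        }

        def scan_block(block, mode):
            return np.array([block[r, c] for r, c in SCANS[mode]])

        residual = np.array([[5, 0, 0, 0],
                             [3, 0, 0, 0],
                             [2, 0, 0, 0],
                             [1, 0, 0, 0]])   # vertical-mode-like residual

        print(scan_block(residual, "vertical"))    # [5 3 2 1 0 ... 0]: one tail run of zeros
        print(scan_block(residual, "horizontal"))  # zeros interleaved with nonzeros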

  9. Synthesis of CdS nanoparticles based on DNA network templates

    International Nuclear Information System (INIS)

    Yao Yong; Song Yonghai; Wang Li

    2008-01-01

    CdS nanoparticles have been successfully synthesized by using DNA networks as templates. The synthesis was carried out by first dropping a mixture of cadmium acetate and DNA on a mica surface to form the DNA network template and then transferring the sample into a heated thiourea solution. The Cd²⁺ reacted with thiourea at high temperature and formed CdS nanoparticles on the DNA network template. UV-vis spectroscopy, photoluminescence, x-ray diffraction and atomic force microscopy (AFM) were used to characterize the CdS nanoparticles in detail. AFM results showed that the resulting CdS nanoparticles were aligned directly on the DNA network templates and that the synthesis and assembly of the CdS nanoparticles were realized in one step. CdS nanoparticles fabricated with this method were smaller than those synthesized directly in a thiourea solution and were uniformly aligned on the DNA networks. By adjusting the density of the DNA networks and the concentration of Cd²⁺, the size and density of the CdS nanoparticles could be effectively controlled, and the CdS nanoparticles could grow along the DNA chains into nanowires. The possible growth mechanism is also discussed in detail.

  10. Learning templates for artistic portrait lighting analysis.

    Science.gov (United States)

    Chen, Xiaowu; Jin, Xin; Wu, Hongyu; Zhao, Qinping

    2015-02-01

    Lighting is a key factor in creating impressive artistic portraits. In this paper, we propose to analyze portrait lighting by learning templates of lighting styles. Inspired by the experience of artists, we first define several novel features that describe the local contrasts in various face regions. The most informative features are then selected with a stepwise feature pursuit algorithm to derive the templates of various lighting styles. After that, matching scores that measure the similarity between a testing portrait and those templates are calculated for lighting style classification. Furthermore, we train a regression model on the subjective scores and the feature responses of a template to predict the lighting quality score of a portrait. Based on the templates, a novel face illumination descriptor is defined to measure the difference between two portrait lightings. Experimental results show that the learned templates describe the lighting styles well, and the proposed approach can assess the lighting quality of artistic portraits as a human being does.

  11. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop Daniel A.

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  12. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  13. Pressure-induced phase transitions and templating effect in three-dimensional organic-inorganic hybrid perovskites

    Science.gov (United States)

    Lee, Yongjae; Mitzi, David; Barnes, Paris; Vogt, Thomas

    2003-07-01

    Pressure-induced structural changes of the conducting halide perovskites (CH3NH3)SnI3, (CH3NH3)0.5(NH2CH=NH2)0.5SnI3, and (NH2CH=NH2)SnI3 have been investigated using synchrotron x-ray powder diffraction. In contrast to low-temperature structural changes, no evidence of an increased ordering of the organic cations was observed under pressure. Instead, an increase in pressure results first in a ReO3-type doubling of the primitive cubic unit cell, followed by a symmetry distortion and a subsequent amorphization above 4 GPa. This process is reversible and points towards a pressure-induced templating role of the organic cation. Bulk compressions are continuous across the phase boundaries. The compressibilities identify these hybrids as the most compressible perovskite system ever reported. However, the Sn-I bond compressibility in (CH3NH3)SnI3 shows a discontinuity within the supercell phase. This is possibly due to an electronic localization.

  14. Design of a Biorthogonal Wavelet Transform Based R-Peak Detection and Data Compression Scheme for Implantable Cardiac Pacemaker Systems.

    Science.gov (United States)

    Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama

    2018-04-19

    Bradycardia can be modulated using a cardiac pacemaker, an implantable medical device that sets and balances the patient's cardiac health. The device has been widely used to detect and monitor the patient's heart rate, and the data thus collected have the highest authenticity assurance and are convenient for further electric stimulation. In the pacemaker, the ECG detector is one of the most important elements. The device is available in a new digital form, which is more efficient and accurate in performance, with the added advantage of economical power consumption. In this work, a joint algorithm based on the biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and for compressing the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture, which consists of a series combination of three lowpass filters with a highpass filter. The lowpass and highpass filters are realized using a linear phase structure, which reduces the hardware cost of the proposed design by approximately 50%. Then, the location of the R-peak is found by comparing the denoised ECG signal with a threshold value. The proposed R-peak detector achieves the highest sensitivity and positive predictivity, 99.75% and 99.98% respectively, on the MIT-BIH arrhythmia database, and it achieves a comparatively low data error rate (DER) of 0.002. The use of RLE for the compression of the detected ECG data achieves a high compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results have been compared with existing methods such as Huffman coding with a simple predictor, Huffman coding with an adaptive predictor, and a slope predictor with fixed-length packaging.
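
    Run-length encoding itself is a few lines; the sketch below packs a quantized sample stream into (value, run) pairs and inverts them. It illustrates the principle only, not the paper's bitstream format.

        # Minimal run-length encoder/decoder for quantized sample streams.
        def rle_encode(samples):
            runs, prev, count = [], samples[0], 1
            for s in samples[1:]:
                if s == prev:
                    count += 1
                else:
                    runs.append((prev, count))
                    prev, count = s, 1
            runs.append((prev, count))
            return runs

        def rle_decode(runs):
            out = []
            for value, count in runs:
                out.extend([value] * count)
            return out

        data = [0, 0, 0, 0, 1, 1, 0, 0, 0, 2, 2, 2]   # ECG-like quantized samples
        encoded = rle_encode(data)
        assert rle_decode(encoded) == data
        print(encoded)   # [(0, 4), (1, 2), (0, 3), (2, 3)]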

  15. Vibration-based monitoring and diagnostics using compressive sensing

    Science.gov (United States)

    Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.

    2017-04-01

    Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in a high volume of data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with large arrays of sensors. This paper shows that both the volume of data and the number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirements of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.

  16. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces the signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction", or MRI data compression, is demonstrated, with optimal SNR using only four channels; with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms the in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.

  17. Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object.

    Science.gov (United States)

    Kim, Hak Gu; Man Ro, Yong

    2017-11-27

    In this paper, we propose a new ultrafast layer-based CGH calculation that exploits the sparsity of the hologram fringe pattern in each 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe patterns at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method achieves computation times of 10-20 ms for 1024x1024 pixels while providing visually plausible results.

  18. Classifier for gravitational-wave inspiral signals in nonideal single-detector data

    Science.gov (United States)

    Kapadia, S. J.; Dent, T.; Dal Canton, T.

    2017-11-01

    We describe a multivariate classifier for candidate events in a templated search for gravitational-wave (GW) inspiral signals from neutron-star-black-hole (NS-BH) binaries, in data from ground-based detectors where sensitivity is limited by non-Gaussian noise transients. The standard signal-to-noise ratio (SNR) and chi-squared test for inspiral searches use only properties of a single matched filter at the time of an event; instead, we propose a classifier using features derived from a bank of inspiral templates around the time of each event, and also from a search using approximate sine-Gaussian templates. The classifier thus extracts additional information from strain data to discriminate inspiral signals from noise transients. We evaluate a random forest classifier on a set of single-detector events obtained from realistic simulated advanced LIGO data, using simulated NS-BH signals added to the data. The new classifier detects a factor of 1.5-2 more signals at low false positive rates as compared to the standard "reweighted SNR" statistic, and does not require the chi-squared test to be computed. Conversely, if only the SNR and chi-squared values of single-detector events are available, random forest classification performs nearly identically to the reweighted SNR.
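
    The classifier itself is standard supervised learning over per-event features. The sketch below trains a random forest on synthetic features loosely inspired by the ones named above (peak SNR, chi-squared, template-bank statistics); the feature distributions are invented for illustration, while the scikit-learn calls are the library's standard API.

        # Random forest over per-event features to separate signals from glitches.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(5)
        n = 2000
        # Invented feature columns: peak SNR, reduced chi^2, bank SNR spread.
        glitches = np.column_stack([rng.gamma(2, 2, n), rng.gamma(4, 1, n), rng.random(n)])
        signals  = np.column_stack([rng.gamma(4, 2, n), rng.gamma(2, 1, n), rng.random(n)])
        X = np.vstack([glitches, signals])
        y = np.r_[np.zeros(n), np.ones(n)]

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        print(clf.predict_proba(X[:3])[:, 1])   # signal probability for 3 events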

  19. SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T; Kim, D; Kang, S; Cho, M; Kim, K; Shin, D; Noh, Y; Suh, T [Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kim, S [Virginia Commonwealth University, Richmond, VA (United States)

    2016-06-15

    Purpose: Hard-plate based abdominal compression is known to be effective, but no explicit method exists to quantify the abdominal compression force (ACF) and maintain the proper ACF through the whole procedure. In addition, even with compression, 4D CT is necessary to manage residual motion, but 4D CT is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual biofeedback. Methods: The system developed consists of a compression plate, an ACF monitoring unit and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate and is thus placed between the plate and the patient when compression is applied, where it detects the compression pressure. For the reliability test, 3 volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and the external respiratory signal to assure that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual biofeedback. Results: We could monitor the ACF variation in real time and confirmed that the data correlated with both the respiratory flow data and the external respiratory signal. Even under abdominal compression, in addition, it was possible to make the subjects successfully follow the guide patterns using the visual biofeedback system. Conclusion: The developed real-time ACF monitoring system was found to be functional as intended and consistent. With the capability of both providing a real-time surrogating signal under compression and enabling visual biofeedback, it is considered that the system would improve the quality of respiratory motion management in radiation therapy.

  20. SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System

    International Nuclear Information System (INIS)

    Kim, T; Kim, D; Kang, S; Cho, M; Kim, K; Shin, D; Noh, Y; Suh, T; Kim, S

    2016-01-01

    Purpose: Hard-plate based abdominal compression is known to be effective, but no explicit method exists to quantify the abdominal compression force (ACF) and maintain the proper ACF through the whole procedure. In addition, even with compression, 4D CT is necessary to manage residual motion, but 4D CT is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual biofeedback. Methods: The system developed consists of a compression plate, an ACF monitoring unit and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate and is thus placed between the plate and the patient when compression is applied, where it detects the compression pressure. For the reliability test, 3 volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and the external respiratory signal to assure that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual biofeedback. Results: We could monitor the ACF variation in real time and confirmed that the data correlated with both the respiratory flow data and the external respiratory signal. Even under abdominal compression, in addition, it was possible to make the subjects successfully follow the guide patterns using the visual biofeedback system. Conclusion: The developed real-time ACF monitoring system was found to be functional as intended and consistent. With the capability of both providing a real-time surrogating signal under compression and enabling visual biofeedback, it is considered that the system would improve the quality of respiratory motion management in radiation therapy.

  1. A Delay Line for Compression of Electromagnetic Pulses

    International Nuclear Information System (INIS)

    Pchelnikov, Yuriy N.; Nyce, David S.

    2003-01-01

    A novel method to obtain an electromagnetic signal delay is described. It is shown that positive magnetic and electric coupling between impedance conductors produces an increase in the time delay, and that this increase is obtained without additional attenuation. This allows a several-fold reduction in electromagnetic losses for a given delay time. An approximate analysis of electromagnetic delay lines based on coupled impedance conductors with 'spiral' and 'meander' patterns yielded very simple expressions for the wave deceleration factor, wave impedance, and attenuation factor. The results of the analysis are confirmed by measurements. It is shown that a delay line based on counter-wound radial spirals can be successfully used for compression of electromagnetic pulses. Although the proposed delay line was designed to operate with relatively small signals, the analysis of the 'coupling effect' taking place in this delay line might be useful in devices for compression of high-power microwave pulses.

  2. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for the lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method that combines geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
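
    The least-squares prediction stage is easy to sketch: within a region, fit a linear predictor of each pixel from its causal neighbours (west, north, north-west) and keep only the residual for entropy coding. The neighbour set and region choice below are illustrative, not the paper's partitioning.

        # Per-region least-squares pixel predictor from causal neighbours.
        import numpy as np

        def ls_predict(region):
            H, W = region.shape
            rows, targets = [], []
            for r in range(1, H):
                for c in range(1, W):
                    rows.append([region[r, c - 1], region[r - 1, c], region[r - 1, c - 1]])
                    targets.append(region[r, c])
            A = np.asarray(rows, dtype=float)
            y = np.asarray(targets, dtype=float)
            w, *_ = np.linalg.lstsq(A, y, rcond=None)   # predictor weights
            return w, y - A @ w                         # weights and residuals

        rng = np.random.default_rng(4)
        smooth = np.cumsum(np.cumsum(rng.random((32, 32)), axis=0), axis=1)
        w, res = ls_predict(smooth)
        # Residuals are far smaller than the raw pixel variation, so they
        # entropy-code more cheaply.
        print(w, float(np.abs(res).mean()), float(np.abs(np.diff(smooth)).mean()))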

  3. Biometric templates selection and update using quality measures

    Science.gov (United States)

    Abboud, Ali J.; Jassim, Sabah A.

    2012-06-01

    To deal with severe variation in recording conditions, most biometric systems acquire multiple biometric samples at the enrolment stage for the same person, extract their individual biometric feature vectors, and store them in the gallery in the form of biometric template(s) labelled with the person's identity. The number of samples/templates and the choice of the most appropriate templates influence the performance of the system. The desired biometric template selection technique must aim to control the run time and storage requirements while improving the recognition accuracy of the biometric system. This paper is devoted to elaborating on and discussing a new two-stage approach for biometric template selection and update. The approach uses quality-based clustering, followed by a special criterion for the selection of an ultimate set of biometric templates from the various clusters. It is developed to adaptively select a specific number of templates for each individual; the number of biometric templates depends mainly on the performance of each individual (i.e., the gallery size should be optimized to meet the needs of each target individual). Experiments conducted on two face image databases demonstrate the effectiveness of the proposed quality-guided approach.
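
    A minimal sketch of the two-stage idea: cluster the enrolment samples on a scalar quality score with 1-D k-means, then keep the best-quality sample from each cluster as a template. The quality values and cluster count are illustrative assumptions.

        # Quality-based clustering followed by best-per-cluster selection.
        import numpy as np

        def kmeans_1d(x, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = rng.choice(x, size=k, replace=False)
            for _ in range(iters):
                labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
                centers = np.array([x[labels == j].mean() if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return labels

        quality = np.array([0.91, 0.88, 0.52, 0.49, 0.73, 0.70, 0.95, 0.47])
        labels = kmeans_1d(quality, k=3)
        selected = [int(np.flatnonzero(labels == j)[np.argmax(quality[labels == j])])
                    for j in np.unique(labels)]
        print("selected template indices:", sorted(selected))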

  4. Computational resources to filter gravitational wave data with P-approximant templates

    International Nuclear Information System (INIS)

    Porter, Edward K

    2002-01-01

    The prior knowledge of the gravitational waveform from compact binary systems makes matched filtering an attractive detection strategy. This detection method involves the filtering of the detector output with a set of theoretical waveforms or templates. One of the most important factors in this strategy is knowing how many templates are needed in order to reduce the loss of possible signals. In this study, we calculate the number of templates and computational power needed for a one-step search for gravitational waves from inspiralling binary systems. We build on previous works by first expanding the post-Newtonian waveforms to 2.5-PN order and second, for the first time, calculating the number of templates needed when using P-approximant waveforms. The analysis is carried out for the four main first-generation interferometers, LIGO, GEO600, VIRGO and TAMA. As well as template number, we also calculate the computational cost of generating banks of templates for filtering GW data. We carry out the calculations for two initial conditions. In the first case we assume a minimum individual mass of 1 M⊙ and in the second, we assume a minimum individual mass of 5 M⊙. We find that, in general, we need more P-approximant templates to carry out a search than if we use standard PN templates. This increase varies according to the order of PN-approximation, but can be as high as a factor of 3 and is explained by the smaller span of the P-approximant templates as we go to higher masses. The promising outcome is that for 2-PN templates, the increase is small and is outweighed by the known robustness of the 2-PN P-approximant templates.

  5. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Umapathy Karthikeyan

    2007-01-01

    Full Text Available Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and the demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of the audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF-specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal-adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed, and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique, with subjective difference grades (SDG) ranging from -0.53 to -2.27.

  6. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography combined with an optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the small number of encrypted measurements serves as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with existing techniques. The retrieved images reveal no information about the original images without the correct keys. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  7. Efficient two-dimensional compressive sensing in MIMO radar

    Science.gov (United States)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using the gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods, while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
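
    The coherence-minimizing measurement matrix design lends itself to a compact sketch. The fragment below performs gradient descent on the Frobenius-norm distance between the Gram matrix of the sensing matrix and the identity, a smooth surrogate for mutual coherence; it is a 1D simplification with assumed dimensions and step size, not the paper's 2D-MMDGD formulation.

    ```python
    import numpy as np

    def coherence(A):
        # Mutual coherence: largest normalized inner product between columns.
        An = A / np.linalg.norm(A, axis=0)
        G = np.abs(An.T @ An)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def design_phi(Psi, m, iters=300, lr=1e-3, seed=0):
        # Gradient descent on f(Phi) = ||(Phi Psi)^T (Phi Psi) - I||_F^2,
        # whose gradient is 4 * (Phi Psi) (G - I) Psi^T.
        rng = np.random.default_rng(seed)
        Phi = rng.standard_normal((m, Psi.shape[0])) / np.sqrt(m)
        I = np.eye(Psi.shape[1])
        for _ in range(iters):
            A = Phi @ Psi
            Phi -= lr * 4.0 * A @ (A.T @ A - I) @ Psi.T
        return Phi

    Psi = np.linalg.qr(np.random.default_rng(1).standard_normal((64, 64)))[0]
    Phi0 = np.random.default_rng(2).standard_normal((20, 64)) / np.sqrt(20.0)
    Phi = design_phi(Psi, m=20)
    # Coherence typically drops relative to a random measurement matrix.
    print("random:", coherence(Phi0 @ Psi), "designed:", coherence(Phi @ Psi))
    ```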

  8. Quinary excitation method for pulse compression ultrasound measurements.

    Science.gov (United States)

    Cowell, D M J; Freear, S

    2008-04-01

    A novel switched excitation method for linear frequency modulated excitation of ultrasonic transducers in pulse compression systems is presented that is simple to realise, yet provides reduced signal sidelobes at the output of the matched filter compared to bipolar pseudo-chirp excitation. Pulse compression signal sidelobes are reduced through the use of simple amplitude tapering at the beginning and end of the excitation duration. Amplitude tapering using switched excitation is realised through the use of intermediate voltage switching levels, half that of the main excitation voltages. In total, five excitation voltages are used, creating a quinary excitation system. The absence of analogue signal generation and power amplifiers renders the excitation method attractive for applications with requirements such as a high channel count or low cost per channel. A systematic study of switched linear frequency modulated excitation methods, with simulated and laboratory-based experimental verification, is presented for 2.25 MHz non-destructive testing immersion transducers. The signal-to-sidelobe noise level of compressed waveforms generated using quinary and bipolar pseudo-chirp excitation is investigated for transmission through a 0.5 m water and kaolin slurry channel. Quinary linear frequency modulated excitation consistently reduces signal sidelobe power compared to bipolar excitation methods. Experimental results for transmission between two 2.25 MHz transducers separated by a 0.5 m channel of water and 5% kaolin suspension show improvements in signal-to-sidelobe noise power on the order of 7-8 dB. The reported quinary switched method for linear frequency modulated excitation provides improved performance compared to pseudo-chirp excitation without the need for high-performance excitation amplifiers.
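
    The quinary drive is easy to emulate numerically: quantize a tapered linear FM chirp onto the five switchable levels and compare matched-filter outputs against a two-level pseudo-chirp. All waveform parameters and the taper fraction below are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np

    fs, f0, f1, T = 100e6, 1.5e6, 3.0e6, 10e-6     # assumed LFM parameters
    t = np.arange(0.0, T, 1.0 / fs)
    lfm = np.sin(2.0 * np.pi * (f0 * t + (f1 - f0) / (2.0 * T) * t ** 2))

    def quantize(x, levels):
        # Map each sample to the nearest switchable excitation level.
        levels = np.asarray(levels)
        return levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]

    bipolar = np.sign(lfm)                          # two-level pseudo-chirp drive
    taper = np.ones_like(t)                         # half-voltage levels at the
    edge = int(0.1 * t.size)                        # burst edges (fraction assumed)
    taper[:edge] = 0.5
    taper[-edge:] = 0.5
    quinary = quantize(lfm * taper, [-1.0, -0.5, 0.0, 0.5, 1.0])

    mf = lfm[::-1]                                  # matched filter: ideal chirp
    out_b = np.convolve(bipolar, mf)
    out_q = np.convolve(quinary, mf)
    # Comparing peak-to-sidelobe levels of out_b and out_q illustrates the
    # sidelobe reduction obtained from tapering with intermediate levels.
    ```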

  9. Efficacy of Feed Forward and Feedback Signaling for Inflations and Chest Compression Pressure During Cardiopulmonary Resuscitation in a Newborn Mannequin

    Science.gov (United States)

    Andriessen, Peter; Oetomo, Sidarto Bambang; Chen, Wei; Feijs, Loe MG

    2012-01-01

    Background The objective of the study was to evaluate a device that supports professionals during neonatal cardiopulmonary resuscitation (CPR). The device features a box that generates audio-prompted rate guidance (feed forward) for inflations and compressions, and a transparent foil that is placed over the chest with marks for the inter-nipple line and sternum, with LEDs incorporated in the foil indicating the exerted force (feedback). Methods Ten pairs (nurse/doctor) performed CPR on a newborn resuscitation mannequin. All pairs initially performed two sessions. Thereafter two sessions were performed in a similar way, after randomization into 5 pairs that used the device and 5 pairs that performed CPR without the device (controls). A rhythm score was calculated based on the number of CPR cycles that were performed correctly. Results The rhythm score with the device improved from 85 ± 14 to 99 ± 2% (P < 0.05), and the chest compression pressure was more consistent with the CPR device compared to the controls. Conclusion Feed forward and feedback signaling leads to a more constant rhythm and chest compression pressure during CPR. PMID:22870175

  10. Templating mesoporous zeolites

    DEFF Research Database (Denmark)

    Egeblad, Kresten; Christensen, Christina Hviid; Kustova, Marina

    2008-01-01

    The application of templating methods to produce zeolite materials with hierarchical bi- or trimodal pore size distributions is reviewed with emphasis on mesoporous materials. Hierarchical zeolite materials are categorized into three distinctly different types of materials: hierarchical zeolite...... crystals, nanosized zeolite crystals, and supported zeolite crystals. For the pure zeolite materials in the first two categories, the additional meso- or macroporosity can be classified as being either intracrystalline or intercrystalline, whereas for supported zeolite materials, the additional porosity...... originates almost exclusively from the support material. The methods for introducing mesopores into zeolite materials are discussed and categorized. In general, mesopores can be templated in zeolite materials by use of solid templating, supramolecular templating, or indirect templating...

  11. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Discrete Wavelet Transforms (DWTs) are recent mathematics, and begin to be used in various fields. The wavelet transform can be used to compress the signal and image due to its inherent properties. We applied the wavelet transform compression and reconstruction to the neutron cross section data. Numerical tests illustrate that the signal compression using wavelet is very effective to reduce the data saving spaces. 7 refs., 4 figs., 3 tabs. (Author)
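
    The thresholding step behind such wavelet compression can be sketched in a few lines. The example below decomposes a synthetic resonance-like curve (a stand-in for real cross-section data, which are not reproduced here), keeps only the largest 5% of coefficients, and reports the reconstruction error; the PyWavelets package and the db4 wavelet are illustrative choices.

    ```python
    import numpy as np
    import pywt  # assumes the PyWavelets package

    # Synthetic stand-in for a neutron cross-section curve:
    # smooth background plus resonance-like peaks.
    e = np.linspace(0.0, 1.0, 1024)
    sigma = 1.0 / (e + 0.05)
    for r in (0.2, 0.45, 0.8):
        sigma += 0.5 / ((e - r) ** 2 + 1e-4)

    coeffs = pywt.wavedec(sigma, "db4", level=5)
    arr, slices = pywt.coeffs_to_array(coeffs)
    cut = np.quantile(np.abs(arr), 0.95)       # keep the largest 5% of coefficients
    arr = np.where(np.abs(arr) >= cut, arr, 0.0)
    rec = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format="wavedec"), "db4")
    err = np.linalg.norm(rec[:sigma.size] - sigma) / np.linalg.norm(sigma)
    print(f"kept 5% of coefficients, relative L2 error = {err:.2e}")
    ```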

  12. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Discrete Wavelet Transforms (DWTs) are recent mathematics, and begin to be used in various fields. The wavelet transform can be used to compress the signal and image due to its inherent properties. We applied the wavelet transform compression and reconstruction to the neutron cross section data. Numerical tests illustrate that the signal compression using wavelet is very effective to reduce the data saving spaces. 7 refs., 4 figs., 3 tabs. (Author)

  13. Selection of appropriate template for spatial normalization of brain images: tensor based morphometry

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Sung; Lee, Dong Soo; Kim, Yu Kyeong; Chung, June Key; Lee, Myung Chul [College of Medicine, Seoul National University, Seoul (Korea, Republic of)

    2004-07-01

    Although there have been remarkable advances in spatial normalization techniques, the differences in the shape of the hemispheres and the sulcal pattern of brains relative to age, gender, races, and diseases cannot be fully overcome by the nonlinear spatial normalization techniques. T1 SPGR MR images in 16 elderly male normal volunteers (>55 y, mean age 61.8 ± 3.5 y) were spatially normalized onto the age/gender-specific Korean templates and the Caucasian MNI template, and the extent of the deformations was compared. These particular subjects were never included in the development of the templates. First, the images were matched to the templates using an affine transformation to eliminate the global difference between the templates and source images. Second, the affine registration was followed by an estimation of nonlinear deformation. Determinants of the Jacobian matrices of the nonlinear deformation were then calculated for every voxel to estimate the regional volume change during the nonlinear transformation. Jacobian determinant images highlighted the great magnitude of the relative local volume changes obtained when the elderly brains were spatially normalized onto the young/midlife male or female templates. They reflect the enlargement of CSF space in the lateral ventricles, sylvian fissures and cisterna magna, and the shrinkage of the cortex noted mainly in frontal, insular and lateral temporal cortices, and the cerebellum, in the aged brains. In the Jacobian determinant images, a regional shrinkage of the brain in the left middle prefrontal cortex was observed in addition to the regional expansion in the ventricles and sylvian fissures, which may be due to the age differences between the template and source images. The regional anatomical difference between template and source images could impose an extreme deformation of the source images during the spatial normalization, and therefore individual brains should be placed into the appropriate template
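
    The voxel-wise Jacobian determinant used above is straightforward to compute from a dense displacement field. The sketch below uses finite differences on a field of assumed shape (3, X, Y, Z); it is a generic NumPy illustration, not the registration software used in the study.

    ```python
    import numpy as np

    def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
        # For a deformation phi(x) = x + u(x), J = I + du/dx, and det(J)
        # measures the local volume change at each voxel.
        grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # du_i/dx_j
        J = np.zeros(disp.shape[1:] + (3, 3))
        for i in range(3):
            for j in range(3):
                J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
        return np.linalg.det(J)

    u = np.zeros((3, 8, 8, 8))                 # zero displacement field
    print(jacobian_determinant(u).mean())      # 1.0: no volume change anywhere
    ```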

  14. Selection of appropriate template for spatial normalization of brain images: tensor based morphometry

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Lee, Dong Soo; Kim, Yu Kyeong; Chung, June Key; Lee, Myung Chul

    2004-01-01

    Although there have been remarkable advances in spatial normalization techniques, the differences in the shape of the hemispheres and the sulcal pattern of brains relative to age, gender, races, and diseases cannot be fully overcome by the nonlinear spatial normalization techniques. T1 SPGR MR images in 16 elderly male normal volunteers (>55 y, mean age 61.8 ± 3.5 y) were spatially normalized onto the age/gender-specific Korean templates and the Caucasian MNI template, and the extent of the deformations was compared. These particular subjects were never included in the development of the templates. First, the images were matched to the templates using an affine transformation to eliminate the global difference between the templates and source images. Second, the affine registration was followed by an estimation of nonlinear deformation. Determinants of the Jacobian matrices of the nonlinear deformation were then calculated for every voxel to estimate the regional volume change during the nonlinear transformation. Jacobian determinant images highlighted the great magnitude of the relative local volume changes obtained when the elderly brains were spatially normalized onto the young/midlife male or female templates. They reflect the enlargement of CSF space in the lateral ventricles, sylvian fissures and cisterna magna, and the shrinkage of the cortex noted mainly in frontal, insular and lateral temporal cortices, and the cerebellum, in the aged brains. In the Jacobian determinant images, a regional shrinkage of the brain in the left middle prefrontal cortex was observed in addition to the regional expansion in the ventricles and sylvian fissures, which may be due to the age differences between the template and source images. The regional anatomical difference between template and source images could impose an extreme deformation of the source images during the spatial normalization, and therefore individual brains should be placed into the appropriate template

  15. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests

    Directory of Open Access Journals (Sweden)

    Carlos J. Corrada Bravo

    2017-04-01

    Full Text Available We developed a web-based cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of template-based detection. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts the presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
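
    The similarity-vector-plus-classifier pipeline can be outlined compactly. The sketch below slides a spectrogram template across a recording, summarizes the resulting similarity vector with a few statistics, and hands them to a Random Forest; the feature set, spectrogram settings and library choices (SciPy, scikit-learn) are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.signal import spectrogram, correlate2d
    from sklearn.ensemble import RandomForestClassifier

    def similarity_vector(audio, template_S, fs=22050):
        # Slide the call template across the spectrogram in time; the
        # template is assumed to span the full frequency extent.
        _, _, S = spectrogram(audio, fs=fs, nperseg=256)
        return correlate2d(S, template_S, mode="valid").ravel()

    def features(sim):
        # Statistical summary of the similarity vector.
        return [sim.max(), sim.mean(), sim.std(), np.percentile(sim, 95)]

    # Given labelled recordings, one would then train, e.g.:
    # X = np.array([features(similarity_vector(rec, template_S)) for rec in recs])
    # clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
    ```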

  16. Shape-Controlled Fabrication of the Polymer-Based Micromotor Based on the Polydimethylsiloxane Template.

    Science.gov (United States)

    Su, Miaoda; Liu, Mei; Liu, Limei; Sun, Yunyu; Li, Mingtong; Wang, Dalei; Zhang, Hui; Dong, Bin

    2015-11-03

    We report the utilization of the polydimethylsiloxane template to construct polymer-based autonomous micromotors with various structures. Solid or hollow micromotors, which consist of polycaprolactone and platinum nanoparticles, can be obtained with controllable sizes and shapes. The resulting micromotor can not only be self-propelled in solution based on the bubble propulsion mechanism in the presence of the hydrogen peroxide fuel, but also exhibit structure-dependent motion behavior. In addition, the micromotors can exhibit various functions, ranging from fluorescence, magnetic control to cargo transportation. Since the current method can be extended to a variety of organic and inorganic materials, we thus believe it may have great potential in the fabrication of different functional micromotors for diverse applications.

  17. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remote sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, having area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remote sensed hyperspectral image to obtain a low resolution hyperspectral image. The second step is to spectrally degrade the remote sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the earth surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remote sensed hyperspectral image. The main advantage of the proposed methodology for compressing remote sensed hyperspectral images is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained in the simulations corroborate the benefits of the proposed methodology.

  18. Template-based combinatorial enumeration of virtual compound libraries for lipids.

    Science.gov (United States)

    Sud, Manish; Fahy, Eoin; Subramaniam, Shankar

    2012-09-25

    A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachments points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast and contains no external dependencies. It is an open source package and freely available under the terms of the modified BSD license.

  19. 3D Bioprinting of Developmentally Inspired Templates for Whole Bone Organ Engineering.

    Science.gov (United States)

    Daly, Andrew C; Cunniffe, Gráinne M; Sathy, Binulal N; Jeon, Oju; Alsberg, Eben; Kelly, Daniel J

    2016-09-01

    The ability to print defined patterns of cells and extracellular-matrix components in three dimensions has enabled the engineering of simple biological tissues; however, bioprinting functional solid organs is beyond the capabilities of current biofabrication technologies. An alternative approach would be to bioprint the developmental precursor to an adult organ, using this engineered rudiment as a template for subsequent organogenesis in vivo. This study demonstrates that developmentally inspired hypertrophic cartilage templates can be engineered in vitro using stem cells within a supporting gamma-irradiated alginate bioink incorporating Arg-Gly-Asp adhesion peptides. Furthermore, these soft tissue templates can be reinforced with a network of printed polycaprolactone fibers, resulting in a ≈350-fold increase in construct compressive modulus, providing the necessary stiffness to implant such immature cartilaginous rudiments into load-bearing locations. As a proof-of-principle, multiple-tool biofabrication is used to engineer a mechanically reinforced cartilaginous template mimicking the geometry of a vertebral body, which in vivo supported the development of a vascularized bone organ containing trabecular-like endochondral bone with a supporting marrow structure. Such developmental engineering approaches could be applied to the biofabrication of other solid organs by bioprinting precursors that have the capacity to mature into their adult counterparts over time in vivo. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Computer-Aided Template for Model Reuse, Development and Maintenance

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2014-01-01

    A template-based approach for model development is presented in this work. Based on a model decomposition technique, the computer-aided template concept has been developed. This concept is implemented as a software tool, which provides a user-friendly interface for following the workflow steps...

  1. An effective approach to attenuate random noise based on compressive sensing and curvelet transform

    International Nuclear Information System (INIS)

    Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang

    2016-01-01

    Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate the random noise attenuation problem as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparse transform in the optimization problem to regularize the sparse coefficients in order to separate signal and noise and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem with an easy implementation and a fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edge of seismic events during the noise attenuation process and has high computational efficiency compared with the traditional curvelet thresholding and iterative soft thresholding based denoising methods. Besides, compared with f-x deconvolution, the proposed denoising method is capable of eliminating the random noise more effectively while preserving more useful signals. (paper)
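
    The L1-regularized formulation has a particularly simple form when the sparsifying transform is orthogonal: the minimizer is obtained by soft-thresholding the transform coefficients, which is the same shrinkage step that GPSR iterates for general operators. The sketch below uses a wavelet (via the assumed PyWavelets package) as a stand-in for the curvelet transform; signal and threshold are illustrative.

    ```python
    import numpy as np
    import pywt  # assumes PyWavelets; the wavelet stands in for the curvelet

    def l1_denoise(y, lam=0.2, wavelet="db4", level=4):
        # Solves min_x 0.5*||x - y||^2 + lam*||W x||_1 for orthogonal W
        # by one pass of soft thresholding on the detail coefficients.
        coeffs = pywt.wavedec(y, wavelet, level=level)
        den = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(den, wavelet)

    t = np.linspace(0.0, 1.0, 512)
    clean = np.sin(2.0 * np.pi * 12.0 * t) * np.exp(-3.0 * t)   # toy trace
    noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
    den = l1_denoise(noisy)[: t.size]
    print(np.linalg.norm(den - clean) / np.linalg.norm(clean))
    ```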

  2. Development of Ultrasonic Pulse Compression Using Golay Codes

    International Nuclear Information System (INIS)

    Kim, Young H.; Kim, Young Gil; Jeong, Peter

    1994-01-01

    Conventional ultrasonic flaw detection systems use a large-amplitude narrow pulse to excite a transducer. However, these systems are limited in pulse energy: an excessively large amplitude causes dielectric breakdown of the transducer, and an excessively long pulse decreases the resolution. Using pulse compression, a long pulse of pseudorandom signal can be used without sacrificing resolution, by means of signal correlation. In the present work, the pulse compression technique was implemented in an ultrasonic system. A Golay code was used as the pseudorandom signal in this system, since the pairwise sum of autocorrelations has no sidelobes. The equivalent input pulse of the Golay code was derived to analyze the pulse compression system. Throughout the experiments, the pulse compression technique demonstrated improved SNR (signal-to-noise ratio) by reducing the system's white noise. The experimental data also indicated that the SNR enhancement was proportional to the square root of the code length used. The technique seems to perform particularly well with highly energy-absorbent materials such as polymers, plastics and rubbers.
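
    The key property, that the autocorrelations of a Golay complementary pair sum to a delta function, can be verified in a few lines using the standard recursive construction; the code length and normalization are illustrative.

    ```python
    import numpy as np

    def golay_pair(m):
        # Recursive construction: (a, b) -> (a|b, a|-b) doubles the length
        # while preserving complementarity.
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(m):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(6)                                  # length-64 codes
    r = np.correlate(a, a, "full") + np.correlate(b, b, "full")
    print(r[a.size - 1])                                  # main lobe = 2 * 64
    print(np.abs(np.delete(r, a.size - 1)).max())         # sidelobes cancel: 0.0
    ```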

  3. New template family for the detection of gravitational waves from comparable-mass black hole binaries

    International Nuclear Information System (INIS)

    Porter, Edward K.

    2007-01-01

    In order to improve the phasing of the comparable-mass waveform as we approach the last stable orbit for a system, various resummation methods have been used to improve the standard post-Newtonian waveforms. In this work we present a new family of templates for the detection of gravitational waves from the inspiral of two comparable-mass black hole binaries. These new adiabatic templates are based on reexpressing the derivative of the binding energy and the gravitational wave flux functions in terms of shifted Chebyshev polynomials. The Chebyshev polynomials are a useful tool in numerical methods as they display the fastest convergence of any of the orthogonal polynomials. In this case they are also particularly useful as they eliminate one of the features that plagues the post-Newtonian expansion. The Chebyshev binding energy now has information at all post-Newtonian orders, compared to the post-Newtonian templates which only have information at full integer orders. In this work, we compare both the post-Newtonian and Chebyshev templates against a fiducially exact waveform. This waveform is constructed from a hybrid method of using the test-mass results combined with the mass-dependent parts of the post-Newtonian expansions for the binding energy and flux functions. Our results show that the Chebyshev templates achieve extremely high fitting factors at all post-Newtonian orders and provide excellent parameter extraction. We also show that this new template family has a faster Cauchy convergence, gives a better prediction of the position of the last stable orbit and in general recovers higher signal-to-noise ratios than the post-Newtonian templates.
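
    The fast Cauchy convergence of Chebyshev expansions, and the fact that each Chebyshev coefficient mixes information from all polynomial orders, can be illustrated with a toy fit; the function below is an arbitrary smooth stand-in for a rescaled energy or flux function, not the paper's waveforms.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = lambda v: 1.0 / (1.0 - 0.9 * v ** 2)   # toy function on [-1, 1]
    v = np.linspace(-1.0, 1.0, 400)
    for deg in (4, 8, 16, 32):
        c = C.chebfit(v, f(v), deg)
        # Maximum error falls geometrically with the expansion degree.
        print(deg, np.abs(C.chebval(v, c) - f(v)).max())
    ```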

  4. TEMPLATE-ASSISTED FABRICATION AND DIELECTROPHORETIC MANIPULATION OF PZT MICROTUBES

    Directory of Open Access Journals (Sweden)

    VLADIMÍR KOVAĽ

    2012-09-01

    Full Text Available Mesoscopic high-aspect-ratio ferroelectric tube structures of a diverse range of compositions with tailored physical properties can be used as key components in miniaturized flexible electronics, nano- and micro-electro-mechanical systems, nonvolatile FeRAM memories, and tunable photonic applications. They are usually produced through advanced “bottom-up” or “top-down” fabrication techniques. In this study, a template wetting approach is employed for the fabrication of Pb(Zr0.52Ti0.48)O3 (PZT) microtubes. The method is based on repeated infiltration of precursor solution into macroporous silicon (Si) templates at a sub-atmospheric pressure. Prior to crystallization at 750°C, free-standing tubes of a 2-μm outer diameter, extending to over 30 μm in length, were released from the Si template using a selective isotropic-pulsed XeF2 reactive ion etching. To facilitate rapid electrical characterization and enable a future integration process, directed positioning and aligning of the PZT tubes was performed by dielectrophoresis. The electric field-assisted technique involves an alternating electric voltage that is applied through pre-patterned microelectrodes to a colloidal suspension of PZT tubes dispersed in isopropyl alcohol. The most efficient biasing for the assembly of tubes across the electrode gap of 12 μm was a square wave signal of 5 Vrms and 10 Hz. By varying the applied frequency between 1 and 10 Hz, an enhancement in tube alignment was obtained.

  5. MR diagnosis of retropatellar chondral lesions under compression. A comparison with histological findings

    Energy Technology Data Exchange (ETDEWEB)

    Andresen, R. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany); Radmer, S. [Dept. of Radiology and Nuclear Medicine, Behring Municipal Hospital, Academic Teaching Hospital, Free Univ. of Berlin (Germany); Koenig, H. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany); Banzer, D. [Dept. of Radiology and Nuclear Medicine, Behring Municipal Hospital, Academic Teaching Hospital, Free Univ. of Berlin (Germany); Wolf, K.J. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany)

    1996-01-01

    Purpose: The aim of the study was to improve the chondromalacia patellae (CMP) diagnosis by MR imaging under defined compression of the retropatellar cartilage, using a specially designed knee compressor. The results were compared with histological findings to obtain an MR classification of CMP. Method: MR imaging was performed in in vitro studies of 25 knees from cadavers to investigate the effects of compression on the retropatellar articular cartilage. The results were verified by subsequent histological evaluations. Results: There was a significant difference in cartilage thickness reduction and signal intensity behaviour under compression according to the stage of CMP. Conclusion: Based on the decrease in cartilage thickness, signal intensity behaviour under compression, and cartilage morphology, the studies permitted an MR classification of CMP into stages I-IV in line with the histological findings. Healthy cartilage was clearly distinguished, a finding which may optimize CMP diagnosis. (orig.).

  6. Compression of TPC data in the ALICE experiment

    International Nuclear Information System (INIS)

    Nicolaucig, A.; Mattavelli, M.; Carrato, S.

    2002-01-01

    In this paper two algorithms for the compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN are described. The first algorithm is based on a lossless source code modeling technique, i.e. the original TPC signal information can be reconstructed without errors at the decompression stage. The source model exploits the temporal correlation that is present in the TPC data to reduce the entropy of the source. The second algorithm is based on a source model which is lossy if samples of the TPC signal are considered one by one. Conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse. Entropy coding is then applied to the set of events defined by the two source models to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the lossless and the lossy compression algorithms achieve a data reduction, respectively, to 49.2% and in the range of 34.2% down to 23.7% of the original data rate. The number of operations per input symbol required to implement the compression stage for both algorithms is relatively low, so that a real-time implementation embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storage cost of the ALICE experiment
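
    The lossy source model, which keeps only the pulse area and the time centre of mass, can be sketched directly; the threshold and sampling step below are illustrative.

    ```python
    import numpy as np

    def pulse_quantities(samples, t0=0.0, dt=1.0, threshold=0.0):
        # Keep only the physically relevant quantities of a TPC pulse:
        # its area and the location of its time centre of mass.
        s = np.asarray(samples, dtype=float)
        s = np.where(s > threshold, s, 0.0)    # drop baseline samples
        t = t0 + dt * np.arange(s.size)
        area = s.sum() * dt
        com = (t * s).sum() / s.sum() if s.any() else np.nan
        return area, com

    print(pulse_quantities([0, 1, 4, 7, 4, 1, 0]))   # (17.0, 3.0)
    ```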

  7. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.

  8. On Compressed Sensing and the Estimation of Continuous Parameters From Noisy Observations

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2012-01-01

    Compressed sensing (CS) has in recent years become a very popular way of sampling sparse signals. This sparsity is measured with respect to some known dictionary consisting of a finite number of atoms. Most models for real world signals, however, are parametrised by continuous parameters corresponding to a dictionary with an infinite number of atoms. Examples of such parameters are the temporal and spatial frequency. In this paper, we analyse how CS affects the estimation performance of any unbiased estimator when we assume such infinite dictionaries. We base our analysis on the Cramér-Rao lower bound.

  9. Distortion Estimation in Compressed Music Using Only Audio Fingerprints

    NARCIS (Netherlands)

    Doets, P.J.O.; Lagendijk, R.L.

    2008-01-01

    An audio fingerprint is a compact yet very robust representation of the perceptually relevant parts of an audio signal. It can be used for content-based audio identification, even when the audio is severely distorted. Audio compression changes the fingerprint slightly. We show that these small changes can be used to estimate the distortion introduced by compression.

  10. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Karthikeyan Umapathy

    2007-08-01

    Full Text Available Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of the audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF-specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal-adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed, and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique, with SDG ranging from –0.53 to –2.27.

  11. RNACompress: Grammar-based compression and informational complexity measurement of RNA secondary structure

    Directory of Open Access Journals (Sweden)

    Chen Chun

    2008-03-01

    Full Text Available Abstract Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool

  12. Report Template

    DEFF Research Database (Denmark)

    Bjørn, Anders; Laurent, Alexis; Owsianiak, Mikołaj

    2018-01-01

    To ensure consistent reporting of life cycle assessment (LCA), we provide a report template. The report includes elements of an LCA study as recommended by the ILCD Handbook. An illustrative case study reported according to this template is presented in Chap. 39.

  13. Beyond Creation of Mesoporosity: The Advantages of Polymer-Based Dual-Function Templates for Fabricating Hierarchical Zeolites

    KAUST Repository

    Tian, Qiwei

    2016-02-05

    Direct synthesis of hierarchical zeolites currently relies on the use of surfactant-based templates to produce mesoporosity by the random stacking of 2D zeolite sheets or the agglomeration of tiny zeolite grains. The benefits of using nonsurfactant polymers as dual-function templates in the fabrication of hierarchical zeolites are demonstrated. First, the minimal intermolecular interactions of nonsurfactant polymers impose little interference on the crystallization of zeolites, favoring the formation of 3D continuous zeolite frameworks with a long-range order. Second, the mutual interpenetration of the polymer and the zeolite networks renders disordered but highly interconnected mesopores in zeolite crystals. These two factors allow for the synthesis of single-crystalline, mesoporous zeolites of varied compositions and framework types. A representative example, hierarchical aluminosilicate (meso-ZSM-5), has been carefully characterized. It has a unique branched fibrous structure, and far outperforms bulk aluminosilicate (ZSM-5) as a catalyst in two model reactions: conversion of methanol to aromatics and catalytic cracking of canola oil. Third, extra functional groups in the polymer template can be utilized to incorporate desired functionalities into hierarchical zeolites. Last and most importantly, polymer-based templates permit heterogeneous nucleation and growth of mesoporous zeolites on existing surfaces, forming a continuous zeolitic layer. In a proof-of-concept experiment, unprecedented core-shell-structured hierarchical zeolites are synthesized by coating mesoporous zeolites on the surfaces of bulk zeolites. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.

    2013-12-01

    Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power efficient operation of an orthogonal frequency division multiple access cognitive radio system and realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Over-driven amplifiers result in in-band distortions and out of band interference. Out of band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  15. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.; Hammi, Oualid; Al-Naffouri, Tareq Y.

    2013-01-01

    Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power efficient operation of an orthogonal frequency division multiple access cognitive radio system and realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Over-driven amplifiers result in in-band distortions and out of band interference. Out of band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  16. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R [Livermore, CA

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
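
    The three modules map naturally onto a few standard-library calls. The sketch below compresses the media bytes, selects a window of the compressed stream, and hashes it into a keyword; the zlib/SHA-256 choices and the window offsets are illustrative assumptions, not the algorithms fixed by the disclosure.

    ```python
    import hashlib
    import zlib

    def media_keyword(media: bytes, offset: int = 64, length: int = 256) -> str:
        stream = zlib.compress(media, level=9)       # data compression module
        subset = stream[offset:offset + length]      # data acquisition module
        return hashlib.sha256(subset).hexdigest()    # hashing module

    print(media_keyword(b"example media payload" * 100))
    ```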

  17. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.

  18. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.

  19. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    International Nuclear Information System (INIS)

    Zheng Bin; Meng Qingfeng; Wang Nan; Li Zhi

    2011-01-01

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in the application of wireless sensor networks. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumption during the data transmission process in the on-line WSNs-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding and Huffman coding. Among these, the 5/3 lifting wavelet is used for dividing data into different frequency bands to extract signal characteristics. Zerotree coding is applied to calculate the dynamic thresholds to retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, simulation is carried out using Matlab. The result of the simulation shows that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSNs-based bearing monitoring system, in which the TI DSP TMS320F2812 is used to realize the algorithm.
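
    One forward level of the 5/3 lifting wavelet used for the subband split fits in a few lines. The sketch below is the integer predict/update form; periodic boundary handling and the even-length assumption are simplifications for brevity.

    ```python
    import numpy as np

    def lift53(x):
        # Integer 5/3 lifting step: predict the odd samples from their even
        # neighbours (detail band d), then update the even samples (band a).
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2], x[1::2]
        d = odd - ((even + np.roll(even, -1)) >> 1)
        a = even + ((np.roll(d, 1) + d + 2) >> 2)
        return a, d

    a, d = lift53(np.arange(16))
    print(a, d)   # details are zero on a linear ramp (except at the wrap)
    ```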

  20. A facile template method to synthesize significantly improved LiNi0.5Mn1.5O4 using corn stalk as a bio-template

    International Nuclear Information System (INIS)

    Liu, Guiyang; Kong, Xin; Sun, Hongyan; Wang, Baosen; Yi, Zhongzhou; Wang, Quanbiao

    2014-01-01

    In order to simplify the template method for the synthesis of cathode materials for lithium ion batteries, a facile template method using plant stalks as bio-templates has been introduced. Based on this method, LiNi0.5Mn1.5O4 spinel with a significantly improved electrochemical performance has been synthesized using corn stalk as a template. X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) have been used to investigate the phase composition and micro-morphologies of the products. Charge-discharge measurements in lithium cells, cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) have been used to study the electrochemical performance of the products. The results indicate that the templated product exhibits higher crystallinity than the non-templated product. Both the templated and the non-templated products are combinations of the ordered space group P4332 and the disordered Fd-3m. The specific BET surface area of the templated product is about twice that of the non-templated product. Moreover, the electrochemical performance of the templated product, including specific capacity, cycling stability and rate capability, is significantly improved compared with the non-templated product, due to its higher crystallinity, larger Li+ diffusion coefficient and lower charge transfer resistance.

  1. Lattice and strain analysis of atomic resolution Z-contrast images based on template matching

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Jian-Min, E-mail: jianzuo@uiuc.edu [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Shah, Amish B. [Center for Microanalysis of Materials, Materials Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Kim, Honggyu; Meng, Yifei; Gao, Wenpei [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Rouviére, Jean-Luc [CEA-INAC/UJF-Grenoble UMR-E, SP2M, LEMMA, Minatec, Grenoble 38054 (France)

    2014-01-15

    A real space approach is developed based on template matching for quantitative lattice analysis using atomic resolution Z-contrast images. The method, called TeMA, uses the template of an atomic column, or a group of atomic columns, to transform the image into a lattice of correlation peaks. This is helped by using a local intensity adjusted correlation and by the design of templates. Lattice analysis is performed on the correlation peaks. A reference lattice is used to correct for scan noise and scan distortions in the recorded images. Using these methods, we demonstrate that a precision of a few picometers is achievable in lattice measurement using aberration-corrected Z-contrast images. For application, we apply the methods to strain analysis of a molecular beam epitaxy (MBE) grown LaMnO3 and SrMnO3 superlattice. The results show alternating epitaxial strain inside the superlattice and its variations across interfaces at the spatial resolution of a single perovskite unit cell. Our methods are general, model free and provide high spatial resolution for lattice analysis. - Highlights: • A real space approach is developed for strain analysis using atomic resolution Z-contrast images and template matching. • A precision of a few picometers is achievable in the measurement of lattice displacements. • The spatial resolution of a single perovskite unit cell is demonstrated for a LaMnO3 and SrMnO3 superlattice grown by MBE.
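
    The peak-lattice idea can be reproduced with plain cross-correlation. The sketch below uses a zero-mean, unit-norm template and a local-maximum filter, a simplification of TeMA's locally intensity-adjusted correlation; the threshold and neighbourhood size are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter
    from scipy.signal import correlate2d

    def correlation_peaks(image, template, rel_thresh=0.6, size=5):
        # Slide the atomic-column template over the image and keep local
        # correlation maxima as lattice points for subsequent analysis.
        tpl = template - template.mean()
        tpl = tpl / np.linalg.norm(tpl)
        c = correlate2d(image - image.mean(), tpl, mode="same")
        peaks = (c == maximum_filter(c, size=size)) & (c > rel_thresh * c.max())
        ys, xs = np.nonzero(peaks)
        return np.column_stack([ys, xs])
    ```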

  2. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is a common practice, where the segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to the region including two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of a joint space marked by the template, the hard constraint was set by the initial seeds, which were automatically generated from thresholding and morphological operations. The performance and the robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  3. Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.

    Science.gov (United States)

    Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen

    2015-10-21

    Based on polyaniline/single-walled carbon nanotube/sponge electrodes, highly compressible all-solid-state supercapacitors are prepared with an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Compression module for the BCM1F microTCA raw data readout

    CERN Document Server

    Dostanic, Milica

    2017-01-01

    BCM1F is a diamond-based detector and one of the luminometers and background monitors operated by the BRIL group, part of the CMS experiment. BCM1F's front-end produces analog signals which are digitized in a new microTCA back-end. An FPGA in the back-end takes care of signal processing and stores raw data. The raw data readout has been improved by implementing a data compression module in the firmware. This module allows storing a larger amount of data in short time intervals. The module has been implemented in VHDL, using a zero-suppression algorithm: only data above a defined threshold are stored into memory, while samples around the baseline are discarded. Thanks to metadata describing the suppressed data, the shape of the input signals and the time information are preserved. Tests with simulations and a pulse generator showed good results and proved that the module can achieve a large compression factor.
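
    Zero suppression with shape-preserving metadata amounts to storing each above-threshold run together with its start index. Below is a minimal Python model of the firmware logic; the threshold and data layout are illustrative, not the VHDL implementation.

    ```python
    import numpy as np

    def zero_suppress(samples, threshold):
        # Store only runs of samples above threshold; the start index of
        # each run preserves the pulse shape and timing information.
        s = np.asarray(samples)
        above = s > threshold
        blocks, i = [], 0
        while i < s.size:
            if above[i]:
                j = i
                while j < s.size and above[j]:
                    j += 1
                blocks.append((i, s[i:j].tolist()))
                i = j
            else:
                i += 1
        return blocks

    print(zero_suppress([0, 1, 9, 12, 3, 0, 0, 8, 7, 1], threshold=2))
    # [(2, [9, 12, 3]), (7, [8, 7])]
    ```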

  5. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz; Alouini, Mohamed-Slim; Hasna, Mazen Omar

    2010-01-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.

  6. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2010-06-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.

  7. Proline-catalysed asymmetric ketol cyclizations: The template ...

    Indian Academy of Sciences (India)

    Unknown

    Abstract. A modified template mechanism based on modelling studies of energy-minimised complexes is presented for the asymmetric proline-catalysed cyclization of triketones 1, 2 and 3 to the 2S,3S-ketols 1a, 2a and 3a respectively. The template model involves a three-point contact as favoured in enzyme–substrate ...

  8. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    Directory of Open Access Journals (Sweden)

    Roger P. Pawlowski

    2012-01-01

    Full Text Available An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
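
    The core idea, one generic code path that yields extra quantities when instantiated with a richer scalar type, can be miniaturized outside C++. The Python sketch below uses operator overloading on a hypothetical Dual class so the same model code returns derivatives as well as values; it is an analogy to the approach, not an excerpt from the Trilinos packages:

```python
class Dual:
    """Minimal forward-mode AD scalar: carries f and df/dx together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def residual(x):
    # "Generic" model code: written once, evaluated with floats or Duals.
    return x * x * x + 2.0 * x + 1.0

print(residual(2.0))             # plain evaluation: 13.0
r = residual(Dual(2.0, 1.0))     # seeded dual number propagates d/dx
print(r.val, r.der)              # 13.0 and 3*x**2 + 2 = 14.0 at x = 2
```

    In C++ the same effect is obtained by templating the calculation on the scalar type, so the compiler generates the plain and derivative-carrying versions from one definition.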

  9. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region, applying lossless compression to this region and lossy compression to the empty regions, is proposed in this paper. The resulting compression ratio, along with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
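
    A toy version of the split, assuming an 8-bit grayscale slide in which tissue is darker than the empty glass and with zlib standing in for a production codec, might look like this in Python:

```python
import numpy as np, zlib

def roi_compress(img, threshold=200):
    """Lossless on tissue, lossy elsewhere: a toy version of the ROI idea."""
    mask = img < threshold                      # tissue pixels (dark regions)
    return {
        "shape": img.shape,
        "mask": zlib.compress(np.packbits(mask).tobytes()),
        "roi": zlib.compress(img[mask].tobytes()),        # exact pixel values
        "bg": zlib.compress((img // 32).tobytes()),       # 3-bit, lossy
    }

def roi_decompress(p):
    h, w = p["shape"]
    mask = np.unpackbits(np.frombuffer(zlib.decompress(p["mask"]), np.uint8),
                         count=h * w).reshape(h, w).astype(bool)
    img = (np.frombuffer(zlib.decompress(p["bg"]), np.uint8)
           .reshape(h, w) * 32).astype(np.uint8)          # dequantized bg
    img[mask] = np.frombuffer(zlib.decompress(p["roi"]), np.uint8)
    return img
```

    The tissue pixels survive the round trip bit-for-bit, while the empty regions are stored only as coarse 3-bit values, so the overall ratio depends mostly on how much of the slide is background.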

  10. Making Deformable Template Models Operational

    DEFF Research Database (Denmark)

    Fisker, Rune

    2000-01-01

    Deformable template models are a very popular and powerful tool within the field of image processing and computer vision. This thesis treats this type of model extensively, with special focus on handling their common difficulties, i.e. model parameter selection, initialization and optimization. A proper handling of these common difficulties is essential for making the models operational by a non-expert user, which is a requirement for intensifying and commercializing the use of deformable template models. The thesis is organized as a collection of the most important articles, which have been ... One contribution is a method for estimation of the model parameters, which applies a combination of a maximum likelihood and a minimum distance criterion. Another contribution is a very fast search-based initialization algorithm using a filter interpretation of the likelihood model. These two methods can be applied to most deformable template ...

  11. Rubber stamp templates for improving clinical documentation: A paper-based, m-Health approach for quality improvement in low-resource settings.

    Science.gov (United States)

    Kleczka, Bernadette; Musiega, Anita; Rabut, Grace; Wekesa, Phoebe; Mwaniki, Paul; Marx, Michael; Kumar, Pratap

    2018-06-01

    The United Nations' Sustainable Development Goal #3.8 targets 'access to quality essential healthcare services'. Clinical practice guidelines are an important tool for ensuring quality of clinical care, but many challenges prevent their use in low-resource settings. Monitoring the use of guidelines relies on cumbersome clinical audits of paper records, and electronic systems face financial and other limitations. Here we describe a unique approach to generating digital data from paper using guideline-based templates, rubber stamps and mobile phones. The Guidelines Adherence in Slums Project targeted ten private sector primary healthcare clinics serving informal settlements in Nairobi, Kenya. Each clinic was provided with rubber stamp templates to support documentation and management of commonly encountered outpatient conditions. Participatory design methods were used to customize templates to the workflows and infrastructure of each clinic. Rubber stamps were used to print templates into paper charts, providing clinicians with checklists for use during consultations. Templates used bubble format data entry, which could be digitized from images taken on mobile phones. Besides rubber stamp templates, the intervention included booklets of guideline compilations, one Android phone for digitizing images of templates, and one data feedback/continuing medical education session per clinic each month. In this paper we focus on the effect of the intervention on documentation of three non-communicable diseases in one clinic. Seventy charts of patients enrolled in the chronic disease program (hypertension/diabetes, n=867; chronic respiratory diseases, n=223) at one of the ten intervention clinics were sampled. Documentation of each individual patient encounter in the pre-intervention (January-March 2016) and post-intervention period (May-July) was scored for information in four dimensions - general data, patient assessment, testing, and management. Control criteria included

  12. Programmable imprint lithography template

    Science.gov (United States)

    Cardinale, Gregory F [Oakland, CA; Talin, Albert A [Livermore, CA

    2006-10-31

    A template for imprint lithography (IL) that significantly reduces template production costs by allowing the same template to be re-used for several technology generations. The template is composed of an array of spaced-apart moveable and individually addressable rods or plungers. Thus, the template can be configured to provide a desired pattern by programming the array of plungers such that certain of the plungers are in an "up" or actuated configuration. This arrangement of "up" and "down" plungers forms a pattern composed of protruding and recessed features, which can then be impressed onto a polymer-film-coated substrate by applying pressure to the template, impressing the programmed configuration into the polymer film. The pattern impressed into the polymer film is then reproduced on the substrate by subsequent processing.

  13. Detecting lung cancer symptoms with analogic CNN algorithms based on a constrained diffusion template

    International Nuclear Information System (INIS)

    Hirakawa, Satoshi; Nishio, Yoshifumi; Ushida, Akio; Ueno, Junji; Kasem, I.; Nishitani, Hiromu; Rekeczky, C.; Roska, T.

    1997-01-01

    In this article, a new type of diffusion template and an analogic CNN algorithm using this diffusion template for detecting some lung cancer symptoms in X-ray films are proposed. The performance of the diffusion template is investigated, and our CNN algorithm is verified to successfully detect some key lung cancer symptoms. (author)
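
    In cellular neural network terms, a diffusion template is a small feedback kernel whose repeated application spreads activity between neighboring cells, much like discrete heat diffusion. A rough numpy sketch of that behavior (an illustration only; the authors' constrained template coefficients are not given in the record):

```python
import numpy as np
from scipy.signal import convolve2d

# A 3x3 diffusion-type template (weights sum to 1): repeated application
# smooths an X-ray image the way discrete heat diffusion would.
A = np.array([[0.05, 0.10, 0.05],
              [0.10, 0.40, 0.10],
              [0.05, 0.10, 0.05]])

def diffuse(image, steps=10):
    x = image.astype(float)
    for _ in range(steps):
        x = convolve2d(x, A, mode="same", boundary="symm")
    return x  # smoothed background, e.g. for highlighting nodule candidates
```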

  14. View-Invariant Gait Recognition Through Genetic Template Segmentation

    Science.gov (United States)

    Isaac, Ebenezer R. H. P.; Elias, Susan; Rajagopalan, Srinivasan; Easwarakumar, K. S.

    2017-08-01

    The template-based, model-free approach provides by far the most successful solution to the gait recognition problem in the literature. Recent work discusses how isolating the head and leg portions of the template increases the performance of a gait recognition system, making it robust against covariates such as clothing and carrying conditions. However, most methods involve a manual definition of the boundaries. The method we propose, genetic template segmentation (GTS), employs the genetic algorithm to automate the boundary selection process. This method was tested on the GEI, GEnI and AEI templates. GEI exhibits the best result when segmented with our approach. Experimental results show that our approach significantly outperforms the existing implementations of view-invariant gait recognition.

  15. Tannin-based monoliths from emulsion-templating

    International Nuclear Information System (INIS)

    Szczurek, A.; Martinez de Yuso, A.; Fierro, V.; Pizzi, A.; Celzard, A.

    2015-01-01

    Highlights: • Efficient preparation procedures are presented for new and “green” tannin-based organic polyHIPEs. • Highest homogeneity and strength are obtained at an oil fraction near the close-packing value. • Structural and mechanical properties abruptly change above such critical value. - Abstract: Highly porous monoliths prepared by emulsion-templating, frequently called polymerised High Internal Phase Emulsions (polyHIPEs) in the literature, were prepared from “green” precursors such as Mimosa bark extract, sunflower oil and ethoxylated castor oil. Various oil fractions, ranging from 43 to 80 vol.%, were used and shown to have a dramatic impact on the resultant porous structure. A critical oil fraction around 70 vol.% was found to exist, close to the theoretical values of 64% and 74% for random and compact sphere packing, respectively, at which the properties of both emulsions and derived porous monoliths changed. Such change of behaviour was observed by many different techniques such as viscosity, electron microscopy, mercury intrusion, and mechanical studies. We show and explain why this critical oil fraction is the one leading to the strongest and most homogeneous porous monoliths

  16. Robust and Secure Watermarking Using Sparse Information of Watermark for Biometric Data Protection

    Directory of Open Access Journals (Sweden)

    Rohit M Thanki

    2016-08-01

    Full Text Available Biometric-based human authentication systems are used for security purposes in many organizations in the present world. Such systems have several vulnerable points, two of which are the protection of biometric templates in the system database and the protection of biometric templates on the communication channel between the modules of the authentication system. This paper proposes a robust watermarking scheme that uses sparse information of the watermark biometric to secure one of these vulnerable points: the biometric templates on the communication channel. A compressive sensing procedure is used to generate the sparse information of the watermark biometric data from its detail wavelet coefficients. This sparse information is then embedded into the DCT coefficients of the host biometric data. The proposed scheme is robust to common signal processing and geometric attacks such as JPEG compression, added noise, filtering, cropping and histogram equalization, and it offers more advantages and higher quality measures than existing schemes in the literature.
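
    The pipeline can be caricatured in a few lines of Python: compress the watermark features to a short vector with a random projection (the compressive-sensing step) and add it, scaled, to DCT coefficients of the host. This is a generic additive-DCT sketch with made-up parameters, using a Gaussian projection where the paper derives the sparse information from detail wavelet coefficients:

```python
import numpy as np
from scipy.fftpack import dct, idct

def embed(host, wm_features, alpha=0.05, seed=7):
    """Hide a compressed sketch of the watermark in the host's
    low-frequency DCT coefficients (DC term excluded)."""
    rng = np.random.default_rng(seed)
    m = len(wm_features) // 4                        # watermark compression
    phi = rng.normal(size=(m, len(wm_features))) / np.sqrt(m)
    y = phi @ wm_features                            # CS measurement vector
    c = dct(host.astype(float).ravel(), norm="ortho")
    c[1:1 + m] += alpha * y                          # additive embedding
    return idct(c, norm="ortho").reshape(host.shape), y

def extract(marked, host, m, alpha=0.05):
    """Non-blind extraction: difference the DCT spectra and rescale."""
    d = (dct(marked.ravel(), norm="ortho")
         - dct(host.astype(float).ravel(), norm="ortho"))
    return d[1:1 + m] / alpha
```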

  17. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2014-01-01

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results for various subsurface reflectivity models revealed that solutions computed using CS-based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.
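
    Basis pursuit denoising itself is easy to demonstrate on a toy problem. The sketch below solves the Lagrangian form min ||Ax − y||²/2 + λ||x||₁ with plain iterative soft-thresholding (ISTA), a simple stand-in for the large-scale solvers normally used for this formulation:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=300):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Toy demo: recover a sparse "reflectivity" from few noisy measurements.
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[10, 77, 150]] = [1.0, -0.8, 0.5]
y = A @ x_true + 0.01 * rng.normal(size=60)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.1))  # expect [10, 77, 150]
```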

  18. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali

    2014-05-08

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results for various subsurface reflectivity models revealed that solutions computed using CS-based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.

  19. Synthesis of copper telluride nanowires using template-based ...

    Indian Academy of Sciences (India)

    Anodized aluminum oxide foil (AAO) acts as template and electrodeposition is conducted in a ... the nanopores were perpendicular to the AAO membrane surface and were uniform in ...

  20. Development and clinical implementation of a new template for MRI-based intracavitary/interstitial gynecologic brachytherapy for locally advanced cervical cancer: from CT-based MUPIT to the MRI compatible Template Benidorm. Ten years of experience

    Directory of Open Access Journals (Sweden)

    Silvia Rodríguez Villalba

    2016-10-01

    Full Text Available Purpose: To study outcome and toxicity in 59 patients with locally advanced cervix carcinoma treated with the computed tomography (CT)-based Martinez universal perineal interstitial template (MUPIT) and the new magnetic resonance imaging (MRI)-compatible Template Benidorm (TB). Material and methods: From December 2005 to October 2015, we retrospectively analyzed 34 patients treated with MUPIT and 25 treated with the TB. Six 4 Gy fractions were prescribed to the clinical target volume (CTV), combined with external beam radiotherapy (EBRT). The organs at risk (OARs) and the CTV were delineated by CT scan in the MUPIT implants and by MRI in the TB implants. Dosimetry was CT-based for MUPIT and exclusively MRI-based for TB. Dose values were biologically normalized to equivalent doses in 2 Gy fractions (EQD2). Results: Median CTV volume was 163.5 cm3 for CT-based MUPIT (range, 81.8-329.4 cm3) and 91.9 cm3 for MRI-based TB (range, 26.2-161 cm3). Median D90 CTV (EBRT + BT) was 75.8 Gy for CT-based MUPIT (range, 69-82 Gy) and 78.6 Gy for MRI-based TB (range, 62.5-84.2 Gy). Median D2cm3 for the rectum was 75.3 Gy for CT-based MUPIT (range, 69.8-132.1 Gy) and 69.9 Gy for MRI-based TB (range, 58.3-83.7 Gy). Median D2cm3 for the bladder was 79.8 Gy for CT-based MUPIT (range, 71.2-121.1 Gy) and 77.1 Gy for MRI-based TB (range, 60.5-90.8 Gy). Local control (LC) was 88%. Differences in overall survival (OS), disease-free survival (DFS), and LC between the two groups were not statistically significant. Patients treated with CT-based MUPIT had a significantly higher rate of grade 3 rectal bleeding (13% vs. 2%, p = 0.040) than those treated with MRI-based TB. Conclusions: Template Benidorm treatment using MRI-based dosimetry provides the advantages of MRI volume definition and allows the definition of smaller volumes, resulting in statistically significantly decreased rectal toxicity compared to that seen with CT-based MUPIT treatment.

  1. Experimental research of the influence of the strength of ore samples on the parameters of an electromagnetic signal during acoustic excitation in the process of uniaxial compression

    Science.gov (United States)

    Yavorovich, L. V.; Bespal`ko, A. A.; Fedotov, P. I.

    2018-01-01

    Parameters of electromagnetic responses (EMRe) generated during uniaxial compression of rock samples under excitation by deterministic acoustic pulses are presented and discussed. Such physical modeling in the laboratory makes it possible to reveal the main regularities of electromagnetic signal (EMS) generation in a rock mass. The influence of the samples' mechanical properties on the parameters of the EMRe excited by an acoustic signal in the process of uniaxial compression is considered. It has been established that sulfides and quartz in the rocks of the Tashtagol iron ore deposit (Western Siberia, Russia) contribute to the conversion of mechanical energy into the energy of the electromagnetic field, which is expressed in an increase in the EMS amplitude. A decrease in the EMS amplitude with the changing stress-strain state of the sample during uniaxial compression is observed when the amount of conductive magnetite contained in the rock increases. The obtained results are important for the physical substantiation of testing methods and for monitoring changes in the stress-strain state of a rock mass using the parameters of electromagnetic signals and the characteristics of electromagnetic emission.

  2. Action video game play facilitates the development of better perceptual templates

    Science.gov (United States)

    Bejjanki, Vikranth R.; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C. Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-01-01

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play. PMID:25385590

  3. Action video game play facilitates the development of better perceptual templates.

    Science.gov (United States)

    Bejjanki, Vikranth R; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-11-25

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play.

  4. Properties of ordered titanium templates covered with Au thin films for SERS applications

    Energy Technology Data Exchange (ETDEWEB)

    Grochowska, Katarzyna, E-mail: kgrochowska@imp.gda.pl [Centre for Plasma and Laser Engineering, Szewalski Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14 St., 80-231 Gdańsk (Poland); Siuzdak, Katarzyna [Centre for Plasma and Laser Engineering, Szewalski Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14 St., 80-231 Gdańsk (Poland); Sokołowski, Michał; Karczewski, Jakub [Faculty of Applied Physics and Mathematics, Gdańsk University of Technology, Narutowicza 11/12 St., 80-233 Gdańsk (Poland); Szkoda, Mariusz [Centre for Plasma and Laser Engineering, Szewalski Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14 St., 80-231 Gdańsk (Poland); Faculty of Chemistry, Gdańsk University of Technology, Narutowicza 11/12 St., 80-233 Gdańsk (Poland); Śliwiński, Gerard [Centre for Plasma and Laser Engineering, Szewalski Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14 St., 80-231 Gdańsk (Poland)

    2016-12-01

    Graphical abstract: - Highlights: • Dimpled Ti substrates prepared via anodization followed by etching. • Highly ordered nano-patterned titanium templates covered with thin Au films. • Enhanced Raman signal indicates a promising sensing material. - Abstract: Currently, roughened metal nanostructures are widely studied as highly sensitive Raman scattering substrates that show application potential in biochemistry, food safety and medical diagnostics. In this work the structural properties and the enhancement effect due to surface-enhanced Raman scattering (SERS) of highly ordered nano-patterned titanium templates covered with thin (5–20 nm) gold films are reported. The templates are formed by preparation of a dense structure of TiO₂ nanotubes on a flat Ti surface (2 × 2 cm²) and their subsequent etching down to the substrate. SEM images reveal the formation of honeycomb nanostructures with a cavity diameter of 80 nm. Due to the strongly inhomogeneous distribution of the electromagnetic field in the vicinity of the Au film discontinuities, the measured average enhancement factor (10⁷–10⁸) is markedly higher than observed for bare Ti templates. The enhancement factor and Raman signal intensity can be optimized by adjusting the process conditions and the thickness of the deposited Au layer. Results confirm that the obtained structures can be used in surface-enhanced sensing.

  5. Improving your target-template alignment with MODalign.

    KAUST Repository

    Barbato, Alessandro

    2012-02-04

    SUMMARY: MODalign is an interactive web-based tool aimed at helping protein structure modelers to inspect and manually modify the alignment between the sequences of a target protein and of its template(s). It interactively computes, displays and, upon modification of the target-template alignment, updates the multiple sequence alignments of the two protein families, their conservation score, secondary structure and solvent accessibility values, and local quality scores of the implied three-dimensional model(s). Although it has been designed to simplify the target-template alignment step in modeling, it is suitable for all cases where a sequence alignment needs to be inspected in the context of other biological information. AVAILABILITY AND IMPLEMENTATION: Freely available on the web at http://modorama.biocomputing.it/modalign. Website implemented in HTML and JavaScript with all major browsers supported. CONTACT: jan.kosinski@uniroma1.it.

  6. Computer-aided modelling template: Concept and application

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2015-01-01

    Modelling is an important enabling technology in modern chemical engineering applications. A template-based approach is presented in this work to facilitate the construction and documentation of the models and enable their maintenance for reuse in a wider application range. Based on a model decomposition technique which identifies the generic steps and workflow involved, the computer-aided template concept has been developed. This concept is implemented as a software tool, which provides a user-friendly interface for following the workflow steps and guidance through the steps providing additional ...

  7. Signal-inducing bone cements for MRI-guided spinal cementoplasty: evaluation of contrast-agent-based polymethylmethacrylate cements

    International Nuclear Information System (INIS)

    Bail, Hermann Josef; Tsitsilonis, Serafim; Wichlas, Florian; Sattig, Christoph; Papanikolaou, Ioannis; Teichgraeber, Ulf Karl Mart

    2012-01-01

    The purpose of this work is to evaluate two signal-inducing bone cements for MRI-guided spinal cementoplasty. The bone cements were made of polymethylmethacrylate (PMMA, 5 ml monomeric, 12 g polymeric) and gadoterate meglumine as a contrast agent (CA, 0-40 μl) with either saline solution (NaCl, 2-4 ml) or hydroxyapatite bone substitute (HA, 2-4 ml). The cement's signal was assessed in an open 1-Tesla MR scanner, with T1W TSE and fast interventional T1W TSE pulse sequences, and the ideal amount of each component was determined. The compressive and bending strength for different amounts of NaCl and HA were evaluated. The cement's MRI signal depended on the concentration of CA, the amount of NaCl or HA, and the pulse sequence. The signal peaks were recorded between 1 and 10 μl CA per ml NaCl or HA, and were higher in fast T1W TSE than in T1W TSE images. The NaCl-PMMA-CA cements had a greater MRI signal intensity and compressive strength; the HA-PMMA-CA cements had a superior bending strength. Concerning the MR signal and biomechanical properties, these cements would permit MRI-guided cementoplasty. Due to its higher signal and greater compressive strength, the NaCl-PMMA-CA compound appears to be superior to the HA-PMMA-CA compound. (orig.)

  8. Microporous silica prepared by organic templating: relationship between the molecular template and pore structure

    International Nuclear Information System (INIS)

    Brinker, C. Jeffrey; Cao, Guozhong; Kale, Rahul P.; Lopez, Gabriel P.; Lu, Yunfeng; Prabakar, S.

    1999-01-01

    Microporous silica materials with a controlled pore size and a narrow pore size distribution have been prepared by sol-gel processing using an organic-templating approach. Microporous networks were formed by pyrolytic removal of organic ligands (methacryloxypropyl groups) from organic/inorganic hybrid materials synthesized by copolymerization of 3-methacryloxypropylsilane (MPS) and tetraethoxysilane (TEOS). Molecular simulations and experimental measurements were conducted to examine the relationship between the microstructural characteristics of the porous silica (e.g., pore size, total pore volume, and pore connectivity) and the size and amount of organic template ligands added. Adsorption measurements suggest that the final porosity of the microporous silica is due to both primary pores (those present in the hybrid materials prior to pyrolysis) and secondary pores (those created by pyrolytic removal of organic templates). Primary pores were inaccessible to N₂ at 77 K but accessible to CO₂ at 195 K; secondary pores were accessible to both N₂ (at 77 K) and CO₂ (at 195 K) in adsorption measurements. Primary porosity decreases with the amount of organic ligands added because of the enhanced densification of MPS/TEOS hybrid materials as the mole fraction of trifunctional MPS moieties increases. Pore volumes measured by nitrogen adsorption experiments at 77 K suggest that the secondary (template-derived) porosity exhibits percolation behavior as the template concentration is increased. Gas permeation experiments indicate that the secondary pores are approximately 5 Å in diameter, consistent with predictions based on molecular simulations.

  9. Compression of seismic data: filter banks and extended transforms, synthesis and adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Duval, L.

    2000-11-01

    Wavelet and wavelet packet transforms are the most commonly used algorithms for seismic data compression. Wavelet coefficients are generally quantized and encoded by classical entropy coding techniques. We first propose in this work a compression algorithm based on the wavelet transform, used together with a zero-tree type of coding, applied here to seismic data for the first time. Classical wavelet transforms nevertheless yield a rather rigid approach, since it is often desirable to adapt the transform stage to the properties of each type of signal. We thus propose a second algorithm using, instead of wavelets, a set of so-called 'extended transforms'. These transforms, originating from filter bank theory, are parameterized. Classical examples are Malvar's Lapped Orthogonal Transforms (LOT) or the Generalized Lapped Orthogonal Transforms (GenLOT) of de Queiroz et al. We propose several optimization criteria to build extended transforms adapted to the properties of seismic signals. We further show that these transforms can be used with the same zero-tree type of coding technique as used with wavelets. Both proposed algorithms provide an exact choice of compression rate, block-wise compression (in the case of extended transforms) and partial decompression for quality control or visualization. Performance is tested on a set of actual seismic data and evaluated using several quality measures. We also compare the algorithms to other seismic compression algorithms. (author)

  10. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  11. Graphene Emerges as a Versatile Template for Materials Preparation.

    Science.gov (United States)

    Li, Zhengjie; Wu, Sida; Lv, Wei; Shao, Jiao-Jing; Kang, Feiyu; Yang, Quan-Hong

    2016-05-01

    Graphene and its derivatives are emerging as a class of novel but versatile templates for the controlled preparation and functionalization of materials. In this paper a conceptual review of graphene-based templates is given, highlighting their versatile roles in materials preparation. Graphene is capable of acting as a low-dimensional hard template, where its two-dimensional morphology directs the formation of novel nanostructures. Graphene oxide and other functionalized graphenes are amphiphilic and may be seen as soft templates for formatting the growth or inducing the controlled assembly of nanostructures. In addition, nanospaces in restacked graphene can be used for confining the growth of sheet-like nanostructures, and assemblies of interlinked graphenes can behave either as skeletons for the formation of composite materials or as sacrificial templates for novel materials with a controlled network structure. In summary, flexible graphene and its derivatives, together with an increasing number of assembled structures, show great potential as templates for materials production. Many challenges remain, for example the precise structural control of such novel templates and the removal of residual, non-functional templates. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Hiding correlation-based Watermark templates using secret modulation

    NARCIS (Netherlands)

    Lichtenauer, J.; Setyawan, I.; Lagendijk, R.

    2004-01-01

    A possible solution to the difficult problem of geometrical distortion of watermarked images in a blind watermarking scenario is to use a template grid in the autocorrelation function. However, an important drawback of this method is that the watermark itself can be estimated and subtracted, or the

  13. Dictionary Approaches to Image Compression and Reconstruction

    Science.gov (United States)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as φ_γ, are discrete-time signals, where γ represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation of an image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
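
    Of the four, matching pursuit is the simplest to sketch: greedily pick the atom φ_γ most correlated with the current residual and peel off its contribution. A minimal Python version, assuming the dictionary is a matrix with unit-norm columns:

```python
import numpy as np

def matching_pursuit(D, x, n_atoms=10):
    """Greedy MP: iteratively select the best-matching atom phi_gamma."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual            # correlations (unit-norm columns)
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]    # remove the atom's contribution
    return coeffs, residual
```

    Compression then amounts to storing the few (index, coefficient) pairs rather than raw pixels; BP and MOF replace the greedy loop with a global optimization over the same dictionary.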

  14. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, until now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  15. CaTiO₃ Interfacial template structure on semiconductor-based material and the growth of electroceramic thin-films in the perovskite class

    Science.gov (United States)

    McKee, Rodney Allen; Walker, Frederick Joseph

    1998-01-01

    A structure including a film of a desired perovskite oxide which overlies and is fully commensurate with the material surface of a semiconductor-based substrate, and an associated process for constructing the structure, involves the build-up of an interfacial template film of perovskite between the material surface and the desired perovskite film. The lattice parameters of the material surface and the perovskite of the template film are taken into account so that during the growth of the perovskite template film upon the material surface, the orientation of the perovskite of the template is rotated 45° with respect to the orientation of the underlying material surface and thereby effects a transition in the lattice structure from fcc (of the semiconductor-based material) to the simple cubic lattice structure of perovskite, while the fully commensurate periodicity between the perovskite template film and the underlying material surface is maintained. The film-growth techniques of the invention can be used to fabricate solid state electrical components wherein a perovskite film is built up upon a semiconductor-based material and the perovskite film is adapted to exhibit ferroelectric, piezoelectric, pyroelectric, electro-optic or large dielectric properties during use of the component.

  16. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm for on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  17. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    Directory of Open Access Journals (Sweden)

    Daniel Laney

    2014-01-01

    Full Text Available This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.

  18. Active RF Pulse Compression Using An Electrically Controlled Semiconductor Switch

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Jiquan; Tantawi, Sami; /SLAC

    2007-01-10

    First, we review the theory of active pulse compression systems using resonant delay lines. Then we describe the design of an electrically controlled semiconductor active switch. The switch comprises an active window and an overmoded waveguide three-port network. The active window is based on a four-inch silicon wafer which has 960 PIN diodes. These are spatially combined in an overmoded waveguide. We describe the philosophy and design methodology for the three-port network and the active window. We then present the results of using this device to compress 11.4 GHz RF signals with high compression ratios. We show how the system can be used with amplifier-like sources, in which one can change the phase of the source by manipulating the input to the source. We also show how the active switch can be used to compress a pulse from an oscillator-like source, which is not possible with passive pulse compression systems.

  19. Multimodal biometric approach for cancelable face template generation

    Science.gov (United States)

    Paul, Padma Polash; Gavrilova, Marina

    2012-06-01

    Due to the rapid growth of biometric technology, template protection becomes crucial to secure the integrity of the biometric security system and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions to secure biometric identification and verification systems. We present a novel, robust cancelable template generation algorithm that takes advantage of multimodal biometrics using feature-level fusion. Feature-level fusion of different facial features is applied to generate the cancelable template. A proposed algorithm based on multi-fold random projection and a fuzzy communication scheme is used for this purpose. In cancelable template generation, one of the main difficulties is keeping the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered by fusing different feature subsets and projecting into a new feature domain. By applying the multimodal technique at the feature level, we enhance the interclass variability and hence improve the performance of the system. We have tested the system with classifier fusion for different feature subsets and with fusion of different cancelable templates. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.
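
    The revocability argument is easiest to see in code. A bare-bones Python sketch of key-seeded, multi-fold random projection (illustrative only; the paper additionally fuses several facial features and uses a fuzzy communication scheme):

```python
import numpy as np

def cancelable_template(features, user_key, out_dim=64, folds=4):
    """Project `folds` feature subsets with key-seeded random matrices;
    assumes out_dim is divisible by folds."""
    rng = np.random.default_rng(user_key)   # user-specific key seeds matrices
    parts = []
    for f in np.array_split(features, folds):
        P = rng.normal(size=(out_dim // folds, f.size)) / np.sqrt(f.size)
        parts.append(P @ f)
    t = np.concatenate(parts)
    return t / np.linalg.norm(t)

face = np.random.default_rng(5).normal(size=256)   # stand-in feature vector
t_old = cancelable_template(face, user_key=1234)
t_new = cancelable_template(face, user_key=9999)   # reissued after a leak
print(t_old @ t_new)                               # near zero: independent
```

    If a stored template leaks, the user is simply re-enrolled with a new key, which yields a statistically independent template from the same underlying biometric.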

  20. SNR in ultrasonic pulse compression using Golay codes

    International Nuclear Information System (INIS)

    Kim, Young Hwan; Kim, Young Gil; Jeong, Peter

    1994-01-01

    The conventional ultrasonic flaw detection system uses a large-amplitude narrow pulse to excite a transducer; however, these systems are limited in average transmit power. An excessively large amplitude causes dielectric breakdown of the transducer, and an excessively long pulse causes a decrease in resolution. Using pulse compression, a long pseudorandom signal can be used without sacrificing resolution, thanks to signal correlation. In the present work, the pulse compression technique was applied to the ultrasonic system. A Golay code was used as the pseudorandom signal in this system, since the sum of the autocorrelations of a Golay pair has no sidelobes. The equivalent input pulse of the Golay code was proposed to analyze the pulse compression system. In the experiments, the material type, material thickness and code length were considered. As a result, the pulse compression system considerably reduced the system's white noise, and approximately 30 dB improvement in SNR was obtained over the conventional ultrasonic system. The technique seems to perform particularly well with highly energy-absorbent materials such as polymers, plastics and rubbers.
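
    The sidelobe-free property is easy to verify numerically with the standard doubling construction for Golay complementary pairs:

```python
import numpy as np

def golay_pair(n_doublings):
    """Build a complementary pair via a -> [a b], b -> [a -b]."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.r_[a, b], np.r_[a, -b]
    return a, b

a, b = golay_pair(4)                                  # length-16 codes
acf = np.correlate(a, a, "full") + np.correlate(b, b, "full")
print(acf)  # 2N (= 32) at zero lag, exactly zero at every other lag
```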

  1. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping most of the original image's details. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct neutron radiation images from random measurements. The experimental results demonstrate that the scheme not only markedly improves the quality of the reconstructed image but also retains more details of the original image.
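
    Block-based CS replaces one huge sensing matrix with a small matrix reused on every block, which is what keeps reconstruction tractable. A minimal Python sketch of the measurement side, assuming image dimensions divisible by the block size (the WDFB transform and the two-step shrinkage solver are beyond this snippet):

```python
import numpy as np

def block_cs_measure(image, block=16, ratio=0.25, seed=0):
    """Sample each block with the same small Gaussian matrix; memory and
    per-block reconstruction cost scale with the block, not the image."""
    rng = np.random.default_rng(seed)
    m = int(ratio * block * block)
    phi = rng.normal(size=(m, block * block)) / np.sqrt(m)
    h, w = image.shape
    ys = [phi @ image[i:i + block, j:j + block].astype(float).ravel()
          for i in range(0, h, block) for j in range(0, w, block)]
    return np.array(ys), phi   # one m-vector of measurements per block
```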

  2. Relating working memory to compression parameters in clinically fit hearing aids.

    Science.gov (United States)

    Souza, Pamela E; Sirow, Lynn

    2014-12-01

    Several laboratory studies have demonstrated that working memory may influence response to compression speed in controlled (i.e., laboratory) comparisons of compression. In this study, the authors explored whether the same relationship would occur under less controlled conditions, as might occur in a typical audiology clinic. Participants included 27 older adults who sought hearing care in a private practice audiology clinic. Working memory was measured for each participant using a reading span test. The authors examined the relationship between working memory and aided speech recognition in noise, using clinically fit hearing aids with a range of compression speeds. Working memory, amount of hearing loss, and age each contributed to speech recognition, but the contribution depended on the speed of the compression processor. For fast-acting compression, the best performance was obtained by patients with high working memory. For slow-acting compression, speech recognition was affected by age and amount of hearing loss but was not affected by working memory. Despite the expectation of greater variability from differences in compression implementation, number of compression channels, or attendant signal processing, the relationship between working memory and compression speed showed a similar pattern as results from more controlled, laboratory-based studies.

  3. Detecting gravitational waves from test-mass bodies orbiting a Kerr black hole with P-approximant templates

    International Nuclear Information System (INIS)

    Porter, Edward K

    2005-01-01

    In this study, we apply post-Newtonian approximants (T-approximants) and resummed post-Newtonian approximants (P-approximants) to the case of a test particle in equatorial orbit around a Kerr black hole. We compare the two approximants by measuring their effectualness (i.e., larger overlaps with the exact signal) and faithfulness (i.e., smaller biases while measuring the parameters of the signal) with the exact (numerical) waveforms. We find that in the case of prograde orbits, T-approximant templates obtain an effectualness of ∼0.99 for spins q ≤ 0.75, while P-approximant templates achieve an effectualness of > 0.99 for all spins up to q = 0.95. The bias in the estimation of parameters is much lower for P-approximants than for T-approximants. We find that P-approximants are both effectual and faithful and should be more effective than T-approximants as a detection template family when q > 0. For q < 0, both T- and P-approximants perform equally well, so that either of them could be used as a detection template family. However, for parameter estimation, the P-approximant templates still outperform the T-approximants.

  4. Cloning nanocrystal morphology with soft templates

    Science.gov (United States)

    Thapa, Dev Kumar; Pandey, Anshu

    2016-08-01

    In most template-directed preparative methods, while the template decides the nanostructure morphology, the structure of the template itself is a non-general outcome of its peculiar chemistry. Here we demonstrate a template-mediated synthesis that overcomes this deficiency. The synthesis involves overgrowth of a silica template onto a sacrificial nanocrystal. Such templates are used to copy the morphologies of gold nanorods. After template overgrowth, the gold is removed and silver is regrown in the template cavity to produce a single-crystal silver nanorod. This technique allows for duplicating existing nanocrystals, while also providing a quantifiable breakdown of the structure–shape interdependence.

  5. Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment

    CERN Document Server

    Nicolaucig, A; Mattavelli, M

    2003-01-01

    In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source-code modeling technique, i.e., it is based on a source model which is lossy if the samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process the data off-line after the experiment are analyzed, in particular with respect to their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms are described,...
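
    The two preserved quantities are cheap to compute per pulse; a small Python illustration (not the ALICE code) of the area and center-of-mass extraction:

```python
import numpy as np

def pulse_features(samples, t0=0):
    """Return pulse charge (area) and time localization (center of mass)."""
    s = np.asarray(samples, dtype=float)
    area = s.sum()
    centroid = t0 + (np.arange(len(s)) * s).sum() / area
    return area, centroid

print(pulse_features([0.0, 1.0, 3.0, 2.0, 0.5]))   # (6.5, ~2.31)
```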

  6. Using off-the-shelf lossy compression for wireless home sleep staging.

    Science.gov (United States)

    Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu

    2015-05-15

    Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting the large amount of polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with high accuracy (>84%) in classifying sleep stages when using a lossy compression algorithm like SPIHT. As far as we know, our study is the first to examine how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
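
    Generic SVD-based coil compression, on which such schemes build, fits in a few lines of Python; the sketch below is the standard virtual-coil construction, not the paper's shot-coil variant:

```python
import numpy as np

def compress_coils(kspace, n_virtual=8):
    """Project multi-coil k-space data (coils x samples) onto the
    n_virtual dominant singular vectors, i.e. the virtual coils."""
    U, s, _ = np.linalg.svd(kspace, full_matrices=False)
    retained = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()
    return U[:, :n_virtual].conj().T @ kspace, retained

rng = np.random.default_rng(3)
data = rng.normal(size=(32, 4096)) + 1j * rng.normal(size=(32, 4096))
virt, energy = compress_coils(data, n_virtual=8)
print(virt.shape, f"{energy:.1%} energy retained")
```

    On real coil arrays the channels are strongly correlated, so a handful of virtual coils typically retains the overwhelming majority of the signal energy; the random data above is only a shape check.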

  8. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ... [back]ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed at 8:1 without any perceptual distortion, the ... [Figure 2: cross-sectional view of a medical image (statistical representation)] ... The Integer Wavelet Transform (IWT) is used to enable lossless processing.

  9. Computing layouts with deformable templates

    KAUST Repository

    Peng, Chi-Han

    2014-07-22

    In this paper, we tackle the problem of tiling a domain with a set of deformable templates. A valid solution to this problem completely covers the domain with templates such that the templates do not overlap. We generalize existing specialized solutions and formulate a general layout problem by modeling important constraints and admissible template deformations. Our main idea is to break the layout algorithm into two steps: a discrete step to lay out the approximate template positions and a continuous step to refine the template shapes. Our approach is suitable for a large class of applications, including floorplans, urban layouts, and arts and design. Copyright © ACM.

  11. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used in imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve the scanning speed tremendously by sampling below the Shannon rate, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
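
    To illustrate why block-wise recovery cuts reconstruction time and permits progressive display, here is a minimal Python sketch, assuming a generic per-block sensing matrix and an off-the-shelf ISTA solver rather than the authors' specific block-division scheme and matrix operations.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=200):
            # Minimal iterative shrinkage-thresholding for
            # min 0.5*||Ax - y||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - (A.T @ (A @ x - y)) / L        # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            return x

        # Toy setup: an image split into 4 blocks of 64 pixels, 32 measurements each.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((32, 64)) / np.sqrt(32)   # shared per-block sensing matrix
        blocks = []
        for _ in range(4):
            b = np.zeros(64)
            b[rng.choice(64, 3, replace=False)] = 1.0      # sparse block content
            blocks.append(b)
        measurements = [A @ b for b in blocks]

        # Each small block is recovered independently and could be displayed as
        # soon as it finishes, which is the basis of the real-time display idea.
        recovered = [ista(A, y) for y in measurements]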

  12. REMOTELY SENSED IMAGE COMPRESSION BASED ON WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    Heung K. Lee

    1996-06-01

    Full Text Available In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with KITSAT-1 images as well as the LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by peak signal to noise ratio (PSNR) and classification capability.
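
    A rough Python sketch of the spectral-redundancy step described above (difference encoding after brightness/contrast equalization); the moment-matching equalization used here is an assumption, and the wavelet, Hilbert-scan, and Huffman stages are omitted.

        import numpy as np

        def equalize_to(band, ref):
            # Match the band's mean/std (brightness/contrast) to the reference band.
            return (band - band.mean()) / (band.std() + 1e-12) * ref.std() + ref.mean()

        def spectral_difference_encode(bands):
            # Keep band 0 as-is; encode each later band as its residual against the
            # equalized previous band, removing most inter-band (spectral) redundancy.
            encoded = [bands[0]]
            for prev, cur in zip(bands, bands[1:]):
                encoded.append(cur - equalize_to(prev, cur))
            return encoded

        rng = np.random.default_rng(0)
        base = rng.random((128, 128))
        bands = [base * g + rng.normal(0, 0.01, base.shape) for g in (1.0, 0.8, 1.2)]
        residuals = spectral_difference_encode(bands)   # residuals are near zero,
                                                        # so they compress far better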

  13. Masker phase effects in normal-hearing and hearing-impaired listeners: evidence for peripheral compression at low signal frequencies

    DEFF Research Database (Denmark)

    Oxenham, Andrew J.; Dau, Torsten

    2004-01-01

    curvature. Results from 12 listeners with sensorineural hearing loss showed reduced masker phase effects, when compared with data from normal-hearing listeners, at both 250- and 1000-Hz signal frequencies. The effects of hearing impairment on phase-related masking differences were not well simulated...... are affected by a common underlying mechanism, presumably related to cochlear outer hair cell function. The results also suggest that normal peripheral compression remains strong even at 250 Hz....

  14. Iron oxide nanotubes synthesized via template-based electrodeposition

    Science.gov (United States)

    Lim, Jin-Hee; Min, Seong-Gi; Malkinski, Leszek; Wiley, John B.

    2014-04-01

    Considerable effort has been invested in the development of synthetic methods for the preparation of iron oxide nanostructures for applications in nanotechnology. While a variety of structures have been reported, only a few studies have focused on iron oxide nanotubes. Here, we present details on the synthesis and characterization of iron oxide nanotubes along with a proposed mechanism for FeOOH tube formation. The FeOOH nanotubes, fabricated via a template-based electrodeposition method, are found to exhibit a unique inner surface. Heat treatment of these tubes under oxidizing or reducing atmospheres can produce either hematite (α-Fe2O3) or magnetite (Fe3O4) structures, respectively. Hematite nanotubes are composed of small nanoparticles less than 20 nm in diameter, and the magnetization curves and FC-ZFC curves show superparamagnetic properties without the Morin transition. In the case of magnetite nanotubes, which consist of slightly larger nanoparticles, magnetization curves show ferromagnetism with weak coercivity at room temperature, while FC-ZFC curves exhibit the Verwey transition at 125 K.

  15. Hollow colloidal particles by emulsion templating, from synthesis to self-assembly

    NARCIS (Netherlands)

    Zoldesi, C.I.

    2006-01-01

    This research was focused on developing a new method to prepare hollow colloidal particles in the micrometer range based on emulsion templating, on characterization of both the templates and the resulting particles from physical and chemical viewpoints, and on fabrication of materials based on such particles.

  16. Beyond Creation of Mesoporosity: The Advantages of Polymer-Based Dual-Function Templates for Fabricating Hierarchical Zeolites

    KAUST Repository

    Tian, Qiwei; Liu, Zhaohui; Zhu, Yihan; Dong, Xinglong; Saih, Youssef; Basset, Jean-Marie; Sun, Miao; Xu, Wei; Zhu, Liangkui; Zhang, Daliang; Huang, Jianfeng; Meng, Xiangju; Xiao, Feng-Shou; Han, Yu

    2016-01-01

    Direct synthesis of hierarchical zeolites currently relies on the use of surfactant-based templates to produce mesoporosity by the random stacking of 2D zeolite sheets or the agglomeration of tiny zeolite grains. The benefits of using nonsurfactant

  17. Subsampling-based compression and flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank

    2016-01-19

    As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective; among selection techniques, our informed selection provides the most accurate results, followed by multi-resolution selection, with random subsampling giving the worst accuracy.
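
    A small Python sketch contrasting the three selection strategies on a 1-D velocity field; the gradient-weighted rule is only a stand-in guess for the paper's "informed" selection.

        import numpy as np

        rng = np.random.default_rng(2)
        field = np.sin(np.linspace(0, 8 * np.pi, 1024)) ** 3   # toy velocity component
        budget = 128

        # 1) Random selection.
        random_idx = rng.choice(field.size, size=budget, replace=False)

        # 2) Regular (multi-resolution style) selection.
        regular_idx = np.arange(0, field.size, field.size // budget)

        # 3) "Informed" selection (stand-in): sample preferentially where the
        #    field varies fastest, i.e. probability proportional to |gradient|.
        g = np.abs(np.gradient(field)) + 1e-12
        informed_idx = rng.choice(field.size, size=budget, replace=False,
                                  p=g / g.sum())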

  18. A Deformable Template Model, with Special Reference to Elliptical Templates

    DEFF Research Database (Denmark)

    Hobolth, Asger; Pedersen, Jan; Jensen, Eva Bjørn Vedel

    2002-01-01

    This paper suggests a high-level continuous image model for planar star-shaped objects. Under this model, a planar object is a stochastic deformation of a star-shaped template. The residual process, describing the difference between the radius-vector function of the template and the object...

  19. Low-resolution gamma-ray spectrometry for an information barrier based on a multi-criteria template-matching approach

    Energy Technology Data Exchange (ETDEWEB)

    Göttsche, Malte; Schirm, Janet; Glaser, Alexander

    2016-12-21

    Gamma-ray spectrometry has been successfully employed to identify unique items containing special nuclear materials. Template information barriers have been developed in the past to confirm items as warheads by comparing their gamma signature to the signature of true warheads. Their development has, however, not been fully transparent, and they may not be sensitive to some relevant evasion scenarios. We develop a fully open template information barrier concept, based on low-resolution measurements, which, by design, reduces the extent of revealed sensitive information. The concept is based on three signatures of an item to be compared to a recorded template. The similarity of the spectrum is assessed by a modification of the Kolmogorov–Smirnov test to confirm the isotopic composition. The total gamma count rate must agree with the template as a measure of the projected surface of the object. In order to detect the diversion of fissile material from the interior of an item, a polyethylene mask is placed in front of the detector. Neutrons from spontaneous and induced fission events in the item produce 2.223 MeV gamma rays from neutron capture by hydrogen-1 in the mask. This peak is detected and its intensity scales with the item's fissile mass. The analysis based on MCNP Monte Carlo simulations of various plutonium configurations suggests that this concept can distinguish a valid item from a variety of invalid ones. The concept intentionally avoids any assumptions about specific spectral features, such as looking for specific gamma peaks of specific isotopes, thereby facilitating a fully unclassified discussion. By making all aspects public and allowing interested participants to contribute to the development and benchmarking, we enable a more open and inclusive discourse on this matter.
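
    The spectral-similarity signature lends itself to a compact sketch. The following Python fragment applies a plain two-sample-style KS distance to binned spectra; the paper uses a modified KS test, and the thresholds here are invented for illustration.

        import numpy as np

        def ks_statistic(spectrum_item, spectrum_template):
            # KS distance between two gamma spectra, treated as distributions
            # over energy channels.
            cdf_a = np.cumsum(spectrum_item) / spectrum_item.sum()
            cdf_b = np.cumsum(spectrum_template) / spectrum_template.sum()
            return np.abs(cdf_a - cdf_b).max()

        def item_matches_template(item, template, ks_max=0.05, rate_tol=0.1):
            # Two of the three signatures: spectral shape (KS distance) and
            # total count rate agreement; thresholds are hypothetical.
            rate_ok = abs(item.sum() - template.sum()) <= rate_tol * template.sum()
            return ks_statistic(item, template) <= ks_max and rate_ok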

  20. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise at the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote-sensing and security areas.

  1. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision high-capacity multichannel acquisition system.
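
    The core idea, run-length encoding that tolerates small deviations controlled by an error factor, can be sketched in a few lines of Python; this is a guess at the mechanism, and the paper's FPGA implementation and exact run rule may differ.

        def rle_encode_with_tolerance(samples, error_factor):
            # A sample continues the current run if it stays within
            # +/- error_factor of the run's starting value (lossy RLE).
            runs = []                      # list of (value, length) pairs
            start, length = samples[0], 1
            for x in samples[1:]:
                if abs(x - start) <= error_factor:
                    length += 1
                else:
                    runs.append((start, length))
                    start, length = x, 1
            runs.append((start, length))
            return runs

        print(rle_encode_with_tolerance([10, 10.2, 9.9, 50, 50.1], 0.5))
        # [(10, 3), (50, 2)]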

  2. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is developed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The compressed images are then collected from the ordinary nodes by the cluster head node and transmitted to the base station, which performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme achieves a higher quality of recovered images and lower total node energy consumption. It is beneficial for reducing the burden of energy consumption and prolonging the life of the whole network system, which has great significance for practical applications of WMSNs.
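
    A minimal Python sketch of the compression step using scikit-learn's NMF, assuming an ordinary node factorizes each non-negative image block and transmits only the two small factors; all sizes are illustrative.

        import numpy as np
        from sklearn.decomposition import NMF

        # A block of the image (non-negative) is factored as V ~= W @ H; the node
        # transmits the much smaller W and H, and the station rebuilds the block.
        rng = np.random.default_rng(2)
        V = rng.random((64, 64))                  # stand-in for an image block
        model = NMF(n_components=8, init="random", random_state=0, max_iter=500)
        W = model.fit_transform(V)                # 64 x 8
        H = model.components_                     # 8 x 64
        V_rec = W @ H                             # restored at the base station
        ratio = V.size / (W.size + H.size)        # 4096 / 1024 = 4x compression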

  3. Finite-element modeling of compression and gravity on a population of breast phantoms for multimodality imaging simulation.

    Science.gov (United States)

    Sturgeon, Gregory M; Kiarashi, Nooshin; Lo, Joseph Y; Samei, E; Segars, W P

    2016-05-01

    The authors are developing a series of computational breast phantoms based on breast CT data for imaging research. In this work, the authors develop a program that will allow a user to alter the phantoms to simulate the effect of gravity and compression of the breast (craniocaudal or mediolateral oblique) making the phantoms applicable to multimodality imaging. This application utilizes a template finite-element (FE) breast model that can be applied to their presegmented voxelized breast phantoms. The FE model is automatically fit to the geometry of a given breast phantom, and the material properties of each element are set based on the segmented voxels contained within the element. The loading and boundary conditions, which include gravity, are then assigned based on a user-defined position and compression. The effect of applying these loads to the breast is computed using a multistage contact analysis in FEBio, a freely available and well-validated FE software package specifically designed for biomedical applications. The resulting deformation of the breast is then applied to a boundary mesh representation of the phantom that can be used for simulating medical images. An efficient script performs the above actions seamlessly. The user only needs to specify which voxelized breast phantom to use, the compressed thickness, and orientation of the breast. The authors utilized their FE application to simulate compressed states of the breast indicative of mammography and tomosynthesis. Gravity and compression were simulated on example phantoms and used to generate mammograms in the craniocaudal or mediolateral oblique views. The simulated mammograms show a high degree of realism illustrating the utility of the FE method in simulating imaging data of repositioned and compressed breasts. The breast phantoms and the compression software can become a useful resource to the breast imaging research community. These phantoms can then be used to evaluate and compare imaging

  4. Strain-dependent magnetic anisotropy in GaMnAs on InGaAs templates

    Energy Technology Data Exchange (ETDEWEB)

    Daeubler, Joachim; Glunk, Michael; Schwaiger, Stephan; Dreher, Lukas; Schoch, Wladimir; Sauer, Rolf; Limmer, Wolfgang [Institut fuer Halbleiterphysik, Universitaet Ulm, 89069 Ulm (Germany)

    2008-07-01

    We have systematically studied the influence of strain on the magnetic anisotropy of GaMnAs by means of HRXRD reciprocal space mapping and angle-dependent magnetotransport. For this purpose, a series of GaMnAs layers with Mn contents of ∼5% was grown by low-temperature MBE on relaxed InGaAs/GaAs templates with different In concentrations, enabling us to vary the strain in the GaMnAs layers continuously from tensile to compressive, including the unstrained state. Considering both as-grown and annealed samples, the anisotropy parameter describing the uniaxial out-of-plane magnetic anisotropy has been found to vary linearly with hole density and strain. As a consequence, the out-of-plane direction gradually undergoes a transition from a magnetic hard axis to a magnetic easy axis as the strain changes from compressive to tensile.

  5. FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar

    Science.gov (United States)

    Azim, Noor ul; Jun, Wang

    2016-11-01

    Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters like range, speed, direction, etc., of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. Firstly, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into an implementation on hardware using a Xilinx FPGA. The chosen FPGA is a Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipeline optimization is adopted, and other factors are also considered for resource optimization in the process of implementation. The algorithms in this work for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
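
    For reference, pulse compression of an LFM waveform reduces to correlating the echo with the transmitted chirp; a NumPy sketch of the fast-convolution (FFT-based) matched filter follows, with all waveform parameters invented for illustration.

        import numpy as np

        fs, T, B = 1e6, 1e-3, 100e3               # sample rate, pulse width, bandwidth
        t = np.arange(int(T * fs)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t**2)       # LFM pulse
        echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])  # delayed target

        # Pulse compression = correlation with the transmitted chirp (matched
        # filter); the FFT implementation is the "fast convolution" approach.
        n = len(echo) + len(chirp) - 1
        out = np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n)))
        peak = np.argmax(np.abs(out))             # peak index gives target range bin
        print(peak)  # 300, the inserted delay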

  6. Templates for Cross-Cultural and Culturally Specific Usability Testing

    DEFF Research Database (Denmark)

    Clemmensen, Torkil

    2011-01-01

    The cultural diversity of users of technology challenges our methods for usability testing. This article suggests templates for cross-culturally and culturally specific usability testing, based on studies of usability testing in companies in Mumbai, Beijing, and Copenhagen. Study 1 was a cross...... tests. The result was the construction of templates for usability testing. The culturally specific templates were in Mumbai “user-centered evaluation,” Copenhagen “client-centered evaluation,” and Beijing “evaluator-centered evaluation.” The findings are compared with related research...

  7. System and method for constructing filters for detecting signals whose frequency content varies with time

    Science.gov (United States)

    Qian, S.; Dunham, M.E.

    1996-11-12

    A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos(2πφ(t)), and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ′(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ′(t) which best fits the multivalued function f(t). Integrating φ′(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
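
    A condensed Python sketch of the template-construction pipeline described above, going from extracted time-frequency ridge points to the time template w(t) = A(t)cos(2πφ(t)); the linear chirp law and flat amplitude are simplifying assumptions, not the patent's specification.

        import numpy as np
        from scipy.optimize import curve_fit

        # Ridge of the joint time-frequency representation, reduced to (t, f) points.
        t_pts = np.linspace(0.0, 1.0, 50)
        f_pts = 100.0 + 80.0 * t_pts + np.random.default_rng(3).normal(0, 1.0, 50)

        def inst_freq(t, a, b):                    # assume a linear chirp law phi'(t)
            return a + b * t

        # curve_fit defaults to Levenberg-Marquardt for unconstrained problems.
        (a, b), _ = curve_fit(inst_freq, t_pts, f_pts)

        # Integrate phi'(t) to get the phase phi(t), then form the time template.
        t = np.linspace(0.0, 1.0, 1000)
        phi = a * t + 0.5 * b * t**2
        A = np.ones_like(t)                        # flat amplitude as placeholder
        template = A * np.cos(2 * np.pi * phi)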

  8. Photonic compressive sensing enabled data efficient time stretch optical coherence tomography

    Science.gov (United States)

    Mididoddi, Chaitanya K.; Wang, Chao

    2018-03-01

    Photonic time stretch (PTS) has enabled real-time spectral domain optical coherence tomography (OCT). However, this method generates a torrent of massive data at GHz stream rates, which must be captured at the Nyquist rate. If the OCT interferogram signal is sparse in the Fourier domain, which is always true for samples with a limited number of layers, it can be captured at a lower (sub-Nyquist) acquisition rate using the compressive sensing method. In this work we report a data-compressed PTS-OCT system based on photonic compressive sensing with 66% compression at a low acquisition rate of 50 MHz and a measurement speed of 1.51 MHz per depth profile. A new method has also been proposed to improve the system with all-optical random pattern generation, which completely avoids the electronic bottleneck in traditional pseudorandom binary sequence (PRBS) generators.

  9. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    Full Text Available We advocate a facial image compression technique in the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to the existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  10. Priming and the guidance by visual and categorical templates in visual search

    NARCIS (Netherlands)

    Wilschut, A.M.; Theeuwes, J.; Olivers, C.N.L.

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual

  11. DSP accelerator for the wavelet compression/decompression of high- resolution images

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  12. Mean template for tensor-based morphometry using deformation tensors.

    Science.gov (United States)

    Leporé, Natasha; Brun, Caroline; Pennec, Xavier; Chou, Yi-Yu; Lopez, Oscar L; Aizenstein, Howard J; Becker, James T; Toga, Arthur W; Thompson, Paul M

    2007-01-01

    Tensor-based morphometry (TBM) studies anatomical differences between brain images statistically, to identify regions that differ between groups, over time, or correlate with cognitive or clinical measures. Using a nonlinear registration algorithm, all images are mapped to a common space, and statistics are most commonly performed on the Jacobian determinant (local expansion factor) of the deformation fields. In earlier work, it was shown that the detection sensitivity of the standard TBM approach could be increased by using the full deformation tensors in a multivariate statistical analysis. Here we set out to improve the common space itself, by choosing the shape that minimizes a natural metric on the deformation tensors from that space to the population of control subjects. This method avoids statistical bias and should ease nonlinear registration of new subjects' data to a template that is 'closest' to all subjects' anatomies. As deformation tensors are symmetric positive-definite matrices and do not form a vector space, all computations are performed in the log-Euclidean framework. The control brain B that is already the closest to the 'average' is found. A gradient descent algorithm is then used to perform the minimization that iteratively deforms this template and obtains the mean shape. We apply our method to map the profile of anatomical differences in a dataset of 26 HIV/AIDS patients and 14 controls, via a log-Euclidean Hotelling's T2 test on the deformation tensors. These results are compared to the ones found using the 'best' control, B. Statistics on both shapes are evaluated using cumulative distribution functions of the p-values in maps of inter-group differences.
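
    The log-Euclidean computation at the heart of the method is simple to state: matrix-log each deformation tensor, average in the log domain, and map back with the matrix exponential. A Python sketch with toy 2×2 tensors follows.

        import numpy as np
        from scipy.linalg import expm, logm

        def log_euclidean_mean(tensors):
            # Mean of symmetric positive-definite deformation tensors in the
            # log-Euclidean framework: exp of the average of matrix logarithms.
            logs = [logm(S) for S in tensors]
            return expm(np.mean(logs, axis=0))

        # Toy usage with two SPD matrices.
        S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
        S2 = np.array([[1.5, -0.2], [-0.2, 0.8]])
        print(log_euclidean_mean([S1, S2]))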

  13. Biometric Template Security

    Directory of Open Access Journals (Sweden)

    Abhishek Nagar

    2008-03-01

    Full Text Available Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometrics technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intrauser variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.

  14. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Science.gov (United States)

    Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division modulation (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. The broadband channel model is often described by very few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing any computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method can improve the estimation performance compared with conventional SCE methods. PMID:24983012
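
    For context, the orthogonal matching pursuit baseline that the Bayesian method is compared against can be sketched in a few lines of NumPy; this is a generic version, not the paper's exact configuration.

        import numpy as np

        def omp(A, y, k):
            # Greedy sparse recovery: repeatedly pick the column most correlated
            # with the residual, then least-squares refit on the selected support.
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        # Toy channel: 64 taps, only 4 dominant, probed by 32 training rows.
        rng = np.random.default_rng(3)
        A = rng.standard_normal((32, 64)) / np.sqrt(32)
        h = np.zeros(64)
        h[rng.choice(64, 4, replace=False)] = rng.standard_normal(4)
        h_hat = omp(A, A @ h, k=4)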

  15. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  16. Functional Programming with C++ Template Metaprograms

    Science.gov (United States)

    Porkoláb, Zoltán

    Template metaprogramming is an emerging new direction of generative programming. With clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are expression templates, static interface checking, code optimization with adaptation, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short and incomplete introduction to C++ templates and the basics of template metaprogramming. We will highlight the role of template metaprograms, and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard-compliant C++ source.

  17. Loss less real-time data compression based on LZO for steady-state Tokamak DAS

    International Nuclear Information System (INIS)

    Pujara, H.D.; Sharma, Manika

    2008-01-01

    The evolution of data acquisition systems (DAS) for steady-state operation of Tokamaks has been technology driven. A steady-state Tokamak demands a data acquisition system which is capable of acquiring data losslessly from diagnostics. The need for lossless continuous acquisition has a significant effect on data storage, which takes up a greater portion of any data acquisition system. Another basic need arising from the steady-state nature of operation is online viewing of data, which loads the LAN significantly. So there is a strong demand for controlling the expansion of both these portions by employing a compression technique in real time. This paper presents a data acquisition system employing a real-time data compression technique based on LZO. It is a data compression library which is suitable for data compression and decompression in real time. The algorithm used favours speed over compression ratio. The system has been rigged up based on the PXI bus, and a dual buffer mode architecture is implemented for lossless acquisition. The acquired buffer is compressed in real time and streamed to the network and to hard disk for storage. The observed performance for various data types such as binary, integer and float, and for different types of waveforms, as well as the compression timing overheads, is presented in the paper. Various software modules for real-time acquisition and online viewing of data on network nodes have been developed in LabWindows/CVI based on a client-server architecture.

  18. Beam steering performance of compressed Luneburg lens based on transformation optics

    Science.gov (United States)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  19. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  20. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, the compressed sampling method that uses a random demodulator was adopted, which could greatly decrease the sampling rate. Besides, four switches were used to replace the multiplier in the random demodulator. As a result, not only the sampling rate can be much smaller than the signal excitation frequency, but also the circuit’s structure is simpler and its power consumption is lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
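
    The acquisition chain described (PRBS mixing via switches, low-pass filtering, slow ADC) is the classic random demodulator; a forward-model sketch in Python follows, with all rates and the integrate-and-dump filter chosen purely for illustration.

        import numpy as np

        fs_nyq, fs_adc, n = 400_000, 50_000, 4000   # chip rate, ADC rate, n chips
        rng = np.random.default_rng(4)
        t = np.arange(n) / fs_nyq
        x = np.cos(2 * np.pi * 170_000 * t)          # sparse (single-tone) input
        prbs = rng.choice([-1.0, 1.0], size=n)       # the four-switch +/-1 modulation

        # Mix with the PRBS, then integrate-and-dump as a crude low-pass filter
        # before sampling at the (sub-Nyquist) ADC rate.
        block = fs_nyq // fs_adc                     # 8 chips per ADC sample
        y = (x * prbs).reshape(-1, block).mean(axis=1)
        # y (length 500) is what the ADC records; frequency and amplitude are
        # then recovered with a sparse solver (e.g., OMP over a tone dictionary).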

  1. Stochastic Template Bank for Gravitational Wave Searches for Precessing Neutron Star-Black Hole Coalescence Events

    Science.gov (United States)

    Indik, Nathaniel; Haris, K.; Dal Canton, Tito; Fehrmann, Henning; Krishnan, Badri; Lundgren, Andrew; Nielsen, Alex B.; Pai, Archana

    2017-01-01

    Gravitational wave searches to date have largely focused on non-precessing systems. Including precession effects greatly increases the number of templates to be searched over. This leads to a corresponding increase in the computational cost and can increase the false alarm rate of a realistic search. On the other hand, there might be astrophysical systems that are entirely missed by non-precessing searches. In this paper we consider the problem of constructing a template bank using stochastic methods for neutron star-black hole binaries allowing for precession, but with the restrictions that the total angular momentum of the binary is pointing toward the detector and that the neutron star spin is negligible relative to that of the black hole. We quantify the number of templates required for the search, and we explicitly construct the template bank. We show that despite the large number of templates, stochastic methods can be adapted to solve the problem. We quantify the parameter space region over which the non-precessing search might miss signals.
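
    Stochastic bank construction itself follows a simple accept/reject loop; a generic Python sketch with a toy Gaussian match function is shown below (the real match is a waveform overlap over the precessing-binary parameter space).

        import numpy as np

        def stochastic_bank(match, propose, min_match=0.9, max_rejects=200):
            # Keep proposing random parameter points; accept a point only if no
            # existing template already matches it better than min_match.
            bank, rejects = [], 0
            while rejects < max_rejects:
                p = propose()
                if all(match(p, q) < min_match for q in bank):
                    bank.append(p)
                    rejects = 0
                else:
                    rejects += 1
            return bank

        # Toy metric: Gaussian overlap in a 2-D parameter space.
        rng = np.random.default_rng(5)
        bank = stochastic_bank(
            match=lambda p, q: float(np.exp(-np.sum((p - q) ** 2) / 0.05)),
            propose=lambda: rng.random(2),
        )
        print(len(bank))   # number of templates placed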

  2. Compressive strength and microstructural analysis of fly ash/palm oil fuel ash based geopolymer mortar

    International Nuclear Information System (INIS)

    Ranjbar, Navid; Mehrali, Mehdi; Behnia, Arash; Alengaram, U. Johnson; Jumaat, Mohd Zamin

    2014-01-01

    Highlights: • Results show POFA is adaptable as a replacement in FA based geopolymer mortar. • An increase in the POFA/FA ratio delays the compressive strength development of the geopolymer. • The density of POFA based geopolymer is lower than that of FA based geopolymer mortar. - Abstract: This paper presents the effects and adaptability of palm oil fuel ash (POFA) as a replacement material in fly ash (FA) based geopolymer mortar from the aspects of microstructure and compressive strength. The geopolymers developed were synthesized with a combination of sodium hydroxide and sodium silicate as activator and POFA and FA as high silica-alumina resources. The development of compressive strength of POFA/FA based geopolymers was investigated using X-ray fluorescence (XRF), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and field emission scanning electron microscopy (FESEM). It was observed that the particle shapes and surface areas of POFA and FA, as well as the chemical composition, affect the density and compressive strength of the mortars. The increment in the percentage of POFA increased the silica/alumina (SiO2/Al2O3) ratio, which reduced the early compressive strength of the geopolymer and delayed the geopolymerization process.

  3. A lightweight approach for biometric template protection

    Science.gov (United States)

    Al-Assam, Hisham; Sellahewa, Harin; Jassim, Sabah

    2009-05-01

    Privacy and security are vital concerns for practical biometric systems. The concept of cancelable or revocable biometrics has been proposed as a solution for biometric template security. Revocable biometrics means that biometric templates are no longer fixed over time and can be revoked in the same way as lost or stolen credit cards are. In this paper, we describe a novel and efficient approach to biometric template protection that meets the revocability property. This scheme can be incorporated into any biometric verification scheme while maintaining, if not improving, the accuracy of the original biometric system. We demonstrate the results of applying such transforms to face biometric templates and compare the efficiency of our approach with that of the well-known random projection techniques. We also present the results of experimental work on recognition accuracy before and after applying the proposed transform on feature vectors that are generated by wavelet transforms. These results are based on experiments conducted on a number of well-known face image databases, e.g., the Yale and ORL databases.
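
    A minimal sketch of the random projection family of transforms the paper compares against, in Python; the seed-keyed matrix, dimensions, and function name are illustrative assumptions.

        import numpy as np

        def project_template(feature_vec, key_seed, out_dim):
            # Revocable template: project the biometric feature vector with a
            # user-specific random matrix; reissuing = choosing a new seed.
            rng = np.random.default_rng(key_seed)
            R = rng.standard_normal((out_dim, feature_vec.size)) / np.sqrt(out_dim)
            return R @ feature_vec

        enrolled = project_template(np.random.rand(128), key_seed=42, out_dim=64)
        probe    = project_template(np.random.rand(128), key_seed=42, out_dim=64)
        # Matching proceeds in the projected space; a stolen template is revoked
        # simply by enrolling again with a new seed.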

  4. Organic or organometallic template mediated clay synthesis

    Science.gov (United States)

    Gregar, Kathleen C.; Winans, Randall E.; Botto, Robert E.

    1994-01-01

    A method for incorporating diverse varieties of intercalants or templates directly during hydrothermal synthesis of clays such as hectorite or montmorillonite-type layer-silicate clays. For a hectorite layer-silicate clay, refluxing a gel of silica sol, magnesium hydroxide sol and lithium fluoride for two days in the presence of an organic or organometallic intercalant or template results in crystalline products containing either (a) organic dye molecules such as ethyl violet and methyl green, (b) dye molecules such as alcian blue that are based on a Cu(II)-phthalocyanine complex, (c) transition metal complexes such as Ru(II)phenanthroline and Co(III)sepulchrate, or (d) water-soluble porphyrins and metalloporphyrins. Montmorillonite-type clays are made by the method taught by U.S. Pat. No. 3,887,454 issued to Hickson, Jun. 13, 1975; however, a variety of intercalants or templates may be introduced. The intercalants or templates should have (i) water solubility, (ii) positive charge, and (iii) thermal stability under moderately basic (pH 9-10) aqueous reflux conditions or hydrothermal pressurized conditions for the montmorillonite-type clays.

  5. Toward topology-based characterization of small-scale mixing in compressible turbulence

    Science.gov (United States)

    Suman, Sawan; Girimaji, Sharath

    2011-11-01

    Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.

  6. Optimal context quantization in lossless compression of image data sequences

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl

    2004-01-01

    In image compression, context-based entropy coding is commonly used. A critical issue for the performance of context-based image coding is how to resolve the conflict between the desire for large templates to model high-order statistical dependency of the pixels and the problem of context dilution due to insufficient sample statistics of a given input image. We consider the problem of finding the optimal quantizer Q that quantizes the K-dimensional causal context C_t = (X_{t-t1}, X_{t-t2}, ..., X_{t-tK}) of a source symbol X_t into one of a set of conditioning states. The optimality of context quantization is defined to be the minimum static or minimum adaptive code length of a given data set. For a binary source alphabet an optimal context quantizer can be computed exactly by a fast dynamic programming algorithm. Faster approximate solutions are also proposed. In the case of an m-ary source alphabet...

  7. A Hybrid Approach to Protect Palmprint Templates

    Directory of Open Access Journals (Sweden)

    Hailun Liu

    2014-01-01

    Full Text Available Biometric template protection is indispensable to protect personal privacy in large-scale deployments of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on these three points. A heterogeneous space is designed to combine random projection and fuzzy vault properly in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses and demonstrate the effectiveness of the proposed hybrid approach.

  8. The Affordance Template ROS Package for Robot Task Programming

    Science.gov (United States)

    Hart, Stephen; Dinh, Paul; Hambuchen, Kimberly

    2015-01-01

    This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic XML description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the affordance template package is introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications.

  9. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift register based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of the above two techniques enables the decoder to reduce the power dissipation while maintaining the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to that of decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.

  10. Ultrasound imaging using coded signals

    DEFF Research Database (Denmark)

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. In the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...

  11. LoopIng: a template-based tool for predicting the structure of protein loops.

    KAUST Repository

    Messih, Mario Abdel

    2015-08-06

    Predicting the structure of protein loops is very challenging, mainly because they are not necessarily subject to strong evolutionary pressure. This implies that, unlike the rest of the protein, standard homology modeling techniques are not very effective in modeling their structure. However, loops are often involved in protein function, hence inferring their structure is important for predicting protein structure as well as function. We describe a method, LoopIng, based on the Random Forest automated learning technique, which, given a target loop, selects a structural template for it from a database of loop candidates. Compared to the most recently available methods, LoopIng is able to achieve similar accuracy for short loops (4-10 residues) and significant enhancements for long loops (11-20 residues). The quality of the predictions is robust to errors that unavoidably affect the stem regions when these are modeled. The method returns a confidence score for the predicted template loops and has the advantage of being very fast (on average: 1 min/loop). Availability: www.biocomputing.it/looping. Contact: anna.tramontano@uniroma1.it. Supplementary data are available at Bioinformatics online.

  12. A design approach for systems based on magnetic pulse compression

    International Nuclear Information System (INIS)

    Praveen Kumar, D. Durga; Mitra, S.; Senthil, K.; Sharma, D. K.; Rajan, Rehim N.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.

    2008-01-01

    A design approach giving the optimum number of stages in a magnetic pulse compression circuit and the gain per stage is presented. The limitation on the maximum gain per stage is discussed. The total system volume is minimized by considering the energy storage capacitor volume and magnetic core volume at each stage. At the end of this paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed along with its experimental results.
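
    The stage-count trade-off has a simple back-of-envelope form: if each stage can contribute at most a compression gain g, the number of stages needed for a total compression G is the smallest N with g^N >= G. A tiny Python helper, purely illustrative of this relation rather than the paper's full volume optimization:

        import math

        def n_stages(total_compression, gain_per_stage):
            # g^N >= G  =>  N = ceil(log G / log g)
            return math.ceil(math.log(total_compression) / math.log(gain_per_stage))

        print(n_stages(total_compression=1000, gain_per_stage=5))  # 5 stages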

  13. Template banks to search for compact binaries with spinning components in gravitational wave data

    International Nuclear Information System (INIS)

    Van Den Broeck, Chris; Cokelaer, Thomas; Harry, Ian; Jones, Gareth; Sathyaprakash, B. S.; Brown, Duncan A.; Tagoshi, Hideyuki; Takahashi, Hirotaka

    2009-01-01

    Gravitational waves from coalescing compact binaries are one of the most promising sources for detectors such as LIGO, Virgo, and GEO600. If the components of the binary possess significant angular momentum (spin), as is likely to be the case if one component is a black hole, spin-induced precession of a binary's orbital plane causes modulation of the gravitational-wave amplitude and phase. If the templates used in a matched-filter search do not accurately model these effects then the sensitivity, and hence the detection rate, will be reduced. We investigate the ability of several search pipelines to detect gravitational waves from compact binaries with spin. We use the post-Newtonian approximation to model the inspiral phase of the signal and construct two new template banks using the phenomenological waveforms of Buonanno, Chen, and Vallisneri [A. Buonanno, Y. Chen, and M. Vallisneri, Phys. Rev. D 67, 104025 (2003)]. We compare the performance of these template banks to that of banks constructed using the stationary phase approximation to the nonspinning post-Newtonian inspiral waveform currently used by LIGO and Virgo in the search for compact binary coalescence. We find that, at the same false alarm rate, a search pipeline using phenomenological templates is no more effective than a pipeline which uses nonspinning templates. We recommend the continued use of the nonspinning stationary phase template bank until the false alarm rate associated with templates which include spin effects can be substantially reduced.

  14. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
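
    A plausible form of the NMSE figure of merit used throughout the dissertation (the exact normalization is not given in the abstract), as a small Python helper:

        import numpy as np

        def nmse(original, reconstructed):
            # Normalized mean-square error between the original image and the
            # image reconstructed from compressed data; one common normalization
            # divides the squared error energy by the original signal energy.
            diff = original.astype(float) - reconstructed.astype(float)
            return (diff ** 2).sum() / (original.astype(float) ** 2).sum()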

  15. Multi-template polymerase chain reaction.

    Science.gov (United States)

    Kalle, Elena; Kubista, Mikael; Rensing, Christopher

    2014-12-01

    PCR is a formidable and potent technology that serves as an indispensable tool in a wide range of biological disciplines. However, due to its ease of use and the frequent lack of rigorous standards, many PCR applications can lead to highly variable, inaccurate, and ultimately meaningless results. Thus, rigorous method validation must precede the broad adoption of PCR to any new application. Multi-template samples possess particular features which make their PCR analysis prone to artifacts and biases: multiple homologous templates present in copy numbers that vary within several orders of magnitude. Such conditions are a breeding ground for chimeras and heteroduplexes. Differences in template amplification efficiencies and template competition for reaction compounds undermine correct preservation of the original template ratio. In addition, the presence of inhibitors aggravates all of the above-mentioned problems. Inhibitors might also have ambivalent effects on the different templates within the same sample. Yet, no standard approaches exist for monitoring inhibitory effects in multi-template PCR, which is crucial for establishing compatibility between samples.

  16. Research on compression performance of ultrahigh-definition videos

    Science.gov (United States)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume. The resulting storage and transmission problems cannot be solved merely by expanding hard-disk capacity and upgrading transmission devices. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and of I-frames. Then, using the above idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, the reconstructed video quality is further improved with super-resolution reconstruction technology. Experiments show that the performance of the proposed compression method for single images (I-frames) and video sequences is superior to that of HEVC in a low-bit-rate environment.

  17. Matrix polyelectrolyte capsules based on polysaccharide/MnCO₃ hybrid microparticle templates.

    Science.gov (United States)

    Wei, Qingrong; Ai, Hua; Gu, Zhongwei

    2011-06-15

    An efficient strategy for biomacromolecule encapsulation, based on spontaneous deposition into polysaccharide matrix-containing capsules, is introduced in this study. First, hybrid microparticles composed of manganese carbonate and ionic polysaccharides, including sodium hyaluronate (HA), sodium alginate (SA) and dextran sulfate sodium (DS), with narrow size distribution were synthesized to provide monodisperse templates. Incorporation of polysaccharide into the hybrid templates was successful, as verified by thermogravimetric analysis (TGA) and confocal laser scanning microscopy (CLSM). Matrix polyelectrolyte microcapsules were fabricated through layer-by-layer (LbL) self-assembly of oppositely charged polyelectrolytes (PEs) onto the hybrid particles, followed by removal of the inorganic part of the cores, leaving the polysaccharide matrix inside the capsules. The loading and release properties of the matrix microcapsules were investigated using myoglobin as a model biomacromolecule. Compared to matrix-free capsules, the matrix capsules had a loading capacity up to four times higher; the driving force is mostly electrostatic interaction between myoglobin and the polysaccharide matrix. From our observations, for the same kind of polysaccharide, a higher amount of polysaccharide inside the capsules usually led to better loading capacity. The release behavior of the loaded myoglobin could be readily controlled by altering the environmental pH. These matrix microcapsules may be used as efficient delivery systems for various charged water-soluble macromolecules, with applications in biomedical fields. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. The Unifying Moral Dyad: Liberals and Conservatives Share the Same Harm-Based Moral Template.

    Science.gov (United States)

    Schein, Chelsea; Gray, Kurt

    2015-08-01

    Do moral disagreements regarding specific issues (e.g., patriotism, chastity) reflect deep cognitive differences (i.e., distinct cognitive mechanisms) between liberals and conservatives? Dyadic morality suggests that the answer is "no." Despite moral diversity, we reveal that moral cognition--in both liberals and conservatives--is rooted in a harm-based template. A dyadic template suggests that harm should be central within moral cognition, an idea tested--and confirmed--through six specific hypotheses. Studies suggest that moral judgment occurs via dyadic comparison, in which counter-normative acts are compared with a prototype of harm. Dyadic comparison explains why harm is the most accessible and important kind of moral content, why harm organizes--and overlaps with--diverse moral content, and why harm best translates across moral content. Dyadic morality suggests that various kinds of moral content (e.g., loyalty, purity) are varieties of perceived harm and that past research has substantially exaggerated the moral differences between liberals and conservatives. © 2015 by the Society for Personality and Social Psychology, Inc.

  19. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and its fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed as triangular meshes using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under

  20. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of generation and analysis. In particular, the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may eventually exceed what storage can accommodate. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
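
    SeqCompress itself couples a statistical model with arithmetic coding; as a simpler point of reference, the sketch below shows the naive 2-bit-per-base packing that any specialized lossless DNA compressor must improve upon. This is a baseline for comparison, not the paper's algorithm, and it assumes a plain A/C/G/T alphabet.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes, four bases per byte (2 bits each)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for j, ch in enumerate(seq[i:i + 4]):
            b |= CODE[ch] << (2 * j)
        out.append(b)
    return bytes(out), len(seq)

def unpack(data, n):
    """Recover the original string from the packed bytes and its length."""
    return "".join(BASE[(data[i // 4] >> (2 * (i % 4))) & 3] for i in range(n))

packed, n = pack("ACGTACGTAC")
assert unpack(packed, n) == "ACGTACGTAC"   # lossless round trip
```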

  1. A TSR Visual Servoing System Based on a Novel Dynamic Template Matching Method

    Directory of Open Access Journals (Sweden)

    Jia Cai

    2015-12-01

    Full Text Available The so-called Tethered Space Robot (TSR) is a novel active space debris removal system. To solve its problem of non-cooperative target recognition during short-distance rendezvous events, this paper presents a framework for a real-time visual servoing system using a non-calibrated monocular CMOS (Complementary Metal Oxide Semiconductor) camera. When a small template is used for matching against a large scene, mismatches frequently result, so a novel template matching algorithm is presented to solve this problem. Firstly, the matching algorithm uses a hollow annulus structure based on the FAST (Features from Accelerated Segment Test) algorithm, which makes the method rotation-invariant; the hollow structure also decreases the accumulative deviation. The matching function combines grey-level and gradient differences between the template and the object image, which helps reduce the effects of illumination changes and noise. Then, a dynamic template update strategy is designed to avoid tracking failures brought about by wrong matches or occlusion. Finally, the system incorporates a least-squares integrated predictor, realizing online tracking in complex circumstances. The results of ground experiments show that the proposed algorithm decreases the need for sophisticated computation and improves matching accuracy.
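
    A rough sketch of the kind of matching cost the abstract describes, combining grey-level and gradient differences, is given below. For brevity it scans a full rectangular template rather than the paper's rotation-invariant hollow annulus, and all function names are ours, not the authors'.

```python
import numpy as np

def grad_mag(img):
    """Approximate gradient magnitude via forward differences."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def best_match(scene, tmpl):
    """Grey + gradient sum of absolute differences over all placements;
    returns the (row, col) of the lowest-cost placement."""
    scene = scene.astype(float)
    tmpl = tmpl.astype(float)
    H, W = scene.shape
    h, w = tmpl.shape
    gs, gt = grad_mag(scene), grad_mag(tmpl)
    cost = np.full((H - h + 1, W - w + 1), np.inf)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            grey = np.abs(scene[y:y + h, x:x + w] - tmpl).sum()
            grad = np.abs(gs[y:y + h, x:x + w] - gt).sum()
            cost[y, x] = grey + grad
    return np.unravel_index(np.argmin(cost), cost.shape)
```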

  2. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing's vibration signals is first reduced by a down-sampling strategy, while the fault features are preserved by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on the orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
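
    The orthogonal matching pursuit step mentioned above is a standard greedy recovery algorithm; a compact textbook implementation in Python/NumPy (not the authors' code) is sketched below.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
Phi = rng.normal(size=(64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.0, -2.0, 0.5]
x_hat = omp(Phi, Phi @ x_true, k=3)
print(np.allclose(x_hat, x_true, atol=1e-8))   # exact recovery here
```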

  3. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing's vibration signals is first reduced by a down-sampling strategy, while the fault features are preserved by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on the orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  4. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    Science.gov (United States)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
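
    A minimal sketch of the spectrum-cutting compression step is shown below, assuming a separable 2-D DCT and a square low-frequency cut; the paper's zigzag splicing of multiple spectra and the chaos-keyed discrete fractional random transform encryption are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(a):
    """Separable 2-D DCT-II with orthonormal scaling."""
    return dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(a):
    """Inverse of dct2."""
    return idct(idct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

def cut_spectrum(img, keep=64):
    """Compress by keeping only the keep x keep low-frequency DCT block
    (a square cut standing in for the paper's zigzag-based cutting)."""
    spec = dct2(img.astype(float))
    cut = np.zeros_like(spec)
    cut[:keep, :keep] = spec[:keep, :keep]
    return idct2(cut)
```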

  5. Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram

    Energy Technology Data Exchange (ETDEWEB)

    Anant, K.S.

    1997-06-01

    In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P and the S phase using only information from single-station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis; the
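
    To make the coefficient-domain compression idea concrete, the sketch below thresholds the wavelet coefficients of a 1-D signal using PyWavelets. The dissertation pairs the coefficients with vector quantization instead, so simple thresholding here is only a stand-in for the reduction step.

```python
import numpy as np
import pywt

def wavelet_compress(signal, wavelet="db4", level=4, keep=0.05):
    """Zero all but the largest `keep` fraction of wavelet coefficients,
    then reconstruct; a crude stand-in for coefficient quantization."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec")
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 1024)
ecg_like = np.exp(-((t - 0.5) ** 2) / 1e-4)   # toy spike, not real ECG data
recon = wavelet_compress(ecg_like)
print(np.max(np.abs(recon[: t.size] - ecg_like)))
```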

  6. Supracolloidal Assemblies as Sacrificial Templates for Porous Silk-Based Biomaterials

    Directory of Open Access Journals (Sweden)

    John G. Hardy

    2015-08-01

    Full Text Available Tissues in the body are hierarchically structured composite materials with tissue-specific properties. Urea self-assembles via hydrogen bonding interactions into crystalline supracolloidal assemblies that can be used to impart macroscopic pores to polymer-based tissue scaffolds. In this communication, we explain the solvent interactions governing the solubility of urea and thereby the scope of compatible polymers. We also highlight the role of solvent interactions on the morphology of the resulting supracolloidal crystals. We elucidate the role of polymer-urea interactions on the morphology of the pores in the resulting biomaterials. Finally, we demonstrate that it is possible to use our urea templating methodology to prepare Bombyx mori silk protein-based biomaterials with pores that human dermal fibroblasts respond to by aligning with the long axis of the pores. This methodology has potential for application in a variety of different tissue engineering niches in which cell alignment is observed, including skin, bone, muscle and nerve.

  7. Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment

    International Nuclear Information System (INIS)

    Nicolaucig, A.; Ivanov, M.; Mattavelli, M.

    2003-01-01

    In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source-code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if certain physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process data off-line after the experiment are analyzed, in particular their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms are described, performing cluster finding and particle tracking. The results on how these algorithms are affected by the lossy compression are reported. Entropy coding can be applied to the set of events defined by the source model to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the compression algorithm achieves a data reduction to between 34.2% and 23.7% of the original data rate, depending on the desired precision on the pulse center of mass. The number of operations per input symbol required to implement the algorithm is relatively low, so that a real-time implementation embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storage cost of the ALICE experiment.
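
    The two physical quantities the source model preserves, the pulse area (charge) and the time of the pulse's center of mass, are simple moments of the sampled pulse. A sketch under the assumption of uniformly spaced samples follows; names are illustrative.

```python
import numpy as np

def pulse_features(samples, t0=0.0, dt=1.0):
    """Area (total charge) and center-of-mass time of one sampled pulse."""
    s = np.asarray(samples, dtype=float)
    t = t0 + dt * np.arange(s.size)
    area = s.sum()                      # pulse charge
    com = (t * s).sum() / area          # time localization of the pulse
    return area, com

area, com = pulse_features([0, 1, 4, 9, 4, 1, 0])
print(area, com)   # 19.0, 3.0 for this symmetric toy pulse
```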

  8. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
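
    A minimal exhaustive block-matching routine of the kind used for motion estimation is sketched below (SAD criterion, single reference frame); the paper's multi-frame variant would loop this search over several reference frames. All names are illustrative.

```python
import numpy as np

def block_match(ref, cur, y, x, bsize=8, search=4):
    """Find the motion vector of the block at (y, x) in `cur` within `ref`.

    Exhaustive search over a +/- `search` pixel window using the
    sum-of-absolute-differences (SAD) criterion.
    """
    block = cur[y:y + bsize, x:x + bsize].astype(float)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bsize > ref.shape[0] \
                    or xx + bsize > ref.shape[1]:
                continue   # candidate block falls outside the frame
            sad = np.abs(ref[yy:yy + bsize, xx:xx + bsize] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```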

  9. Multi-template polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Elena Kalle

    2014-12-01

    Full Text Available PCR is a formidable and potent technology that serves as an indispensable tool in a wide range of biological disciplines. However, due to its ease of use and the frequent lack of rigorous standards, many PCR applications can lead to highly variable, inaccurate, and ultimately meaningless results. Thus, rigorous method validation must precede the broad adoption of PCR to any new application. Multi-template samples possess particular features which make their PCR analysis prone to artifacts and biases: multiple homologous templates present in copy numbers that vary within several orders of magnitude. Such conditions are a breeding ground for chimeras and heteroduplexes. Differences in template amplification efficiencies and template competition for reaction compounds undermine correct preservation of the original template ratio. In addition, the presence of inhibitors aggravates all of the above-mentioned problems. Inhibitors might also have ambivalent effects on the different templates within the same sample. Yet, no standard approaches exist for monitoring inhibitory effects in multi-template PCR, which is crucial for establishing compatibility between samples.

  10. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Discrete cosine transformation is the most widely used orthogonal transformation for reducing the digital stream. This paper analyzes the errors of television measuring systems and of data compression protocols. The main characteristics of measuring systems are determined and the sources of their errors identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is researched; the obtained results will increase the accuracy of such measuring systems. In a television measuring system, image quality is degraded both by distortions identical to those in analog systems and by specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality at the receiver depends randomly on the pre- and post-history, i.e. on the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.

  11. A consensus-based template for uniform reporting of data from pre-hospital advanced airway management

    DEFF Research Database (Denmark)

    Sollid, Stephen J M; Lockey, David; Lossius, Hans Morten

    2009-01-01

    BACKGROUND: Advanced airway management is a critical intervention that can harm the patient if performed poorly. The available literature on this subject is rich, but it is difficult to interpret due to huge variability and poor definitions. Several initiatives from large organisations concerned with airway management have recently propagated the need for guidelines and standards in pre-hospital airway management. Following the path of other initiatives to establish templates for uniform data reporting, like the many Utstein-style templates, we initiated and carried out a structured consensus process... The group defined 19 optional variables for which a consensus could not be achieved or the data were considered as valuable but not essential. CONCLUSION: We successfully developed an Utstein-style template for documenting and reporting pre-hospital airway management. The core dataset for this template...

  12. Hard template synthesis of metal nanowires

    Science.gov (United States)

    Kawamura, Go; Muto, Hiroyuki; Matsuda, Atsunori

    2014-11-01

    Metal nanowires (NWs) have attracted much attention because of their high electron conductivity, optical transmittance and tunable magnetic properties. Metal NWs have been synthesized using soft templates, such as surface-stabilizing molecules and polymers, and hard templates, such as anodic aluminum oxide, mesoporous oxide and carbon nanotubes. NWs prepared from hard templates are composites of metals and the oxide/carbon matrix. Thus, selecting appropriate elements can simplify the production of composite devices. The resulting NWs are immobilized and spatially arranged, as dictated by the ordered porous structure of the template. This prevents the NWs from aggregating, which is common for NWs prepared with soft templates in solution. Herein, the hard template synthesis of metal NWs is reviewed, and the resulting structures, properties and potential applications are discussed.

  13. Comparison of high-accuracy numerical simulations of black-hole binaries with stationary-phase post-Newtonian template waveforms for initial and advanced LIGO

    International Nuclear Information System (INIS)

    Boyle, Michael; Brown, Duncan A; Pekowsky, Larne

    2009-01-01

    We study the effectiveness of stationary-phase approximated post-Newtonian waveforms currently used by ground-based gravitational-wave detectors to search for the coalescence of binary black holes by comparing them to an accurate waveform obtained from numerical simulation of an equal-mass non-spinning binary black hole inspiral, merger and ringdown. We perform this study for the initial- and advanced-LIGO detectors. We find that overlaps between the templates and signal can be improved by integrating the matched filter to higher frequencies than currently used. We propose simple analytic frequency cutoffs for both initial and advanced LIGO, which achieve nearly optimal matches, and can easily be extended to unequal-mass, spinning systems. We also find that templates that include terms in the phase evolution up to 3.5 post-Newtonian (pN) order are nearly always better, and rarely significantly worse, than the 2.0 pN templates currently in use. For initial LIGO we recommend a strategy using templates that include a recently introduced pseudo-4.0 pN term in the low-mass (M ≤ 35 M⊙) region, and 3.5 pN templates allowing unphysical values of the symmetric reduced mass η above this. This strategy always achieves overlaps within 0.3% of the optimum for the data used here. For advanced LIGO we recommend a strategy using 3.5 pN templates up to M = 12 M⊙, 2.0 pN templates up to M = 21 M⊙, pseudo-4.0 pN templates up to 65 M⊙, and 3.5 pN templates with unphysical η for higher masses. This strategy always achieves overlaps within 0.7% of the optimum for advanced LIGO.

  14. OTDM-WDM Conversion Based on Time-Domain Optical Fourier Transformation with Spectral Compression

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Palushani, Evarist; Galili, Michael

    2011-01-01

    We propose a scheme enabling direct serial-to-parallel conversion of OTDM data tributaries onto a WDM grid, based on optical Fourier transformation with spectral compression. Demonstrations on 320 Gbit/s and 640 Gbit/s OTDM data are shown.

  15. Tailoring silver nanoparticle construction using dendrimer templated silica networks

    International Nuclear Information System (INIS)

    Liu Xiaojun; Kakkar, Ashok

    2008-01-01

    We have examined the role of the internal environment of dendrimer-templated silica networks in tailoring the construction of silver nanoparticle assemblies. Silica networks from which the 3,5-dihydroxybenzyl alcohol based dendrimer templates have been completely removed wet slowly with an aqueous solution of silver acetate. The latter then reacts with internal silica silanol groups, leading to chemisorption of silver ions, followed by the growth of silver oxide nanoparticles. The silica network constructed using the generation-4 dendrimer contains residual dendrimer template and mixes easily with aqueous silver acetate solution. Upon chemisorption, the silver ions are photolytically reduced to silver metal in the stabilizing dendrimer environment, leading to the formation of silver metal nanoparticles

  16. Searching for gravitational waves from the inspiral of precessing binary systems: New hierarchical scheme using 'spiky' templates

    International Nuclear Information System (INIS)

    Grandclement, Philippe; Kalogera, Vassiliki

    2003-01-01

    In a recent investigation of the effects of precession on the anticipated detection of gravitational-wave inspiral signals from compact object binaries with moderate total masses, we found that (i) if precession is ignored, the inspiral detection rate can decrease by almost a factor of 10, and (ii) previously proposed 'mimic' templates cannot improve the detection rate significantly (by more than a factor of 2). In this paper we propose a new family of templates that can improve the detection rate by a factor of 5 or 6 in cases where precession is most important. Our proposed method for these new 'mimic' templates involves a hierarchical scheme of efficient, two-parameter template searches that can account for a sequence of spikes that appear in the residual inspiral phase, after one corrects for any oscillatory modification in the phase. We present our results for two cases of compact object masses (10 and 1.4 M⊙, and 7 and 3 M⊙) as a function of spin properties. Although further work is needed to fully assess the computational efficiency of this newly proposed template family, we conclude that these 'spiky templates' are good candidates for a family of precession templates used in realistic searches that can improve detection rates of inspiral events

  17. Analytical template protection performance and maximum key size given a Gaussian-modeled biometric source

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Breebaart, Jeroen; Buhan, I.R.; Veldhuis, Raymond N.J.; Vijaya Kumar, B.V.K.; Prabhakar, Salil; Ross, Arun A.

    2010-01-01

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from or binding a key to a biometric sample. The achieved

  18. Hard template synthesis of metal nanowires

    Directory of Open Access Journals (Sweden)

    Go Kawamura

    2014-11-01

    Full Text Available Metal nanowires (NWs) have attracted much attention because of their high electron conductivity, optical transmittance and tunable magnetic properties. Metal NWs have been synthesized using soft templates, such as surface-stabilizing molecules and polymers, and hard templates, such as anodic aluminum oxide, mesoporous oxide and carbon nanotubes. NWs prepared from hard templates are composites of metals and the oxide/carbon matrix. Thus, selecting appropriate elements can simplify the production of composite devices. The resulting NWs are immobilized and spatially arranged, as dictated by the ordered porous structure of the template. This prevents the NWs from aggregating, which is common for NWs prepared with soft templates in solution. Herein, the hard template synthesis of metal NWs is reviewed, and the resulting structures, properties and potential applications are discussed.

  19. Compressive sensing in a photonic link with optical integration

    DEFF Research Database (Denmark)

    Chen, Ying; Yu, Xianbin; Chi, Hao

    2014-01-01

    In this Letter, we present a novel structure to realize photonics-assisted compressive sensing (CS) with optical integration. In the system, a spectrally sparse signal modulates a multiwavelength continuous-wave light and is then mixed with a random sequence in the optical domain. The optical signal ..., which is equivalent to the function of integration required in CS. A proof-of-concept experiment with four wavelengths, corresponding to a compression factor of 4, is demonstrated. More simulation results are also given to show the potential of the technique.

  20. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...

  1. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Full Text Available Biometrics is the science of human recognition based on biological, chemical or behavioural traits. These systems are used in many real-life applications, from simple biometric-based attendance systems to security at very sophisticated levels. A biometric system deals with raw data captured using a sensor and feature templates extracted from the raw images. One of the challenges faced by designers of these systems is to secure the template data extracted from the biometric modalities of the user and to protect the raw images. To minimize spoof attacks on biometric systems by unauthorised users, one solution is to use multi-biometric systems. A multi-modal biometric system works by using fusion techniques to merge the feature templates generated from different modalities of the human. In this work a new scheme is proposed to secure templates during feature-level fusion. The scheme is based on a union operation over fuzzy relations of the templates of the modalities during the fusion process of multimodal biometric systems. This approach serves the dual purpose of feature fusion as well as transformation of the templates into a single secured non-invertible template. The proposed technique is cancelable and was experimentally tested on a bimodal biometric system comprising fingerprint and hand geometry. The developed scheme removes the problem of an attacker learning the original minutiae positions in the fingerprint and the various measurements of the hand geometry. The given scheme provides improved system performance, with a reduction in false accept rate and an improvement in genuine accept rate.
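
    The abstract does not give the paper's exact construction of the fuzzy relations, so the sketch below only conveys the general idea: two feature templates are mapped onto [0, 1] and merged with the standard max-based fuzzy union, which yields a single combined template from which neither original is directly recoverable. All names and the normalization choice are ours, not the authors'.

```python
import numpy as np

def fuzzy_union_fuse(t1, t2):
    """Fuse two feature templates via the element-wise max (fuzzy union)
    of their membership values after min-max normalization to [0, 1]."""
    def membership(t):
        t = np.asarray(t, dtype=float)
        return (t - t.min()) / (t.max() - t.min() + 1e-12)
    m1, m2 = membership(t1), membership(t2)
    n = min(m1.size, m2.size)           # align lengths (illustrative choice)
    return np.maximum(m1[:n], m2[:n])   # standard fuzzy union: max(a, b)

# Toy fingerprint-like and hand-geometry-like vectors (synthetic values).
fused = fuzzy_union_fuse([12, 40, 7, 99], [0.2, 0.9, 0.5, 0.1])
print(fused)
```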

  2. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    Science.gov (United States)

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
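
    The tensorial DTW operates on time-frequency representations; its scalar core is the classic dynamic time warping recurrence, sketched below for two 1-D sequences (a textbook implementation, not the authors' tensorial variant).

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0: same shape, shifted
```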

  3. A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking

    Directory of Open Access Journals (Sweden)

    Jiuqi Han

    2018-04-01

    Full Text Available Superior feature extraction, channel selection and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by improper channel selection methods and overly specific designs, leading to high computational complexity, non-convergent procedures and narrow extensibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centered on EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signals and endowing them with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial as a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial with the selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework over state-of-the-art methods.
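
    A hedged sketch of the pipeline's first and last stages follows: channel-wise k-means centroids serve as the numerical "signatures" (our reading of the feature-compression step, not necessarily the authors' exact clustering), and an LDA classifier is trained on the flattened structural matrix. The iterative channel-ranking stage is omitted and all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, k = 40, 8, 128, 4
X = rng.normal(size=(n_trials, n_channels, n_samples))   # synthetic "EEG"
y = rng.integers(0, 2, size=n_trials)                    # synthetic labels

# Stage 1: channel-wise feature compression. Each channel of each trial is
# summarized by the sorted centroids of a small k-means clustering -- a
# numerical "signature" standing in for the paper's compressed features.
signatures = np.empty((n_trials, n_channels, k))
for i in range(n_trials):
    for c in range(n_channels):
        km = KMeans(n_clusters=k, n_init=4, random_state=0)
        km.fit(X[i, c].reshape(-1, 1))
        signatures[i, c] = np.sort(km.cluster_centers_.ravel())

# Stage 2 (iterative channel ranking) omitted. Stage 3: LDA on the
# low-dimensional structural matrix, one flattened vector per trial.
features = signatures.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis().fit(features[:30], y[:30])
print("held-out accuracy:", clf.score(features[30:], y[30:]))
```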

  4. Improved critical current densities and compressive strength in porous superconducting structures containing calcium

    International Nuclear Information System (INIS)

    Walsh, D; Hall, S R; Wimbush, S C

    2008-01-01

    Templated control of crystallization by biopolymers is a new technique in the synthesis of high temperature superconducting phases. By controlling the way YBa2Cu3O7-δ (Y123) materials crystallize and are organized in three dimensions, the critical current density can be improved. In this work, we present the results of doping superconducting sponges with calcium ions, which results in higher critical current densities (Jc) and improved compressive strength compared to that of commercially available Y123, in spite of minor reductions in Tc. Y123 synthesis using the biopolymer dextran achieves not only an extremely effective oxygenation of the superconductor but also an in situ template-directing of the crystal morphology, producing high-Jc, homogeneous superconducting structures with nano-scale crystallinity

  5. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  6. Ice-templated hydrogels based on chitosan with tailored porous morphology

    Czech Academy of Sciences Publication Activity Database

    Dinu, M. V.; Přádný, Martin; Dragan, E. S.; Michálek, Jiří

    2013-01-01

    Vol. 94, No. 1 (2013), pp. 170-178 ISSN 0144-8617 R&D Projects: GA ČR GAP108/12/1538 Institutional support: RVO:61389013 Keywords: chitosan * ice-templated hydrogels * morphology Subject RIV: CD - Macromolecular Chemistry Impact factor: 3.916, year: 2013

  7. Archaeal DNA Polymerase-B as a DNA Template Guardian: Links between Polymerases and Base/Alternative Excision Repair Enzymes in Handling the Deaminated Bases Uracil and Hypoxanthine

    Directory of Open Access Journals (Sweden)

    Javier Abellón-Ruiz

    2016-01-01

    Full Text Available In Archaea, repair of uracil and hypoxanthine, which arise by deamination of cytosine and adenine, respectively, is initiated by three enzymes: uracil-DNA glycosylase (UDG), which recognises uracil; endonuclease V (EndoV), which recognises hypoxanthine; and endonuclease Q (EndoQ), which recognises both uracil and hypoxanthine. Two archaeal DNA polymerases, Pol-B and Pol-D, are inhibited by deaminated bases in template strands, a feature unique to this domain. Thus the three repair enzymes and the two polymerases show overlapping specificity for uracil and hypoxanthine. Here it is demonstrated that binding of Pol-D to primer-templates containing deaminated bases inhibits the activity of UDG, EndoV, and EndoQ. Similarly, Pol-B almost completely turns off EndoQ, extending earlier work that demonstrated that Pol-B reduces catalysis by UDG and EndoV. Pol-B was observed to be a more potent inhibitor of the enzymes than Pol-D. Although Pol-D is directly inhibited by template-strand uracil, the presence of Pol-B further suppresses any residual activity of Pol-D to near-zero levels. The results are compatible with Pol-D acting as the replicative polymerase and Pol-B functioning primarily as a guardian preventing deaminated-base-induced DNA mutations.

  8. Electrochemical impedance spectroscopy of nanoporous anodic alumina template

    International Nuclear Information System (INIS)

    Shahzad, K.

    2010-01-01

    Room-temperature EIS characterization of nanoporous anodic alumina prepared at 40 V and 60 V has been performed in 0.3 M oxalic acid solution. A rapid decrease in impedance was observed for the template prepared at 40 V. The EIS study of the porous anodic alumina template prepared in 0.3 M oxalic acid has been carried out in different electrolytes. Templates prepared in 0.3 M sulfuric acid solution were also characterized for comparison. A rapid decrease in the thickness of the nonporous anodic film was observed with increasing aggressiveness of the electrolyte. A systematic temperature-based EIS study of the porous anodic alumina template has also been carried out at different temperatures. Formation of micropores was observed in the nanoporous anodic alumina film formed on aluminum in 0.3 M oxalic acid solution, which accelerates the dissolution rate as the measurement temperature increases. In addition, the electropolishing behavior of pure aluminum has been studied in different electrolytes, and it was observed that the electropolishing conditions prior to anodization are extremely important. (author)

  9. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state of the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting the latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  10. Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan); Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203 (Bangladesh); Kishi, Naoki; Soga, Tetsuo [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan)

    2016-08-15

    Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment named hot-compress for flexible zinc oxide-based dye-sensitized solar cells. This postdeposition treatment consists of applying compression pressure at an elevated temperature. The optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance compared to conventional cells. The aptness of this method was confirmed by investigating scanning electron microscopy images, X-ray diffraction, current-voltage and electrochemical impedance spectroscopy analyses of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room-temperature-compressed cell.

  11. Rational design of mesoporous metals and related nanomaterials by a soft-template approach.

    Science.gov (United States)

    Yamauchi, Yusuke; Kuroda, Kazuyuki

    2008-04-07

    We review recent developments in the preparation of mesoporous metals and related metal-based nanomaterials. Among the many types of mesoporous materials, mesoporous metals hold promise for a wide range of potential applications, such as in electronic devices, magnetic recording media, and metal catalysts, owing to their metallic frameworks. Mesoporous metals with highly ordered networks and narrow pore-size distributions have traditionally been produced by using mesoporous silica as a hard template. This method involves the formation of an original template followed by deposition of metals within the mesopores and subsequent removal of the template. Another synthetic method is the direct-template approach from lyotropic liquid crystals (LLCs) made of nonionic surfactants at high concentrations. Direct-template synthesis creates a novel avenue for the production of mesoporous metals as well as related metal-based nanomaterials. Many mesoporous metals have been prepared by the chemical or electrochemical reduction of metal salts dissolved in aqueous LLC domains. As a soft template, LLCs are more versatile and therefore more advantageous than hard templates. It is possible to produce various nanostructures (e.g., lamellar, 2D hexagonal (p6mm), and 3D cubic (Ia-3d)), nanoparticles, and nanotubes simply by controlling the composition of the reaction bath.

  12. File compression and encryption based on LLS and arithmetic coding

    Science.gov (United States)

    Yu, Changzhi; Li, Hengjian; Wang, Xiyu

    2018-03-01

    We propose a file compression model based on arithmetic coding. Firstly, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability. To achieve encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model can achieve data encryption while attaining almost the same compression efficiency as plain arithmetic coding.
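
    A toy float-precision sketch of the mechanism follows: an arithmetic-coding interval is narrowed symbol by symbol while a logistic-map keystream perturbs the symbol probabilities at each step, so decoding requires the same key. The paper's LLS combines logistic and sine maps and a practical coder works at integer precision, so this illustrates the idea rather than reproducing the published algorithm.

```python
def logistic(x, n, r=3.99):
    """Generate n values of the logistic map; serves as the shared keystream."""
    seq = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def encode(message, probs, key=0.612):
    """Arithmetic-coding interval narrowing with chaotically perturbed bounds.

    `probs` maps each symbol to its base probability; the keystream slightly
    re-weights the model at every step (toy perturbation, float precision).
    """
    low, high = 0.0, 1.0
    ks = logistic(key, len(message))
    symbols = sorted(probs)
    for ch, z in zip(message, ks):
        # Perturb the model with the keystream value, then renormalize.
        p = {s: probs[s] * (1.0 + 0.1 * (z - 0.5)) for s in symbols}
        total = sum(p.values())
        width = high - low
        cum = 0.0
        for s in symbols:
            share = p[s] / total
            if s == ch:
                high = low + width * (cum + share)
                low = low + width * cum
                break
            cum += share
    return (low + high) / 2  # any number in [low, high) identifies the message

code = encode("ABBA", {"A": 0.6, "B": 0.4})
print(code)
```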

  13. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    Science.gov (United States)

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.

  14. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    Directory of Open Access Journals (Sweden)

    Kuei-Chi Tsao

    2018-04-01

    Full Text Available Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.

  15. Airship Sparse Array Antenna Radar Real Aperture Imaging Based on Compressed Sensing and Sparsity in Transform Domain

    Directory of Open Access Journals (Sweden)

    Li Liechen

    2016-02-01

    Full Text Available A conformal sparse array based on a combined Barker code is designed for an airship platform. The performance of the designed array, such as its signal-to-noise ratio, is analyzed. Using the hovering characteristics of the airship, an interferometry operation can be applied to the real aperture imaging results of two pulses, which eliminates the random backscatter phase and makes the image sparse in the transform domain. By building the relationship between the echo and the transform coefficients, compressed sensing (CS) theory can be introduced to solve the formulation and achieve imaging. The image quality of the proposed method can reach that of the image formed by full-array imaging. The simulation results show the effectiveness of the proposed method.

  16. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-09-01

    Full Text Available The traditional approaches for condition monitoring of roller bearings are almost always applied under Shannon sampling theorem conditions, leading to a big-data problem. Compressed sensing (CS) theory provides a new solution to the big-data problem. However, vibration signals are insufficiently sparse and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote sparsity when applying CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoting method based on the tunable Q-factor wavelet transform is utilized in this work, decomposing the analyzed signals into transient impact components and high-oscillation components. The former become sparser than the raw signals, with noise eliminated, whereas the latter retain the noise. Thus, the decomposed transient impact components replace the original signals for analysis. CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed once the components with the frequencies of interest are detected, and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that CS theory, assisted by the tunable Q-factor wavelet transform, can successfully extract the fault features from the compressed samples.

  17. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.

    Science.gov (United States)

    Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-11-08

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, existing work on spatial-temporal data gathering using compressive sensing considers only multi-hop-relaying-based or multiple-random-walk-based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm, for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from a theoretical perspective we prove that the equivalent sensing matrix constructed by the proposed scheme for a spatial-temporal compressible signal satisfies the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.
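
    As a rough illustration of the Kronecker construction behind KCS (a toy sketch under simplifying assumptions, not the authors' scheme), a temporal and a spatial sparsifying basis can be combined with a Kronecker product and subsampled at randomly chosen (node, time) pairs to form the equivalent sensing matrix:

        import numpy as np

        def dct_basis(n):
            """Orthonormal DCT-II basis; columns are the basis vectors."""
            k, i = np.arange(n)[:, None], np.arange(n)[None, :]
            c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
            c[0] *= np.sqrt(1.0 / n)
            c[1:] *= np.sqrt(2.0 / n)
            return c.T

        n_space, n_time = 16, 32                       # nodes, time slots (toy sizes)
        psi = np.kron(dct_basis(n_time), dct_basis(n_space))   # joint sparsifying basis

        rng = np.random.default_rng(0)
        m = 128                                        # number of gathered measurements
        rows = rng.choice(n_space * n_time, size=m, replace=False)
        phi = np.eye(n_space * n_time)[rows]           # random (node, time) subsampling

        A = phi @ psi   # equivalent sensing matrix handed to a CS solver such as OMP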

  18. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. An optimal quadtree method was then employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the visual quality of the image.
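
    The codebook side of the pipeline can be pictured with a plain K-means vector quantizer over fixed-size blocks -- a simplified stand-in for the paper's energy-based modified K-means and its LFD-driven variable block sizes:

        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        def blockify(img, bs):
            """Split a 2-D array into non-overlapping bs x bs blocks, one per row."""
            h, w = img.shape
            img = img[: h - h % bs, : w - w % bs]
            blocks = img.reshape(img.shape[0] // bs, bs, img.shape[1] // bs, bs)
            return blocks.transpose(0, 2, 1, 3).reshape(-1, bs * bs)

        # Toy high-frequency subband and a 64-entry codebook for 4x4 blocks.
        rng = np.random.default_rng(1)
        subband = rng.normal(size=(128, 128))
        train = blockify(subband, 4)
        codebook, _ = kmeans(train, 64)        # codebook training
        indices, _ = vq(train, codebook)       # coding: transmit indices only
        reconstructed = codebook[indices]      # decoder: table lookup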

  19. Real-time lossless data compression techniques for long-pulse operation

    International Nuclear Information System (INIS)

    Jesus Vega, J.; Sanchez, E.; Portas, A.; Pereira, A.; Ruiz, M.

    2006-01-01

    Data logging and data distribution will be two main tasks connected with data handling in ITER. Data logging refers to the recovery and ultimate storage of all data, independent of the data source. Control and physics data distribution is related, on the one hand, to on-line data broadcasting for immediate data availability for both data analysis and data visualization. On the other hand, delayed analyses require off-line data access. Due to the large data volume expected, data compression will be mandatory in order to save storage and bandwidth. On-line data distribution in a long-pulse environment requires a deterministic approach to ensure a proper response time for data availability. However, an essential feature for all the above purposes is to apply compression techniques that ensure the recovery of the initial signals without spectral distortion when the compacted data are expanded (lossless techniques). Delta compression methods are independent of the analogue characteristics of the waveforms, and a variety of implementations have been applied to the databases of several fusion devices such as Alcator, JET and TJ-II, among others. Delta compression techniques are carried out in a two-step algorithm. The first step consists of a delta calculation, i.e. the computation of the differences between the digital codes of adjacent signal samples. The resultant deltas are then encoded according to constant- or variable-length bit allocation. Several encoding forms can be considered for the second step, and they have to satisfy a prefix code property. However, in order to meet the requirement of on-line data distribution, the encoding forms have to be defined prior to data capture. This article reviews different lossless data compression techniques based on delta compression. In addition, the concept of cyclic delta transformation is introduced. Furthermore, comparative results concerning compression rates on different
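
    A toy version of that two-step scheme -- deltas first, then a prefix code (here a Rice code, one common variable-length choice whose parameter is fixed before capture) -- might look like:

        import numpy as np

        def delta_encode(samples):
            """Step 1: differences between the digital codes of adjacent samples
            (the first delta carries the raw first sample)."""
            return np.diff(np.asarray(samples, dtype=np.int64), prepend=0)

        def rice_encode(value, k):
            """Step 2: Rice prefix code with fixed parameter k (illustrative)."""
            u = (value << 1) ^ (value >> 63)          # zig-zag: signed -> unsigned
            q, r = u >> k, u & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")  # unary quotient + k bits

        signal = [1000, 1002, 1001, 1005, 1005, 1004]
        bitstream = "".join(rice_encode(int(d), k=2) for d in delta_encode(signal))
        decoded = np.cumsum(delta_encode(signal))     # lossless: equals the input

    Because k is chosen before data capture, the code table is known in advance, which fits the deterministic response-time requirement for on-line distribution described above.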

  20. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existing schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression in terms of compressed size, bit-flips and how they are affected by e...

  1. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Full Text Available Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation, which includes the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.

  2. Colloidal micro- and nano-particles as templates for polyelectrolyte multilayer capsules.

    Science.gov (United States)

    Parakhonskiy, Bogdan V; Yashchenok, Alexey M; Konrad, Manfred; Skirtach, Andre G

    2014-05-01

    Colloidal particles play an important role in various areas of material and pharmaceutical sciences, biotechnology, and biomedicine. In this overview we describe micro- and nano-particles used for the preparation of polyelectrolyte multilayer capsules and as drug delivery vehicles. An essential feature of polyelectrolyte multilayer capsule preparations is the ability to adsorb polymeric layers onto colloidal particles or templates followed by dissolution of these templates. The choice of the template is determined by various physico-chemical conditions: solvent needed for dissolution, porosity, aggregation tendency, as well as release of materials from capsules. Historically, the first templates were based on melamine formaldehyde, later evolving towards more elaborate materials such as silica and calcium carbonate. Their advantages and disadvantages are discussed here in comparison to non-particulate templates such as red blood cells. Further steps in this area include development of anisotropic particles, which themselves can serve as delivery carriers. We provide insights into application of particles as drug delivery carriers in comparison to microcapsules templated on them. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Flexible hemispheric microarrays of highly pressure-sensitive sensors based on breath figure method.

    Science.gov (United States)

    Wang, Zhihui; Zhang, Ling; Liu, Jin; Jiang, Hao; Li, Chunzhong

    2018-05-30

    Recently, flexible pressure sensors featuring high sensitivity, a broad sensing range and real-time detection have aroused great attention owing to their crucial role in the development of artificial-intelligence devices and healthcare systems. Herein, highly sensitive pressure sensors based on hemisphere-microarray flexible substrates are fabricated by inversely templating honeycomb structures derived from a facile and static breath figure process. The interlocked and subtle microstructures greatly improve the sensing characteristics and compressibility of the as-prepared pressure sensor, endowing it with a sensitivity as high as 196 kPa⁻¹ and a wide pressure sensing range (0-100 kPa), as well as other superior performance, including a low detection limit of 0.5 Pa, fast response time and good durability (over 10 000 cycles). Based on this outstanding sensing performance, the potential capability of our pressure sensor in capturing physiological information and recognizing speech signals has been demonstrated, indicating promising application in wearable and intelligent electronics.

  4. Division Multiplexing of 10 Gbit/s Ethernet Signals Synchronized by All-Optical Signal Processing Based on a Time-Lens

    DEFF Research Database (Denmark)

    Areal, Janaina Laguardia

    This Thesis presents 3 years of work on an optical circuit that performs pulse compression as well as frame synchronization and retiming. Our design aims at directly multiplexing several 10G Ethernet data packets (frames) to a high-speed OTDM link. This scheme is optically transparent and does not require clock recovery, resulting in a potentially very efficient solution. The scheme uses a time-lens, implemented through a sinusoidally driven optical phase modulation, combined with a linear dispersion element. As time-lenses are also used for pulse compression, we design the circuit also to perform ... coupler, completing the OTDM signal generation. We demonstrate the effectiveness of the design by laboratory experiments and simulations with VPI and MatLab.

  5. Template-mediated, Hierarchical Engineering of Ordered Mesoporous Films and Powders

    Science.gov (United States)

    Tian, Zheng

    Hierarchical control over pore size, pore topology, and meso/microstructure as well as material morphology (e.g., powders, monoliths, thin films) is crucial for meeting diverse materials needs among applications spanning next-generation catalysts, sensors, batteries, sorbents, etc. The overarching goal of this thesis is to establish fundamental mechanistic insight enabling new strategies for realizing such hierarchical textural control for carbon materials that is not currently achievable with sacrificial pore formation by 'one-pot' surfactant-based 'soft'-templating or multi-step inorganic 'hard'-templating. While 'hard'-templating is often tacitly discounted based upon its perceived complexity, it offers potential for overcoming key 'soft'-templating challenges, including bolstering pore stability, accommodating a more versatile palette of replica precursors, realizing ordered/spanning porosity in the case of porous thin films, simplifying formation of bi-continuous pore topologies, and inducing microstructure control within porous replica materials. In this thesis, we establish strategies for hard-templating of hierarchically porous and structured carbon powders and tunable thin films by both multi-step hard-templating and a new 'one-pot' template-replica precursor co-assembly process. We first develop a nominal hard-templating technique to successfully prepare three-dimensionally ordered mesoporous (3DOm) and 3DOm-supported microporous carbon thin films by exploiting our ability to synthesize and assemble size-tunable silica nanoparticles into scalable, colloidal crystalline thin-film templates of tunable mono- to multi-layer thickness. This robust thin-film template accommodates liquid- and/or vapor-phase infiltration, polymerization, and pyrolysis of various carbon sources without pore contraction and/or collapse upon template sacrifice. The result is robust, flexible 3DOm or 3DOm-supported ultra-thin microporous films that can be transferred by stamp

  6. Compressed Domain Packet Loss Concealment of Sinusoidally Coded Speech

    DEFF Research Database (Denmark)

    Rødbro, Christoffer A.; Christensen, Mads Græsbøll; Andersen, Søren Vang

    2003-01-01

    We consider the problem of packet loss concealment for voice over IP (VoIP). The speech signal is compressed at the transmitter using a sinusoidal coding scheme working at 8 kbit/s. At the receiver, packet loss concealment is carried out directly on the quantized sinusoidal parameters, based on time-scaling of the packets surrounding the missing ones. Subjective listening tests show promising results, indicating the potential of sinusoidal speech coding for VoIP.

  7. Template assisted solid state electrochemical growth of silver micro- and nanowires

    International Nuclear Information System (INIS)

    Peppler, Klaus; Janek, Juergen

    2007-01-01

    We report on a template-based solid-state electrochemical method for fabricating silver nanowires with a predefined diameter, depending only on the pore diameter of the template. As templates we used porous silicon with pore diameters in the μm range and porous alumina with pore diameters in the nm range. The template pores were filled with silver sulfide (a mixed silver-cation and electronic conductor) by direct chemical reaction of silver and sulfur. The filled template was then placed between a silver foil as anode (bottom side) and a microelectrode as cathode (top side). An array of small cylindrical transference cells with diameters in the range of either micro- or nanometers was thus obtained. By applying a cathodic voltage to the microelectrode, silver micro- or nanowires were deposited at about 150 °C. The growth rate is controllable by the electric current.

  8. Schwannosis induced medullary compression in VACTERL syndrome.

    LENUS (Irish Health Repository)

    Treacy, A

    2011-10-21

    A 7-year-old boy with a history of VACTERL syndrome was found collapsed in bed. MRI had shown basilar invagination of the skull base and narrowing of the foramen magnum. Angulation, swelling and abnormally high signal at the cervicomedullary junction were felt to be secondary to compression of the medulla. Neuropathologic examination showed bilateral replacement of the medullary tegmentum by an irregularly circumscribed cellular lesion composed of elongated GFAP/S100-positive cells with spindled nuclei and minimal atypia. The pathologic findings were interpreted as intramedullary schwannosis with mass effect. Schwannosis is observed in traumatized spinal cords, where its presence may represent attempted, albeit aberrant, repair by inwardly migrating Schwann cells of peripheral origin. In our view, the compressive effect of the basilar invagination on this boy's medulla was of sufficient magnitude to have caused tumoral medullary schwannosis, with resultant intermittent respiratory compromise leading to reflex anoxic seizures.

  9. Single-photon compressive imaging with some performance benefits over raster scanning

    International Nuclear Information System (INIS)

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Guang-Jie; Zhao, Qing

    2014-01-01

    A single-photon imaging system based on compressed sensing has been developed to image objects under ultra-low illumination. With this system, we have successfully realized imaging at the single-photon level with a single-pixel avalanche photodiode without point-by-point raster scanning. From analysis of the signal-to-noise ratio in the measurement we find that our system has much higher sensitivity than conventional ones based on point-by-point raster scanning, while the measurement time is also reduced. - Highlights: • We design a single photon imaging system with compressed sensing. • A single point avalanche photodiode is used without raster scanning. • The Poisson shot noise in the measurement is analyzed. • The sensitivity of our system is proved to be higher than that of raster scanning

  10. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low-speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach, the leading terms of the flux-difference-splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement that the velocity field be divergence-free, an elliptic equation for a pressure correction enforcing the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure-based formulation to the compressible, density-based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density-based approach showed improved robustness for high-lift flows close to separation.

  11. Geothermally Coupled Well-Based Compressed Air Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, C L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bearden, Mark D [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horner, Jacob A [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Appriou, Delphine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGrail, B Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-12-01

    This project assessed the technical and economic feasibility of implementing geothermally coupled well-based CAES for grid-scale energy storage. Based on an evaluation of design specifications for a range of casing grades common in U.S. oil and gas fields, a 5-MW CAES project could be supported by twenty to twenty-five 5,000-foot, 7-inch wells using lower-grade casing, and as few as eight such wells for higher-end casing grades. Using this information, along with data on geothermal resources, well density, and potential future markets for energy storage systems, The Geysers geothermal field was selected to parameterize a case study to evaluate the potential match between the proven geothermal resource present at The Geysers and the field's existing well infrastructure. Based on calculated wellbore compressed air mass, the study shows that a single average geothermal production well could provide enough geothermal energy to support a 15.4-MW (gross) power generation facility using 34 to 35 geothermal wells repurposed for compressed air storage, resulting in a simplified levelized cost of electricity (sLCOE) estimated at 11.2 ¢/kWh (Table S.1). Accounting for the power loss to the geothermal power project associated with diverting geothermal resources for air heating results in a net 2-MW decrease in generation capacity, increasing the CAES project's sLCOE by 1.8 ¢/kWh.

  12. Geothermally Coupled Well-Based Compressed Air Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Casie L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bearden, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horner, Jacob A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cabe, James E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Appriou, Delphine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGrail, B. Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-12-20

    This project assessed the technical and economic feasibility of implementing geothermally coupled well-based CAES for grid-scale energy storage. Based on an evaluation of design specifications for a range of casing grades common in U.S. oil and gas fields, a 5-MW CAES project could be supported by twenty to twenty-five 5,000-foot, 7-inch wells using lower-grade casing, and as few as eight such wells for higher-end casing grades. Using this information, along with data on geothermal resources, well density, and potential future markets for energy storage systems, The Geysers geothermal field was selected to parameterize a case study to evaluate the potential match between the proven geothermal resource present at The Geysers and the field's existing well infrastructure. Based on calculated wellbore compressed air mass, the study shows that a single average geothermal production well could provide enough geothermal energy to support a 15.4-MW (gross) power generation facility using 34 to 35 geothermal wells repurposed for compressed air storage, resulting in a simplified levelized cost of electricity (sLCOE) estimated at 11.2 ¢/kWh (Table S.1). Accounting for the power loss to the geothermal power project associated with diverting geothermal resources for air heating results in a net 2-MW decrease in generation capacity, increasing the CAES project's sLCOE by 1.8 ¢/kWh.

  13. Chloride transport under compressive load in bacteria-based self-healing concrete

    NARCIS (Netherlands)

    Binti Md Yunus, B.; Schlangen, E.; Jonkers, H.M.

    2015-01-01

    An experiment was carried out in this study to investigate the effect of compressive load on chloride penetration in self-healing concrete containing a bacteria-based healing agent. A bacteria-based healing agent with a particle-size fraction of 2 mm – 4 mm was used in this contribution. ESEM

  14. Image compression using the W-transform

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-12-31

    The authors present the W-transform for multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to a power of two, nor does it call for extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  15. The optical properties of ZnO films grown on porous Si templates

    International Nuclear Information System (INIS)

    Liu, Y L; Liu, Y C; Yang, H; Wang, W B; Ma, J G; Zhang, J Y; Lu, Y M; Shen, D Z; Fan, X W

    2003-01-01

    ZnO films were electrodeposited on porous silicon templates with different porosities. The photoluminescence (PL) spectra of the samples before and after deposition of ZnO were measured to study the effect of template porosity on the luminescence properties of ZnO/porous Si composites. As-prepared porous Si (PS) templates emit strong red light. The red PL peak of porous Si after deposition of ZnO shows an obvious blueshift, and the trend of blueshift increases with an increase in template porosity. A green emission at about 550 nm was also observed when the porosity of template increases, which is ascribed to the deep-level emission band of ZnO. A model-based band diagram of the ZnO/porous Si composite is suggested to interpret the properties of the composite

  16. Machine intelligence and signal processing

    CERN Document Server

    Vatsa, Mayank; Majumdar, Angshul; Kumar, Ajay

    2016-01-01

    This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are a result of a 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered to be separate areas of research. However, in recent times the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design. The contribution of signal processing has been to devise operators for restoration, compression, etc. Applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning - instead of designing operators based on heuristics (for example, wavelets), the trend is to learn these operators (for example, dictionary learning). And thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intel...

  17. A reweighted ℓ1-minimization based compressed sensing for the spectral estimation of heart rate variability using the unevenly sampled data.

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    Full Text Available In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals that are considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessment. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data accordingly in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model-derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from
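
    Lomb's least-squares spectral fit, which the study compares against, is available directly in SciPy; below is a minimal sketch on made-up unevenly sampled data (not the paper's RR records), evaluating the periodogram over the standard HRV bands:

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0, 300, size=250))     # uneven sample times (s)
        y = 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * rng.normal(size=t.size)
        y -= y.mean()                                  # remove the DC component

        freqs_hz = np.linspace(0.01, 0.5, 500)
        pgram = lombscargle(t, y, 2 * np.pi * freqs_hz)  # expects angular frequency

        lf_band = (freqs_hz >= 0.04) & (freqs_hz < 0.15)   # low-frequency HRV band
        hf_band = (freqs_hz >= 0.15) & (freqs_hz < 0.40)   # high-frequency HRV band
        lf_power = np.trapz(pgram[lf_band], freqs_hz[lf_band])
        hf_power = np.trapz(pgram[hf_band], freqs_hz[hf_band])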

  18. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction does not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc.
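
    The Arnold-map scrambling step can be sketched as a pixel permutation (a generic cat map; the paper's exact variant and iteration count are not reproduced here):

        import numpy as np

        def arnold_scramble(img, iterations=1):
            """Scramble a square N x N image with the Arnold cat map
            (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
            n = img.shape[0]
            assert img.shape[0] == img.shape[1], "the cat map needs a square image"
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            out = img
            for _ in range(iterations):
                nxt = np.empty_like(out)
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = nxt
            return out

    The map matrix [[1, 1], [1, 2]] has determinant 1, so the permutation is invertible; descrambling applies the inverse map (x, y) -> ((2x - y) mod N, (y - x) mod N) the same number of times.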

  19. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.
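
    One common formulation of the SDP transform (an illustrative sketch; the paper's lag, gain and mirror settings are not given here) maps each sample to a polar dot whose radius is the normalized amplitude and whose angle is offset by the normalized amplitude at a fixed lag, mirrored over six symmetry arms:

        import numpy as np

        def sdp_points(x, lag=1, zeta=30.0, arms=6):
            """Return (radius, angle) rows for a symmetrized dot pattern plot."""
            x = np.asarray(x, dtype=float)
            xn = (x - x.min()) / (x.max() - x.min())   # normalize to [0, 1]
            r = xn[:-lag]                              # radius from x[i]
            g = zeta * xn[lag:]                        # angular deviation (degrees)
            pts = []
            for m in range(arms):
                phi = 360.0 * m / arms                 # symmetry axis of this arm
                pts.append(np.column_stack([r, np.deg2rad(phi + g)]))
                pts.append(np.column_stack([r, np.deg2rad(phi - g)]))
            return np.vstack(pts)

    Matching then reduces to comparing the dot-pattern image of an incoming signal against the stored templates for the 13 running states, e.g. by correlation.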

  20. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
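
    Steps one to three can be prototyped for a single codec. The sketch below (JPEG only, using Pillow and scikit-image; parameter names and the cubic fit are illustrative choices) sweeps the quality setting, regresses SSIM against it, and inverts the fit for a target SSIM:

        import io
        import numpy as np
        from PIL import Image
        from skimage.metrics import structural_similarity

        def jpeg_ssim_curve(gray_u8, qualities=range(10, 96, 10)):
            """Step 1: compress at several quality settings and record SSIM."""
            ref = Image.fromarray(gray_u8)
            points = []
            for q in qualities:
                buf = io.BytesIO()
                ref.save(buf, format="JPEG", quality=q)
                dec = np.asarray(Image.open(buf).convert("L"))
                points.append((q, structural_similarity(gray_u8, dec)))
            return np.array(points)

        def quality_for_ssim(points, target):
            """Steps 2-3: fit SSIM vs. quality and pick the lowest quality
            setting whose predicted SSIM meets the target."""
            coeffs = np.polyfit(points[:, 0], points[:, 1], deg=3)
            qs = np.arange(int(points[0, 0]), int(points[-1, 0]) + 1)
            ok = qs[np.polyval(coeffs, qs) >= target]
            return int(ok[0]) if ok.size else int(points[-1, 0])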

  1. Pengenalan Angka Pada Sistem Operasi Android Dengan Menggunakan Metode Template Matching

    Directory of Open Access Journals (Sweden)

    Abdi Pandu Kusuma

    2016-07-01

    Full Text Available Abstract (translated from Indonesian): Early childhood is an effective age for developing a child's various potentials. Efforts to develop this potential can be made in various ways, including through play. For children, playing is an appropriate way to learn. Based on this phenomenon, an interactive number-recognition application with educational elements should be built. The application is expected to decide automatically whether what the child has written is correct or incorrect, and also to spark the child's enthusiasm for learning number patterns. The solution adopted so that the application can judge answers as right or wrong is the template matching method. Number recognition using template matching is performed by comparing the input image with a template image. The template matching result is computed from the number of points in the input image that agree with the template image. Templates are provided in a database to give examples of how number patterns are written. The application was tested 40 times with different patterns; the tests show that the application achieves a success rate of 75.75%. Keywords: learning, playing, template matching, and patterns. Abstract: Early childhood is an effective age to develop the potential of the child. These potentials can be developed in various ways, including through play, which is a good way for children to learn. Based on this phenomenon, an interactive number-recognition application with educational elements should be made. The application is expected to decide automatically whether what the child has written is true or false, and also to encourage the child's learning in recognizing number patterns. The appropriate solution enabling the application to give a right-or-wrong answer is the template matching method. The recognition of the numbers by using template matching is done by comparing the
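
    The core comparison can be sketched with OpenCV's normalized cross-correlation; the file names and acceptance threshold below are illustrative assumptions, not values from the paper:

        import cv2

        # The child's drawing and one stored digit template (hypothetical files).
        image = cv2.imread("canvas.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("digit_3.png", cv2.IMREAD_GRAYSCALE)

        # Slide the template over the drawing and score every position.
        scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, best_loc = cv2.minMaxLoc(scores)

        THRESHOLD = 0.75     # illustrative; tuned on labelled drawings in practice
        print("correct" if best >= THRESHOLD else "wrong", best, best_loc)

    Repeating this against every template in the database and taking the highest score yields the recognized digit, mirroring the counting-of-matching-points idea described in the abstract.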

  2. Compression of Human Motion Animation Using the Reduction of Interjoint Correlation

    Directory of Open Access Journals (Sweden)

    Shiyu Li

    2008-01-01

    Full Text Available We propose two compression methods for human motion in 3D space, based on forward and inverse kinematics. In a motion chain, the movement of each joint is represented by a series of vector signals in 3D space. In general, specific types of joints, such as end effectors, often require higher precision than other joints in, for example, CG animation and robot manipulation. The first method, which combines the wavelet transform and forward kinematics, enables users to reconstruct the end effectors more precisely; moreover, progressive decoding can be realized. The distortion of a parent joint arising from quantization affects its child joint in turn and accumulates toward the end effector. To address this problem and to control the movement of the whole body, we further propose a prediction method based on inverse kinematics. This method achieves efficient compression with a higher compression ratio and higher quality of the motion data. By comparing with some conventional methods, we demonstrate the advantage of ours on typical motions.

  3. Compression of Human Motion Animation Using the Reduction of Interjoint Correlation

    Directory of Open Access Journals (Sweden)

    Li Shiyu

    2008-01-01

    Full Text Available Abstract We propose two compression methods for human motion in 3D space, based on forward and inverse kinematics. In a motion chain, the movement of each joint is represented by a series of vector signals in 3D space. In general, specific types of joints, such as end effectors, often require higher precision than other joints in, for example, CG animation and robot manipulation. The first method, which combines the wavelet transform and forward kinematics, enables users to reconstruct the end effectors more precisely; moreover, progressive decoding can be realized. The distortion of a parent joint arising from quantization affects its child joint in turn and accumulates toward the end effector. To address this problem and to control the movement of the whole body, we further propose a prediction method based on inverse kinematics. This method achieves efficient compression with a higher compression ratio and higher quality of the motion data. By comparing with some conventional methods, we demonstrate the advantage of ours on typical motions.

  4. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    Science.gov (United States)

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 μA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughput in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
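
    The near-lossless mode's guarantee -- a bounded per-sample error in exchange for shorter codes -- is the classic DPCM-with-quantized-residual construction; the sketch below is that textbook scheme, not the authors' algorithm:

        import numpy as np

        def near_lossless_encode(x, delta):
            """DPCM with residual quantization of step 2*delta + 1, which
            guarantees |reconstructed - original| <= delta (delta=0 is lossless)."""
            step = 2 * delta + 1
            q = np.empty(len(x), dtype=np.int64)
            prev = 0                            # decoder-synchronized prediction
            for i, s in enumerate(np.asarray(x, dtype=np.int64)):
                resid = int(s) - prev
                q[i] = np.sign(resid) * ((abs(resid) + delta) // step)
                prev += int(q[i]) * step        # mimic the decoder exactly
            return q

        def near_lossless_decode(q, delta):
            return np.cumsum(np.asarray(q, dtype=np.int64) * (2 * delta + 1))

    With delta = 0 the quantizer is transparent and the scheme reduces to lossless predictive coding, mirroring the platform's two operating modes.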

  5. Nacre-like calcium carbonate controlled by ionic liquid/graphene oxide composite template.

    Science.gov (United States)

    Yao, Chengli; Xie, Anjian; Shen, Yuhua; Zhu, Jinmiao; Li, Hongying

    2015-06-01

    Nacre-like calcium carbonate nanostructures have been grown under the mediation of an ionic liquid (IL)-graphene oxide (GO) composite template. The resultant crystals were characterized by scanning electron microscopy (SEM), Fourier transform infrared (FT-IR) spectroscopy, and X-ray powder diffractometry (XRD). The results showed that either 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM]BF4) or graphene oxide alone can act as a soft template for calcium carbonate formation with unusual morphologies. Based on the time-dependent morphology changes of the calcium carbonate particles, it is concluded that nacre-like calcium carbonate nanostructures can be formed gradually using the [BMIM]BF4/GO composite template. During the process of calcium carbonate formation, [BMIM]BF4 acted not only as a solvent but also as a morphology template for the fabrication of calcium carbonate materials with nacre-like morphology. Based on these observations, possible mechanisms are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Formation of novel morphologies of aragonite induced by inorganic template

    International Nuclear Information System (INIS)

    Wang, Xiaoming; Nan, Zhaodong

    2011-01-01

    Graphical abstract: Glass slides were used as a template to induce the formation and assembly of aragonite. Different morphologies, such as hemispheres, twinborn hemispheres and flower-shaped particles, were produced under the direction of the glass slides. Highlights: → Glass slides were used as a template to induce the formation and assembly of aragonite. → Hemispheres, twinborn hemispheres and flower-shaped particles were produced under the direction of the glass slides. → Flat planes always appeared in the as-synthesized samples. → Thermodynamic theory was applied to explain the production of the aragonite. -- Abstract: A glass slide was used as a template to induce the formation and assembly of aragonite, and thermodynamic theory was applied to explain its production. The transformation of three-dimensional nucleation into template-based two-dimensional surface nucleation caused the production of aragonite. Hemispheres, twinborn hemispheres and flower-shaped particles were produced under the direction of the glass slides. Flat planes always appeared in the as-synthesized samples because the nucleation and growth of these samples occurred on the surfaces of the glass slides. A formation mechanism for the as-formed samples is proposed. Compared with organic templates, the present study provides a facile method of applying inorganic templates to prepare functional materials.

  7. A Novel Surgical Template Design in Staged Dental Implant Rehabilitations

    Directory of Open Access Journals (Sweden)

    Michael Patras

    2012-05-01

    Full Text Available Background: The philosophy of a gradual transition to an implant-retained prosthesis in cases of full-mouth or extensive rehabilitation usually involves a staged treatment concept. In this therapeutic approach, the placement of implants may sometimes be divided into phases. During a subsequent surgical phase of treatment, the pre-existing implants can serve as anchors for the surgical template. Such modified surgical templates help in precisely transferring restorative information into the surgical field and guide optimal three-dimensional implant positioning. Methods: This article highlights the rationale of implant-retained surgical templates and illustrates them through the presentation of two clinical cases. The templates are duplicates of the provisional restorations and are secured to the existing implants through the utilization of implant mounts. Results: This template design in such staged procedures provided stability in the surgical field and enhanced the accuracy of implant positioning based upon the planned restoration, thus ensuring predictable treatment outcomes. Conclusions: Successful rehabilitation lies in the correct sequence of surgical and prosthetic procedures. Whenever a staged approach to implant placement is planned, the clinician can effectively use the initially placed implants as anchors for the surgical template during the second phase of implant surgery.

  8. Edge-based compression of cartoon-like images with homogeneous diffusion

    DEFF Research Database (Denmark)

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim

    2011-01-01

    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...
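
    On the decoding side of such a codec, the unstored grey values are reconstructed by homogeneous diffusion from the kept pixels -- in effect solving the Laplace equation with the edge data as Dirichlet boundary. A minimal Jacobi-iteration sketch (periodic borders for brevity, unlike a real codec):

        import numpy as np

        def diffusion_reconstruct(values, mask, iters=2000):
            """Pixels where mask is True are kept (the codec's stored data);
            all others converge to the average of their four neighbours."""
            u = np.where(mask, values, values[mask].mean()).astype(float)
            for _ in range(iters):
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(mask, values, avg)   # re-impose the stored pixels
            return u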

  9. Facial contour deformity correction with microvascular flaps based on the 3-dimentional template and facial moulage

    Directory of Open Access Journals (Sweden)

    Dinesh Kadam

    2013-01-01

    Full Text Available Introduction: Facial contour deformities present with varied aetiologies and degrees of severity. Accurate assessment, selecting a suitable tissue and sculpting it to fill the defect are challenging and largely subjective tasks. Objective assessment with imaging and software is not always feasible, and preparing a template is complicated. A three-dimensional (3D) wax template pre-fabricated over the facial moulage aids surgeons in fulfilling these tasks. Severe deformities demand a stable vascular tissue for an acceptable outcome. Materials and Methods: We present a review of eight consecutive patients who underwent augmentation of facial contour defects with free flaps between June 2005 and January 2011. A de-epithelialised free anterolateral thigh (ALT) flap was used in three patients, a radial artery forearm flap and a fibula osteocutaneous flap in two each, and a groin flap in one patient. A 3D wax template was fabricated by augmenting the deformity on the facial moulage. It was utilised to select the flap, to determine the exact dimensions and to sculpt the flap intraoperatively. Ancillary procedures such as genioplasty, rhinoplasty and coloboma correction were performed. Results: The average age at presentation was 25 years, the average disease-free interval was 5.5 years, and all flaps survived. The mean follow-up period was 21.75 months. The correction was aesthetically acceptable and was maintained without any recurrence or atrophy. Conclusion: The 3D wax template on the facial moulage is a simple, inexpensive and precise objective tool. It provides an accurate guide for the planning and execution of flap reconstruction. The selection of the flap is based on the type and extent of the defect. The superiority of vascularised free tissue is well known, and the ALT flap offers a versatile option for correcting varying degrees of deformity. Ancillary procedures improve the overall aesthetic outcome, and minor flap touch-up procedures are generally required.

  10. Templated Dry Printing of Conductive Metal Nanoparticles

    Science.gov (United States)

    Rolfe, David Alexander

    Printed electronics can lower the cost and increase the ubiquity of electrical components such as batteries, sensors, and telemetry systems. Unfortunately, the advance of printed electronics has been held back by the limited minimum resolution, aspect ratio, and feature fidelity of present printing techniques such as gravure, screen printing and inkjet printing. Templated dry printing offers a solution to these problems by patterning nanoparticle inks in templates before drying. This dissertation presents advancements in two varieties of templated dry nanoprinting. The first, advective micromolding in vapor-permeable templates (AMPT), is a microfluidic approach that uses evaporation-driven mold filling to create submicron features with a 1:1 aspect ratio. We discuss submicron surface acoustic wave (SAW) resonators made through this process, and the refinements to the template manufacturing process necessary to make these devices. We also present modeling techniques that can be applied to future AMPT templates. We conclude with a modified templated dry printing process that improves throughput and isolated-feature patterning by transferring dry-templated features with laser ablation. This method utilizes surface-energy-defined templates to pattern features via doctor blade coating. Patterned and dried features can be transferred to a polymer substrate with an Nd:YAG MOPA fiber laser, and printed features can be smaller than the laser beam width.

  11. Template protection and its implementation in 3D face recognition systems

    Science.gov (United States)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is the conversion of biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results under the assumption of uniformly distributed templates and those under the assumption of Gaussian-distributed templates.
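
    One elementary way to obtain the near-uniformly distributed bits the paper calls for is to threshold each real-valued template component at its population median; the sketch below (an illustrative quantizer, not the paper's) pairs this with Hamming-distance verification:

        import numpy as np

        def binarize_template(features, population_medians):
            """Each output bit is 0 or 1 with probability ~1/2 over the
            population, giving near-uniform, compact binary vectors."""
            return (np.asarray(features) > np.asarray(population_medians)).astype(np.uint8)

        def hamming(a, b):
            """Disagreeing bits between enrollment and probe bit strings."""
            return int(np.count_nonzero(a != b))

        rng = np.random.default_rng(3)
        medians = np.zeros(256)                       # estimated at enrollment
        enrolled = binarize_template(rng.normal(size=256), medians)
        probe = binarize_template(rng.normal(size=256), medians)
        accept = hamming(enrolled, probe) <= 32       # radius set by the ECC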

  12. A 172 $\\mu$W Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end with a digital back end that performs feature extraction to estimate the average HR over a 4 s interval directly from the compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed through nonuniform subsampling of the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of the information relevant for accurate HR estimation.

  13. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  14. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    Science.gov (United States)

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  15. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
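
    The AMBTC step that the quadtree variant builds on is compact enough to state directly: each block is reduced to a bitmap plus two reconstruction levels (a generic sketch, without the paper's Sobel-guided quadtree or adaptive bit planes):

        import numpy as np

        def ambtc_block(block):
            """Absolute moment BTC: a bitmap plus the means of the pixels
            above and not above the block mean."""
            mean = block.mean()
            bitmap = block >= mean
            hi = block[bitmap].mean() if bitmap.any() else mean
            lo = block[~bitmap].mean() if (~bitmap).any() else mean
            return bitmap, lo, hi

        def ambtc_decode(bitmap, lo, hi):
            return np.where(bitmap, hi, lo)

    The squared error between a block and ambtc_decode(*ambtc_block(block)) is the MSE loss that the adaptive bit-plane step then works to reduce for the smallest blocks.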

  16. Identification of Coupled Map Lattice Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Dong Xie

    2016-01-01

    Full Text Available A novel approach for the parameter identification of a coupled map lattice (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated topics and identify the weighted parameters using the recovery algorithms of compressed sensing. Specifically, we first transform the parameter identification problem of the CML into a sparse recovery problem for an underdetermined linear system. Compressed sensing provides a feasible method for solving such a system if the sensing matrix satisfies suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of the CML and also prove, from a theoretical point of view, that it satisfies the RIP. If the weight vector of each element in the CML system is sparse, our proposed approach can recover all the weighted parameters using only about M samples, far fewer than the number of lattice elements N. Another significant advantage is that the approach remains effective even if the observed data are contaminated with certain types of noise. In the simulations, we mainly show the effects of the coupling parameter and noise on the recovery rate.
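
    As a toy illustration of the recovery step (not the paper's exact solver), the snippet below recovers a k-sparse weight vector w from M << N linear observations y = Aw using orthogonal matching pursuit, one standard compressed-sensing algorithm. The sensing matrix here is a random Gaussian stand-in rather than one built from CML observations, and all names are assumptions.

```python
# Toy sparse recovery: y = A w with M << N, w assumed k-sparse.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # re-project on support
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
N, M, k = 100, 25, 3                                 # lattice size, samples, sparsity
A = rng.standard_normal((M, N)) / np.sqrt(M)         # random Gaussian sensing matrix
w = np.zeros(N)
w[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
w_hat = omp(A, A @ w, k)
print(np.allclose(w, w_hat, atol=1e-8))              # exact recovery on this toy
```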

  17. Development of Total Knee Replacement Digital Templating Software

    Science.gov (United States)

    Yusof, Siti Fairuz; Sulaiman, Riza; Thian Seng, Lee; Mohd. Kassim, Abdul Yazid; Abdullah, Suhail; Yusof, Shahril; Omar, Masbah; Abdul Hamid, Hamzaini

    In this study, taking full advantage of digital X-ray and computer technology, we have developed a semi-automated procedure for templating knee implants using a digital templating method. With this approach, a software system called OrthoKnee™ has been designed and developed. The system is to be utilized for a study in the Department of Orthopaedic and Traumatology of the medical faculty, UKM (FPUKM). The OrthoKnee™ templating process employs a technique similar to that used by many surgeons, placing acetate templates over X-ray films. The template technique makes it easy to template implants from various manufacturers through a comprehensive database of templates. The templating functionality includes knee templates and manufacturer templates (Smith & Nephew; Zimmer). From a patient X-ray image, OrthoKnee™ templates help to quickly and easily read off the approximate template size needed. The visual templating features then allow multiple template sizes to be reviewed quickly against the X-ray, giving a nearly precise view of the implant size required. The system can assist by templating on one patient image and will generate reports that can accompany patient notes. The software system was implemented in Visual Basic 6.0 Pro using object-oriented techniques to manage the graphics and objects. Approaches for image scaling are discussed. Several measurements used in the orthopedic diagnosis process have been studied and added to the software as measurement tool features using mathematical theorems and equations. The study compared the results of the semi-automated (digital templating) method to the conventional method to demonstrate the accuracy of the system.
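
    The image-scaling step that such templating systems rely on can be illustrated with a short hypothetical sketch: a calibration marker of known physical size fixes the millimetre-per-pixel ratio, after which on-screen distances convert to anatomical ones. The 25 mm marker size and all names below are illustrative assumptions, not OrthoKnee internals.

```python
# Hypothetical scaling sketch for digital templating: calibrate the
# mm-per-pixel ratio from a marker of known size, then convert pixel
# measurements on the radiograph to millimetres.
import math

def mm_per_pixel(marker_px: float, marker_mm: float = 25.0) -> float:
    """Scale factor derived from a calibration marker measured in pixels."""
    return marker_mm / marker_px

def distance_mm(p1, p2, scale):
    """Convert a pixel-space distance to millimetres."""
    return math.dist(p1, p2) * scale

scale = mm_per_pixel(marker_px=180.0)                 # marker spans 180 px
width = distance_mm((120, 340), (560, 352), scale)    # e.g. a femoral width
print(f"measured width: {width:.1f} mm")
```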

  18. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the compression-based texture merging (CTM) algorithm to alleviate the overmerging problem by making use of rate-distortion information. Experimental results show that MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).
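
    As a loose stand-in for the segmentation stage (k-means replaces the modified CTM merging, and no rate-distortion guidance is modelled), the toy below clusters per-block feature vectors of the kind that might be gathered from code-stream statistics; the feature values here are random placeholders.

```python
# Toy compressed-domain segmentation: one feature vector per code block
# (random placeholders for code-stream statistics), clustered without
# ever decompressing the image. k-means stands in for CTM-style merging.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
blocks_y, blocks_x, n_feat = 32, 32, 8
features = rng.standard_normal((blocks_y * blocks_x, n_feat))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(blocks_y, blocks_x)   # coarse block-level label map
```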

  19. Mechanical properties of tannin-based rigid foams undergoing compression

    Energy Technology Data Exchange (ETDEWEB)

    Celzard, A., E-mail: Alain.Celzard@enstib.uhp-nancy.fr [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Université - UPV-Metz, Département Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Épinal cedex 9 (France); Zhao, W. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Université - UPV-Metz, Département Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Épinal cedex 9 (France); Pizzi, A. [ENSTIB-LERMAB, Nancy-Université, 27 rue du Merle Blanc, BP 1041, 88051 Épinal cedex 9 (France); Fierro, V. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Université - UPV-Metz, Département Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Épinal cedex 9 (France)

    2010-06-25

    The mechanical properties of a new class of extremely lightweight tannin-based materials, namely organic foams and their carbonaceous counterparts, are detailed. Scaling laws are shown to describe the observed behaviour correctly, and information about the mechanical characteristics of the elementary forces acting within these solids is derived. It is suggested that the organic materials exhibit a rather bending-dominated behaviour and are partly plastic. In contrast, the carbon foams obtained by pyrolysis of the former exhibit a fracture-dominated behaviour and are purely brittle. These conclusions are supported by the differences in the exponent describing the change of Young's modulus as a function of relative density, while the exponent describing compressive strength is unchanged. Features of the densification strain also support these conclusions. Carbon foams of very low density can absorb high energy when compressed, making them valuable materials for crash protection.
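
    The scaling-law analysis referred to can be illustrated with a small worked example: Gibson-Ashby-type laws of the form E = C (rho/rho_s)^n are linear in log-log coordinates, so the exponent n follows from a least-squares fit. The data below are synthetic (generated with n = 2, the value typical of bending-dominated foams) and only demonstrate the fitting procedure, not the paper's measurements.

```python
# Extracting a scaling exponent from (relative density, modulus) data:
# E = C * rho_rel**n becomes log E = n * log rho_rel + log C.
import numpy as np

rng = np.random.default_rng(2)
rho_rel = np.array([0.02, 0.04, 0.06, 0.08, 0.12])            # relative density
E = 3.1e3 * rho_rel**2 * (1 + 0.05 * rng.standard_normal(5))  # synthetic moduli, MPa

n, log_C = np.polyfit(np.log(rho_rel), np.log(E), 1)          # slope = exponent
print(f"fitted exponent n = {n:.2f}, prefactor C = {np.exp(log_C):.0f} MPa")
```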

  20. Manufacturing ontology through templates

    Directory of Open Access Journals (Sweden)

    Diciuc Vlad

    2017-01-01

    Full Text Available The manufacturing industry contains a high volume of valuable know-how, much of it held by key persons in the company. Passing on this know-how is the basis of manufacturing ontology. Among other methods, such as advanced filtering and algorithm-based decision making, one way of handling manufacturing ontology is via templates. The current paper tackles this approach and highlights its advantages, concluding with some recommendations.