WorldWideScience

Sample records for video compression algorithms

  1. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression with EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.
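The t+2D decomposition described in this abstract can be illustrated with a one-level Haar analysis, temporal first and then spatial on the temporal low band; the Haar filter, array shapes, and function names are illustrative assumptions, not the authors' exact filter bank:

```python
import numpy as np

def haar_split(x, axis):
    """One-level orthonormal Haar analysis along an axis.
    Returns (low, high) subbands; assumes even length along the axis."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis).astype(float)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis).astype(float)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def spatio_temporal_decompose(video):
    """video: (T, H, W) array.  Temporal split first, then a spatial
    split of the temporal low band -- the t+2D structure used by 3D
    scalable coders before entropy coding (EBCOT in the paper)."""
    lo_t, hi_t = haar_split(video, axis=0)   # temporal low/high subbands
    ll, hl = haar_split(lo_t, axis=1)        # vertical split of the low band
    lll, llh = haar_split(ll, axis=2)        # horizontal split
    return {"t_high": hi_t, "LL": lll, "LH": llh, "HL": hl}

video = np.random.rand(8, 16, 16)
bands = spatio_temporal_decompose(video)
print({k: v.shape for k, v in bands.items()})
```

Because the transform is orthonormal, the total energy of the subbands equals that of the input cube, which is a quick sanity check on any implementation.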

  2. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
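The camera-motion idea in this record, reading the P-frame motion vector field and testing it for global coherence, can be sketched as follows; the median-based global estimate and both thresholds are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def detect_camera_pan(mv_field, mag_thresh=1.0, coherence_thresh=0.6):
    """mv_field: (H, W, 2) array of per-macroblock motion vectors (dx, dy)
    taken from an MPEG-2 P-frame.  Estimates global motion as the median
    vector and reports a pan when most blocks agree with it."""
    mvs = mv_field.reshape(-1, 2).astype(float)
    global_mv = np.median(mvs, axis=0)
    if np.hypot(*global_mv) < mag_thresh:
        return None                       # camera (nearly) static
    agree = np.hypot(*(mvs - global_mv).T) < mag_thresh
    return tuple(global_mv) if agree.mean() >= coherence_thresh else None

# Synthetic field: a uniform 3-pixel pan to the right
field = np.tile([3.0, 0.0], (9, 11, 1))
print(detect_camera_pan(field))   # → (3.0, 0.0)
```

A real detector would also discard intra-coded macroblocks, whose motion vectors carry no information.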

  3. An Algorithm of Extracting I-Frame in Compressed Video

    Directory of Open Access Journals (Sweden)

    Zhu Yaling

    2015-01-01

Full Text Available MPEG video data includes three types of frames: I-frames, P-frames and B-frames. The I-frame records the main information of the video data, while the P-frame and B-frame are merely motion-compensated predictions from the I-frame. This paper presents an approach that analyzes the MPEG video stream in the compressed domain and finds the key frames of the stream by extracting the I-frames. Experiments indicate that this method can be applied automatically to compressed MPEG video, laying a foundation for further video processing.
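A minimal sketch of I-frame detection in the compressed domain: in the MPEG-1/2 picture header, the 3-bit picture_coding_type follows the 32-bit picture start code (00 00 01 00) and the 10-bit temporal_reference. This assumes a raw video elementary stream (no PES packetization), so it is a sketch, not a robust parser:

```python
def iter_picture_types(stream):
    """Yield picture_coding_type for each picture header in a raw
    MPEG-1/2 video elementary stream: 1 = I, 2 = P, 3 = B.
    The 3-bit type field occupies bits 5..3 of the byte after the
    start code and the 10-bit temporal_reference."""
    i = 0
    while True:
        i = stream.find(b"\x00\x00\x01\x00", i)
        if i < 0 or i + 5 >= len(stream):
            return
        yield (stream[i + 5] >> 3) & 0x07
        i += 4

# Three hand-built picture headers (payload bytes after the type omitted)
es = (b"\x00\x00\x01\x00\x00\x08"    # temporal_ref 0, type 001 -> I
      b"\x00\x00\x01\x00\x00\x10"    # type 010 -> P
      b"\x00\x00\x01\x00\x00\x18")   # type 011 -> B
print(list(iter_picture_types(es)))  # → [1, 2, 3]
```

Once the I-picture offsets are known, only those access units need to be decoded to obtain the key frames.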

  4. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    Science.gov (United States)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

The lack of available wideband digital links as well as the complexity of implementation of bandwidth-efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of, and trend toward, digital video compression techniques for transmission of high quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.
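The DPCM chain described above (previous-pixel prediction, non-uniform quantization of the residual, then entropy coding) can be sketched as follows; the quantizer levels and the fixed predictor seed are invented for illustration, not NASA's actual tables:

```python
import numpy as np

# Hypothetical non-uniform quantizer: fine steps near zero, coarse in the
# tails, matching the peaked histogram of prediction residuals.
LEVELS = np.array([-48, -24, -10, -3, 0, 3, 10, 24, 48])

def dpcm_encode(line, levels=LEVELS):
    """Intra-field DPCM over one scan line with a previous-pixel
    predictor.  Returns quantizer indices (which a multilevel Huffman
    coder would entropy-code) and the decoder-side reconstruction."""
    idx, recon = [], []
    pred = 128                                   # fixed mid-grey start
    for pix in line:
        e = int(pix) - pred                      # prediction residual
        k = int(np.argmin(np.abs(levels - e)))   # nearest quantizer level
        idx.append(k)
        pred = int(np.clip(pred + levels[k], 0, 255))  # track the decoder
        recon.append(pred)
    return idx, recon

line = [128, 131, 140, 160, 158, 120]
indices, recon = dpcm_encode(line)
print(indices)   # → [4, 5, 6, 7, 2, 1]
```

Note that the encoder predicts from the *reconstructed* value, not the original pixel, so encoder and decoder stay in lockstep.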

  5. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  6. Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform

    Directory of Open Access Journals (Sweden)

    Llopis Rafael Peset

    2006-01-01

Full Text Available Two approaches are presented in this paper to improve the quality of digital images over the sensor resolution using super-resolution techniques: iterative super-resolution (ISR and noniterative super-resolution (NISR algorithms. The results show important improvements in the image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available at the input images. These super-resolution algorithms have been implemented over a codesign video compression platform developed by Philips Research, performing minimal changes on the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources encountered in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to any other video encoder architectures. Finally, a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the final impact of the sampling process on the quality of the super-resolved (SR image, is also presented.

  7. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  8. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant, but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  9. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
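The TCI measurement model underlying both reconstruction methods, T coded sub-frames integrated into one snapshot, can be simulated in a few lines; the mask statistics and frame sizes here are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 8, 8                    # temporal compression ratio T = 8

# High-speed scene: T distinct sub-frames within one exposure
frames = rng.random((T, H, W))

# One binary coded mask per sub-frame (e.g. a shifting aperture pattern)
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)

# TCI measurement: the sensor integrates the masked sub-frames into a
# single compressive snapshot, y = sum_t mask_t * x_t
snapshot = (masks * frames).sum(axis=0)

print(snapshot.shape)   # one (8, 8) frame encodes 8 sub-frames
```

TwIST or a GMM prior then inverts this underdetermined linear model to recover the T sub-frames from the single snapshot.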

  10. Video Coding Technique using MPEG Compression Standards

    Directory of Open Access Journals (Sweden)

    A. J. Falade

    2013-06-01

Full Text Available Digital video compression technologies have become part of life, in the way visual information is created, communicated and consumed. Some application areas of video compression focus on the problem of optimizing storage space and transmission bandwidth (BW). The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression and is used in Moving Picture Expert Group (MPEG) encoding standards. Thus, several video compression algorithms have been developed to reduce the data quantity and provide an acceptable quality standard. In the proposed study, the Matlab Simulink Model (MSM) has been used for video coding/compression. The approach is more modern and reduces image distortion while improving error resilience.
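The 2-D DCT at the heart of the MPEG standards discussed here is separable, so it can be computed as two matrix products; a minimal NumPy version (not the article's Simulink model):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)       # DC row gets the smaller scale factor
    return c

def dct2(block):
    """2-D DCT via separability: transform rows, then columns."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

block = np.full((8, 8), 100.0)       # flat block: all energy lands at DC
coeffs = dct2(block)
print(round(coeffs[0, 0]))           # → 800  (DC = 8 * 100)
```

For a flat 8×8 block of value 100, every AC coefficient is (numerically) zero, which is exactly why the DCT compacts energy so well on smooth image regions.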

  11. TEM Video Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-01

One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since then, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also recently been applied to electron tomography [6] and to reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd and Ag nanoparticles during exposure to high temperatures and other environmental conditions.

  12. Content-based image and video compression

    Science.gov (United States)

    Du, Xun; Li, Honglin; Ahalt, Stanley C.

    2002-08-01

    The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content, and while compression is not the primary focus of MPEG-7, the descriptors defined by MPEG-7 can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that could be accomplished with MPEG-7 using descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or a video in order to ensure that the recipient is able to reconstruct the image or video to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer which minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression is able to achieve more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.

  13. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one artificial sequence containing uncompressible data, all the 4:2:2, 8-bit test video material easily compresses losslessly to a rate below 125 Mbit/s. At this rate, video plus overhead can be contained in a single telecom 4th order PDH channel or a single STM-1 channel. Difficult 4:2:2, 10-bit test material...

  14. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.

  15. Chest compression rate measurement from smartphone video.

    Science.gov (United States)

    Engan, Kjersti; Hinna, Thomas; Ryen, Tom; Birkenes, Tonje S; Myklebust, Helge

    2016-08-11

Out-of-hospital cardiac arrest is a life-threatening situation where the first person performing cardiopulmonary resuscitation (CPR) is most often a bystander without medical training. Some existing smartphone apps can call the emergency number and provide, for example, a global positioning system (GPS) location, like the Hjelp 113-GPS app by the Norwegian air ambulance. We propose to extend the functionality of such apps by using the built-in camera in a smartphone to capture video of the CPR performed, primarily to estimate the duration and rate of the chest compressions executed, if any. All calculations are done in real time, and both the caller and the dispatcher receive compression rate feedback when it is detected. The proposed algorithm is based on finding a dynamic region of interest in the video frames and thereafter evaluating the power spectral density by computing the fast Fourier transform over sliding windows. The power of the dominating frequencies is compared to the power of the frequency area of interest. The system was tested on different persons, male and female, in different scenarios addressing target compression rates, background disturbances, compression with mouth-to-mouth ventilation, various background illuminations and phone placements. All tests were done on a recording Laerdal manikin, providing true compression rates for comparison. Overall, the algorithm is promising: it manages a number of disturbances and light situations. For target rates of 110 cpm, as recommended during CPR, the mean error in compression rate (standard deviation over tests in parentheses) is 3.6 (0.8) for short-haired bystanders, and 8.7 (6.0) including medium- and long-haired bystanders.
The presented method shows that it is feasible to detect the rate of chest compressions performed by a bystander by placing the smartphone close to the patient and using the built-in camera combined with a video processing algorithm run in real time on the device.
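The spectral rate-estimation step described in this abstract can be sketched as follows, assuming a per-frame region-of-interest intensity signal; the band limits are simplified choices, and a single window stands in for the paper's sliding windows:

```python
import numpy as np

def compression_rate_cpm(signal, fs, band=(1.0, 4.0)):
    """Estimate chest-compression rate (compressions/min) from a 1-D
    motion signal, e.g. mean intensity of the region of interest per
    video frame.  Picks the dominant frequency in the 60-240 cpm band
    of the power spectral density."""
    sig = np.asarray(signal, float)
    sig = sig - sig.mean()                        # drop the DC component
    psd = np.abs(np.fft.rfft(sig)) ** 2           # power spectral density
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f_peak = freqs[in_band][np.argmax(psd[in_band])]
    return f_peak * 60.0                          # Hz -> compressions/min

fs = 30.0                                         # 30 fps smartphone camera
t = np.arange(0, 6, 1 / fs)                       # 6 s clip, 180 frames
roi = np.sin(2 * np.pi * (110 / 60) * t)          # simulated 110 cpm motion
print(round(compression_rate_cpm(roi, fs)))       # → 110
```

Restricting the peak search to the physiologically plausible band is what rejects background flicker and other out-of-band disturbances.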

  16. Compressed Video Segmentation

    National Research Council Canada - National Science Library

    Kobla, Vikrant; Doermann, David S; Rosenfeld, Azriel

    1996-01-01

    ... changes in content and camera motion. The analysis is performed in the compressed domain using available macroblock and motion vector information, and if necessary, discrete cosine transform (DCT) information...

  17. Image and video compression fundamentals, techniques, and applications

    CERN Document Server

    Joshi, Madhuri A; Dandawate, Yogesh H; Joshi, Kalyani R; Metkar, Shilpa P

    2014-01-01

Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss or with a small loss of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data. Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles

  18. MPEG-4 video compression optimization research

    Science.gov (United States)

    Wei, Xianmin

    2011-10-01

In order to compress large amounts of video data effectively and transfer them smoothly over limited network bandwidth, this article uses MPEG-4 compression technology to compress the video stream. For network transmission, the transmission technology is fully analyzed and optimized according to the characteristics of the video stream, and, combined with the current network bandwidth status and protocols, a network model for transferring and playing back video streams effectively is established. Through the combination of these two areas, the compression and storage of video files and the network transmission efficiency are significantly improved, increasing video processing power.

  19. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  20. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  1. Video special effects editing in MPEG-2 compressed video

    OpenAIRE

    Fernando, WAC; Canagarajah, CN; Bull, David

    2000-01-01

    With the increase of digital technology in video production, several types of complex video special effects editing have begun to appear in video clips. In this paper we consider fade-out and fade-in special effects editing in MPEG-2 compressed video without full frame decompression and motion estimation. We estimated the DCT coefficients and use these coefficients together with the existing motion vectors to produce these special effects editing in compressed domain. Results show that both o...

  2. Eye-Movement Tracking Using Compressed Video Images

    Science.gov (United States)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of realtime performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.

  3. Error Resilient Video Compression Using Behavior Models

    NARCIS (Netherlands)

    Taal, J.R.; Chen, Z.; He, Y.; Lagendijk, R.I.L.

    2004-01-01

    Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression

  4. Video Coding Technique using MPEG Compression Standards ...

    African Journals Online (AJOL)

    Digital video compression technologies have become part of life, in the way visual information is created, communicated and consumed. Some application areas of video compression focused on the problem of optimizing storage space and transmission bandwidth (BW). The two dimensional discrete cosine transform (2-D ...

  5. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already approaching the compression efficiency limit of H.264/AVC.

  6. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics to evaluate video quality. The combination of the perceptual and compressive sensing approaches is outlined from recent investigations. The performance and complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and their application to the appropriate coding schemes is reviewed.

  7. Defocus cue and saliency preserving video compression

    Science.gov (United States)

    Khanna, Meera Thapar; Chaudhury, Santanu; Lall, Brejesh

    2016-11-01

Monocular depth cues present in images and videos aid depth perception in two-dimensional content. Our objective is to preserve the defocus depth cue present in videos, along with the salient regions, during compression. A method is provided for opportunistic bit allocation during video compression using visual saliency information comprising both image features, such as color and contrast, and the defocus-based depth cue. The method is divided into two steps: saliency computation followed by compression. A nonlinear method is used to combine pure and defocus saliency maps to form the final saliency map. Quantization values are then assigned on the basis of these saliency values over a frame. The experimental results show that the proposed scheme yields good results over standard H.264 compression as well as over pure and defocus saliency methods.
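The saliency-to-quantizer mapping described above can be sketched as follows; the max-style combination rule, the QP range, and the weight `alpha` are hypothetical placeholders, not the paper's tuned values:

```python
import numpy as np

def qp_map(pure_sal, defocus_sal, qp_min=22, qp_max=38, alpha=0.7):
    """Assign a quantization parameter per macroblock from two saliency
    maps with values in [0, 1].  A nonlinear (max-like) merge combines
    the pure and defocus maps; salient blocks get a low QP (fine
    quantization) and background blocks a high QP."""
    combined = np.maximum(pure_sal, alpha * defocus_sal)
    return np.round(qp_max - (qp_max - qp_min) * combined).astype(int)

pure = np.array([[0.9, 0.1],
                 [0.2, 0.0]])
defocus = np.array([[0.8, 0.0],
                    [0.9, 0.1]])
print(qp_map(pure, defocus))   # top-left block gets the finest quantization
```

Bits saved on the coarsely quantized background are what the encoder can reinvest in the in-focus, salient regions.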

  8. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

Full Text Available Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video, with compression and channel errors, is optimized.

  9. Impact of Compression on the Video Quality

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2012-01-01

Full Text Available This article deals with the impact of compression on video quality. In the first part, a short characterization of the most-used MPEG compression standards is given. In the second part, the Group of Pictures (GOP) parameter, with its particular I, P and B frames, is explained. The third part focuses on the objective metrics which were used for evaluating video quality. In the fourth part, the measurements and the experimental results are described.

  10. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, which is used ... (Park, 1989). MPEG-1 systems and MPEG-2 video have been developed collaboratively with the International Telecommunications Union (ITU-T). The DVB selected MPEG-2 and added specifications ...

  11. 3D Video Compression and Transmission

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper we provide a brief introduction to 3D and multi-view video technologies - like three-dimensional television and free-viewpoint video - focusing on the aspects related to data compression and transmission. Geometric information represented by depth maps is introduced as well...

  12. Data compression for full motion video transmission

    Science.gov (United States)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally, several approaches which could satisfy some of the compression requirements are presented, and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  14. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
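
    As a sketch of the baseline surveyed here, exhaustive (Full Search) block matching can be written in a few lines. The frame contents, block size, and search range below are hypothetical illustrative choices, not values from the paper.

```python
# Minimal Full Search block matching: for each block of the current frame,
# scan every candidate offset in the reference frame and keep the offset
# with the smallest sum of absolute differences (SAD).
import numpy as np

def full_search(ref, cur, block_size=8, search_range=4):
    """Return a motion vector (dy, dx) for each block of `cur`, found in `ref`."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block_size + 1, block_size):
        for bx in range(0, w - block_size + 1, block_size):
            block = cur[by:by + block_size, bx:bx + block_size]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block_size, x:x + block_size]
                    sad = np.abs(block.astype(int) - cand.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

# A bright square shifted by (2, 3) between frames should be recovered exactly.
ref = np.zeros((16, 16), dtype=np.uint8)
ref[2:10, 3:11] = 255          # square in the reference frame
cur = np.zeros((16, 16), dtype=np.uint8)
cur[0:8, 0:8] = 255            # same square, moved to the block origin
mv = full_search(ref, cur)
print(mv[(0, 0)])              # -> (2, 3)
```

The fast algorithms mentioned above (Three Step Search, Hierarchical Search, and so on) exist precisely to avoid this exhaustive inner loop, which costs O(search_range^2) SAD evaluations per block.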

  15. Moving traffic object retrieval in H.264/MPEG compressed video

    Science.gov (United States)

    Shi, Xu-li; Xiao, Guang; Wang, Shuo-zhong; Zhang, Zhao-yang; An, Ping

    2006-05-01

    Moving object retrieval in the compressed domain plays an important role in many real-time applications, e.g., vehicle detection and classification. A number of retrieval techniques that operate in the compressed domain have been reported in the literature. H.264/AVC is the up-to-date video-coding standard that is likely to lead to the proliferation of retrieval techniques in the compressed domain. Up to now, few works on H.264/AVC compressed video have been reported. Compared with the MPEG standards, H.264/AVC employs several new coding block types and a different entropy coding method, which make moving object retrieval in H.264/AVC compressed video a new and challenging task. In this paper, an approach to extract and retrieve moving traffic objects in H.264/AVC compressed video is proposed. Our algorithm first interpolates the sparse motion vectors of P-frames, which are composed of 4*4, 4*8, and 8*4 blocks, among others. After forward-projecting each P-frame vector to the immediately adjacent I-frame and calculating the DCT coefficients of the I-frame using spatial intra-prediction information, the method extracts moving VOPs (video object planes) using an iterative 4*4 block classification process. In vehicle detection applications, a VOP segmented at 4*4 block-level accuracy is insufficient. Once we locate the target VOP, its actual edges can be extracted by applying Canny edge detection only on the moving VOP at 4*4 block accuracy. The VOP at pixel accuracy is then achieved by decompressing the DCT blocks of the VOPs, and an edge-tracking algorithm is applied to find the missing edge pixels. After the segmentation process, a retrieval algorithm based on CSS (Curvature Scale Space) is used to search for the vehicle shape of interest in the H.264/AVC compressed video sequence. Experiments show that our algorithm can extract and retrieve moving vehicles efficiently and robustly.

  16. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, also known as MPEG-4 Part 10 or Advanced Video Coding (AVC), allows new flexibility in the use of video on the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters. The user can thereby trade off video bandwidth (and video quality) along four dimensions of quality, on the fly, without stopping video transmission. The four dimensions are: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP), which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264 that will allow RAVC to limit the rate at any point in the communication chain by throwing away preselected packets.

  17. Compressive Video Acquisition, Fusion and Processing

    Science.gov (United States)

    2010-12-14

    that we can explore in detail exploits the fact that even though each φm is testing a different 2D image slice, the image slices are often related ... space-time cube. We related temporal bandwidth to the spatial resolution of the camera and the speed of objects in the scene. We applied our findings to ... performed directly on the compressive measurements without requiring a potentially expensive video reconstruction. Accomplishments: In our work exploring

  18. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results support the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  19. VORTEX: video retrieval and tracking from compressed multimedia databases--template matching from MPEG-2 video compression standard

    Science.gov (United States)

    Schonfeld, Dan; Lelescu, Dan

    1998-10-01

    In this paper, a novel visual search engine for video retrieval and tracking from compressed multimedia databases is proposed. Our approach exploits the structure of video compression standards in order to perform object matching directly on the compressed video data. This is achieved by utilizing motion compensation--a critical prediction filter embedded in video compression standards--to estimate and interpolate the desired method for template matching. Motion analysis is used to implement fast tracking of objects of interest on the compressed video data. Being presented with a query in the form of template images of objects, the system operates on the compressed video in order to find the images or video sequences where those objects are present, along with their positions in the image. This in turn enables the retrieval and display of the query-relevant sequences.

  20. Telesurgery. Acceptability of compressed video for remote surgical proctoring.

    Science.gov (United States)

    Hiatt, J R; Shabot, M M; Phillips, E H; Haines, R F; Grant, T L

    1996-04-01

    To determine the clinical acceptability of various levels of video compression for remote proctoring of laparoscopic surgical procedures. Observational, controlled study. Community-based teaching hospital. Physician and nurse observers. Controlled surgical video scenes were subjected to various levels of data compression for digital transmission and display and shown to participant observers. Clinical acceptability of video scenes after application of video compression. Clinically acceptable video compression was achieved with a 1.25-megabit/second data rate, with the use of odd-screen 43.3:1 Joint Photographic Experts Group (JPEG) compression and a small screen for remote viewing. With proper video compression, remote proctoring of laparoscopic procedures may be performed with standard 1.5-megabit/second telecommunication data lines and services.

  1. TEXT COMPRESSION ALGORITHMS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    S. Senthil

    2011-12-01

    Full Text Available Data compression may be defined as the science and art of representing information in a compact form. For decades, data compression has been one of the critical enabling technologies for the ongoing digital multimedia revolution. Many data compression algorithms are available to compress files of different formats. This paper provides a survey of basic lossless data compression algorithms. Experimental results and comparisons of the lossless compression algorithms, using statistical compression techniques and dictionary-based compression techniques, were performed on text data. Among the statistical coding techniques, the algorithms considered are Shannon-Fano coding, Huffman coding, Adaptive Huffman coding, Run Length Encoding, and Arithmetic coding. The Lempel-Ziv scheme, a dictionary-based technique, is divided into two families: one derived from LZ77 (LZ77, LZSS, LZH, LZB, and LZR) and the other derived from LZ78 (LZ78, LZW, LZFG, LZC, and LZT). A set of interesting conclusions is derived on this basis.
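
    As a concrete illustration of one of the statistical techniques compared in such surveys, here is a minimal sketch of Huffman coding; the input string and the code-table interface are illustrative choices of ours, not the survey's test data.

```python
# Build a Huffman code table: repeatedly merge the two least frequent
# subtrees, prepending a bit to every code in each merged subtree.
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a symbol -> bit-string table built by Huffman's algorithm."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [[weight, [symbol, ""]] for symbol, weight in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]          # left branch gets a 0 bit
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]          # right branch gets a 1 bit
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

codes = huffman_codes("aaabbc")
bits = sum(len(codes[ch]) for ch in "aaabbc")
print(len(codes["a"]), bits)   # most frequent symbol gets the shortest code
```

More frequent symbols receive shorter codes, which is the property underlying the compression-ratio comparisons such surveys report.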

  2. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
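
    The fitting-interval idea can be sketched with NumPy's Chebyshev utilities; the block length and polynomial degree below are hypothetical illustrative choices, not parameters of the spacecraft implementation.

```python
# Lossy compression of one fitting interval: fit a truncated Chebyshev
# series to a block of samples and store only the coefficients.
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(samples, degree=6):
    """Fit one fitting interval with a Chebyshev series; keep the coefficients."""
    x = np.linspace(-1.0, 1.0, len(samples))
    return C.chebfit(x, samples, degree)

def decompress_block(coeffs, n):
    """Evaluate the stored series to approximate the original samples."""
    x = np.linspace(-1.0, 1.0, n)
    return C.chebval(x, coeffs)

# A smooth 64-sample block is represented by 7 coefficients (~9x fewer values).
t = np.linspace(0.0, 1.0, 64)
block = np.sin(2 * np.pi * t) + 0.1 * t
coeffs = compress_block(block)
restored = decompress_block(coeffs, 64)
max_err = np.abs(block - restored).max()
print(len(coeffs))   # -> 7
```

For smooth data the residual error is small and nearly uniform across the interval, which is the "equal error" behavior the abstract describes.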

  3. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja

    2006-01-01

    Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit inherent long-range dependency, that is, a fractal property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. The multifractal spectra of the frame-size video traces showed that a higher compression ratio produces broader and less regular MF spectra, indicating a stronger MF nature and the existence of additive components in the video traces. Considering the individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular influence of these frame types on the whole MF spectrum. Since compressed video occupies a main part of transmission bandwidth, the results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible: from a derived MF spectrum of an observed signal, it is possible to recognize and extract parts of the signal that are characterized by particular values of multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.

  4. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades the extreme evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which constitutes the bulk of data traffic in general. Because video accounts for so much of the data on the World Wide Web, its size must be reduced to ease the burden on the Internet and cut the bandwidth consumed by video, so that users can access video data easily. For this purpose, many video codecs have been developed, such as HEVC/H.265 and V9. Comparing such codecs raises the question of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and video applications, e.g., ad-hoc video conferencing/streaming or surveillance monitoring. It also benchmarks the HEVC and V9 video compression techniques through subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing the video file into several segments for compression and reassembling them, to improve the efficiency of video compression on the web as well as in offline mode.

  5. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  6. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity of TV images has increased with the growing use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.

  7. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.

  8. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Fields (CRF) and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  9. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  10. Accelerating decomposition of light field video for compressive multi-layer display.

    Science.gov (United States)

    Cao, Xuan; Geng, Zheng; Li, Tuotuo; Zhang, Mei; Zhang, Zhaoxing

    2015-12-28

    Compressive light field display based on multi-layer LCDs is becoming a popular solution for 3D display. Decomposing the light field into layer images is the most challenging task. Iterative algorithms are effective solvers for this high-dimensional decomposition problem. Existing algorithms, however, iterate from random initial values; as such, significant computation time is required due to the deviation between the random initial estimate and the target values, and real-time 3D display at video rate is difficult with existing algorithms. In this paper, we present a new algorithm that provides better initial values and accelerates the decomposition of light field video. We utilize the internal coherence of a single light field frame to transfer the gap between initial estimate and target to a much lower resolution level. In addition, we explored external coherence to further accelerate light field video and achieved a 5.91-times speed improvement. We built a prototype and developed a parallel algorithm based on CUDA.

  11. The effects of video compression on acceptability of images for monitoring life sciences experiments

    Science.gov (United States)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters ...

  12. COMPARATIVE STUDY OF COMPRESSION TECHNIQUES FOR SYNTHETIC VIDEOS

    OpenAIRE

    Ayman Abdalla; Ahmad Mazhar; Mosa Salah

    2014-01-01

    We evaluate the performance of three state of the art video codecs on synthetic videos. The evaluation is based on both subjective and objective quality metrics. The subjective quality of the compressed video sequences is evaluated using the Double Stimulus Impairment Scale (DSIS) assessment metric while the Peak Signal-to-Noise Ratio (PSNR) is used for the objective evaluation. An extensive number of experiments are conducted to study the effect of frame rate and resolution o...

  13. Sub-band/transform compression of video sequences

    Science.gov (United States)

    Sauer, Ken; Bauer, Peter

    1992-01-01

    The progress on compression of video sequences is discussed. The overall goal of the research was the development of data compression algorithms for high-definition television (HDTV) sequences, but most of our research is general enough to be applicable to much more general problems. We have concentrated on coding algorithms based on both sub-band and transform approaches. Two very fundamental issues arise in designing a sub-band coder. First, the form of the signal decomposition must be chosen to yield band-pass images with characteristics favorable to efficient coding. A second basic consideration is the form of the coders to be applied to each sub-band, and whether coding is to be done in two or three dimensions. Computational simplicity is of the essence. We review the first portion of the year, during which we improved and extended some of the previous grant period's results. The pyramid nonrectangular sub-band coder limited to intra-frame application is discussed. Perhaps the most critical component of the sub-band structure is the design of bandsplitting filters. We apply very simple recursive filters, which operate at alternating levels on rectangularly sampled and quincunx-sampled images. We also cover the techniques we have studied for the coding of the resulting bandpass signals. We discuss adaptive three-dimensional coding which takes advantage of the detection algorithm developed last year. To this point, all the work on this project has been done without the benefit of motion compensation (MC). Motion compensation is included in many proposed codecs, but adds significant computational burden and hardware expense. We have sought to find a lower-cost alternative featuring a simple adaptation to motion in the form of the codec. In sequences of high spatial detail and zooming or panning, it appears that MC will likely be necessary for the proposed quality and bit rates.

  14. High Definition Video Streaming Using H.264 Video Compression

    OpenAIRE

    Bechqito, Yassine

    2009-01-01

    This thesis presents high definition video streaming using an H.264 codec implementation. The experiment carried out in this study was done for offline video streaming, but a model for live high definition streaming is introduced as well. Prior to the actual experiment, this study describes digital media streaming. Also, the different technologies involved in video streaming are covered. These include streaming architecture and a brief overview of the H.264 codec as well as high definition t...

  15. Image Compression Algorithms Optimized for MATLAB

    Directory of Open Access Journals (Sweden)

    S. Hanus

    2003-12-01

    Full Text Available This paper describes an implementation of the Discrete Cosine Transform (DCT) algorithm in MATLAB. This approach is used in the JPEG and MPEG standards, for instance. The substance of these specifications is to remove the considerable correlation between adjacent picture elements. The objective of this paper is not to improve the DCT algorithm itself, but to rewrite it in a form preferable for MATLAB, which allows the computation to run with insignificant delay. The method proposed in this paper allows the image compression calculation to run almost two hundred times faster compared with the direct DCT definition.
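
    To illustrate the decorrelation step this paper builds on (not its MATLAB-specific rewrite), a separable 2-D DCT can be expressed as two matrix products; the 8x8 test block below is an illustrative choice of ours.

```python
# Separable 2-D DCT via an orthonormal DCT-II basis matrix D:
# coefficients = D @ block @ D.T, inverse = D.T @ coeffs @ D.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]          # frequency index (rows)
    x = np.arange(n)[None, :]          # sample index (columns)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)            # DC row normalisation
    return m

D = dct_matrix(8)
block = np.outer(np.arange(8, dtype=float), np.ones(8))  # each row constant
coeffs = D @ block @ D.T                                 # separable 2-D DCT

# Energy compaction: this horizontally constant block collapses into the
# zero-horizontal-frequency column, which is what makes quantisation and
# entropy coding effective afterwards.
ratio = (coeffs[:, 0] ** 2).sum() / (coeffs ** 2).sum()
print(round(ratio, 6))   # -> 1.0 for this block
```

Because D is orthonormal, `D.T @ coeffs @ D` reconstructs the block exactly; lossy codecs introduce the loss only in the subsequent quantisation step.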

  16. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed ...

  17. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension of the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction - the first being the full and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction - the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar Rate Distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  18. Video compressed sensing using iterative self-similarity modeling and residual reconstruction

    Science.gov (United States)

    Kim, Yookyung; Oh, Han; Bilgin, Ali

    2013-04-01

    Compressed sensing (CS) has great potential for use in video data acquisition and storage because it makes it unnecessary to collect an enormous amount of data and to perform the computationally demanding compression process. We propose an effective CS algorithm for video that consists of two iterative stages. In the first stage, frames containing the dominant structure are estimated. These frames are obtained by thresholding the coefficients of similar blocks. In the second stage, refined residual frames are reconstructed from the original measurements and the measurements corresponding to the frames estimated in the first stage. These two stages are iterated until convergence. The proposed algorithm exhibits superior subjective image quality and significantly improves the peak-signal-to-noise ratio and the structural similarity index measure compared to other state-of-the-art CS algorithms.
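
    The first-stage thresholding idea can be illustrated on a toy sparse signal; the signal, noise level, and threshold below are illustrative stand-ins, not the authors' block-coefficient procedure.

```python
# Toy illustration of estimating "dominant structure" by thresholding:
# keep only coefficients whose magnitude exceeds a threshold, discarding
# the small (noise-like) ones.
import numpy as np

rng = np.random.default_rng(0)
clean = np.zeros(64)
clean[[3, 17, 40]] = [5.0, -4.0, 3.0]          # sparse dominant structure
observed = clean + 0.1 * rng.standard_normal(64)

estimate = np.where(np.abs(observed) > 1.0, observed, 0.0)  # hard threshold
print(np.count_nonzero(estimate))               # -> 3
```

In the paper's full algorithm, this estimate would then seed the residual reconstruction stage, and the two stages iterate until convergence.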

  19. Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API

    OpenAIRE

    Hosseini, Hossein; Xiao, Baicen; Clark, Andrew; Poovendran, Radha

    2017-01-01

    Due to the growth of video data on the Internet, automatic video analysis has gained a lot of attention from academia as well as companies such as Facebook, Twitter, and Google. In this paper, we examine the robustness of video analysis algorithms in adversarial settings. Specifically, we propose targeted attacks on two fundamental classes of video analysis algorithms, namely video classification and shot detection. We show that an adversary can subtly manipulate a video in such a way that a human...

  20. Quality Assessment of Compressed Video for Automatic License Plate Recognition

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Støttrup-Andersen, Jesper; Forchhammer, Søren

    2014-01-01

    Definition of video quality requirements for video surveillance poses new questions in the area of quality assessment. This paper presents a quality assessment experiment for an automatic license plate recognition scenario. We explore the influence of compression by the H.264/AVC and H.265/HEVC standards on the recognition performance. We compare logarithmic and logistic functions for quality modeling. Our results show that a logistic function can better describe the dependence of recognition performance on the quality for both compression standards. We observe that automatic license plate...

  1. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    We investigate lossless coding of video using predictive coding and motion compensation. The methods incorporate state-of-the-art lossless techniques such as context-based prediction and bias cancellation, Golomb coding, high-resolution motion field estimation, 3-dimensional predictors, prediction ...

  2. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.
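    The selective enabling of inter-view prediction can be sketched as a simple homogeneity test on a macroblock's partition motion vectors. The variance threshold and the vectors below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def motion_is_homogeneous(partition_mvs, var_thresh=4.0):
    """Return True when a macroblock's partition motion vectors agree.

    partition_mvs: (n, 2) array of (dx, dy) vectors. If the per-component
    variance stays below var_thresh, motion is treated as homogeneous and
    inter-view prediction could be skipped for this macroblock.
    """
    mvs = np.asarray(partition_mvs, dtype=float)
    return bool(np.all(np.var(mvs, axis=0) < var_thresh))

# Homogeneous motion: all partitions move almost identically.
print(motion_is_homogeneous([[2, 1], [2, 1], [3, 1], [2, 2]]))    # True
# Heterogeneous motion: partitions disagree strongly.
print(motion_is_homogeneous([[8, -4], [-6, 5], [0, 0], [7, 7]]))  # False
```

    In a real encoder this test would sit inside the mode-decision loop; here it is only a stand-alone illustration of the decision rule.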

  3. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical results for a video encoder based on the H.264/AVC standard are also given.

  4. Low-latency video transmission over high-speed WPANs based on low-power video compression

    OpenAIRE

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical results for video encoder based on H.264/AVC standard are also given.

  5. Visual Acuity and Contrast Sensitivity with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2009-01-01

    Video of Visual Acuity (VA) and Contrast Sensitivity (CS) test charts in a complex background was recorded using a CCD camera mounted on a computer-controlled tripod and fed into real-time MPEG2 compression/decompression equipment. The test charts were based on the Triangle Orientation Discrimination (TOD) test method...

  6. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
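    The core idea above, translating the region-of-interest by the overall motion of the encoder's motion vectors inside it, can be sketched as follows (the ROI layout and motion vectors are invented for illustration; the codec-integrated logic, including the resizing step, is more involved):

```python
import numpy as np

def translate_roi(roi, motion_vectors):
    """Shift a region-of-interest by the mean motion of the blocks inside it.

    roi: (x, y, w, h). motion_vectors: (dx, dy) pairs reused from the
    encoder's motion-estimation step for blocks falling inside the ROI.
    """
    dx, dy = np.mean(np.asarray(motion_vectors, dtype=float), axis=0)
    x, y, w, h = roi
    return (x + int(round(dx)), y + int(round(dy)), w, h)

roi = (100, 50, 32, 32)
mvs = [(4, -2), (5, -1), (3, -3), (4, -2)]
print(translate_roi(roi, mvs))  # (104, 48, 32, 32)
```

    Because the motion vectors already exist in the compressed stream, no pixel-domain motion search is needed, which is what keeps the tracking lightweight.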

  7. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  8. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    Science.gov (United States)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  9. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Summary form only given. We investigate lossless coding of video using predictive coding and motion compensation. The new coding methods combine state-of-the-art lossless techniques as JPEG (context based prediction and bias cancellation, Golomb coding), with high resolution motion field estimation......-predictors and intra-frame predictors as well. As proposed by Ribas-Corbera (see PhD thesis, University of Michigan, 1996), we use bi-linear interpolation in order to achieve sub-pixel precision of the motion field. Using more reference images is another way of achieving higher accuracy of the match. The motion...

  10. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
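    The block-level step, modelling PSNR as a quadratic function of quantization bit-depth and zeroing its first derivative, can be illustrated with hypothetical calibration data (the PSNR numbers below are invented for the sketch and are not from the paper):

```python
import numpy as np

# Hypothetical calibration: measured block PSNR (dB) at several
# quantization bit-depths.
bit_depths = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
psnr = np.array([24.0, 31.0, 35.0, 36.5, 36.8])

# Model PSNR as a quadratic function of bit-depth: psnr ~ a*d^2 + b*d + c.
a, b, c = np.polyfit(bit_depths, psnr, 2)

# Setting the first derivative 2*a*d + b to zero gives the bit-depth beyond
# which extra bits stop paying off (valid when the fit is concave, a < 0).
d_star = -b / (2.0 * a)
print(round(d_star, 2))  # ~ 8.74 for this data
```

    In the paper's scheme this optimum is computed per block, jointly with the number of CS samples, under the frame's overall bit budget.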

  11. Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques.

    Science.gov (United States)

    Kim, Seung-Cheol; Yoon, Jung-Hoon; Kim, Eun-Soo

    2008-11-10

    Even though there are many types of methods to generate CGH (computer-generated hologram) patterns of three-dimensional (3D) objects, most of them have been applied to still images but not to video images due to their computational complexity in applications of 3D video holograms. A new method for fast computation of CGH patterns for 3D video images is proposed by combined use of data compression and lookup table techniques. Temporally redundant data of the 3D video images are removed with the differential pulse code modulation (DPCM) algorithm, and then the CGH patterns for these compressed videos are generated with the novel lookup table (N-LUT) technique. To confirm the feasibility of the proposed method, some experiments with test 3D videos are carried out, and the results are comparatively discussed with the conventional methods in terms of the number of object points and computation time.
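    The temporal-redundancy-removal step, DPCM over consecutive frames, can be sketched in a few lines (frame sizes and pixel values here are arbitrary test data, not 3D hologram content):

```python
import numpy as np

def dpcm_encode(frames):
    """Temporal DPCM: keep the first frame, then frame-to-frame differences."""
    frames = [np.asarray(f, dtype=np.int16) for f in frames]
    return [frames[0]] + [b - a for a, b in zip(frames, frames[1:])]

def dpcm_decode(stream):
    """Invert the DPCM by cumulative summation of the differences."""
    out = [stream[0]]
    for diff in stream[1:]:
        out.append(out[-1] + diff)
    return out

rng = np.random.default_rng(0)
video = [rng.integers(0, 256, (4, 4)) for _ in range(3)]
restored = dpcm_decode(dpcm_encode(video))
print(all(np.array_equal(a, b) for a, b in zip(video, restored)))  # True
```

    For slowly changing scenes most difference values are near zero, which is what makes the subsequent N-LUT hologram computation cheaper.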

  12. 3D video coding for embedded devices energy efficient algorithms and architectures

    CERN Document Server

    Zatt, Bruno; Bampi, Sergio; Henkel, Jörg

    2013-01-01

    This book shows readers how to develop energy-efficient algorithms and hardware architectures to enable high-definition 3D video coding on resource-constrained embedded devices. Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D video-specific coding tools for increasing compression efficiency at the cost of increased computational complexity and, consequently, energy consumption. This book enables readers to reduce multiview video coding energy consumption by jointly considering the algorithmic and architectural levels. Coverage includes an introduction to 3D videos and an extensive discussion of the current state of the art of 3D video coding, as well as energy-efficient algorithms for 3D video coding and energy-efficient hardware architectures for 3D video coding. · Discusses challenges related to performance and power in 3D video coding for embedded devices; · Describes energy-efficient algorithms for reduci...

  13. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    The discrete cosine transform is the most widely used of the possible orthogonal transforms for reducing the digital stream. In this paper, the errors of television measuring systems and of data compression protocols are analyzed. The main characteristics of measuring systems are described and the sources of their errors identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is investigated; the results obtained can increase the accuracy of such systems. In a television image quality measuring system, the distortions comprise both those found in analog systems and the specific distortions arising from encoding/decoding the digital video signal and from errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on encoding the image with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because playback quality at the receiver depends on the pre- and post-history of the signal, i.e., on the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.

  14. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    numbers of video streams on a single server. The focus of the work is on using the information in coded video streams to reduce the computational complexity and memory requirements, which translates into reduced hardware requirements and costs. The devised algorithm detects and segments activity based...

  15. Online sparse representation for remote sensing compressed-sensed video sampling

    Science.gov (United States)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough in data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear, non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into key frames (K frames) and non-key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique applied to the reconstructed key frames. An over-complete dictionary is trained by dictionary learning methods based on the SI; these methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), is compared with that of other online sparse representation algorithms. The simulation results show its advantages in reducing reconstruction time and its robustness in reconstruction performance when applied to remote sensing video reconstruction.
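    The GOP structure, one key frame measured at a high sampling rate followed by CS frames measured at a low rate, can be sketched as follows (the block length and the two rates are assumed values chosen for the sketch, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
B = 16                          # length of a vectorized frame block
rate_key, rate_cs = 0.7, 0.2    # assumed sampling rates for K and CS frames

def measure(block, rate):
    """Block-based random projection at the given sampling rate."""
    m = max(1, int(rate * B))
    phi = rng.standard_normal((m, B)) / np.sqrt(m)  # random sensing matrix
    return phi @ block

gop = [rng.standard_normal(B) for _ in range(4)]  # 1 K frame + 3 CS frames
samples = [measure(f, rate_key if i == 0 else rate_cs) for i, f in enumerate(gop)]
print([len(s) for s in samples])  # [11, 3, 3, 3]
```

    The asymmetry in measurement counts is what shifts the computational burden to the decoder: the lightly sampled CS frames rely on side information interpolated from the well-sampled key frames.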

  16. A watermarking algorithm for anti JPEG compression attacks

    Science.gov (United States)

    Han, Baoru; Huang, Guo

    2017-07-01

    JPEG compression is a compression standard widely used in image processing. A new medical image watermarking algorithm robust against Joint Photographic Experts Group (JPEG) compression is proposed. The original watermark image is scrambled by a Legendre chaotic neural network to enhance watermarking security. The watermarking algorithm uses three-dimensional discrete wavelet transform (3D DWT) and three-dimensional discrete cosine transform (3D DCT) properties and difference hashing to produce the watermark. Experimental results show that the watermarking algorithm has good transparency and robustness against JPEG compression attacks.

  17. Recent and future applications of video compression in broadcasting

    Science.gov (United States)

    Drury, G. M.

    1995-02-01

    This paper considers the role of video compression in the transmission of television signals. It highlights contribution and distribution as well as direct-to-home broadcasting applications on satellite which are current and describes aspects of the standards, technology and systems required to support them. Some aspects of system performance are discussed together with a brief summary of future developments particularly on other media than those traditionally used by broadcasters.

  18. Higher-order singular value decomposition-based discrete fractional random transform for simultaneous compression and encryption of video images

    Science.gov (United States)

    Wang, Qingzhu; Chen, Xiaoming; Zhu, Yihai

    2017-09-01

    Existing image compression and encryption methods have several shortcomings: they have low reconstruction accuracy and are unsuitable for three-dimensional (3D) images. To overcome these limitations, this paper proposes a tensor-based approach adopting tensor compressive sensing and tensor discrete fractional random transform (TDFRT). The source video images are measured by three key-controlled sensing matrices. Subsequently, the resulting tensor image is further encrypted using 3D cat map and the proposed TDFRT, which is based on higher-order singular value decomposition. A multiway projection algorithm is designed to reconstruct the video images. The proposed algorithm can greatly reduce the data volume and improve the efficiency of the data transmission and key distribution. The simulation results validate the good compression performance, efficiency, and security of the proposed algorithm.

  19. A Blind Video Watermarking Scheme Robust To Frame Attacks Combined With MPEG2 Compression

    Directory of Open Access Journals (Sweden)

    C. Cruz-Ramos

    2010-12-01

    Full Text Available ABSTRACT In this paper, we propose a robust digital video watermarking scheme with a completely blind extraction process, where the original video data, the original watermark, or any other information derived from them are not required in order to retrieve the embedded watermark. The proposed algorithm embeds 2D binary visually recognizable patterns, such as company trademarks and owner's logotypes, in the DWT domain of the video frames for copyright protection. Before the embedding process, only two numerical keys are required to transform the watermark data into a noise-like pattern using the chaotic mixing method, which helps to increase the security. The main advantages of the proposed scheme are its completely blind detection scheme, its robustness against common video attacks and combined attacks, and its low-complexity implementation. The combined attacks consist of MPEG-2 compression and common video attacks such as noise contamination, collusion attacks, frame dropping, and swapping. Extensive simulation results also show that the watermark imperceptibility and robustness outperform other previously reported methods. The extracted watermark data from the watermarked video sequences are clear enough even after the watermarked video has suffered several attacks.

  20. Nonlinear spectrum compression for the hearing impaired via a frequency-domain processing algorithm.

    Science.gov (United States)

    Paarmann, Larry D

    2006-01-01

    In this paper, the results of both normal-hearing and profoundly hearing-impaired adults, tested with spectrum-compressed speech via the modified chirp-z algorithm, with and without visual stimuli, are reported. Ten normal-hearing adult listeners and five profoundly hearing-impaired adult listeners were asked to identify nonsense syllables presented auditorily and bimodally (audition and vision) via video tape in two conditions: lowpass filtered or unprocessed, and spectrum compressed. The lowpass-filtered and spectrum-compressed speech occupy the same spectrum width of 840 Hz; at 900 Hz and above, the attenuation is at least 60 dB. The spectrum compression is performed by means of a modified chirp-z algorithm and is described in this paper. The test results are significant and are reported here. While the signal processing approach is somewhat intensive, the real-time throughput delay is small. Recent advances in hardware speed suggest that realization in a hearing aid is feasible.

  1. A Class of Coning Algorithms Based on a Half-Compressed Structure

    Directory of Open Access Journals (Sweden)

    Chuanye Tang

    2014-08-01

    Full Text Available Aiming to advance the coning algorithm performance of strapdown inertial navigation systems, a new half-compressed coning correction structure is presented. The half-compressed algorithm structure is analytically proven to be equivalent to the traditional compressed structure under coning environments. The half-compressed algorithm coefficients allow direct configuration from traditional compressed algorithm coefficients. A type of algorithm error model is defined for coning algorithm performance evaluation under maneuver environment conditions. Like previous uncompressed algorithms, the half-compressed algorithm has improved maneuver accuracy and retained coning accuracy compared with its corresponding compressed algorithm. Compared with prior uncompressed algorithms, the formula for the new algorithm coefficients is simpler.

  2. Data compression techniques applied to high resolution high frame rate video technology

    Science.gov (United States)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near term future technology for implementation of video data compression in high speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  3. A methodology to evaluate the effect of video compression on the performance of analytics systems

    Science.gov (United States)

    Tsifouti, Anastasia; Nasralla, Moustafa M.; Razaak, Manzoor; Cope, James; Orwell, James M.; Martini, Maria G.; Sage, Kingsley

    2012-10-01

    The Image Library for Intelligent Detection Systems (i-LIDS) provides benchmark surveillance datasets for analytics systems. This paper proposes a methodology to investigate the effect of compression and frame-rate reduction, and to recommend an appropriate suite of degraded datasets for public release. The library consists of six scenarios, including Sterile Zone (SZ) and Parked Vehicle (PV), which are investigated using two different compression algorithms (H.264 and JPEG) and a number of detection systems. PV has higher spatio-temporal complexity than SZ. Compression performance is dependent on scene content, hence PV will require larger bit-streams than SZ for any given distortion rate. The study includes both industry-standard algorithms (for transmission) and CCTV recorders (for storage). CCTV recorders generally use proprietary formats, which may significantly affect the visual information. Encoding standards such as H.264 and JPEG use the discrete cosine transform (DCT) technique, which introduces blocking artefacts. The H.264 compression algorithm follows a hybrid predictive coding approach to achieve high compression gains, exploiting both spatial and temporal redundancy. The highly predictive approach of H.264 may introduce more artefacts, resulting in a greater effect on the performance of analytics systems than JPEG. The paper describes the two main components of the proposed methodology to measure the effect of degradation on analytics performance: firstly, the standard tests, using the 'f-measure' to evaluate performance on a range of degraded video sets; secondly, the characterisation of the datasets, using quantification of scene features defined with image processing techniques. This characterisation permits an analysis of the points of failure introduced by the video degradation.

  4. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  5. Algorithms and data structures for grammar-compressed strings

    DEFF Research Database (Denmark)

    Cording, Patrick Hagge

    Textual databases for e.g. biological or web-data are growing rapidly, and it is often only feasible to store the data in compressed form. However, compressing the data comes at a price. Traditional algorithms for e.g. pattern matching require all data to be decompressed, a computationally demanding task. In this thesis we design data structures for accessing and searching compressed data efficiently. Our results can be divided into two categories. In the first category we study problems related to pattern matching. In particular, we present new algorithms for counting and comparing substrings, and a new algorithm for finding all occurrences of a pattern in which we may insert gaps. In the other category we deal with accessing and decompressing parts of the compressed string. We show how to quickly access a single character of the compressed string, and present a data structure...

  6. Rate-adaptive compressive video acquisition with sliding-window total-variation-minimization reconstruction

    Science.gov (United States)

    Liu, Ying; Pados, Dimitris A.

    2013-05-01

    We consider a compressive video acquisition system where frame blocks are sensed independently. Varying block sparsity is exploited in the form of individual per-block open-loop sampling rate allocation with minimal system overhead. At the decoder, video frames are reconstructed via sliding-window inter-frame total variation minimization. Experimental results demonstrate that such rate-adaptive compressive video acquisition improves noticeably the rate-distortion performance of the video stream over fixed-rate acquisition approaches.
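    The per-block open-loop rate allocation can be illustrated by splitting a fixed sample budget across blocks in proportion to their sparsity; the sparsity counts and budget below are invented for the sketch, and the paper's actual allocation rule may differ:

```python
import numpy as np

def allocate_samples(block_sparsities, total_samples):
    """Split a fixed per-frame sample budget across blocks in proportion
    to their (assumed known or estimated) sparsity levels."""
    s = np.asarray(block_sparsities, dtype=float)
    raw = total_samples * s / s.sum()
    alloc = np.floor(raw).astype(int)
    # hand leftover samples to the blocks with the largest fractional parts
    leftover = total_samples - int(alloc.sum())
    for i in np.argsort(raw - alloc)[::-1][:leftover]:
        alloc[i] += 1
    return alloc

print(allocate_samples([10, 40, 30, 20], 100).tolist())  # [10, 40, 30, 20]
```

    Blocks with richer content (higher sparsity level) thus receive more compressive samples, while the total budget, and hence the system overhead, stays fixed.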

  7. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  8. Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Xin Meng

    2014-02-01

    Compressed sensing is a novel signal sampling theory for signals that are sparse or compressible. Existing recovery algorithms based on gradient projection either require prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, referred to as Quasi Gradient Projection. The algorithm uses a quasi gradient direction and two step-size schemes along this direction, and does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm recovers the signal more accurately than GPSR, which also requires no prior knowledge, while having lower computational complexity.
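    The abstract does not spell out the quasi gradient direction or the two step-size schemes, so they are not reproduced here. As a baseline from the same gradient-projection family, here is a minimal stdlib ISTA (soft-thresholded gradient descent) on a tiny l1 recovery problem; the matrix, lam, step and iteration count are illustrative assumptions:

```python
# ISTA sketch for min 0.5*||Ax - b||^2 + lam*||x||_1 -- a standard
# gradient-projection-style baseline, NOT the paper's quasi method.

def ista(A, b, lam=0.05, step=0.1, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - step * g[j] for j in range(n)]                  # gradient step
        x = [max(abs(v) - step * lam, 0.0) * (1 if v > 0 else -1)  # soft threshold
             for v in x]
    return x

# Two measurements of a 3-dimensional signal whose true support is {x0}.
A = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]
x_hat = ista(A, [1.0, 0.0])   # converges near (0.95, 0, 0)
```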

  9. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with innovative predefined values for the contrast scaling factor, S, instead of searching for it. Only the domain blocks with entropy greater than a threshold are considered to belong to the domain pool. The algorithm has been tested on some well-known images and the results have been compared with the state-of-the-art algorithms. The experiments show that our proposed algorithm has...

  10. Adaptive Resolution Upconversion for Compressed Video Using Pixel Classification

    Directory of Open Access Journals (Sweden)

    Shao Ling

    2007-01-01

    A novel adaptive resolution upconversion algorithm that is robust to compression artifacts is proposed. This method is based on classification of local image patterns using both structure information and activity measure to explicitly distinguish pixels into content or coding artifacts. The structure information is represented by adaptive dynamic-range coding and the activity measure is the combination of local entropy and dynamic range. For each pattern class, the weighting coefficients of upscaling are optimized by a least-mean-square (LMS) training technique, which trains on the combination of the original images and the compressed downsampled versions of the original images. Experimental results show that our proposed upconversion approach outperforms other classification-based upconversion and artifact reduction techniques in concatenation.

  11. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves, and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to build retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. We thus illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival, and have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. For further speed, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output from a pair of differential input voltages taken from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" the synaptic weight by Hebbian outer products; "read" by inner product and point NL threshold), in order to localize and track threat targets.

  12. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  13. A high-efficient significant coefficient scanning algorithm for 3-D embedded wavelet video coding

    Science.gov (United States)

    Song, Haohao; Yu, Songyu; Song, Li; Xiong, Hongkai

    2005-07-01

    3-D embedded wavelet video coding (3-D EWVC) algorithms have become a vital scheme for state-of-the-art scalable video coding. A major objective in a progressive transmission scheme is to transmit first the most important information, which yields the largest distortion reduction, so traditional 3-D EWVC algorithms scan coefficients in bit-plane order. For significant bits within the same bit-plane, however, these algorithms neglect the differing contributions to distortion of coefficients in different subbands. In this paper, we analyze these differing contributions and propose a highly efficient significant coefficient scanning algorithm. Experimental results for 3-D SPIHT and 3-D SPECK show that the proposed scanning algorithm improves the compression ability of traditional 3-D EWVC algorithms, yielding reconstructed videos with higher PSNR and better visual quality at the same bit rate than the original significant coefficient scanning algorithms.

  14. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating effective policy and applying useful methods to the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely, footage captured in low resolution (LR) with poor visual quality. In this paper, we discuss the characteristics of surveillance video and describe a manual feature registration – maximum a posteriori – projection onto convex sets approach to develop a super-resolution reconstruction method, which improves the quality of surveillance video. With this method, we not only make optimal use of the information contained in the LR video image, but also control the image edges clearly as well as the convergence of the algorithm. Finally, we make a suggestion on how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  15. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Bin; Meng Qingfeng; Wang Nan [Theory of Lubrication and Bearing Institute, Xi' an Jiaotong University Xi' an, 710049 (China); Li Zhi, E-mail: rthree.zhengbin@stu.xjtu.edu.cn [Dalian Machine Tool Group Corp. Dalian, 116620 (China)

    2011-07-19

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in their application. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumed during the data transmission process in an online WSN-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding and Huffman coding. The 5/3 lifting wavelet is used to divide the data into different frequency bands and extract signal characteristics. Zerotree coding is applied to calculate dynamic thresholds to retain the attribute data, which are then encoded by Huffman coding to further enhance the compression ratio. To validate the algorithm, a simulation was carried out in Matlab. The simulation results show that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSN-based bearing monitoring system, in which a TI DSP TMS320F2812 is used to realize it.
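    The 5/3 lifting wavelet named in the abstract is the reversible LeGall filter also used in JPEG 2000 lossless coding. A stdlib-only, one-level 1-D sketch follows (the zerotree and Huffman stages are omitted, and an even-length input is assumed):

```python
# One level of the integer 5/3 lifting wavelet: a predict step makes the
# detail (high-band) coefficients small on smooth data, and an update step
# keeps the approximation (low-band) well-scaled. Fully reversible.

def lift53_forward(x):
    s, d = x[0::2], x[1::2]
    # predict: detail = odd sample minus the average of its even neighbours
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    # update: smooth the approximation using the new details
    s = [s[i] + ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(s))]
    return s, d

def lift53_inverse(s, d):
    s = [s[i] - ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    out = [0] * (len(s) + len(d))
    out[0::2], out[1::2] = s, d
    return out

coarse, detail = lift53_forward([3, 7, 1, 8, 2, 9, 4, 6])
# detail coefficients are small for smooth inputs; reconstruction is exact
```

    Because every lifting step is inverted exactly, the transform is lossless regardless of boundary handling, which is what makes it suitable ahead of zerotree thresholding.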

  16. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    Science.gov (United States)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible by simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). This paper describes the architecture of our HEVC encoder with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction: depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.

  17. High-Performance Motion Estimation for Image Sensors with Video Compression

    OpenAIRE

    Weizhi Xu; Shouyi Yin; Leibo Liu; Zhiyong Liu; Shaojun Wei

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed...

  18. Compression of Video Tracking and Bandwidth Balancing Routing in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2015-12-01

    There has been tremendous growth in multimedia applications over wireless networks. Wireless Multimedia Sensor Networks (WMSNs) have become the premier choice in many research communities and in industry. Many state-of-the-art applications, such as surveillance, traffic monitoring, and remote health care, are essentially video tracking and transmission in WMSNs. The transmission speed is constrained by the large size of video data and by fixed bandwidth allocation along constant routing paths. In this paper, we present a CamShift-based algorithm to compress the tracking of videos. We then propose a bandwidth balancing strategy in which each sensor node dynamically selects the next-hop node with the highest potential bandwidth capacity to resume communication. Key to this strategy is that each node merely maintains two parameters that capture its historical bandwidth trend, from which it predicts its near-future bandwidth capacity. The forwarding node then selects the next hop with the highest potential bandwidth capacity. Simulations demonstrate that our approach significantly increases the data received by the sink node and decreases the delay of video transmission in WMSN environments.
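    The abstract says each node keeps just two state values that capture its bandwidth history and predict near-future capacity, but does not give the update rule. Holt's double exponential smoothing is a standard two-state predictor of exactly that shape, sketched below (the class name and the alpha/beta values are assumptions, not the paper's method):

```python
# Two-state bandwidth predictor: a smoothed level and a smoothed trend,
# updated per observation. A forwarding node would call predict() on each
# neighbour's state and pick the largest value as the next hop.

class BandwidthPredictor:
    def __init__(self, alpha=0.5, beta=0.3):
        self.alpha, self.beta = alpha, beta
        self.level = None    # smoothed bandwidth estimate
        self.trend = 0.0     # smoothed rate of change

    def observe(self, bw):
        if self.level is None:
            self.level = bw
            return
        prev = self.level
        self.level = self.alpha * bw + (1 - self.alpha) * (self.level + self.trend)
        self.trend = self.beta * (self.level - prev) + (1 - self.beta) * self.trend

    def predict(self, steps=1):
        return self.level + steps * self.trend

node = BandwidthPredictor()
for bw in [10, 12, 14, 16, 18]:   # a link whose capacity is rising
    node.observe(bw)
# predict(1) extrapolates the rise rather than averaging it away
```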

  19. Schwarz-based algorithms for compressible flows

    Energy Technology Data Exchange (ETDEWEB)

    Tidriri, M.D. [ICASE, Hampton, VA (United States)

    1996-12-31

    To compute steady compressible flows one often uses an implicit discretization approach which leads to a large sparse linear system that must be solved at each time step. In the derivation of this system one often uses a defect-correction procedure, in which the left-hand side of the system is discretized with a lower order approximation than that used for the right-hand side. This is due to storage considerations and computational complexity, and also to the fact that the resulting lower order matrix is better conditioned than the higher order matrix. The resulting schemes are only moderately implicit. In the case of structured, body-fitted grids, the linear system can easily be solved using approximate factorization (AF), which is among the most widely used methods for such grids. However, for unstructured grids, such techniques are no longer valid, and the system is solved using direct or iterative techniques. Because of the prohibitive computational costs and large memory requirements for the solution of compressible flows, iterative methods are preferred. In these defect-correction methods, which are implemented in most CFD computer codes, the mismatch in the right and left hand side operators, together with explicit treatment of the boundary conditions, lead to a severely limited CFL number, which results in a slow convergence to steady state aerodynamic solutions. Many authors have tried to replace explicit boundary conditions with implicit ones. Although they clearly demonstrate that high CFL numbers are possible, the reduction in CPU time is not clear cut.

  20. Using general-purpose compression algorithms for music analysis

    DEFF Research Database (Denmark)

    Louboutin, Corentin; Meredith, David

    2016-01-01

    General-purpose compression algorithms encode files as dictionaries of substrings with the positions of these strings’ occurrences. We hypothesized that such algorithms could be used for pattern discovery in music. We compared LZ77, LZ78, Burrows–Wheeler and COSIATEC on classifying folk song...... in the input data, COSIATEC outperformed LZ77 with a mean F1 score of 0.123, compared with 0.053 for LZ77. However, when the music was processed a voice at a time, the F1 score for LZ77 more than doubled to 0.124. We also discovered a significant correlation between compression factor and F1 score for all...
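    The dictionary compressors compared above encode a file as back-references to earlier occurrences, which is precisely what makes their output usable for pattern discovery. A minimal stdlib LZ77 factorizer (window handling simplified for brevity; not any specific production implementation):

```python
# Toy LZ77: emit (offset, length, next_char) triples. Repeated substrings
# become back-references, so repeated musical patterns in a symbolic
# encoding would surface as long matches.

def lz77(s, window=64):
    i, out = 0, []
    while i < len(s):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(s) - 1 and s[j + k] == s[i + k]:
                k += 1                      # overlapping matches are allowed
            if k > best_len:
                best_len, best_off = k, i - j
        out.append((best_off, best_len, s[i + best_len]))
        i += best_len + 1
    return out

def unlz77(triples):
    s = []
    for off, length, ch in triples:
        for _ in range(length):
            s.append(s[-off])               # copy one char at a time (overlap-safe)
        s.append(ch)
    return "".join(s)

triples = lz77("abcabcabcabcx")   # the repeat collapses to one back-reference
```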

  1. RESURFACING OLD DATA COMPRESSION & ENCRYPTION ALGORITHMS FOR EXTRA SECURITY SHELL

    Directory of Open Access Journals (Sweden)

    MITRUT CARAIVAN

    2016-06-01

    This paper presents a short history of data compression and encryption technologies, starting with World War I, and their possible value today: resurfacing old and forgotten algorithms as an additional security shell for modern data file storage. It focuses on a case study using internet tools available as of 2016 and emphasizes results that cast a new light on old and dusty data compression and encryption algorithms through data encapsulation, thereby showing the possibility of easily adding an extra security layer to any contemporary cutting-edge data protection method.

  2. Filtered gradient reconstruction algorithm for compressive spectral imaging

    Science.gov (United States)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual ΦTy, where y is the compressive measurement vector. We show that the filtered-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.

  3. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper researches an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, with a research background of 26 original video sequences. It first extracts classification features from training samples as input to an SVM, trains the SVM to obtain a high-quality classification model, and then tests whether there is suspicious information in the video sample. The experimental results show that the VSA based on LSB matching can practically detect secret information embedded both across all frames of a carrier video and in local frames. In addition, the frame-by-frame method gives the VSA strong robustness against attacks in the corresponding time domain.

  4. Cyclic Pure Greedy Algorithms for Recovering Compressively Sampled Sparse Signals

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Christensen, Mads Græsbøll; Gribonval, Remi

    2011-01-01

    The pure greedy algorithms matching pursuit (MP) and complementary MP (CompMP) are extremely computationally simple, but can perform poorly in solving the linear inverse problems posed by the recovery of compressively sampled sparse signals. We show that by applying a cyclic minimization principle...

  5. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  6. Self-aligning and compressed autosophy video databases

    Science.gov (United States)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains `self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of `learning' and a new `information theory' which permits the growing of self-assembling data network in a computer memory similar to the growing of `data crystals' or `data trees' without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The `black box' operations are totally self-aligning where the input data will determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  7. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... translatable patterns (MTPs) in this input and the translational equivalence classes (TECs) of these MTPs, where each TEC contains all the occurrences of a given MTP. Each TEC is encoded as a ⟨pattern,vector set⟩ pair, in which the vector set gives all the vectors by which the pattern can be translated...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....
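    The maximal translatable patterns (MTPs) shared by the reviewed algorithms can be computed directly from the definition: for each difference vector v between two points, MTP(v) is the set of points p such that p + v is also in the point set. A stdlib sketch on assumed (onset, pitch) pairs:

```python
# Compute all MTPs of a point-set score. Each key is a translation vector;
# each value is the set of points that can be translated by that vector
# and land on another point of the set.

def mtps(points):
    pts = set(points)
    out = {}
    for p in pts:
        for q in pts:
            if p == q:
                continue
            v = (q[0] - p[0], q[1] - p[1])
            out.setdefault(v, set()).add(p)
    return out

# Two occurrences of a three-note motif, the second shifted 4 time units
# and transposed up 2 semitones:
notes = [(0, 60), (1, 62), (2, 64), (4, 62), (5, 64), (6, 66)]
patterns = mtps(notes)
# The vector (4, 2) maps the whole first motif onto the second, so
# MTP((4, 2)) is exactly the first motif's three points.
```

    Grouping each MTP's occurrences into a translational equivalence class and encoding it as a ⟨pattern, vector set⟩ pair is what yields the compression the reviewed algorithms exploit.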

  8. An efficient video dehazing algorithm based on spectral clustering

    Science.gov (United States)

    Zhao, Fan; Yao, Zao; Song, XiaoFang; Yao, Yi

    2017-07-01

    Image and video dehazing is a popular topic in the field of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block effects, and the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on customized spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes so as to ensure that the same target has the same transmission value. Assuming that dehazed edge images have richer detail than before restoration, an edge cost function is added to the transmission model. The experimental results demonstrate that the proposed method provides higher dehazing quality and lower time complexity than the previous technique.

  9. A review of lossless audio compression standards and algorithms

    Science.gov (United States)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and of rising storage demands. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared for verification. Advanced representations of LPC such as LSP decomposition techniques are also discussed.

  10. Compressive Sensing Image Fusion Based on Particle Swarm Optimization Algorithm

    Science.gov (United States)

    Li, X.; Lv, J.; Jiang, S.; Zhou, H.

    2017-09-01

    In order to solve the problems of difficult spatial matching and large spectral distortion in traditional pixel-level image fusion algorithms, we propose a new image fusion method that combines the HIS transform with the recently developed theory of compressive sensing, called HIS-CS image fusion. In this algorithm, particle swarm optimization is used to select the fusion coefficient ω: in the iterative process, the fusion coefficient ω is taken as a particle and its optimal value is obtained from the objective function. We then use a compression-aware weighted fusion algorithm for remote sensing image fusion, taking the coefficient ω as the weight value. The algorithm ensures selection of the optimal fusion effect with a certain degree of self-adaptability. To evaluate the fused images, this paper uses five index parameters: Entropy, Standard Deviation, Average Gradient, Degree of Distortion and Peak Signal-to-Noise Ratio. The experimental results show that the fusion effect of the proposed algorithm is better than that of traditional methods.

  11. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

    The algorithm presented in this paper comprises three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation; (2) change detection having as reference a fixed frame, an appropriately selected frame or a displaced frame; and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on colour similarity. Video object segmentation results are shown using the COST 211 data set.

  12. A New Frame Memory Compression Algorithm with DPCM and VLC in a 4×4 Block

    Directory of Open Access Journals (Sweden)

    Yongje Lee

    2009-01-01

    Frame memory compression (FMC) is a technique to reduce memory bandwidth by compressing the video data to be stored in the frame memory. This paper proposes a new FMC algorithm, integrated into an H.264 encoder, that compresses a 4×4 block by differential pulse code modulation (DPCM) followed by Golomb-Rice coding. For DPCM, eight scan orders are predefined and the best scan order is selected using the results of H.264 intra prediction. FMC can also be used in other systems that require a frame memory to store images in RGB color space. In the proposed FMC, RGB color space is transformed into another color space, such as YCbCr or G, R-G, B-G color space, and the best scan order for DPCM is selected by comparing the efficiency of all scan orders. Experimental results show that the new FMC algorithm in an H.264 encoder achieves 1.34 dB better image quality than a previous MHT-based FMC for HD-size sequences. For systems using RGB color space, the transform to G, R-G, B-G color space gives the most efficient compression: the average PSNR values of the R, G, and B channels are 46.70 dB, 50.80 dB, and 44.90 dB, respectively, for 768×512-size images.
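    The per-block pipeline can be sketched in a few lines: scan the 4×4 block, take DPCM residuals, then Golomb-Rice-code them. The eight H.264-derived scan orders and the mode selection are omitted; the raster scan, the predictor seed of 128 and the fixed Rice parameter k are assumptions for illustration:

```python
# DPCM + Golomb-Rice sketch of the abstract's FMC pipeline on one
# 4x4 block of 8-bit samples, flattened in an assumed raster order.

def dpcm(samples):
    prev, out = 128, []                 # 128: assumed mid-range predictor seed
    for v in samples:
        out.append(v - prev)
        prev = v
    return out

def rice_encode(residuals, k=2):
    bits = []
    for r in residuals:
        u = 2 * r if r >= 0 else -2 * r - 1          # map signed to unsigned
        q, rem = u >> k, u & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(rem, "0%db" % k))  # unary + k bits
    return "".join(bits)

block = [128, 130, 129, 131, 133, 132, 134, 135,
         134, 136, 137, 136, 138, 139, 138, 140]
code = rice_encode(dpcm(block), k=2)
# Smooth blocks give small residuals and hence short codes: well under the
# 128 bits the raw 8-bit samples would take.
```

    Choosing the scan order that best follows the intra-prediction direction is what keeps the residuals small in the real design.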

  13. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
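    Step (1) of the pipeline applies the DCT per block. A minimal stdlib DCT-II (the JPEG-style transform; this sketch is 1-D and not the paper's implementation) shows the energy compaction that the later high-frequency minimization exploits:

```python
# Orthonormal 1-D DCT-II: for natural image rows, almost all the energy
# collapses into the first (DC) coefficient, leaving the AC coefficients
# small and cheap to minimize and encode.
import math

def dct2_1d(x):
    n = len(x)
    out = []
    for k in range(n):
        c = math.sqrt((1 if k == 0 else 2) / n)
        out.append(c * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i in range(n)))
    return out

row = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of an 8x8 luma block
coef = dct2_1d(row)
# coef[0] alone carries over 95% of the row's energy.
```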

  14. Nonmonotone Adaptive Barzilai-Borwein Gradient Algorithm for Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yuanying Qiu

    2014-01-01

    We study a nonmonotone adaptive Barzilai-Borwein gradient algorithm for l1-norm minimization problems arising from compressed sensing. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while simultaneously exploiting the favorable structure of the l1-norm. Under suitable conditions, its global convergence can be established. Numerical results illustrate that the proposed method is promising and competitive with the existing algorithms NBBL1 and TwIST.
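    The Barzilai-Borwein (BB) rule at the heart of such schemes sets the step size from the last two iterates: alpha = (s·s)/(s·y) with s = x_k − x_{k−1} and y = g_k − g_{k−1}. A minimal BB gradient descent on a smooth quadratic (not the paper's l1 model, and without its nonmonotone line search) illustrates the rule:

```python
# Plain BB1 gradient descent. The step adapts to local curvature without
# any line search, which is what makes BB-type methods cheap per iteration.

def bb_gradient(grad, x0, iters=30, alpha0=0.1):
    x = list(x0)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = [x[i] - alpha * g[i] for i in range(len(x))]
        g_new = grad(x_new)
        s = [x_new[i] - x[i] for i in range(len(x))]
        y = [g_new[i] - g[i] for i in range(len(x))]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:                    # BB1 step; otherwise keep the old step
            alpha = sum(si * si for si in s) / sy
        x, g = x_new, g_new
    return x

# Minimize f(x) = 0.5*(3*x0^2 + x1^2); the gradient is (3*x0, x1).
sol = bb_gradient(lambda v: [3 * v[0], v[1]], [4.0, -2.0])
```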

  15. A comparison between compressed sensing algorithms in electrical impedance tomography.

    Science.gov (United States)

    Nasehi Tehrani, Joubin; Jin, Craig; McEwan, Alistair; van Schaik, André

    2010-01-01

    Electrical Impedance Tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Conventional EIT reconstruction methods solve a linear model by minimizing the least squares error, i.e., the Euclidean or L2-norm, with regularization. Compressed sensing provides unique advantages in Magnetic Resonance Imaging (MRI) [1] when the images are transformed to a sparse basis. EIT images are generally sparser than MRI images due to their lower spatial resolution. This leads us to investigate the ability of compressed sensing algorithms currently applied to MRI to work in EIT without transformation to a new basis. In particular, we examine four new iterative algorithms for L1 and L0 minimization with applications to compressed sensing and compare these with current EIT inverse L1-norm regularization methods. The four compressed sensing methods are as follows: (1) an interior point method for solving L1-regularized least squares problems (L1-LS); (2) total variation using a Lagrangian multiplier method (TVAL3); (3) a two-step iterative shrinkage/thresholding method (TwIST) for solving the L0-regularized least squares problem; and (4) the Least Absolute Shrinkage and Selection Operator (LASSO) with tracing of the Pareto curve, which estimates the least squares parameters subject to an L1-norm constraint. In our investigation, using 1600 elements, we found all four CS algorithms provided an improvement over the best conventional EIT reconstruction method, Total Variation, in three important areas: robustness to noise, increased computational speed of at least 40x, and a visually apparent improvement in spatial resolution. Of the four CS algorithms, we found TwIST was the fastest, with at least a 100x speed increase.

  16. Towards the compression of parton densities through machine learning algorithms

    CERN Document Server

    Carrazza, Stefano

    2016-01-01

    One of the most fascinating challenges in the context of parton density functions (PDFs) is the determination of the best combined PDF uncertainty from individual PDF sets. Since 2014, multiple methodologies have been developed to achieve this goal. In these proceedings we first summarize the strategy adopted by the PDF4LHC15 recommendation and then discuss a new approach to Monte Carlo PDF compression based on clustering through machine learning algorithms.

  17. Compressive Sensing in Signal Processing: Algorithms and Transform Domain Formulations

    Directory of Open Access Journals (Sweden)

    Irena Orović

    2016-01-01

    Full Text Available Compressive sensing has emerged as an area that opens new perspectives in signal acquisition and processing. It appears as an alternative to traditional sampling theory, endeavoring to reduce the required number of samples for successful signal reconstruction. In practice, compressive sensing aims to provide savings in sensing resources, transmission, and storage capacities and to facilitate signal processing in circumstances when certain data are unavailable. To that end, compressive sensing relies on mathematical algorithms solving the problem of data reconstruction from a greatly reduced number of measurements by exploiting the properties of sparsity and incoherence. Therefore, this concept includes optimization procedures aiming to provide the sparsest solution in a suitable representation domain. This work, therefore, offers a survey of the compressive sensing idea and its prerequisites, together with the commonly used reconstruction methods. Moreover, the compressive sensing problem formulation is considered in signal processing applications assuming some of the commonly used transformation domains, namely, the Fourier transform domain, the polynomial Fourier transform domain, the Hermite transform domain, and the combined time-frequency domain.
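    The transform-domain sparsity this survey relies on is easy to demonstrate: a signal that is dense in time can be sparse in the Fourier domain (the signal below is a toy example, not from the paper).

```python
import numpy as np

# A sum of two cosines: dense in the time domain, but only two nonzero
# coefficients in the Fourier domain -- the sparsity CS reconstruction exploits.
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 10 * t / n) + 0.5 * np.cos(2 * np.pi * 40 * t / n)
X = np.fft.rfft(x) / n                 # normalized real FFT
significant = int(np.sum(np.abs(X) > 1e-6))
```

Here nearly all 256 time samples are nonzero, yet only the bins at frequencies 10 and 40 carry energy, so a sparse recovery algorithm needs far fewer than 256 measurements in the right domain.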

  18. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  19. A novel hybrid total variation minimization algorithm for compressed sensing

    Science.gov (United States)

    Li, Hongyu; Wang, Yong; Liang, Dong; Ying, Leslie

    2017-05-01

    Compressed sensing (CS) is a technology to acquire and reconstruct sparse signals below the Nyquist rate. For images, the total variation of the signal is usually minimized to promote sparseness of the image in the gradient domain. However, like all L1-minimization algorithms, total variation has the issue of penalizing large gradients, thus causing large errors on image edges. Many non-convex penalties have been proposed to address this issue of L1 minimization. For example, homotopic L0 minimization algorithms have shown success in reconstructing images from magnetic resonance imaging (MRI). However, homotopic L0 minimization may suffer from local minima, and may not be sufficiently robust when the signal is not strictly sparse or the measurements are contaminated by noise. In this paper, we propose a hybrid total variation minimization algorithm that integrates the benefits of both L1 and homotopic L0 minimization for image recovery from reduced measurements. The algorithm minimizes the conventional total variation when the gradient is small, and minimizes the L0 norm of the gradient when the gradient is large. The transition between L1 and L0 of the gradients is determined by an auto-adaptive threshold. The proposed algorithm has the benefit of L1 minimization in being robust to noise/approximation errors, and the benefit of L0 minimization in requiring fewer measurements for recovery. Experimental results using MRI data demonstrate that the proposed hybrid total variation minimization algorithm yields improved image quality over other existing methods in terms of reconstruction accuracy.
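    The hybrid penalty described above can be sketched as a clipped gradient magnitude: L1 cost below the threshold, a constant (L0-like) cost above it, so a strong edge is not penalized more than a weak one. A toy numpy version; the discrete forward-difference gradient and the fixed threshold are illustrative assumptions (the paper's threshold is auto-adaptive).

```python
import numpy as np

def hybrid_tv(img, threshold):
    """Hybrid total variation: |g| for small gradients, a constant
    (threshold) for gradients above it, so edges are not over-penalized."""
    gx = np.diff(img, axis=1)          # horizontal forward differences
    gy = np.diff(img, axis=0)          # vertical forward differences
    g = np.abs(np.concatenate([gx.ravel(), gy.ravel()]))
    return float(np.sum(np.where(g <= threshold, g, threshold)))

flat = np.zeros((8, 8))                       # no gradients at all
edge = np.zeros((8, 8)); edge[:, 4:] = 10.0   # one sharp vertical edge
```

For the `edge` image, plain TV pays 8 × 10 = 80, while the hybrid penalty caps each of the eight edge gradients at the threshold, so a sharp edge costs no more than a faint one.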

  20. A baseline algorithm for face detection and tracking in video

    Science.gov (United States)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics, and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a Haar-like feature set and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for the problem.
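    The greedy centroid-matching step used for tracking can be sketched as follows. The distance gate `max_dist` is an assumed parameter, and the Haar-cascade detection itself is omitted; this only illustrates linking detections across two frames.

```python
import numpy as np

def greedy_match(prev_centroids, curr_centroids, max_dist=50.0):
    """Greedily link detections across two frames by centroid distance:
    repeatedly take the globally closest unmatched pair."""
    prev = np.asarray(prev_centroids, float)
    curr = np.asarray(curr_centroids, float)
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    matches = []
    while d.size and np.isfinite(d).any() and d.min() <= max_dist:
        i, j = np.unravel_index(np.argmin(d), d.shape)
        matches.append((int(i), int(j)))
        d[i, :] = np.inf               # each detection is used at most once
        d[:, j] = np.inf
    return matches

# Two faces swap list order between frames; matching is by distance, not index.
m = greedy_match([(10, 10), (100, 100)], [(102, 98), (12, 11)])
```

Pairs farther apart than `max_dist` are left unmatched, which is how new appearances and exits fall out of the association step.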

  1. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    Science.gov (United States)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
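    For context, plain matching pursuit -- the family YAMPA belongs to -- looks like the sketch below. This is a generic textbook variant, not YAMPA itself: the coherence-adaptive thresholding that distinguishes YAMPA is not reproduced, and the dictionary and sparsity level are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(D, y, max_iter=100, tol=1e-6):
    """Plain matching pursuit over a unit-norm dictionary D."""
    x = np.zeros(D.shape[1])
    r = y.astype(float)
    for _ in range(max_iter):
        c = D.T @ r                    # correlations with the residual
        k = int(np.argmax(np.abs(c)))  # best-matching atom
        if abs(c[k]) < tol:
            break
        x[k] += c[k]                   # update the selected coefficient
        r = r - c[k] * D[:, k]         # deflate the residual
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x_true = np.zeros(40); x_true[3], x_true[9] = 2.0, -1.0
y = D @ x_true
x_hat = matching_pursuit(D, y)
```

Unlike orthogonal matching pursuit, plain MP may revisit atoms; the residual still decays geometrically for incoherent dictionaries.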

  2. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
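    The wavelet step behind such spatial compression can be illustrated with one level of a 2-D Haar transform: for smooth images, most energy concentrates in the low-pass band, so the high-pass coefficients can be discarded or coarsely coded. Haar is chosen here only for brevity; the patent does not prescribe a particular wavelet.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: averages plus three detail bands."""
    a = (img[::2] + img[1::2]) / 2.0   # row-pair averages (low-pass)
    d = (img[::2] - img[1::2]) / 2.0   # row-pair differences (high-pass)
    ll = (a[:, ::2] + a[:, 1::2]) / 2.0
    lh = (a[:, ::2] - a[:, 1::2]) / 2.0
    hl = (d[:, ::2] + d[:, 1::2]) / 2.0
    hh = (d[:, ::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(64.0).reshape(8, 8)    # a smooth linear ramp
ll, lh, hl, hh = haar2d(img)
```

For this linear ramp, the diagonal detail band is exactly zero and the other detail bands are constant and tiny, so the image's information survives in a quarter of the pixels plus a handful of detail coefficients.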

  3. Quality assessment for wireless capsule endoscopy videos compressed via HEVC: From diagnostic quality to visual perception.

    Science.gov (United States)

    Usman, Muhammad Arslan; Usman, Muhammad Rehan; Shin, Soo Young

    2017-12-01

    Maintaining the quality of medical images and videos is an essential part of the e-services provided by the healthcare sector. The convergence of modern communication systems and the healthcare industry necessitates the provision of better quality of service and experience by the service provider. The recent inclusion and standardization of the high efficiency video coder (HEVC) has made it possible for medical data to be compressed and transmitted over wireless networks with minimal compromise of quality. Quality evaluation and assessment of these medical videos transmitted over wireless networks is another important research area that requires further exploration and attention. In this paper, we conduct an in-depth study of video quality assessment for compressed wireless capsule endoscopy (WCE) videos. Our study includes the performance evaluation of several state-of-the-art objective video quality metrics in terms of determining the quality of compressed WCE videos. Subjective video quality experiments were conducted with the assistance of experts and non-experts in order to predict the diagnostic and visual quality of these medical videos, respectively. The evaluation of the metrics is based on three major performance criteria: correlation between the subjective and objective scores, relative statistical performance, and computation time. Results show that the information fidelity criterion (IFC) and visual information fidelity (VIF), especially pixel-based VIF, stand out as the best-performing metrics. Furthermore, our paper reports the performance of HEVC compression on medical videos; according to the results, it performs optimally in preserving the diagnostic and visual quality of WCE videos at quantization parameter (QP) values of up to 35 and 37, respectively.
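    The correlation between subjective and objective scores -- the first of the three performance criteria above -- is typically a Pearson (or Spearman) coefficient. With toy scores (the values below are made up for illustration, not from the study):

```python
import numpy as np

# Subjective mean opinion scores (MOS) and a hypothetical objective metric's
# scores for the same five test sequences.
mos    = np.array([1.2, 2.5, 3.1, 4.0, 4.6])
metric = np.array([0.30, 0.52, 0.60, 0.81, 0.90])

# Pearson correlation: how linearly the metric tracks human judgment.
r = np.corrcoef(mos, metric)[0, 1]
```

A metric whose scores track MOS almost linearly gives r near 1; metrics are then ranked by this agreement, alongside statistical significance tests and runtime.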

  4. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to purposefully remove some of its portions. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill in the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to increase the compression ratio for image coding. Test results are compared with JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  5. PERFORMANCE ANALYSIS OF SET PARTITIONING IN HIERARCHICAL TREES (SPIHT) ALGORITHM FOR A FAMILY OF WAVELETS USED IN COLOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    A. Sreenivasa Murthy

    2014-11-01

    Full Text Available With the spurt in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. One has to achieve this while maintaining the quality and fidelity of the data at a level acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all wavelet transform and zero-tree quantization based image compression algorithms, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal, and Demeyer wavelet types. The resulting image quality is measured objectively, using peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR).
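    The two objective quantities used in the study, PSNR and compression ratio, can be computed as follows (the peak value of 255 assumes 8-bit images):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, the objective quality metric above."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Size reduction factor: original size over compressed size."""
    return original_bytes / compressed_bytes

a = np.full((16, 16), 100.0)
b = a + 5.0                        # uniform error of 5 gray levels -> MSE = 25
value = psnr(a, b)                 # 10*log10(255^2 / 25) ~= 34.15 dB
```

Roughly, every halving of the RMS error gains about 6 dB of PSNR, which is why small visual differences show up as several-dB gaps between wavelet families.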

  6. A Joint Compression Scheme of Video Feature Descriptors and Visual Content.

    Science.gov (United States)

    Zhang, Xiang; Ma, Siwei; Wang, Shiqi; Zhang, Xinfeng; Sun, Huifang; Gao, Wen

    2017-02-01

    High-efficiency compression of visual feature descriptors has recently emerged as an active topic due to the rapidly increasing demand for mobile visual retrieval over bandwidth-limited networks. However, transmitting only the feature descriptors may largely restrict the application scale due to the lack of the necessary visual content. To facilitate the wide spread of feature descriptors, a hybrid framework that jointly compresses the feature descriptors and the visual content is highly desirable. In this paper, such a content-plus-feature coding scheme is investigated, aiming to shape the next generation of video compression systems toward visual retrieval, where high-efficiency coding of both feature descriptors and visual content is achieved by exploiting the interactions between the two. On the one hand, visual feature descriptors can achieve a compact and efficient representation by taking advantage of the structure and motion information in the compressed video stream. To optimize the retrieval performance, a novel rate-accuracy optimization technique is proposed to accurately estimate the retrieval performance degradation in feature coding. On the other hand, the already compressed feature data can be utilized to further improve the video coding efficiency by applying feature-matching-based affine motion compensation. Extensive simulations have shown that the proposed joint compression framework offers significant bitrate reduction in representing both feature descriptors and video frames, while simultaneously maintaining state-of-the-art visual retrieval performance.

  7. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the computational load of iris identification unchanged, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilations than previous methods. Then, we develop a new iris localization algorithm that is robust to variations in quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a new, fast criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  8. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The joint collaborative team on video coding (JCT-VC) is developing the next-generation video coding standard, called high efficiency video coding (HEVC). In HEVC, the block structure comprises three units: the coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU performs recursive splitting into four equal-size blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares rate-distortion (RD) costs and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is observed without significant loss of image quality.
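    The recursive CU splitting decision can be sketched as a quadtree recursion. In the sketch below, block variance stands in as a cheap proxy for the RD cost the real encoder compares; the threshold and depth limit are illustrative assumptions, not HEVC values.

```python
import numpy as np

def decide_cu_depths(block, depth=0, max_depth=3, var_thresh=100.0):
    """Recursive quadtree split decision: keep a CU whole if it is 'cheap'
    (low variance here, RD cost in a real encoder), else split into four."""
    n = block.shape[0]
    if depth == max_depth or np.var(block) <= var_thresh:
        return [(depth, n)]            # leaf: one CU of size n x n
    h = n // 2                         # split into four equal sub-CUs
    leaves = []
    for i in (0, h):
        for j in (0, h):
            leaves += decide_cu_depths(block[i:i+h, j:j+h], depth + 1,
                                       max_depth, var_thresh)
    return leaves

flat = np.zeros((64, 64))                              # homogeneous tree block
busy = np.indices((64, 64)).sum(axis=0) % 2 * 255.0    # checkerboard texture
```

A homogeneous block stays as a single depth-0 CU, while highly textured content splits down to the maximum depth; fast-decision algorithms like the one above save time by pruning this recursion early.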

  9. Real-time data compression of broadcast video signals

    Science.gov (United States)

    Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)

    1991-01-01

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
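    A minimal DPCM loop in the spirit of the system above; it uses a uniform rather than nonuniform quantizer and omits the Huffman stage, so it is purely a sketch of the predict-quantize feedback structure.

```python
def dpcm_encode(samples, q_step=4):
    """Non-adaptive DPCM: predict each sample by the previously reconstructed
    one and quantize the prediction error (uniform quantizer for simplicity)."""
    codes, prev = [], 0
    for s in samples:
        code = int(round((s - prev) / q_step))
        codes.append(code)
        prev += code * q_step          # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, q_step=4):
    out, prev = [], 0
    for code in codes:
        prev += code * q_step
        out.append(prev)
    return out

signal = [0, 10, 22, 30, 28, 25]
rec = dpcm_decode(dpcm_encode(signal))
```

Because the encoder predicts from its own reconstruction rather than the raw input, quantization errors do not accumulate: each sample stays within half a quantizer step of the original. The small residual codes are exactly what the Huffman coder in the patented system then compresses.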

  10. ROI image compression algorithm for reconnaissance satellite systems

    Science.gov (United States)

    Tian, Xin; Wang, Chun-Ming; Tan, Yi-Hua; Tian, Jin-Wen

    2009-10-01

    The visual effect is an important factor in a coding algorithm, so the saliency of visual attention (SVA) can be used to determine the region of interest (ROI) in ROI image coding. A novel SVA-based ROI (SVA-ROI) image coding scheme is presented for reconnaissance satellite systems. As the SVA of the original image and the reconstructed image are usually the same, the same ROI can be automatically determined in the encoder and decoder from the SVA. The ROI side information then need not be transmitted, and compression efficiency can be improved. Experimental results have demonstrated that SVA-ROI has a better visual effect than similar algorithms, which makes it suitable for reconnaissance satellite systems.

  11. Investigation of the performance of video analytics systems with compressed video using the i-LIDS sterile zone dataset

    Science.gov (United States)

    Mahendrarajah, Prashath

    2011-11-01

    Recent years have seen significant investment and increasingly effective use of Video Analytics (VA) systems to detect intrusion or attacks in sterile areas. Currently there are a number of manufacturers who have achieved the Imagery Library for Intelligent Detection Systems (i-LIDS) primary detection classification performance standard for the sterile zone detection scenario. These manufacturers have demonstrated the performance of their systems under evaluation conditions using uncompressed evaluation video. In this paper we consider the effect on the detection rate of an i-LIDS primary approved sterile zone system using compressed sterile zone scenario video clips as the input. The preliminary test results demonstrate a change in detection behavior with compression: the time to alarm increased with greater compression. Initial experiments suggest that the detection performance does not degrade linearly as a function of compression ratio. These experiments form the starting point for a wider set of planned trials that the Home Office will carry out over the next 12 months.

  12. A compression algorithm for the combination of PDF sets.

    Science.gov (United States)

    Carrazza, Stefano; Latorre, José I; Rojo, Juan; Watt, Graeme

    The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
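    The replica-compression idea -- choosing a small subset of Monte Carlo replicas that preserves the statistics of the full set -- can be sketched with a random search over subsets. The paper minimizes a richer set of statistical estimators (moments, correlations, Kolmogorov distances) with a genetic algorithm; the moment-matching objective and random search below are simplifying assumptions.

```python
import numpy as np

def compress_replicas(replicas, target, iters=2000, seed=0):
    """Pick `target` replicas whose per-point mean and std best match the
    full set (random search over subsets; a sketch of the compression idea)."""
    rng = np.random.default_rng(seed)
    full_mean, full_std = replicas.mean(0), replicas.std(0)
    best_idx, best_err = None, np.inf
    for _ in range(iters):
        idx = rng.choice(len(replicas), size=target, replace=False)
        sub = replicas[idx]
        err = (np.abs(sub.mean(0) - full_mean).sum()
               + np.abs(sub.std(0) - full_std).sum())
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx, best_err

rng = np.random.default_rng(3)
replicas = rng.standard_normal((300, 5))   # 300 replicas, 5 "x-points"
idx, err = compress_replicas(replicas, target=50)
```

The point of compression is that the selected subset reproduces the full ensemble's probability distribution far better than a subset of the same size drawn at random, so downstream phenomenology can run with ~100 replicas instead of ~900.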

  13. ICARUS: imaging pulse compression algorithm through remapping of ultrasound.

    Science.gov (United States)

    Biagi, Elena; Dreoni, Nicola; Masotti, Leonardo; Rossi, Iacopo; Scabia, Marco

    2005-02-01

    In this work we tackle the problem of applying to echographic imaging the frequency-domain synthetic aperture focusing techniques (SAFT) commonly used in the field of synthetic aperture radar (SAR). The aim of this research is to improve echographic image resolution by using chirp transmit signals and by performing pulse compression in both dimensions (depth and lateral). The curved geometry present in the unfocused radio-frequency (RF) ultrasonic image is the main source of inaccuracy in the direct application of frequency-domain SAFT algorithms to echographic imaging. The focusing method proposed in this work, after pulse compression in the depth dimension, performs lateral focusing in the mixed depth-lateral spatial frequency domain by means of a depth-variant remapping followed by lateral pulse compression. This technique has the advantage of providing a resolution that is uniform in non-frequency-selective attenuating media, and improved with respect to conventional time-domain SAFT, without requiring the acquisition and processing of the channel data necessary for the most advanced synthetic transmit aperture techniques. The presented method is therefore suitable for easy real-time implementation with current-generation hardware.
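    The underlying pulse-compression step -- matched filtering of a chirp -- can be demonstrated in a few lines. All signal parameters below (sample rate, pulse length, sweep) are illustrative assumptions, not the paper's imaging parameters.

```python
import numpy as np

# A long linear-chirp transmit pulse; matched filtering collapses it to a
# narrow peak, trading pulse length for range resolution.
fs = 1e6                               # sample rate: 1 MHz
T = 1e-3                               # pulse length: 1 ms
t = np.arange(int(T * fs)) / fs
k = 100e3 / T                          # sweep 100 kHz over the pulse
chirp = np.cos(2 * np.pi * (50e3 * t + 0.5 * k * t ** 2))

# Simulated echo: the chirp buried at a 500-sample delay.
echo = np.concatenate([np.zeros(500), chirp, np.zeros(500)])

# Pulse compression = cross-correlation with the transmitted chirp.
compressed = np.abs(np.correlate(echo, chirp, mode='valid'))
peak = int(np.argmax(compressed))
```

The compressed output peaks sharply at the true delay with low sidelobes, which is why chirp excitation improves depth resolution without raising peak transmit power.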

  14. Influence of acquisition frame-rate and video compression techniques on pulse-rate variability estimation from vPPG signal.

    Science.gov (United States)

    Cerina, Luca; Iozzia, Luca; Mainardi, Luca

    2017-11-14

    In this paper, common time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from the video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effect of different video acquisition frame-rates, from 60 frames per second (fps) down to 7.5 fps, and of different video compression techniques, using both lossless and lossy codecs, on PRV parameter estimation. Video recordings were acquired with an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated signal extraction method based on the Kanade-Lucas-Tomasi (KLT) algorithm for region-of-interest (ROI) detection and tracking, in combination with a zero-phase principal component analysis (ZCA) signal separation technique, was employed to convert the video frame sequence into a pulsatile signal. The frame-rate degradation was simulated on video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; the FFV1 codec was used for lossless compression and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame-rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics, and local peak displacements, which degrade performance on all the PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (PNN50) in the time domain, and the low frequency (LF) and high frequency (HF) power in the frequency domain, were the parameters that degraded most with frame-rate reduction. Such degradation can be partially mitigated by up-sampling the measured
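    The two time-domain indexes that degraded most, RMSSD and PNN50, are simple functions of the interbeat-interval series (the interval values below are toy numbers for illustration):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive interbeat-interval differences (ms)."""
    d = np.diff(np.asarray(rr_ms, float))
    return float(np.sqrt(np.mean(d ** 2)))

def pnn50(rr_ms):
    """Proportion of successive interval differences greater than 50 ms."""
    d = np.abs(np.diff(np.asarray(rr_ms, float)))
    return float(np.mean(d > 50.0))

rr = [800, 810, 790, 900, 850]     # intervals in ms
```

Both indexes are built on beat-to-beat differences, which is exactly why they are so sensitive to the peak-timing jitter introduced by low frame-rates: a few milliseconds of displacement per peak inflates every successive difference.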

  15. Spectral compression algorithms for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.

  16. 3D wavelet-based codec for lossy compression of pre-scan-converted ultrasound video

    Science.gov (United States)

    Andrew, Rex K.; Stewart, Brent K.; Langer, Steven G.; Stegbauer, Keith C.

    1999-05-01

    We present a wavelet-based video codec based on a 3D wavelet transformer, a uniform quantizer/dequantizer, and an arithmetic encoder/decoder. The wavelet transformer uses biorthogonal Antonini wavelets in the two spatial dimensions and Haar wavelets in the time dimension. Multiple levels of decomposition are supported. The codec has been applied to pre-scan-converted ultrasound image data and does not produce the type of blocking artifacts that occur in MPEG-compressed video. The PSNR at a given compression rate increases with the number of levels of decomposition: for our data at 50:1 compression, the PSNR increases from 18.4 dB at one level to 24.0 dB at four levels of decomposition. Our 3D wavelet-based video codec provides the high compression rates required to transmit diagnostic ultrasound video over existing low-bandwidth links without introducing the blocking artifacts that have been demonstrated to diminish clinical utility.
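    The temporal half of the 3D decomposition (Haar along the time axis) can be sketched as follows; frame pairs that change little produce near-zero detail coefficients, which is what makes the codec effective on largely static ultrasound sequences. The spatial Antonini filtering and the quantizer/arithmetic-coder stages are omitted.

```python
import numpy as np

def haar_time(video):
    """One level of a Haar transform along the time axis of a
    (frames, height, width) array: temporal averages plus details."""
    lo = (video[0::2] + video[1::2]) / 2.0   # temporal averages (low-pass)
    hi = (video[0::2] - video[1::2]) / 2.0   # temporal details (high-pass)
    return lo, hi

# Four toy frames: the first pair is static, the second pair changes.
video = np.stack([np.full((4, 4), float(v)) for v in (10, 10, 20, 24)])
lo, hi = haar_time(video)
```

After the transform, the static pair contributes an all-zero detail frame that costs almost nothing to code, and because the transform is global rather than block-based, heavy quantization blurs instead of producing MPEG-style blocking.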

  17. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2010-01-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation

  18. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    National Research Council Canada - National Science Library

    Yan Feng; Shengmei Luo; Yumin Tian; Shuo Deng; Haihong Zheng

    2014-01-01

    .... Then, the algorithms were implemented and tested using different videos with ground truth, such as baseline, dynamic background, camera jitter, and intermittent object motion and shadow scenarios...

  19. Subjective Video Quality Assessment of H.265 Compression Standard for Full HD Resolution

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2015-01-01

    Full Text Available Recently, increasing interest in multimedia services has led to requirements for quality assessment, especially in the video domain. There are many factors that influence video quality; compression technology and transmission link imperfections can be considered the main ones. This paper deals with the assessment of the impact of the H.265/HEVC compression standard on video quality using subjective metrics. The evaluation is done for two types of sequences with Full HD resolution, depending on content. The paper is divided as follows. In the first part of the article, a short characteristic of the H.265/HEVC compression standard is given. In the second part, the subjective video quality methods used in our experiments are described. The last part of this article deals with the measurements and experimental results. They showed that the quality of sequences coded between 5 and 7 Mbps is sufficient for observers, so there is no need for providers to use bitrates higher than this threshold in streaming. These results are part of a new model that is still being created and will be used for predicting the video quality in networks based on IP.

  20. Brief compression-only cardiopulmonary resuscitation training video and simulation with homemade mannequin improves CPR skills.

    Science.gov (United States)

    Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H

    2016-11-29

    Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release) and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin, and two blinded CPR experts observed the testing of participants. Twenty-four participants were included: 12 never-trained and 12 currently certified in CPR. Comparing pre- and post-training, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3, p CPR-certified group had adequate pre- and post-test compression rates (>100/min), but an improved number of compressions with correct release (53.5 to 94.7, p 50 mm) remained problematic in both groups. Comparisons made between groups indicated significant improvements in compression depth, hand position, and hands-off time in never-trained compared to CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and the sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained

  1. Assessment of H.264 video compression on automated face recognition performance in surveillance and mobile video scenarios

    Science.gov (United States)

    Klare, Brendan; Burge, Mark

    2010-04-01

    We assess the impact of the H.264 video codec on the match performance of automated face recognition in surveillance and mobile video applications. A set of two hundred access control (90 pixel inter-pupillary distance) and distance surveillance (45 pixel inter-pupillary distance) videos taken under non-ideal imaging and facial recognition (e.g., pose, illumination, and expression) conditions were matched using two commercial face recognition engines in the studies. The first study evaluated automated face recognition performance on access control and distance surveillance videos at CIF and VGA resolutions using the H.264 baseline profile at nine bitrates ranging from 8 kbps to 2048 kbps. In our experiments, video signals could be compressed to 128 kbps before a significant drop in face recognition performance occurred. The second study evaluated automated face recognition on mobile devices at QCIF, iPhone, and Android resolutions for each of the H.264 PDA profiles. Rank-one match performance, cumulative match scores, and failure-to-enroll rates are reported.

  2. Dynamic range compression and detail enhancement algorithm for infrared image.

    Science.gov (United States)

    Sun, Gang; Liu, Songlin; Wang, Weihua; Chen, Zengping

    2014-09-10

    For infrared imaging systems with high sampling bit width whose output is shown on traditional display devices or processed by real-time systems with 8-bit data width, this paper presents a new high dynamic range compression and detail enhancement (DRCDDE) algorithm for infrared images. First, a bilateral filter is adopted to separate the original image into two parts: the base component, which contains large-scale signal variations, and the detail component, which contains high-frequency information. Then, an operator model for DRC with local-contrast preservation is established, along with a newly proposed nonlinear intensity transfer function (ITF), to implement adaptive DRC of the base component. For the detail component, depending on the local statistical characteristics, suitable intensity level extension criteria are set up to enhance low-contrast details and suppress noise. Finally, the results of the two components are recombined with a weighted coefficient. Experimental results on real infrared data and quantitative comparisons with other well-established methods show the better performance of the proposed algorithm. Furthermore, the technique can effectively bring out a dim target while suppressing noise, which is beneficial to image display and target detection.
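    The base/detail split at the core of such DRC algorithms can be sketched on a single scanline. In the sketch below, a plain moving average stands in for the edge-preserving bilateral filter, and the power-law mapping, detail gain, and sample values are illustrative assumptions, not the paper's operator model:

```python
# Base/detail dynamic range compression on one scanline of high-bit-depth
# data: smooth to get the base layer, compress it with a power law, then
# add back a scaled detail layer, clamping to the 8-bit display range.

def moving_average(x, radius=1):
    # Stand-in for the bilateral filter (NOT edge-preserving).
    out = []
    for i in range(len(x)):
        win = x[max(0, i - radius):i + radius + 1]
        out.append(sum(win) / len(win))
    return out

def compress_dynamic_range(x, gamma=0.5, detail_gain=0.5, out_max=255.0):
    base = moving_average(x)                        # large-scale variations
    detail = [xi - bi for xi, bi in zip(x, base)]   # high-frequency residual
    peak = max(base)
    base_c = [out_max * (bi / peak) ** gamma for bi in base]  # power-law DRC
    return [min(out_max, max(0.0, bc + detail_gain * di))
            for bc, di in zip(base_c, detail)]

hdr_line = [100.0, 110.0, 4000.0, 4100.0]   # 14-bit-style intensity values
ldr_line = compress_dynamic_range(hdr_line)  # fits an 8-bit display range
```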

  3. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    Science.gov (United States)

    Fannjiang, Albert C.

    2011-03-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for the exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition assumes additionally an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. Rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm guarantees to recover, with high probability, s scatterers with n = O(s^2) random sampling and incident directions and sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane, MUSIC guarantees to recover, with high probability, s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localization, both BPDN and MUSIC guarantee, with high probability, to identify exactly the frequencies of random signals with n = O(s) sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well

  4. A BitTorrent-Based Dynamic Bandwidth Adaptation Algorithm for Video Streaming

    Science.gov (United States)

    Hsu, Tz-Heng; Liang, You-Sheng; Chiang, Meng-Shu

    In this paper, we propose a BitTorrent-based dynamic bandwidth adaptation algorithm for video streaming. Two mechanisms to improve the original BitTorrent protocol are proposed: (1) the decoding order frame first (DOFF) frame selection algorithm and (2) the rarest I frame first (RIFF) frame selection algorithm. With the proposed algorithms, a peer can periodically check the number of downloaded frames in the buffer and then allocate the available bandwidth adaptively for video streaming. As a result, users can have a smooth video playout experience with the proposed algorithms.

  5. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs.

    Science.gov (United States)

    Zheng, Yu; Yang, Yang; Chen, Wu

    2017-06-25

    In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
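    The correlation step underlying range compression can be sketched as follows; the binary stand-in code, delay, and lengths are illustrative assumptions rather than GNSS signal parameters:

```python
# Range compression by correlation: the reflected (delayed) signal is
# correlated with the synchronized direct signal; the correlation peak
# localizes the reflector's delay, i.e. its range bin.
import random

random.seed(0)
code = [random.choice((-1, 1)) for _ in range(64)]   # stand-in PRN chips
delay = 17
received = [0] * delay + code + [0] * 10             # delayed echo, no noise

def correlate(rx, ref):
    n = len(rx) - len(ref) + 1
    return [sum(r * c for r, c in zip(rx[k:k + len(ref)], ref))
            for k in range(n)]

profile = correlate(received, code)                  # range profile
peak = max(range(len(profile)), key=profile.__getitem__)
```

    The paper's spectrum-equalization step would then shape this profile to suppress the side lobes around the peak.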

  6. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    Xie Xiang

    2007-01-01

    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm, based on the Bayer format image, suitable for hardware design. This algorithm can provide a low average compression rate ( bits/pixel with high image quality (larger than dB for endoscopic images. In particular, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of this compression algorithm is also presented. Its hardware design has been implemented in m CMOS process.

  7. Video Compression Schemes Using Edge Feature on Wireless Video Sensor Networks

    National Research Council Canada - National Science Library

    Nguyen Huu, Phat; Tran-Quang, Vinh; Miyoshi, Takumi

    2012-01-01

    .... In these schemes, we divide the compression process into several small processing components, which are then distributed to multiple nodes along a path from a source node to a cluster head in a cluster...

  8. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame data reuse scheme reduces memory traffic by 50% for VC-ME.

  9. High-Performance Motion Estimation for Image Sensors with Video Compression.

    Science.gov (United States)

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-08-21

    It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame data reuse scheme reduces memory traffic by 50% for VC-ME.

  10. Compression of Video-Otoscope Images for Tele-Otology: A Pilot Study

    Science.gov (United States)

    2001-10-25

    algorithm used in image compression is the one developed by the Joint Photographic Experts Group (JPEG), which has been deployed in almost all imaging ...recognised the image, nor go back to view the previous images. This was designed to minimise the effect of memory. After the assessments were tabulated ...also have contributed such as the memory effect, or the experience of the assessor. V. CONCLUSION 1. Images can probably be compressed to about

  11. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

    Full Text Available A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics for visual perception analysis. Firstly, the algorithm employs motion vectors (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in visual perception analysis compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a video standard codec at medium-to-low bitrates or combined with other algorithms in fast video coding.

  12. Multi-color image compression-encryption algorithm based on chaotic system and fuzzy transform

    OpenAIRE

    Zarebnia, M.; Kianfar, R.; Parvaz, R.

    2017-01-01

    In this paper, an algorithm for multi-color image compression-encryption is introduced. In the compression step, a fuzzy transform based on an exponential B-spline function is used. In the encryption step, a novel combined chaotic system based on the Sine and Tent systems is proposed. Also, in the encryption algorithm, a 3D shift based on the chaotic system is introduced. The simulation results and security analysis show that the proposed algorithm is secure and efficient.

  13. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  14. Impact of H.264/AVC and H.265/HEVC Compression Standards on the Video Quality for 4K Resolution

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2014-01-01

    Full Text Available This article deals with the impact of H.264/AVC and H.265/HEVC compression standards on the video quality for 4K resolution. In the first part a short characteristic of both compression standards is written. The second part focuses on the well-known objective metrics which were used for evaluating the video quality. In the third part the measurements and the experimental results are described.

  15. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used for applications such as video-telephony, video-conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching Algorithm.
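    PSNR, one of the objective measures used above, can be computed as sketched below (an illustrative helper on flat lists of 8-bit pixel values, not the paper's evaluation code):

```python
# Peak Signal-to-Noise Ratio between a reference frame and a test frame,
# both given as flat lists of 8-bit pixel values.
import math

def psnr(ref, test, peak=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")        # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

ref  = [50, 60, 70, 80]
test = [50, 61, 69, 80]            # small errors give a high PSNR
```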

  16. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Full Text Available Circuits and systems able to process high quality video in real time are fundamental in today's imaging systems. The circuit proposed in this paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed with commercial FPGA devices as target and provides speed and logic resource occupation that overcome previously proposed implementations. The circuit, when implemented on Virtex6 or StratixIV, processes more than 45 frames per second in 1080p format and uses a few percent of the FPGA logic resources.
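    The per-pixel update/classify loop behind such background models can be sketched in scalar form. The single-Gaussian model below is a deliberate simplification of the GMM (which keeps several weighted Gaussians per pixel); the learning rate, threshold, and pixel values are assumptions for illustration:

```python
# Simplified per-pixel background model: one running Gaussian per pixel.
# A pixel is classified as foreground when it falls outside k standard
# deviations of the model, then the mean and variance are updated with a
# learning rate alpha.

def update(mean, var, x, alpha=0.05, k=2.5):
    fg = (x - mean) ** 2 > (k ** 2) * var            # foreground test
    mean = (1 - alpha) * mean + alpha * x            # running mean
    var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return mean, max(var, 1e-6), fg

mean, var = 100.0, 25.0
for frame_px in [101, 99, 100, 102]:                 # static background
    mean, var, fg_static = update(mean, var, frame_px)
mean, var, fg_object = update(mean, var, 200)        # bright object passes by
```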

  17. Comparison of Open Source Compression Algorithms on Vhr Remote Sensing Images for Efficient Storage Hierarchy

    Science.gov (United States)

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.

    2016-06-01

    The high resolution level of satellite imagery comes with a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps applied once the image is acquired, the file sizes increase even more, making the data harder to store and much slower to transmit from one source to another; hence, compressing the raw data and the various levels of processed data is a necessity for archiving stations to save more space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open source programs supporting the related compression algorithms have been applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites, having 1.5 m GSD, which were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS), using the algorithms Lempel-Ziv-Welch (LZW), Lempel-Ziv-Markov chain Algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms over the sample datasets in terms of how much of the image data can be compressed while ensuring lossless compression.
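    A miniature version of such a lossless comparison can be reproduced with Python's standard-library codecs (zlib for Deflate, lzma for LZMA/LZMA2, bz2 for a BWT-based coder); the synthetic byte pattern below is a stand-in for real raster data:

```python
# Compare three lossless codecs on the same data: compress, verify exact
# round-trip reconstruction (losslessness), and record compression ratios.
import bz2, lzma, zlib

data = bytes(range(256)) * 64          # stand-in for raster image bytes
codecs = {
    "deflate": (zlib.compress, zlib.decompress),
    "lzma":    (lzma.compress, lzma.decompress),
    "bwt/bz2": (bz2.compress, bz2.decompress),
}
report = {}
for name, (enc, dec) in codecs.items():
    blob = enc(data)
    assert dec(blob) == data                 # lossless: exact reconstruction
    report[name] = len(blob) / len(data)     # ratio (smaller is better)
```

    On real GeoTIFF rasters the ranking between codecs depends heavily on the image statistics, which is exactly what the study above measures.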

  18. Impact of GoP on the Video Quality of VP9 Compression Standard for Full HD Resolution

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2016-01-01

    Full Text Available In the last years, interest in multimedia services has significantly increased. This leads to requirements for quality assessment, especially in the video domain. Compression and transmission link imperfections are the two main factors that influence the quality. This paper deals with the assessment of the impact of the Group of Pictures (GoP) on the video quality of the VP9 compression standard. The evaluation was done using selected objective and subjective methods for two types of Full HD sequences depending on content. These results are part of a new model that is still being created and will be used for predicting the video quality in networks based on IP.

  19. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low cost and lightweight smart cameras steadily increases the demand for efficient and high performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit has commercial FPGA devices as a target and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC-90nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.

  20. Endotracheal Intubation Using the Macintosh Laryngoscope or KingVision Video Laryngoscope during Uninterrupted Chest Compression

    Directory of Open Access Journals (Sweden)

    Ewelina Gaszynska

    2014-01-01

    Full Text Available Objective. Advanced airway management, i.e., endotracheal intubation (ETI), during CPR is more difficult than, for example, during anesthesia. However, new devices such as video laryngoscopes should help in such circumstances. The aim of this study was to assess the performance of the KingVision video laryngoscope in a manikin cardiopulmonary resuscitation (CPR) scenario. Methods. Thirty students enrolled in the third year of paramedic school took part in the study. The simulated CPR scenario was ETI using the standard laryngoscope with a Macintosh blade (MCL) and ETI using the KingVision video laryngoscope, both performed during uninterrupted chest compressions. The primary endpoints were the time needed for ETI and the success ratio. Results. The mean time required for intubation was similar for both laryngoscopes: 16.6 (SD 5.11, median 15.64, range 7.9–27.9) seconds versus 17.91 (SD 5.6, median 16.28, range 10.6–28.6) seconds for the MCL and KingVision, respectively (P=0.1888). On the first attempt at ETI, the success rate during CPR was comparable between the evaluated laryngoscopes (P=0.9032). Conclusion. The KingVision video laryngoscope proved not to be superior to the standard laryngoscope with a Macintosh blade when used for endotracheal intubation during CPR, in terms of either shortening the time needed for ETI or increasing the success ratio.

  1. Advantages of a non-linear frequency compression algorithm in noise.

    Science.gov (United States)

    Bohnert, Andrea; Nyffeler, Myriel; Keilmann, Annerose

    2010-07-01

    A multichannel non-linear frequency compression algorithm was evaluated in comparison to conventional amplification hearing aids using a test of speech understanding in noise (Oldenburger Satztest-OLSA) and subjective questionnaires. The new algorithm compresses frequencies above a pre-calculated cutoff frequency and shifts them to a lower frequency range, thereby providing high-frequency audibility. Low frequencies, below the compression cutoff frequency, are amplified normally. This algorithm is called SoundRecover (SR). In this study, 11 experienced hearing aid users with severe to profound sensorineural hearing loss were tested. Seven subjects showed enhanced levels of understanding in noise (OLSA) using frequency compression. However, 4 of the 11 subjects could not benefit from the high-frequency gain. Evaluation using questionnaires demonstrated an increased level of satisfaction after 2 months (p = 0.08) and after 4 months (p = 0.09) of wearing the experimental devices, compared to conventional hearing instruments.

  2. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

    Full Text Available The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast and suitable for real-time video sequences. The algorithm is invariant to large scale and pose variations. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 seconds on a 3.2 GHz P4 machine.
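    The coarse skin-colour test can be sketched with an explicit RGB rule. The thresholds below are a commonly cited heuristic, assumed here for illustration; they are not necessarily the rule used by the authors:

```python
# Coarse skin-colour classification of RGB pixels: a candidate face region
# is the set of foreground pixels passing this explicit threshold rule.

def is_skin(r, g, b):
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_mask(pixels):
    return [is_skin(*p) for p in pixels]

mask = skin_mask([(200, 120, 90),    # skin-like tone
                  (30, 90, 200)])    # background blue
```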

  3. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    Science.gov (United States)

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, increasingly utilize wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms comprise a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as novel proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the relative negative effect on compression latency as opposed to the increased compression rate. It is noted that the proposed schemes managed to offer considerable advantages, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm managed to combine a highly competitive level of compression while ensuring minimum latency, thus exhibiting real-time capabilities.
Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms also exhibiting considerable advantages as far as the

  4. Exploiting chaos-based compressed sensing and cryptographic algorithm for image encryption and compression

    Science.gov (United States)

    Chen, Junxin; Zhang, Yu; Qi, Lin; Fu, Chong; Xu, Lisheng

    2018-02-01

    This paper presents a solution for simultaneous image encryption and compression. The primary introduced techniques are compressed sensing (CS) using a structurally random matrix (SRM), and permutation-diffusion type image encryption. The encryption performance originates from both techniques, whereas the compression effect is achieved by CS. A three-dimensional (3-D) cat map is employed for key stream generation. The three simultaneously produced state variables of the 3-D cat map are respectively used for the SRM generation, image permutation, and diffusion. Numerical simulations and security analyses have been carried out, and the results demonstrate the effectiveness and security performance of the proposed system.
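    The keystream idea behind the diffusion stage can be sketched as follows. The logistic map below is a simplified stand-in for the paper's 3-D cat map (which additionally drives SRM generation and permutation); the parameters and payload are assumptions for illustration:

```python
# Diffusion-only sketch: a chaotic keystream is derived from an initial
# condition (the key) and XORed with the data. Decryption repeats the
# same operation with the same key.

def keystream(x0, n, r=3.99):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)            # logistic map iteration
        out.append(int(x * 256) % 256)  # quantize state to one byte
    return out

def xor_cipher(data, x0):
    return bytes(d ^ k for d, k in zip(data, keystream(x0, len(data))))

plain = b"compressed sensing payload"
cipher = xor_cipher(plain, 0.3141)      # encrypt
restored = xor_cipher(cipher, 0.3141)   # decrypt with the same key
```

    The sensitivity of the map to its initial condition is what makes a wrong key produce an entirely different keystream.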

  5. A Fast PDE Algorithm Using Adaptive Scan and Search for Video Coding

    Science.gov (United States)

    Kim, Jong-Nam

    In this paper, we propose an algorithm that reduces unnecessary computations while keeping the same prediction quality as the full search algorithm. The proposed algorithm efficiently reduces unnecessary computations by calculating an initial matching error point from the first 1/N partial errors, which increases the probability of hitting the minimum error point as early as possible. Our algorithm decreases the computational load by about 20% compared with the conventional PDE algorithm, without any degradation of prediction quality, and would be useful in real-time video coding applications using the MPEG-2/4 AVC standards.
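
    The core PDE idea, abandoning a candidate block as soon as its partial distortion already exceeds the best match found so far, can be sketched as follows. This is a minimal illustration of plain PDE inside an exhaustive search, not the paper's adaptive scan-and-search ordering; all names are hypothetical.

```python
def sad_with_early_exit(block, cand, best_so_far):
    """Accumulate the sum of absolute differences row by row; abort once the
    partial sum reaches the best distortion so far (the PDE criterion),
    since the remaining rows can only increase it."""
    partial = 0
    for row_b, row_c in zip(block, cand):
        partial += sum(abs(a - b) for a, b in zip(row_b, row_c))
        if partial >= best_so_far:
            return None  # candidate rejected early
    return partial

def block_match(ref, cur, top, left, size, search):
    """Exhaustive block matching over a +/-search window, PDE-accelerated."""
    block = [row[left:left + size] for row in cur[top:top + size]]
    best, best_mv = float('inf'), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > len(ref) or x + size > len(ref[0]):
                continue
            cand = [row[x:x + size] for row in ref[y:y + size]]
            d = sad_with_early_exit(block, cand, best)
            if d is not None:
                best, best_mv = d, (dy, dx)
    return best_mv, best
```

    The earlier a good initial match is found, the tighter `best_so_far` becomes and the sooner later candidates are abandoned, which is exactly why the paper's initial-error estimate from the first 1/N partial errors pays off.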

  6. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Science.gov (United States)

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video sequences but are generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112

  7. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Directory of Open Access Journals (Sweden)

    Guangle Yao

    2017-08-01

    Full Text Available Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video sequences but are generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR.

  8. Preserving color fidelity for display devices using scalable memory compression architecture for text, graphics, and video

    Science.gov (United States)

    Lebowsky, Fritz; Nicolas, Marina

    2014-01-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel amplitude processing becomes costly not only when transmitting over cable or wireless communication channels, but also when processing with array processor architectures. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. We present a block-based memory compression architecture for text, graphics, and video enabling multidimensional error minimization with context-sensitive control of visually noticeable artifacts. As a result of analyzing image context locally, the number of operations per pixel can be significantly reduced, especially when implemented on array processor architectures. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, identifies its current limitations with regard to high-quality color rendering, and illustrates remaining visual artifacts.

  9. Compression algorithm for data analysis in a radio link (preparation of PACEM 2 experiment)

    Science.gov (United States)

    Leroux, G.; Sylvain, M.

    1982-11-01

    The Hadamard transformation for image compression is applied to a radio data transmission system. The programs used and the performance obtained are described. The algorithms are expressed in PASCAL, and the listed programs are written in FORTRAN 77. The experimental results, for 62 images of 64 lines each, show a standard deviation of 1.5% at a compression rate of 18.5, which is in accordance with the proposed goals.
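
    The Hadamard transform compresses a block by concentrating its energy in a few coefficients, which can then be kept while the rest are discarded. A minimal sketch of that idea follows; it is a generic illustration, not the PACEM 2 implementation, and the helper names are hypothetical.

```python
def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def wht2d(block):
    """Separable 2-D Walsh-Hadamard transform, normalized by 1/n so that
    applying it twice returns the original block (since H*H = n*I)."""
    n = len(block)
    H = hadamard(n)
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    t = matmul(matmul(H, block), H)
    return [[v / n for v in row] for row in t]

def keep_largest(coeffs, keep):
    """Crude compression step: zero all but the `keep` largest-magnitude
    transform coefficients."""
    flat = sorted((abs(v) for row in coeffs for v in row), reverse=True)
    thr = flat[keep - 1]
    return [[v if abs(v) >= thr else 0.0 for v in row] for row in coeffs]
```

    Because the matrix contains only +1/-1 entries, the forward transform needs no multiplications, which is what made it attractive for early radio-link hardware.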

  10. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Directory of Open Access Journals (Sweden)

    Jin-Yu Zhang

    2014-01-01

    Full Text Available This paper proposes a new thermal wave image sequence compression algorithm combining a double exponential decay fitting model and the differential evolution algorithm. The study benchmarked the fitting compression results and precision of the proposed method against those of traditional methods via experiment; it investigated the fitting compression performance under long time series and the improved model, and validated the algorithm by practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method.
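
    The combination described here, fitting each pixel's decay curve with a double exponential model whose parameters are found by differential evolution, can be sketched as below. This is a minimal DE/rand/1/bin optimizer under assumed bounds and hyperparameters, not the paper's tuned implementation; all names are hypothetical.

```python
import math
import random

def double_exp(p, t):
    """y(t) = a*exp(-b*t) + c*exp(-d*t), the double exponential decay model."""
    a, b, c, d = p
    return a * math.exp(-b * t) + c * math.exp(-d * t)

def sse(p, ts, ys):
    """Sum of squared fitting errors, the quantity DE minimizes."""
    return sum((double_exp(p, t) - y) ** 2 for t, y in zip(ts, ys))

def differential_evolution(f, bounds, pop_size=40, gens=300, F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover, then greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    lo, hi = bounds[j]
                    trial.append(min(max(pop[r1][j] + F * (pop[r2][j] - pop[r3][j]), lo), hi))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

    Compression then follows from storing only the four fitted parameters per pixel instead of the full thermal time series.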

  11. Analysis of compressive properties of the BioAid hearing aid algorithm.

    Science.gov (United States)

    Clark, Nicholas R; Lecluyse, Wendy; Jürgens, Tim

    2017-09-25

    This technical paper describes a biologically inspired hearing aid algorithm based on a computer model of the peripheral auditory system simulating basilar membrane compression, reflexive efferent feedback and its resulting properties. Two evaluations were conducted on the core part of the algorithm, which is an instantaneous compression sandwiched between the attenuation and envelope extraction processes of a relatively slow feedback compressor. The algorithm's input/output (I/O) function was analysed for different stationary (ambient) sound levels, and the algorithm's response to transient sinusoidal tone complexes was analysed and contrasted to that of a reference dynamic compressor. The algorithm's emergent properties are: (1) the I/O function adapts to the average sound level such that processing is linear for levels close to the ambient sound level and (2) onsets of transient signals are marked across time and frequency. Adaptive linearisation and onset marking, as inherent compressive features of the algorithm, provide potentially beneficial features to hearing-impaired listeners with a relatively simple circuit. The algorithm offers a new, biological perspective on hearing aid amplification.

  12. General Video Game Evaluation Using Relative Algorithm Performance Profiles

    DEFF Research Database (Denmark)

    Nielsen, Thorbjørn; Barros, Gabriella; Togelius, Julian

    2015-01-01

    In order to generate complete games through evolution we need generic and reliable evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea through investigating the relative performance of different general game-playing algorithms. Seven game-playing algorithms were used to play several hand-designed, mutated and randomly generated VGDL game descriptions. The results discussed appear to support the conjecture that well-designed games have, on average, a higher performance difference between better and worse game-playing algorithms.

  13. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    Science.gov (United States)

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments, although its short-term performance degrades with abnormal or noise-contaminated ECG signals.

  14. Fast vector quantization using a Bat algorithm for image compression

    OpenAIRE

    Karri, Chiranjeevi; Jena, Umaranjan

    2017-01-01

    Linde–Buzo–Gray (LBG), a traditional method of vector quantization (VQ) generates a local optimal codebook which results in lower PSNR value. The performance of vector quantization (VQ) depends on the appropriate codebook, so researchers proposed optimization techniques for global codebook generation. Particle swarm optimization (PSO) and Firefly algorithm (FA) generate an efficient codebook, but undergoes instability in convergence when particle velocity is high and non-availability of brigh...

  15. Fast vector quantization using a Bat algorithm for image compression

    Directory of Open Access Journals (Sweden)

    Chiranjeevi Karri

    2016-06-01

    Full Text Available Linde–Buzo–Gray (LBG), a traditional method of vector quantization (VQ), generates a locally optimal codebook which results in a lower PSNR value. The performance of VQ depends on an appropriate codebook, so researchers have proposed optimization techniques for global codebook generation. Particle swarm optimization (PSO) and the Firefly algorithm (FA) generate an efficient codebook, but undergo instability in convergence when particle velocity is high and when brighter fireflies are unavailable in the search space, respectively. In this paper, we propose a new algorithm called BA-LBG which uses the Bat Algorithm on the initial solution of LBG. It produces an efficient codebook with less computational time and yields very good PSNR due to its automatic zooming feature using the adjustable pulse emission rate and loudness of bats. From the results, we observed that BA-LBG has high PSNR compared to LBG, PSO-LBG, Quantum PSO-LBG, HBMO-LBG and FA-LBG, and its average convergence speed is 1.841 times faster than HBMO-LBG and FA-LBG, with no significant difference from PSO.
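
    For reference, the baseline that all of these metaheuristics refine is the plain LBG (generalized Lloyd) iteration: partition the training vectors by nearest codeword, then move each codeword to its partition's centroid. The sketch below shows only this baseline with a naive seeding; the Bat-algorithm initialization of the paper is not reproduced, and the names are hypothetical.

```python
def lbg(training, k, iters=20):
    """Plain LBG / generalized Lloyd iteration: nearest-codeword partition,
    then centroid update.  Seeded with the first k training vectors."""
    codebook = [list(v) for v in training[:k]]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for v in training:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            cells[i].append(v)
        for i, cell in enumerate(cells):
            if cell:  # leave empty cells unchanged
                codebook[i] = [sum(col) / len(cell) for col in zip(*cell)]
    return codebook
```

    Because each step only decreases (never increases) the total distortion, LBG converges, but only to a local optimum; that locality is exactly what BA-LBG attacks by optimizing the starting codebook globally.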

  16. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    Science.gov (United States)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective for achieving a higher compression ratio while ensuring encoding quality and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
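
    The transform-and-quantize step this record describes can be sketched as follows: a 2-D DCT of a sub-block, then division by a quantization matrix whose step sizes grow with spatial frequency. The frequency weighting below is purely illustrative, not the HVS-derived matrices of the paper, and the names are hypothetical.

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an n x n block."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[alpha(u) * alpha(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]

def quantize(coeffs, base=4.0):
    """Quantize with step sizes that grow with spatial frequency (u + v),
    mimicking the falling contrast sensitivity of the HVS at high
    frequencies; the weights are illustrative only."""
    n = len(coeffs)
    return [[round(coeffs[u][v] / (base * (1 + u + v)))
             for v in range(n)] for u in range(n)]
```

    The quantized coefficients are mostly zero at high frequencies, which is what makes the subsequent Huffman coding effective.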

  17. Delta modulation. [overshoot suppression algorithm for video data transmission

    Science.gov (United States)

    Schilling, D. L.

    1973-01-01

    The overshoot suppression algorithm has been studied more extensively. Computer-generated test pictures show a radical improvement due to the overshoot suppression algorithm. Considering the delta modulator link as a nonlinear digital filter, a formula has been developed that relates the minimum rise time that can be handled to the given filter parameters and voltage swings. The settling time has been calculated both with and without overshoot suppression; the results indicate a significant decrease in settling time when overshoot suppression is used. An algorithm for correcting channel errors has also been developed. It is shown that pulse stuffing PCM words into the DM bit stream results in a significant reduction in error length.
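
    For context, basic linear delta modulation encodes each sample as a single bit, the sign of the error between the input and a staircase tracking estimate; overshoot appears when the staircase keeps climbing past a sharp edge. The sketch below shows only the plain modulator, without the paper's suppression logic; names are hypothetical.

```python
def dm_encode(samples, step):
    """Linear delta modulation: transmit one bit per sample, the sign of the
    error between the input and the decoder's tracking estimate."""
    bits, est = [], 0.0
    for s in samples:
        bit = 1 if s >= est else 0
        est += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step):
    """Rebuild the staircase approximation from the bit stream."""
    out, est = [], 0.0
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return out
```

    On a slowly varying input the staircase hugs the signal to within about one step; on a step edge it ramps up over several samples and then rings around the new level, which is the overshoot the paper's algorithm suppresses.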

  18. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    Science.gov (United States)

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.

  19. Multiple Moving Object Detection for Fast Video Content Description in Compressed Domain

    Directory of Open Access Journals (Sweden)

    Boris Mansencal

    2007-11-01

    Full Text Available Indexing deals with the automatic extraction of information with the objective of automatically describing and organizing the content. In a video stream, different types of information can be considered semantically important. Since we can assume that the most relevant one is linked to the presence of moving foreground objects, their number, their shape, and their appearance can constitute a good means for content description. For this reason, we propose to combine both motion information and region-based color segmentation to extract moving objects from an MPEG2 compressed video stream, starting from low-resolution data only. This approach, which we refer to as “rough indexing,” consists in processing P-frame motion information first, and then in performing I-frame color segmentation. Next, since many details can be lost due to the low-resolution data, a novel spatiotemporal filter has been developed to improve the object detection results; it is based on a quadric surface modeling the object trace over time. This method makes it possible to effectively correct earlier detection errors without heavily increasing the computational effort.

  20. BIND – An algorithm for loss-less compression of nucleotide ...

    Indian Academy of Sciences (India)

    2012-08-13

    Aug 13, 2012 ... Storage, archival and dissemination of such huge data sets require efficient solutions, both from the hardware as well as software perspective. The present paper describes BIND – an algorithm specialized for compressing nucleotide sequence data. By adopting a unique 'block-length' encoding for ...

  1. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty dealing with factors such as occlusion, appearance changes, and pose variation. The reasons are twofold: first, although the naive Bayes classifier is fast to train, it is not robust to noise; second, its parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we draw on adaptive compressive sensing theory, using a weighted random projection to exploit both local and discriminative information of the object. Second, we use an online random forest classifier for online tracking, which is demonstrated to be more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.

  2. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01


  3. Proposal for Preliminary Evaluation of an Approximate Data Compression Algorithm for the FITS Standard.

    Science.gov (United States)

    Freedman, I.

    1999-09-01

    We review the requirements for a compression algorithm compatible with the FITS Standard. We propose an object-oriented generic mechanism for the encoding of data structures with multiple fields and stochastic properties. Using syntax reminiscent of the World Coordinate System FITS extension, we decompose the data into an approximate component and residuals. We propose an adaptive wavelet image coder based on Universal Trellis-Coded Quantization (UTCQ). This algorithm outperforms other vector and lattice quantization techniques of much higher complexity. Coupled with diagonal coding of residuals, it permits high-performance data compression with random access to subsets of exactly reconstructed data. UTCQ requires no training algorithm; only a simple modeling procedure is required for a given source statistical distribution. The model allows UTCQ access to continuous rates without codebook storage. The coder is quite robust, and even at 64:1 compression it performs within 0.3 dB of the rate-distortion bound for a memoryless Gaussian source. The compressed imagery is robust to post-processing including histogram equalization, sharpening, and edge enhancement, without blocking artifacts. An embedded zero-tree algorithm permits control of the compression to any desired bit rate or quantitative quality criterion. Such control ensures the ability to meet any target data quality requirement to preserve the overall error budget in scientific data processing. We thus reduce expenditure on storage devices and computer networks, and their manageability is improved. When combined with hierarchical region-based encoding such as the k-d tree considered by AXAF, the computational resource requirements improve by a factor of ln(N)/N, where N is the dimensionality of the data. Calculation of wavelet coefficients for scientific analysis may be accelerated via ultra-low-cost, commercially available wavelet compression hardware (e.g., Analog Devices' ADV601). We demonstrate some results based on

  4. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in the present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will be subsequently implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that take account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible options of routes. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of the project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  5. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
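
    The mathematical core that such a GPU pipeline accelerates is matched filtering: correlating the received samples against the transmitted code replica so that a long coded pulse collapses into a narrow, high peak. A CPU-side sketch of that operation, using the classic Barker-13 biphase code rather than any waveform from the paper, is shown below; names are hypothetical.

```python
# Barker-13 biphase code: the classic low-sidelobe pulse compression waveform.
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def matched_filter(rx, code):
    """Correlate the received samples with the transmit replica.  The peak
    marks the target delay, and its height equals the code length, which is
    the pulse compression gain."""
    n, m = len(rx), len(code)
    return [sum(rx[i + j] * code[j] for j in range(m)) for i in range(n - m + 1)]
```

    In practice this correlation is computed as a frequency-domain multiplication via FFTs, which is exactly the part mapped onto the CUDA FFT and linear algebra libraries in the paper.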

  6. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm with the standard twin-comparison method.

  7. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    Directory of Open Access Journals (Sweden)

    Tareq H. Khan

    2014-11-01

    Full Text Available In this paper, a new low-complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder combining Golomb-Rice and unary encoding. All these components have been heavily optimized for low power and low cost, and are lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors which send pixel data in raster-scan fashion, eliminating the need for a large buffer memory. The compression algorithm is capable of working with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field programmable gate array (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted using the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution to wireless capsule endoscopy with a lossless and yet acceptable level of compression.
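
    Golomb-Rice coding, one ingredient of the entropy stage described above, encodes a non-negative integer as a unary quotient plus a fixed-width binary remainder, which makes it cheap enough for in-capsule hardware. The sketch below is generic Rice coding over bit strings, not the paper's exact code assignment or predictor; names are hypothetical.

```python
def rice_encode(n, k):
    """Golomb-Rice code with divisor 2**k: the quotient in unary (q ones and
    a terminating zero) followed by the remainder in k binary bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k:
        bits += format(r, '0%db' % k)
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode; returns (value, number of bits consumed)."""
    q = 0
    while bits[q] == '1':
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k
```

    Small prediction residuals dominate in endoscopic images, so with a well-chosen k most symbols cost only a few bits, which is where the reported 78-84% average compression comes from.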

  8. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Science.gov (United States)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  9. The Development of Video Learning to Deliver a Basic Algorithm Learning

    Directory of Open Access Journals (Sweden)

    slamet kurniawan fahrurozi

    2017-12-01

    Full Text Available The world of education is currently entering the media era, in which learning activities demand a reduction of lecture methods in favor of the use of many media. In relation to the function of instructional media, it can be emphasized as follows: media serve as a tool to make learning more effective, accelerate the teaching and learning process, and improve the quality of teaching and learning. This research aimed to develop a learning video for the basic programming material on algorithms that is appropriate to be applied as a learning resource in class X SMK. This study also aimed to assess the feasibility of the learning video media developed. The research method used was research and development, using the development model of Alessi and Trollip (2001). The development model was divided into 3 stages, namely Planning, Design, and Development. Data collection techniques used the interview method, the literature method, and the instrument method. In the next stage, the learning video was validated and evaluated by material experts, media experts, and users, and implemented with 30 learners. The results of the research showed that a learning video consisting of 8 scene videos was successfully made for basic programming subjects. Based on the learning video validation results, the percentage of the learning video's eligibility is 90.5% from material experts, 95.9% from media experts, and 84% from users or learners. From the testing results, the learning videos that have been developed can be used as learning resources or instructional media for the basic programming material on algorithms.

  10. "Can you see me now?" An objective metric for predicting intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Francis M.; Hemami, Sheila S.

    2007-02-01

    For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
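    The region-weighted pooling idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's metric: the skin mask, the squared-error distortion, and the 0.8/0.2 weights are all assumptions made for the example.

```python
def region_weighted_score(ref, dist, mask, w_skin=0.8, w_bg=0.2):
    """Pool squared error separately over skin (face/hand) pixels and
    background pixels, then combine with fixed weights. The weights
    here are illustrative, not the paper's optimized values."""
    skin_err, skin_n, bg_err, bg_n = 0.0, 0, 0.0, 0
    for r, d, m in zip(ref, dist, mask):
        e = (r - d) ** 2
        if m:                      # pixel flagged as face/hand by skin detection
            skin_err += e
            skin_n += 1
        else:
            bg_err += e
            bg_n += 1
    skin_mse = skin_err / skin_n if skin_n else 0.0
    bg_mse = bg_err / bg_n if bg_n else 0.0
    return w_skin * skin_mse + w_bg * bg_mse
```

    A lower score indicates a more intelligible frame; in the metric proper, the per-frame scores would also be pooled across the sequence.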

  11. Efficient Compression of Far Field Matrices in Multipole Algorithms based on Spherical Harmonics and Radiating Modes

    Directory of Open Access Journals (Sweden)

    A. Schroeder

    2012-09-01

    Full Text Available This paper proposes a compression of far field matrices in the fast multipole method and its multilevel extension for electromagnetic problems. The compression is based on a spherical harmonic representation of radiation patterns in conjunction with a radiating mode expression of the surface current. The method is applied to study near field effects and the far field of an antenna placed on a ship surface. Furthermore, the electromagnetic scattering of an electrically large plate is investigated. It is demonstrated that the proposed technique leads to a significant memory saving, making multipole algorithms even more efficient without compromising accuracy.

  12. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih

    2017-09-14

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.

  13. New Algorithms and Lower Bounds for Sequential-Access Data Compression

    Science.gov (United States)

    Gagie, Travis

    2009-02-01

    This thesis concerns sequential-access data compression, i.e., by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows us passes and memory both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.

  14. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Yan Feng

    2014-08-01

    Full Text Available Background subtraction techniques are the basis of moving-target detection and tracking in video surveillance, yet robust and reliable detection and tracking in complex environments remain challenging, so evaluations of the various background subtraction algorithms are of great significance. Nine state-of-the-art methods, ranging from simple to sophisticated, are discussed. The algorithms were implemented and tested on different videos with ground truth, covering baseline, dynamic background, camera jitter, intermittent object motion, and shadow scenarios. The best-suited background modeling methods for each scenario are identified through a comprehensive analysis of three measures: recall, precision, and F-measure, which facilitates more accurate target detection and tracking.
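    The three evaluation measures named above are straightforward to compute from a binary foreground mask and its ground truth. A minimal sketch (the flattened pixel lists and function name are illustrative):

```python
def evaluate_mask(gt, pred):
    """Pixel-wise recall, precision and F-measure for a binary
    foreground mask `pred` against ground truth `gt` (1 = foreground)."""
    tp = sum(1 for g, p in zip(gt, pred) if g and p)        # true positives
    fp = sum(1 for g, p in zip(gt, pred) if not g and p)    # false positives
    fn = sum(1 for g, p in zip(gt, pred) if g and not p)    # false negatives
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return recall, precision, f
```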

  15. Detection of Defective Sensors in Phased Array Using Compressed Sensing and Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Shafqat Ullah Khan

    2016-01-01

    Full Text Available A compressed-sensing-based array diagnosis technique is presented. The technique starts by collecting measurements of the far-field pattern. The system linking the difference between the field measured with a healthy reference array and the field radiated by the array under test is solved using a genetic algorithm (GA), a parallel coordinate descent (PCD) algorithm, and a hybrid of the two. These algorithms are applied to fully and partially defective antenna arrays. The simulation results indicate that the proposed hybrid algorithm performs best at localizing element failures from a small number of measurements. By combining GA with PCD, the slow and premature convergence of GA is avoided. The hybrid GA-PCD algorithm is shown to provide a more accurate diagnosis of fully and partially defective sensors than GA or PCD alone. Different simulations validate the performance of the designed algorithms in diversified scenarios.

  16. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses video codec enhancement for wireless transmission of high-definition video data, taking into account constraints on memory and complexity. Starting from parameter adjustment for the JPEG2000 compression algorithm used for wireless transmission and achieving...

  17. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

    Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with an efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that improves ME quality compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of local-minima falls, especially in high-definition videos. The quality results show that the DMPDS reaches an average PSNR gain of 1.85 dB over the well-known Diamond Search (DS) algorithm. Compared with the optimal results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR, while reducing complexity by a factor of more than 45. The quality gains over DS come with an expected increase in complexity: the DMPDS uses 6.4 times more calculations than DS. The DMPDS architecture was designed for high performance and low cost, targeting real-time processing (30 frames per second) of Quad Full High Definition (QFHD) videos. The architecture was described in VHDL and synthesized for Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture achieves processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate among related works in the literature, obtained by combining a high operating frequency with a low number of cycles per block.
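    The baseline Diamond Search that DMPDS improves upon can be sketched as a greedy descent over SAD costs. This simplified version (small diamond pattern only, fixed block size, hypothetical frame layout) also illustrates why local minima are a risk: the search stops at the first center that beats its four neighbours.

```python
def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences between the BxB block at (bx, by) in
    the current frame and the block displaced by (dx, dy) in the reference."""
    total = 0
    for y in range(B):
        for x in range(B):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def diamond_search(cur, ref, bx, by, B=2, max_disp=2):
    """Greedy small-diamond search: repeatedly move to the lowest-SAD
    neighbour until the centre wins. A simplification of DS, not DMPDS."""
    dx = dy = 0
    best = sad(cur, ref, bx, by, 0, 0, B)
    pattern = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # small diamond
    improved = True
    while improved:
        improved = False
        for px, py in pattern:
            nx, ny = dx + px, dy + py
            if abs(nx) > max_disp or abs(ny) > max_disp:
                continue      # stay inside the search window
            cost = sad(cur, ref, bx, by, nx, ny, B)
            if cost < best:
                best, dx, dy, improved = cost, nx, ny, True
    return dx, dy, best
```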

  18. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

    Full Text Available High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra-high-definition videos. One of the important features in HEVC is the adoption of a quad-tree-based video coding structure, in which each incoming frame is represented as a set of non-overlapped coding tree blocks (CTBs) processed with variable-block-size prediction and coding. To do this, each CTB is recursively partitioned into coding units (CUs), prediction units (PUs) and transform units (TUs) during the coding process, leading to a huge computational load for each video frame. This paper proposes to extract visual features of a CTB and use them to simplify the coding procedure by reducing the depth of the quad-tree partition for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the maximum possible depth of the CTB's quad-tree partition. With the constrained partition depth, the proposed method saves considerable encoding time. Experimental results with HM10.1 show an average time saving of about 13.4% at an increase in BD-rate of only 0.02%, a smaller performance degradation than that of other similar methods.
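    The idea of capping quad-tree depth by Sobel edge strength can be sketched as follows; the thresholds mapping strength to a maximum depth are illustrative assumptions, not the values tuned in the paper.

```python
def sobel_strength(block):
    """Mean Sobel gradient magnitude (|gx| + |gy|) over the interior
    pixels of a square luma block."""
    n = len(block)
    total, count = 0, 0
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            gx = (block[y - 1][x + 1] + 2 * block[y][x + 1] + block[y + 1][x + 1]
                  - block[y - 1][x - 1] - 2 * block[y][x - 1] - block[y + 1][x - 1])
            gy = (block[y + 1][x - 1] + 2 * block[y + 1][x] + block[y + 1][x + 1]
                  - block[y - 1][x - 1] - 2 * block[y - 1][x] - block[y - 1][x + 1])
            total += abs(gx) + abs(gy)
            count += 1
    return total / count if count else 0.0

def max_partition_depth(block, thresholds=(8, 32, 96)):
    """Map edge strength to a maximum quad-tree depth (0..3): flat CTBs
    are never split, strongly textured ones may reach full depth."""
    s = sobel_strength(block)
    depth = 0
    for t in thresholds:
        if s > t:
            depth += 1
    return depth
```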

  19. Low-Complexity Hierarchical Mode Decision Algorithms Targeting VLSI Architecture Design for the H.264/AVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Guilherme Corrêa

    2012-01-01

    Full Text Available In H.264/AVC, encoding can proceed according to one of 13 intra-frame coding modes or one of 8 available inter-frame block sizes, besides the SKIP mode. In the Joint Model reference software, the best mode is chosen through exhaustive executions of the entire encoding process, which significantly increases the encoder's computational complexity and sometimes even forbids its use in real-time applications. In this context, this work proposes a set of heuristic algorithms, targeting hardware architectures, that select an encoding mode early. The number of repetitions of the encoding process is reduced by a factor of 47, at a relatively small cost in compression performance. Compared with other works, the fast hierarchical mode decision results are markedly more satisfactory in terms of computational-complexity reduction, quality, and bit rate. The proposed low-complexity mode decision architecture is thus a very good option for real-time coding of high-resolution videos. The solution is especially interesting for embedded and mobile multimedia applications, since it yields good compression rates and image quality with a very large reduction in encoder complexity.

  20. A simple and efficient algorithm operating with linear time for MCEEG data compression.

    Science.gov (United States)

    Titus, Geevarghese; Sudhakar, M S

    2017-09-01

    Popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are first normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial-domain encoder, and the fractional part is coded as pseudo-integers. The proposed method has been tested on a wide range of databases with variable sampling rates and resolutions. Results indicate that the algorithm has good recovery performance, with an average percentage root mean square deviation (PRD) of 2.72 at an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with average encoding and decoding times per sample of 0.3 ms and 0.04 ms, respectively. The performance of the algorithm is comparable with recent methods based on the fast discrete cosine transform (fDCT) and tensor decomposition. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving, and brain-computer interfacing systems.
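    The normalize-then-split step at the heart of SPC can be illustrated with a toy round trip. This is a sketch of the idea only, with an assumed 8-bit integer range and a 2-digit pseudo-integer for the fractional part; the published codec's spatial-domain encoder for the integer stream is not reproduced here.

```python
def spc_split(samples):
    """Normalize samples to an assumed 0..255 range, then split each
    value into an integer part and a fractional part coded as a 2-digit
    pseudo-integer. Illustrative, not the published codec."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    ints, fracs = [], []
    for v in samples:
        norm = (v - lo) / span * 255.0
        i = int(norm)
        ints.append(i)                          # coded by the spatial encoder
        fracs.append(int(round((norm - i) * 100)))  # pseudo-integer
    return ints, fracs

def spc_merge(ints, fracs, lo, hi):
    """Inverse of spc_split for near-lossless reconstruction."""
    span = (hi - lo) or 1.0
    return [lo + (i + f / 100.0) / 255.0 * span for i, f in zip(ints, fracs)]
```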

  1. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

    Full Text Available The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding time and the intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analyzed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  2. Optimal PET protection for streaming scalably compressed video streams with limited retransmission based on incomplete feedback.

    Science.gov (United States)

    Xiong, Ruiqin; Taubman, David; Sivaraman, Vijay

    2010-09-01

    For streaming scalably compressed video streams over unreliable networks, Limited-Retransmission Priority Encoding Transmission (LR-PET) outperforms PET remarkably since the opportunity to retransmit is fully exploited by hypothesizing the possible future retransmission behavior before the retransmission really occurs. For the retransmission to be efficient in such a scheme, it is critical to get adequate acknowledgment from a previous transmission before deciding what data to retransmit. However, in many scenarios, the presence of a stochastic packet delay process results in frequent late acknowledgements, while imperfect feedback channels can impair the server's knowledge of what the client has received. This paper proposes an extended LR-PET scheme, which optimizes PET-protection of transmitted bitstreams, recognizing that the received feedback information is likely to be incomplete. Similar to the original LR-PET, the behavior of future retransmissions is hypothesized in the optimization objective of each transmission opportunity. As the key contribution, we develop a method to efficiently derive the effective recovery probability versus redundancy rate characteristic for the extended LR-PET communication process. This significantly simplifies the ultimate protection assignment procedure. This paper also demonstrates the advantage of the proposed strategy over several alternative strategies.

  3. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC multimedia bit streams, such as H.264. It is based on code word error diffusion and variable size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is ubiquitous to the end users and can be deployed at any node in the network. It provides different levels of security, with encrypted data volume fluctuating between 5.5–17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plain text attacks.
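    The segment-shuffling half of the scheme can be illustrated with a keyed permutation over fixed-size segments of the coded stream. This sketch substitutes Python's seeded PRNG for the chaos-based generator used in the paper, uses a fixed rather than variable segment size, and assumes the stream length is a multiple of that size.

```python
import random

def shuffle_segments(bitstream, key, seg=8):
    """Split the compressed stream into fixed-size segments and permute
    them with a keyed PRNG (stand-in for the chaos-based generator).
    Assumes len(bitstream) is a multiple of seg."""
    segments = [bitstream[i:i + seg] for i in range(0, len(bitstream), seg)]
    order = list(range(len(segments)))
    random.Random(key).shuffle(order)           # key-derived permutation
    return b"".join(segments[i] for i in order), order

def unshuffle_segments(shuffled, order, seg=8):
    """Invert the permutation given the same key-derived order."""
    segments = [shuffled[i:i + seg] for i in range(0, len(shuffled), seg)]
    out = [None] * len(segments)
    for pos, src in enumerate(order):
        out[src] = segments[pos]                # segment at pos came from src
    return b"".join(out)
```

    In the real scheme the receiver regenerates `order` from the shared key rather than transmitting it.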

  4. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  5. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Full Text Available Today's H.264/AVC coded videos offer high quality and a high data-compression ratio. They also have strong fault tolerance and better network adaptability, and have been widely applied on the Internet. With the popularity of powerful and easy-to-use video editing software, digital videos can be tampered with in various ways. Therefore, detecting double compression in H.264/AVC video can serve as a first step in video-tampering forensics. This paper proposes a simple, but effective, double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then incorporated to generate one enhanced feature representing the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the features. The primary Group of Pictures (GOP) size is estimated based on an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
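    The exhaustive GOP-size search in the last step can be sketched as scoring each candidate period against the per-frame feature sequence. The scoring rule here (mean feature value at the candidate's multiples versus the global mean) is an illustrative stand-in for the paper's time-domain analysis.

```python
def estimate_gop(feature, max_period=30):
    """Exhaustively score candidate periods: a double-compressed stream
    shows spikes in the per-frame feature at multiples of the primary
    GOP size. Returns the period whose sampled frames stand out most
    above the overall mean."""
    n = len(feature)
    mean_all = sum(feature) / n
    best_p, best_score = 0, 0.0
    for p in range(2, max_period + 1):
        picks = feature[::p]                 # frames at multiples of p
        score = sum(picks) / len(picks) - mean_all
        if score > best_score:               # strict >, so the smallest
            best_p, best_score = p, score    # winning period is kept
    return best_p
```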

  6. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
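    The block-level correlation between co-located macroblocks mentioned above is a plain Pearson correlation over pixel values; a minimal sketch with flattened blocks (a low value would flag a temporal-coherence break):

```python
def block_correlation(a, b):
    """Pearson correlation between two co-located blocks, flattened to
    pixel lists. Returns 1.0 for identical flat blocks by convention."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    if va == 0 or vb == 0:                  # at least one flat block
        return 1.0 if va == vb else 0.0
    return cov / (va * vb) ** 0.5
```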

  7. Participant satisfaction with a school telehealth education program using interactive compressed video delivery methods in rural Arkansas.

    Science.gov (United States)

    Bynum, Ann B; Cranford, Charles O; Irwin, Cathy A; Denny, George S

    2002-08-01

    Socioeconomic and demographic factors can affect the impact of telehealth education programs that use interactive compressed video technology. This study assessed program satisfaction among participants in the University of Arkansas for Medical Sciences' School Telehealth Education Program delivered by interactive compressed video. Variables in the one-group posttest study were age, gender, ethnicity, education, community size, and program topics for the years 1997-1999. The convenience sample included 3,319 participants in junior high and high schools. The School Telehealth Education Program provided information about health risks, disease prevention, health promotion, personal growth, and health sciences. Adolescents reported medium to high levels of satisfaction regarding program interest and quality. Significantly higher satisfaction was expressed for programs on muscular dystrophy, anatomy of the heart, and tobacco addiction. The program, delivered by interactive compressed video, promoted satisfaction among rural and minority populations and among junior high and high school students. Effective program methods included an emphasis on participants' learning needs, increasing access in rural areas among ethnic groups, speaker communication, and clarity of the program presentation.

  8. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al.1 In their work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  9. Source and Channel Adaptive Rate Control for Multicast Layered Video Transmission Based on a Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Viéron

    2004-03-01

    Full Text Available This paper introduces source-channel adaptive rate control (SARC), a new congestion control algorithm for layered video transmission in large multicast groups. In order to solve the well-known feedback implosion problem in large multicast groups, we first present a mechanism for filtering RTCP receiver reports sent from receivers to the whole session. The proposed filtering mechanism provides a classification of receivers according to a predefined similarity measure. An end-to-end source and FEC rate control based on this distributed feedback aggregation mechanism, coupled with a layered video coding system, is then described. The number of layers, their rates, and their levels of protection are adapted dynamically to the aggregated feedback. The algorithms have been validated with the NS2 network simulator.

  10. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    This article focuses on moving-target detection and tracking algorithms for aerial video monitoring. The study covers moving-target detection, behavior analysis of moving targets, and automatic target tracking. For moving-target detection, the paper considers the characteristics of background subtraction and the frame-difference method, and uses background reconstruction to locate moving targets accurately. For behavior analysis, the detection area is shown in a binary image using MATLAB, and the system analyzes whether a moving object intrudes and in which direction. For automatic tracking, a video tracking algorithm based on Kalman-filter prediction of object centroids is proposed.
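    Centroid prediction with a Kalman filter, as used in the auto-tracking step, can be sketched with a 1-D constant-velocity model run once per image axis. The noise parameters q and r below are illustrative defaults, not values from the paper.

```python
class CentroidKalman:
    """1-D constant-velocity Kalman filter (run one instance per axis)
    for predicting the next object centroid. Illustrative sketch."""

    def __init__(self, x0, q=1e-3, r=1.0):
        self.x, self.v = float(x0), 0.0      # position and velocity state
        self.p = [[1.0, 0.0], [0.0, 1.0]]    # state covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self):
        """Advance the state one frame: x += v, P = F P F^T + Q."""
        self.x += self.v
        p = self.p
        self.p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + self.q,
                   p[0][1] + p[1][1]],
                  [p[1][0] + p[1][1],
                   p[1][1] + self.q]]
        return self.x

    def update(self, z):
        """Correct with a measured centroid z (H = [1, 0])."""
        s = self.p[0][0] + self.r
        k0, k1 = self.p[0][0] / s, self.p[1][0] / s   # Kalman gain
        resid = z - self.x
        self.x += k0 * resid
        self.v += k1 * resid
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
```

    Feeding in centroids moving at a constant velocity, the velocity estimate converges and the predicted position tracks the motion.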

  11. An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations

    Science.gov (United States)

    Cubillos, Patricio E.

    2017-11-01

    Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition format.
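    The strong/weak separation can be sketched with a temperature-dependent strength threshold. The strength formula below is the simplified gf * exp(-E_low / kT) form and the cutoff fraction is an assumption for illustration; the published implementation is more elaborate.

```python
import math

def partition_lines(lines, temperature, cutoff=0.01):
    """Split a line list, given as (gf, E_low in cm^-1, wavenumber)
    tuples, into 'strong' lines kept line-by-line and 'weak' lines
    destined for a binned cross-section file. A line is weak when its
    temperature-dependent strength falls below `cutoff` times the
    strongest strength in the list."""
    k = 0.695                                 # Boltzmann constant, cm^-1 / K
    strengths = [gf * math.exp(-elow / (k * temperature))
                 for gf, elow, _ in lines]
    threshold = cutoff * max(strengths)
    strong = [ln for ln, s in zip(lines, strengths) if s >= threshold]
    weak = [ln for ln, s in zip(lines, strengths) if s < threshold]
    return strong, weak
```

    Because the Boltzmann factor depends on temperature, the partition must be redone (or taken at a bracketing set of temperatures) for each regime of interest.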

  12. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise, and mixed (impulse and Gaussian) noise from images and videos, with edge and fine-detail preservation. The algorithm includes detection of corrupted pixels and estimation of replacement values for them. Its main advantage is that an appropriate filter is selected for replacing each corrupted pixel based on an estimate of the noise variance in the filtering window. This leads to reduced blurring and better fine-detail preservation, even at high mixed-noise densities. The algorithm performs both spatial and temporal filtering over the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which improves performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms visually and in terms of Peak Signal-to-Noise Ratio, Mean Square Error, and Image Enhancement Factor.

  13. iTRAC : intelligent video compression for automated traffic surveillance systems.

    Science.gov (United States)

    2010-08-01

    Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased requirements of bitrate this mean...

  14. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    Science.gov (United States)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and better reproduce the surface colors of spectral images. A new spectral-reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is smaller than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.

  15. Are Current Advances of Compression Algorithms for Capsule Endoscopy Enough? A Technical Review.

    Science.gov (United States)

    Alam, Mohammad Wajih; Hasan, Md Mehedi; Mohammed, Shahed Khan; Deeba, Farah; Wahid, Khan A

    2017-09-26

    The recent technological advances in capsule endoscopy systems have revolutionized healthcare by introducing new techniques and functionalities to diagnose the entire gastrointestinal tract, resulting in better diagnostic accuracy, reduced hospitalization, and improved clinical outcomes. Although the benefits of present capsule endoscopy are well known, significant drawbacks remain with respect to conventional endoscope systems, including size, battery life, bandwidth, image quality, and frame rate, which have restricted its wide use. To solve these problems, a low-cost, low-power compression algorithm that produces a higher frame rate with better image quality at lower bandwidth and transmission power is paramount. While several review papers have described the capabilities of capsule endoscopes in terms of functionality and emerging features, an extensive review of past and future compression algorithms is still required. Hence, this review paper aims to address the issue by exploring the specific characteristics of endoscopic images, summarizing useful compression techniques with in-depth analysis, and finally making suggestions for possible future adaptation.

  16. A complexity-efficient and one-pass image compression algorithm for wireless capsule endoscopy.

    Science.gov (United States)

    Liu, Gang; Yan, Guozheng; Zhao, Shaopeng; Kuang, Shuai

    2015-01-01

    As an important part of the application-specific integrated circuit (ASIC) in wireless capsule endoscopy (WCE), an efficient compressor is crucial for image transmission and power consumption. In this paper, a complexity-efficient, one-pass image compression method is proposed for WCE with Bayer-format images. The algorithm is modified from the standard lossless algorithm JPEG-LS. First, a causal interpolation is used to acquire the context template of the current pixel to be encoded, thus determining different encoding modes. Second, a gradient predictor, instead of the median predictor, is designed to improve the accuracy of the predictions. Third, the gradient context is quantized to obtain the context index (Q). Finally, the encoding process is carried out in the different modes. The experimental and comparative results show that the proposed near-lossless compression method provides a high compression rate (2.315) and high image quality (46.31 dB) compared with other methods. It performs well in the designed wireless capsule system and could be applied in other imaging fields.
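    The baseline the authors modify is the median edge detection (MED) predictor of JPEG-LS, which switches between the causal neighbours of a pixel depending on the local gradient. A minimal sketch of that standard predictor follows (the paper's replacement gradient predictor is not reproduced here):

```python
import numpy as np

def med_predict(img, r, c):
    """JPEG-LS median edge detection (MED) predictor for pixel (r, c).

    a = west, b = north, cc = north-west causal neighbours; the rule
    falls back to min/max at edges and to planar prediction otherwise.
    """
    a = int(img[r, c - 1])       # west
    b = int(img[r - 1, c])       # north
    cc = int(img[r - 1, c - 1])  # north-west
    if cc >= max(a, b):          # edge above/left: take the smaller
        return min(a, b)
    if cc <= min(a, b):          # edge the other way: take the larger
        return max(a, b)
    return a + b - cc            # smooth region: planar prediction

img = np.array([[10, 12, 14],
                [11, 13, 15],
                [12, 14, 99]], dtype=np.uint8)
pred = med_predict(img, 2, 2)          # prediction for the 99 pixel
residual = int(img[2, 2]) - pred       # only this residual is coded
```

The encoder then entropy-codes `residual` rather than the raw sample, which is what makes the prediction accuracy matter for the compression rate.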

  17. Hardware architectures for real time processing of High Definition video sequences

    OpenAIRE

    Genovese, Mariangela

    2014-01-01

    Today, application fields such as medicine, space exploration, surveillance, authentication, HDTV, and automated industry inspection require capturing, storing and processing continuous streams of video data. Consequently, different processing techniques (video enhancement, segmentation, object detection, or video compression, for example) are involved in these applications. Such techniques often require a significant number of operations, depending on the algorithm complexity and the video ...

  18. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vaegler, Sven; Sauer, Otto [Wuerzburg Univ. (Germany). Dept. of Radiation Oncology; Stsepankou, Dzmitry; Hesser, Juergen [University Medical Center Mannheim, Mannheim (Germany). Dept. of Experimental Radiation Oncology

    2015-07-01

    The reduction of dose in cone beam computed tomography (CBCT) arises from decreasing the tube current for each projection as well as from reducing the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations are so far not appropriately considered in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS that additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounts for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views, in simulations with a computer phantom as well as on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, Compressed Sensing (CS) and PICCS. To show the gain in image quality, we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections is very small. The images contained less streaking, blurring and fewer inaccurately reconstructed structures compared with the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm
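    The weighting idea can be made concrete against the standard PICCS objective, in which the image x is reconstructed from the prior x_P, sparsifying transforms Ψ₁ and Ψ₂, system matrix A and measured data y:

        \min_x \; \alpha \,\|\Psi_1 (x - x_P)\|_1 \;+\; (1-\alpha)\,\|\Psi_2\, x\|_1 \quad \text{s.t.} \quad A x = y .

    The weighting matrix described in the abstract plausibly enters the prior term, down-weighting the areas where the prior deviates strongly (the placement of W here is an assumption; the paper's exact objective is not reproduced in the abstract):

        \min_x \; \alpha \,\| W\, \Psi_1 (x - x_P)\|_1 \;+\; (1-\alpha)\,\|\Psi_2\, x\|_1 \quad \text{s.t.} \quad A x = y .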

  19. Fast algorithms for nonconvex compression sensing: MRI reconstruction from very few data

    Energy Technology Data Exchange (ETDEWEB)

    Chartrand, Rick [Los Alamos National Laboratory

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
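    The nonconvex generalization typically replaces the l1 soft-threshold inside the iteration with a p-shrinkage operator. A minimal sketch of that operator (Chartrand's p-shrinkage form; the names and test values are illustrative, and this is not the paper's full reconstruction code):

```python
import numpy as np

def p_shrink(t, lam, p):
    """Generalized p-shrinkage: for p = 1 this reduces to ordinary
    soft-thresholding; for p < 1 large coefficients are shrunk less,
    mimicking an l_p (p < 1) regularizer."""
    mag = np.maximum(np.abs(t) - lam ** (2.0 - p) * np.abs(t) ** (p - 1.0), 0.0)
    return np.sign(t) * mag

t = np.array([3.0, -0.5, 1.2])
soft = p_shrink(t, 1.0, 1.0)       # ordinary soft-threshold
nonconvex = p_shrink(t, 1.0, 0.5)  # shrinks the large coefficient less
```

Keeping large coefficients closer to their true values is what lets the nonconvex iterations succeed with fewer k-space samples than the convex l1 version.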

  20. An optimized compression algorithm for real-time ECG data transmission in wireless network of medical information systems.

    Science.gov (United States)

    Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro

    2015-01-01

    Recent medical information systems are striving towards real-time monitoring models to care for patients anytime and anywhere through ECG signals. However, there are several limitations, such as data distortion and limited bandwidth, in wireless communications. In order to overcome these limitations, this research focuses on compression. Little research has been done on specialized compression algorithms for ECG data transmission in real-time monitoring wireless networks, and the algorithms of recent studies are not well suited to ECG signals. Therefore this paper presents an improved algorithm, EDLZW, for efficient ECG data transmission. Results showed that the EDLZW compression ratio was 8.66, a performance about four times better than other compression methods widely used today.

  1. Chroma Subsampling Influence on the Perceived Video Quality for Compressed Sequences in High Resolutions

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2017-01-01

    Full Text Available This paper deals with the influence of chroma subsampling on perceived video quality measured by subjective metrics. The evaluation was done for the two most widely used video codecs, H.264/AVC and H.265/HEVC. Eight video sequences with different content, in Full HD and Ultra HD resolutions, were tested. The experimental results showed that observers did not see a difference between unsubsampled and subsampled sequences, so using subsampled video is preferable, as about 50% of the data can be saved. The minimum bitrates needed to achieve good and fair quality with each codec and resolution were also determined.
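    The roughly 50% saving follows directly from the sample counts of the subsampling schemes; a quick back-of-the-envelope check on raw (pre-codec) frame sizes:

```python
def frame_bytes(width, height, subsampling, bits=8):
    """Raw Y'CbCr frame size. Bytes per pixel: 4:4:4 -> 3,
    4:2:2 -> 2, 4:2:0 -> 1.5 (chroma shared by 4 luma pixels)."""
    factors = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}
    return int(width * height * factors[subsampling] * bits / 8)

full = frame_bytes(3840, 2160, "4:4:4")   # Ultra HD, no subsampling
sub = frame_bytes(3840, 2160, "4:2:0")    # the common broadcast format
saving = 1 - sub / full                    # the ~50% noted above
```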

  2. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, complexity is measured in terms of encoding time. First, the complexity target is mapped to a set of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, an optimal mode combination scheme, chosen through offline statistics, is used at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed over 18 sequences when the target complexity is around 40%.

  3. Compressing an Ensemble with Statistical Models: An Algorithm for Global 3D Spatio-Temporal Temperature

    KAUST Repository

    Castruccio, Stefano

    2015-04-02

    One of the main challenges when working with modern climate model ensembles is the increasingly large size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific data sets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial-condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and makes it possible to fit a non-trivial model to a data set of one billion data points with a covariance matrix comprising 10^18 entries.
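    The idea of storing fitted parameters instead of raw fields can be illustrated with a far simpler statistical model. A toy sketch, summarizing one time series by three numbers via an AR(1) fit (this is an assumption-laden miniature, not the paper's space-time model):

```python
import numpy as np

def ar1_summary(series):
    """Summarize a series by (mean, AR(1) coefficient, innovation std):
    three parameters instead of the full record, from which statistically
    equivalent series can be re-simulated."""
    x = series - series.mean()
    phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])  # lag-1 least squares
    resid = x[1:] - phi * x[:-1]
    return series.mean(), phi, resid.std()

rng = np.random.default_rng(2)
n = 10_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):                     # simulate AR(1) with phi = 0.8
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
mean, phi, sigma = ar1_summary(x + 15.0)  # series with mean level 15
```

Here 10,000 samples compress to three parameters; the statistics-based ensemble compressor applies the same principle with a vastly richer space-time covariance model.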

  4. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    video sequences. For the video sequences, different filters are applied to luminance (Y) and chrominance (U,V) components. The performance of the proposed method has been compared against several other methods by using different objective quality metrics and a subjective comparison study. Both objective...

  5. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising data quality is essential. We report on the development of a Digital Video System (DVS) for the EML (Electro-Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, and storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video up to 4 Mpixels at 60 fps or high-frame-rate video up to about 1000 fps at 512×512 pixels.

  6. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Directory of Open Access Journals (Sweden)

    Khairi Nor Asilah

    2017-01-01

    Full Text Available An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements, as well as its better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  7. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Science.gov (United States)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements, as well as its better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  8. Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

    Directory of Open Access Journals (Sweden)

    A. Markovic

    2013-04-01

    Full Text Available This paper provides an analysis of multiplexed video signal transmission through a Dense Wavelength Division Multiplexing (DWDM) network based on a quality check algorithm, which determines where degradation of the transmission quality starts. On the basis of this algorithm, simulations of transmission for specific values of the fiber parameters are executed. The analysis of the results shows how the BER and Q-factor depend on the length of the fiber, i.e. on the number of amplifiers, and what effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of DWDM systems is performed in the software package OptiSystem 7.0, which is designed for systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel.

  9. The Impact of Video Compression on Remote Cardiac Pulse Measurement Using Imaging Photoplethysmography

    Science.gov (United States)

    2017-05-30

    quality is human subjective perception assessed by a Mean Opinion Score (MOS). Alternatively, video quality may be assessed using one of numerous...cameras. Synchronization of the image capture from the array was achieved using a PCIe-6323 data acquisition card (National Instruments, Austin...large reductions of either video resolution or frame rate did not strongly impact iPPG pulse rate measurements [9]. A balanced approach may yield

  10. Dynamic frame resizing with convolutional neural network for efficient video compression

    Science.gov (United States)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency; the techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used, due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The method shows improved subjective performance on the high-resolution videos that dominate current consumption. To assess subjective quality, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the metric, and diverse bitrates were tested to assess general performance. Experimental results showed that the BD-rate based on VMAF improved by about 51% compared with conventional HEVC, with VMAF values improving especially at low bitrates. In subjective tests, the method also gave better visual quality at similar bit rates.

  11. FORMAL CONDITIONS OF STEGANOGRAPHIC METHOD’S SUSTAINABILITY TO COMPRESSION ATTACKS AND THEIR IMPLEMENTATION IN NEW STEGANOGRAPHIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Kobozeva A.A.

    2013-04-01

    Full Text Available Analysis of the current development and operation of information systems provides the basis for sufficient conditions under which a steganographic transformation, formally represented as a set of perturbations of the singular values of the matrices corresponding to the container, is insensitive (or weakly sensitive) to compression attacks. The obtained sufficient conditions are independent of the domain in which the confidential information is embedded into the container (spatial or frequency) and of the specifics of the stego algorithm. The main mathematical tool is matrix analysis; a digital image is considered as the container. A new steganographic algorithm based on the sufficient conditions obtained in the paper is developed. The algorithm is robust to compression, including low compression rates. The results of computational experiments are presented.

  12. A QoE Aware Fairness Bi-level Resource Allocation Algorithm for Multiple Video Streaming in WLAN

    Directory of Open Access Journals (Sweden)

    Hu Zhou

    2015-11-01

    Full Text Available With the proliferation of smart devices such as mobile phones and tablets, the scenario of multiple users watching video streaming simultaneously in one wireless local area network (WLAN) is becoming more and more common. However, the quality of experience (QoE) and the fairness among multiple users are seriously impacted by the limited bandwidth and shared resources of the WLAN. In this paper, we propose a novel bi-level resource allocation algorithm. To maximize the total throughput of the network, the WLAN is first tuned to its optimal operating point. The wireless resource is then carefully allocated at two levels: first between the AP and the uplink background-traffic users, and then among the downlink video users. The simulation results show that the proposed algorithm can guarantee QoE and fairness for all the video users, with little impact on the average throughput of the background-traffic users.

  13. Resolution enhancement of low quality videos using a high-resolution frame

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.J.; Schutte, K.

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm of compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of

  14. RF-photonic wideband measurements of energetic pulses on NIF enhanced by compressive sensing algorithms

    Science.gov (United States)

    Chou, Jason; Valley, George C.; Hernandez, Vincent J.; Bennett, Corey V.; Pelz, Larry; Heebner, John; Di Nicola, J. M.; Rever, Matthew; Bowers, Mark

    2014-03-01

    At the National Ignition Facility (NIF), home of the world's largest laser, a critical pulse screening process is used to ensure safe operating conditions for amplifiers and target optics. To achieve this, high-speed recording instrumentation up to 34 GHz measures pulse shape characteristics throughout a facility the size of three football fields, which can be a time-consuming procedure. As NIF transitions to higher power handling and increased wavelength flexibility, this lengthy and extensive process will need to be performed far more frequently. We have developed an accelerated, high-throughput pulse screener that can identify nonconforming pulses across 48 locations using a single real-time 34 GHz oscilloscope. Energetic pulse shapes from anywhere in the facility are imprinted onto telecom wavelengths, multiplexed, and transported over fiber without distortion. The critical pulse-screening process at high-energy laser facilities can thus be reduced from several hours to just seconds, allowing greater operational efficiency, agility in system modifications, higher power handling, and reduced costs. Typically, the sampling noise of the oscilloscope places a limit on the achievable signal-to-noise ratio of the measurement, particularly when highly shaped and/or short-duration pulses are required by target physicists. We have developed a sophisticated signal processing algorithm for this application based on orthogonal matching pursuit (OMP). This algorithm, developed for recovering signals in a compressive sensing system, enables high-fidelity single-shot screening even for low signal-to-noise-ratio measurements.
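    OMP itself is a short greedy loop: pick the dictionary column most correlated with the residual, re-fit all picked columns by least squares, repeat. A textbook numerical sketch (the facility's production variant is certainly more elaborate; matrix sizes and the sparse test signal below are illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal re-fit
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 200))
A /= np.linalg.norm(A, axis=0)      # unit-norm dictionary columns
x_true = np.zeros(200)
x_true[[5, 37]] = [2.0, -1.5]       # 2-sparse signal
y = A @ x_true                      # 100 noiseless measurements
x_hat = omp(A, y, k=2)
```

The least-squares re-fit at every step is what distinguishes OMP from plain matching pursuit and gives it robustness on noisy, undersampled scope data.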

  15. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators of basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation of interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface (again, we compare gaze-based and traditional mouse-based interaction), we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving-target detection in UAS video data twice, once with automatic support and once without it. We compare the two conditions in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.

  16. Compressed Natural Gas Installation. A Video-Based Training Program for Vehicle Conversion. Instructor's Edition.

    Science.gov (United States)

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This instructor's guide contains the materials required to teach four competency-based course units of instruction in installing compressed natural gas (CNG) systems in motor vehicles. It is designed to accompany an instructional videotape (not included) on CNG installation. The following competencies are covered in the four instructional units:…

  17. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep-network-based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity compared to optimization-based methods. Despite their impressive results, the proposed networks (whether with fully connected or repetitive convolutional layers) lack structural diversity and are trained as black boxes, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structural insights of optimization-based methods and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.
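    The update that ISTA-Net unrolls into network layers is the classic ISTA iteration: a gradient step on the data term followed by soft-thresholding. A minimal numerical sketch of generic ISTA for an l1-regularized least-squares model (the learned, layer-wise parameters of ISTA-Net are not reproduced here; sizes and the regularization weight are illustrative):

```python
import numpy as np

def ista(A, y, lam=0.01, iters=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))       # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 60, 90]] = [1.0, -1.0, 0.5]           # sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01, iters=500)
```

ISTA-Net replaces the fixed transform and threshold in this loop with learned, layer-specific operators, which is why each network layer mirrors one such iteration.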

  18. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential feature of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed in FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding run on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested with a 640×480, 25 fps thermal camera on a Cyclone V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of the Cyclone V 5CEFA7 FPGA resources on average.

  19. Optimizing management of the condensing heat and cooling of gases compression in oxy block using of a genetic algorithm

    Science.gov (United States)

    Brzęczek, Mateusz; Bartela, Łukasz

    2013-12-01

    This paper presents the parameters of a reference oxy-combustion block operating with supercritical steam parameters, equipped with an air separation unit and a carbon dioxide capture and compression installation. The possibility of recovering heat in the analyzed power plant is discussed. The decision variables and the thermodynamic functions for the optimization algorithm are identified, and the principles of operation of the genetic algorithm and the methodology of the calculations are presented. A sensitivity analysis was performed for the best solutions to determine the effects of the selected variables on the power and efficiency of the unit. Optimization, by genetic algorithm, of the heat recovery from the air separation unit, the flue gas conditioning and the CO2 capture and compression installation was designed to replace the low-pressure section of the regenerative water heaters of the steam cycle in the analyzed unit. The result was an increase in the power and efficiency of the entire power plant.

  20. GOP-based channel rate allocation using genetic algorithm for scalable video streaming over error-prone networks.

    Science.gov (United States)

    Fang, Tao; Chau, Lap-Pui

    2006-06-01

    In this paper, we address the problem of unequal error protection (UEP) for scalable video transmission over a wireless packet-erasure channel. Unequal amounts of protection are allocated to the different frames (I- or P-frames) of a group of pictures (GOP), and within each frame, unequal amounts of protection are allocated to the progressive bit-stream of scalable video to provide graceful degradation of video quality as the packet loss rate varies. We use a genetic algorithm (GA) to quickly find the allocation pattern, which is hard to obtain with conventional methods such as hill climbing. Theoretical analysis and experimental results both demonstrate the advantage of the proposed algorithm.
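    A toy version of such a GA-driven allocation search can be sketched with an invented utility in which protecting the I-frame is worth more than protecting later frames. All names, weights and operators below are illustrative, not the paper's:

```python
import random

def ga_allocate(budget, frames, fitness, pop_size=30, gens=100, seed=0):
    """Toy GA: a chromosome splits `budget` parity units across `frames`
    positions; tournament selection, one-point crossover with budget
    repair, a rebalancing mutation, and elitism."""
    rng = random.Random(seed)

    def random_chrom():
        cuts = sorted(rng.randint(0, budget) for _ in range(frames - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [budget])]

    def mutate(ch):
        ch = ch[:]
        i, j = rng.randrange(frames), rng.randrange(frames)
        if ch[i] > 0:            # move one unit between positions
            ch[i] -= 1
            ch[j] += 1
        return ch

    pop = [random_chrom() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new = [best]                                  # elitism
        while len(new) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)  # tournament x2
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, frames)
            child = a[:cut] + b[cut:]
            diff = budget - sum(child)                # crossover may break
            while diff != 0:                          # the budget: repair
                k = rng.randrange(frames)
                if diff > 0:
                    child[k] += 1
                    diff -= 1
                elif child[k] > 0:
                    child[k] -= 1
                    diff += 1
            new.append(mutate(child))
        pop = new
        best = max(pop, key=fitness)
    return best

# invented utility: losses on the I-frame (index 0) hurt most
weights = [8, 3, 2, 1]
fitness = lambda ch: sum(w * (1 - 0.5 ** c) for w, c in zip(weights, ch))
best = ga_allocate(budget=10, frames=4, fitness=fitness)
```

The real fitness function would model the expected decoded quality under the channel's packet-loss distribution; the GA machinery around it is unchanged.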

  1. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy.

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md Shamin; Wahid, Khan A

    2015-04-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4-7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial.
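    The colourisation step can be sketched in a few lines: keep each grey frame's own luma and borrow the chroma planes of the most recent colour keyframe. This is a toy version of the dictionary-based scheme, ignoring the inter-frame motion the real algorithm must handle:

```python
import numpy as np

def colourise(grey_y, ref_ycbcr):
    """Pair the grey frame's transmitted luma with the chroma (Cb, Cr)
    of the last colour keyframe."""
    out = ref_ycbcr.copy()
    out[..., 0] = grey_y          # keep the frame's own luma
    return out                    # Cb, Cr reused from the keyframe

# tiny 4x4 example: a flat keyframe and the next (grey) frame's luma
ref = np.zeros((4, 4, 3), dtype=np.uint8)
ref[..., 0] = 100                 # keyframe Y
ref[..., 1] = 90                  # keyframe Cb
ref[..., 2] = 140                 # keyframe Cr
grey = np.full((4, 4), 120, dtype=np.uint8)
frame = colourise(grey, ref)
```

Since only the luma plane of most frames crosses the RF link, the transmitted data per grey frame drops to roughly a third of a full Y'CbCr frame, which is the source of the reported power saving.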

  2. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems

    Science.gov (United States)

    Xu, Huihui; Jiang, Mingyan

    2015-07-01

    Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.

  3. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours, with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method, based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with three successive sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  4. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405

  5. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    Science.gov (United States)

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used actual laryngeal video stroboscope videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image of the largest glottal area from each video to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed, which obtains the physiological parameters of normal vocal folds, vocal fold paralysis and vocal nodules from image processing according to the pathological features. The decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, which improved to 98.7% when an image-recognition refinement procedure was applied after classification. Hence, the proposed system has value in clinical practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Rapid 2D phase-contrast magnetic resonance angiography reconstruction algorithm via compressed sensing

    Science.gov (United States)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo; Han, Bong-Soo

    2013-09-01

    Phase-contrast magnetic resonance angiography (PC MRA) is an excellent technique for the visualization of venous vessels. However, the scan time of PC MRA is long compared with that of other MRA techniques. Recently, the potential of compressed sensing (CS) reconstruction to reduce the scan time in MR image acquisition using a sparsely sampled dataset has become an active field of study. In this study, we propose a method to apply CS reconstruction to 2D PC MRA, to enable faster 2D PC MRA acquisition and to demonstrate its feasibility. We used a 0.32 T MR imaging (MRI) system and a total variation (TV)-based CS reconstruction algorithm. To validate the usefulness of the proposed reconstruction method, we used visual assessment of the reconstructed images and measured quantitative metrics for sampling rates from 12.5 to 75.0%. Based on our results, as the sampling ratio increases, images reconstructed with the CS method reach a level of image quality similar to that of fully sampled reconstructions. The signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) were also closer to the reference values when the sampling ratio was increased. We confirmed the feasibility of 2D PC MRA with the CS reconstruction method. Our results provide evidence that this method can improve the time resolution of 2D PC MRA.
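
For readers unfamiliar with CS reconstruction, a minimal sparse-recovery solver can be sketched with iterative soft-thresholding (ISTA). This is a generic l1-regularised example, not the TV-based algorithm used in the study; the measurement matrix `A` and all parameters below are placeholders.

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        # 1 / Lipschitz constant of the gradient of the data term
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the data term
        z = x - step * g                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x
```

In 2D PC MRA, `A` would be an undersampled Fourier sampling operator and the l1 penalty would be replaced by a total-variation term, but the proximal-gradient structure of the iteration is the same.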

  7. Film Cooling Optimization Using Numerical Computation of the Compressible Viscous Flow Equations and Simplex Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed M. Elsayed

    2013-01-01

    Full Text Available Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effect of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness is studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through the flat-plate hole system is carried out using the "CFDRC package" coupled with the "simplex" optimization algorithm to maximize the overall film cooling effectiveness. An unstructured finite volume technique is used to solve the steady, three-dimensional, compressible Navier-Stokes equations. The results are compared with published numerical and experimental data for a cylindrical round simple hole and show good agreement. In addition, the results indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle for high blowing ratios and by increasing the lateral and forward diffusion angles. The optimum geometry of the cooling hole on a flat plate is determined. In addition, numerical simulations of film cooling on an actual turbine blade are performed using the flat-plate optimal hole geometry.

  8. A high reliability detection algorithm for wireless ECG systems based on compressed sensing theory.

    Science.gov (United States)

    Balouchestani, Mohammadreza; Raahemifar, Kaainran; Krishnan, Sridhar

    2013-01-01

    Wireless Body Area Networks (WBANs) consist of small intelligent biomedical wireless sensors attached on or implanted in the body to collect vital biomedical data from the human body, providing Continuous Health Monitoring Systems (CHMS). WBANs promise to be a key element in next-generation wireless electrocardiogram (ECG) systems. ECG signals are widely used in health care systems as a noninvasive technique for the diagnosis of heart conditions. However, the use of conventional ECG systems is restricted by the patient's mobility, transmission capacity, and physical size. This highlights the need for, and the advantage of, wireless ECG systems with a low sampling rate and low power consumption. With this in mind, the Compressed Sensing (CS) procedure, as a new sampling approach, is combined with Shannon Energy Transformation (SET) and Peak Finding Schemes (PFS) to provide a robust low-complexity detection algorithm in gateways and access points in hospitals and medical centers, with high probability and sufficient accuracy. Advanced wireless ECG systems based on our approach will be able to deliver healthcare not only to patients in hospitals and medical centers, but also at their homes and workplaces, thus offering cost savings and improving quality of life. Our simulation results show an increase of 0.1% in sensitivity as well as 1.5% in prediction level and detection accuracy.
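
The SET/PFS detection stage can be illustrated as follows. This is a simplified sketch under our own assumptions (normalised Shannon energy, a moving-average envelope and naive peak picking), not the authors' algorithm.

```python
import numpy as np

def shannon_energy(signal, eps=1e-10):
    """Shannon energy of a normalised signal: -s^2 * log(s^2).
    Emphasises mid-range amplitudes while suppressing small noise."""
    s = signal / (np.max(np.abs(signal)) + eps)
    return -(s ** 2) * np.log(s ** 2 + eps)

def moving_average(x, w):
    """Smooth the energy into an envelope with a length-w box filter."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_peaks(x, threshold, min_distance):
    """Indices of local maxima above threshold, at least min_distance apart."""
    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] > threshold and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return peaks
```

On a real ECG record the envelope peaks would correspond to QRS complexes; `threshold` and `min_distance` (a refractory period) would be tuned to the sampling rate.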

  9. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors proposed a high-quality, small-capacity lecture-video-file creation system for a distance e-learning system. Examining the features of the lecturing scene, the authors employ two kinds of image-capturing equipment with complementary characteristics: a digital video camera with low resolution and a high frame rate, and a digital still camera with high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment and integrating them with image processing, course materials can be produced with greatly reduced file size: the materials satisfy the requirements both for the temporal resolution needed to see the lecturer's pointing actions and for the high spatial resolution needed to read small written letters. A comparative experiment confirmed that an e-lecture using the proposed system was more effective than an ordinary lecture from the viewpoint of educational effect.

  10. Video compression and DICOM proxies for remote viewing of DICOM images

    Science.gov (United States)

    Khorasani, Elahe; Sheinin, Vadim; Paulovicks, Brent; Jagmohan, Ashish

    2009-02-01

    Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These images have a special format and a communication protocol referred to as DICOM (Digital Imaging Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archival and Communication Systems). Remote viewing stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing. Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the PACS server via a high-bandwidth local LAN, downloading of the images is relatively efficient and does not cause significant wasted time for physicians. Problems arise when the viewing station is located in a remote facility that has a low-bandwidth link to the PACS server. If the link between the PACS and remote facility is in the range of 1 Mbit/sec, downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce the size for transmission. This paper describes a method of compression that maintains diagnostic quality of images while significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer software, and without requiring any change in the way doctors retrieve and view images today.

  11. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    Science.gov (United States)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm transforms stereo vision data in disparity-map space into a two-dimensional obstacle-map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an "intelligent robot" to "see" for path planning and obstacle avoidance.
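
The histogram-reduction idea behind the XH-map transform can be sketched as follows. The bin count, threshold and output layout here are illustrative assumptions, not the published implementation.

```python
import numpy as np

def xh_map(disparity, num_bins=32, min_count=20):
    """Collapse a disparity image into a 2D (disparity-bin x column) obstacle map.

    Each column of the disparity image is reduced to a histogram over
    disparity bins; a bin whose count exceeds min_count is marked as an
    obstacle cell (enough pixels at that depth in that viewing direction).
    Disparity is inversely related to distance, so each row of the output
    corresponds to a depth band in the floor plan.
    """
    h, w = disparity.shape
    obstacle = np.zeros((num_bins, w), dtype=bool)
    edges = np.linspace(disparity.min(), disparity.max() + 1e-6, num_bins + 1)
    for col in range(w):
        counts, _ = np.histogram(disparity[:, col], bins=edges)
        obstacle[:, col] = counts > min_count
    return obstacle
```

A sloped floor spreads its disparities thinly across many bins per column and stays below `min_count`, while a vertical obstacle concentrates many pixels at one disparity, which is exactly the noise-suppression behaviour the record describes.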

  12. Basic prediction techniques in modern video coding standards

    CERN Document Server

    Kim, Byung-Gyu

    2016-01-01

    This book discusses in detail the basic algorithms of video compression that are widely used in modern video codecs. The authors dissect complicated specifications and present the material in a way that gets readers up to speed quickly, describing video compression algorithms succinctly without going into the mathematical details and technical specifications. For accelerated learning, the hybrid codec structure and the inter- and intra-prediction techniques in MPEG-4, H.264/AVC, and HEVC are discussed together. In addition, the latest research on fast encoder design for HEVC and H.264/AVC is also included.

  13. Video-based eyetracking methods and algorithms in head-mounted displays

    Science.gov (United States)

    Hua, Hong; Krishnaswamy, Prasanna; Rolland, Jannick P.

    2006-05-01

    Head pose is utilized to approximate a user’s line-of-sight for real-time image rendering and interaction in most of the 3D visualization applications using head-mounted displays (HMD). The eye often reaches an object of interest before the completion of most head movements. It is highly desirable to integrate eye-tracking capability into HMDs in various applications. While the added complexity of an eyetracked-HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect the tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.

  14. APPLICATION OF HYBRID WAVELET-FRACTAL COMPRESSION ALGORITHM FOR RADIOGRAPHIC IMAGES OF WELD DEFECTS

    OpenAIRE

    Faiza Mekhalfa; Daoud Berkani

    2011-01-01

    Based on the standard fractal transformation in the spatial domain, simple relations may be found relating coefficients in detail subbands in the wavelet domain. In this work we evaluate a hybrid wavelet-fractal image coder and test its ability to compress radiographic images of weld defects. A comparative study between the hybrid coder and the standard fractal compression technique has been made in order to investigate the compression ratio and the corresponding image quality in terms of peak signal-to-noise ratio.

  15. On optimisation of wavelet algorithms for non-perfect wavelet compression of digital medical images

    CERN Document Server

    Ricke, J

    2001-01-01

    Aim: Optimisation of medical image compression and evaluation of wavelet filters for wavelet compression. Results: Applying filters of different complexity results in significant variations in the quality of image reconstruction after compression, specifically in low-frequency information. Filters of high complexity proved to be advantageous despite heterogeneous results in the visual analysis. For high-frequency details, filter complexity did not have a significant impact on the reconstructed image.

  16. Ensuring security of H.264 videos by using watermarking

    Science.gov (United States)

    Chaumont, Marc

    2011-06-01

    Watermarking is known to be a very difficult task. Robustness, distortion, payload, security and complexity are among the many constraints to deal with. When applied to a video stream, the difficulty grows in comparison to image watermarking: many classical, non-malicious manipulations of a compressed stream may suppress the embedded information. For example, a simple re-compression of a DVD movie (MPEG-2 compression) to a DivX movie will defeat most current state-of-the-art watermarking systems. In this talk, we will present the different techniques for watermarking a compressed video stream. Beforehand, we will present the H.264/AVC standard, which is one of the most powerful video-compression algorithms; the discussion of video watermarking will be illustrated with H.264 streams. The specific example of traitor tracing will be presented. Remaining deadlocks will be discussed, and then the possible extensions and future applications will conclude the presentation.

  17. A DCT-Based Watermarking Algorithm Robust To Cropping and Compression

    Science.gov (United States)

    2002-03-01

    Watermarking involves the embedding of a certain kind of information in a digital object (image, video, audio) for the purpose of copyright protection. Both the image and the watermark are most frequently translated into a transform domain where the embedding takes place.

  18. High Quality Real-Time Video with Scanning Electron Microscope Using Total Variation Algorithm on a Graphics Processing Unit

    Science.gov (United States)

    Ouarti, Nizar; Sauvet, Bruno; Régnier, Stéphane

    2012-04-01

    The scanning electron microscope (SEM) is usually dedicated to taking pictures of micro- and nanoscopic objects. In the present study, we investigated whether an SEM can be converted into a real-time video display. To this end, we designed a new methodology: we use the slow mode of the SEM to acquire a high-quality reference image that can then be used to estimate the optimal parameters that regularize the signal for a given method. Here, we employ Total Variation, a method which minimizes the noise and regularizes the image. An optimal Lagrangian multiplier can be computed that regularizes the image efficiently. We showed that a limited number of iterations of the Total Variation algorithm can lead to an acceptable quality of regularization. The algorithm is parallelized and deployed on a Graphics Processing Unit to obtain real-time, high-quality video with an SEM. It opens the possibility of real-time interaction at micro- and nanoscales.
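
Total Variation regularization of a noisy frame can be illustrated with a plain gradient-descent sketch of the (smoothed) ROF energy. This is a CPU toy version under our own parameter assumptions, not the paper's GPU implementation.

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.2, iters=50, eps=1e-8):
    """Gradient descent on the smoothed ROF energy
       E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps),
    with periodic boundary conditions via np.roll."""
    u = f.copy()
    for _ in range(iters):
        # forward differences of u
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag          # normalised gradient field
        # divergence of (px, py) via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend the energy gradient: data term minus lam * curvature
        u = u - step * ((u - f) - lam * div)
    return u
```

The regularization weight `lam` plays the role of the Lagrangian multiplier mentioned in the abstract: larger values flatten the image more aggressively, and the iteration count trades quality for the frame rate.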

  19. Influence of image compression on the quality of UNB pan-sharpened imagery: a case study with security video image frames

    Science.gov (United States)

    Adhamkhiabani, Sina Adham; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

    UNB Pan-sharp, also named FuzeGo, is an image fusion technique to produce high resolution color satellite images by fusing a high resolution panchromatic (monochrome) image and a low resolution multispectral (color) image. This is an effective solution that modern satellites have been using to capture high resolution color images at ultra-high speed. Initial research on security camera systems shows that the UNB Pan-sharp technique can also be utilized to produce high resolution, high sensitivity color video images for various imaging and monitoring applications. Based on the UNB Pan-sharp technique, a video camera prototype system, called the UNB Super-camera system, was developed that captures high resolution panchromatic images and low resolution color images simultaneously and produces real-time high resolution color video images on the fly. In a separate study, it was shown that the UNB Super-camera outperforms conventional 1-chip and 3-chip color cameras in image quality, especially when the illumination is low, such as in room lighting. In this research the influence of image compression on the quality of UNB Pan-sharped high resolution color images is evaluated, since image compression is widely used in still and video cameras to reduce data volume and speed up data transfer. The results demonstrate that UNB Pan-sharp can consistently produce high resolution color images that have the same detail as the input high resolution panchromatic image and the same color as the input low resolution color image, regardless of the compression ratio and lighting condition. In addition, the high resolution color images produced by UNB Pan-sharp have higher sensitivity (signal-to-noise ratio) and better edge sharpness and color rendering than those of a same-generation 1-chip color camera, regardless of the compression ratio and lighting condition.

  20. Discontinuity minimization for omnidirectional video projections

    Science.gov (United States)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  1. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved by working in the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extracting a continuous frame-difference measure that transforms the 3D video stream into a 1D curve is presented. This transform is then employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
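
The continuous frame-difference curve and threshold-based cut detection described above can be sketched as follows. For simplicity this toy version works on pixel values rather than on MPEG macroblock statistics, which is our own substitution.

```python
import numpy as np

def frame_difference_curve(frames):
    """Reduce a 3D video stream to a 1D curve: mean absolute difference
    between each pair of consecutive frames."""
    return np.array([np.abs(b - a).mean() for a, b in zip(frames, frames[1:])])

def detect_cuts(curve, k=3.0):
    """Declare a scene change where the difference curve exceeds an
    adaptive threshold of mean + k * standard deviation."""
    thr = curve.mean() + k * curve.std()
    return [i + 1 for i, d in enumerate(curve) if d > thr]
```

A compressed-domain variant would build the same 1D curve from macroblock-type counts and motion-vector statistics instead of decoded pixels, avoiding full decompression.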

  2. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and the compression standards. Accurate knowledge about the characteristics of the display and the video signal can be utilized to develop advanced algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems: first, designing algorithms that improve the visual quality of the perceived image and video and reduce power consumption based on local LED-LCD backlight; second, removing digital video codec artifacts, such as blocking and ringing, by post-processing algorithms. A novel algorithm based on image features with an optimal balance between visual quality and power consumption was developed. In addition, to remove flickering…

  3. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    Science.gov (United States)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military aircraft such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.

  4. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  5. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process employing the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
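
The hybrid idea, a wavelet stage followed by a DCT on the approximation band, can be sketched with a one-level Haar DWT and an orthonormal DCT-II. The subband layout and the crude keep-the-corner quantisation below are illustrative assumptions, not the paper's zero-padding scheme.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (x[0::2] + x[1::2]) / 2.0        # rows: average
    d = (x[0::2] - x[1::2]) / 2.0        # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ C.T == I)."""
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def hybrid_compress(x, keep=8):
    """Haar DWT, then 2D DCT on the LL band; zero all but the top-left
    keep x keep DCT coefficients (a crude low-pass quantisation)."""
    ll, lh, hl, hh = haar_dwt2(x)
    c = dct_matrix(ll.shape[0])
    coeffs = c @ ll @ c.T
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return coeffs * mask, (lh, hl, hh)
```

Energy compaction is the point of the DCT stage: most of the LL band's energy collapses into the low-frequency corner of `coeffs`, so discarding the rest costs little quality.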

  6. Application aware approach to compression and transmission of H.264 encoded video for automated and centralized transportation surveillance.

    Science.gov (United States)

    2012-10-01

    In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channel…

  7. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    Science.gov (United States)

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related-video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.

  8. STUDY OF BLOCKING EFFECT ELIMINATION METHODS BY MEANS OF INTRAFRAME VIDEO SEQUENCE INTERPOLATION

    Directory of Open Access Journals (Sweden)

    I. S. Rubina

    2015-01-01

    Full Text Available The paper deals with image interpolation methods and their applicability to eliminating artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the encoding steps. The main drawback of existing methods is their high computational complexity, which is unacceptable in video processing. As part of the study, interpolation of signal samples is proposed for blocking-effect elimination at the output of conversion encoding. The aim was to improve the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on segment borders through intraframe interpolation of video sequence segments. The core of the developed methods is the application of an adaptive recursive algorithm with an adaptively sized interpolation kernel, used both with and without consideration of the brightness gradient at the boundaries of objects and blocks of the video sequence. The theoretical part of the research uses methods of information theory (RD-theory and data redundancy elimination), pattern recognition, digital signal processing, and probability theory. In the experimental part, the compression algorithms were implemented in software and compared with existing ones: a simple averaging algorithm and an adaptive central-sample interpolation algorithm. The algorithm based on adaptive selection of the interpolation kernel size increases the compression ratio by 30%, and its modified version by 35%, compared with the existing interpolation algorithms, while improving the quality of the reconstructed video sequence by 3% compared with compression without interpolation. The findings will be

  9. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high-quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of the Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total-variation-based optimization in a compressed sensing framework and reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a shorter window in which motion artifacts can occur if the sample moves during image acquisition. Using this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
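
One building block named above, the randomized Kaczmarz iteration, is simple enough to sketch in isolation. The snippet below is a hedged illustration on a tiny dense system; the Douglas–Rachford splitting, the total-variation term, and the actual CT projection operator from the paper are omitted.

```python
import random

def randomized_kaczmarz(A, b, iters=500, seed=0):
    """Approximate a solution of Ax = b by projecting the current estimate
    onto the solution hyperplane of one randomly chosen row per step."""
    x = [0.0] * len(A[0])
    rng = random.Random(seed)
    for _ in range(iters):
        i = rng.randrange(len(A))                  # pick a random row of A
        row, bi = A[i], b[i]
        residual = bi - sum(r * v for r, v in zip(row, x))
        norm2 = sum(r * r for r in row)
        x = [v + (residual / norm2) * r for v, r in zip(x, row)]
    return x

# Tiny illustrative system with exact solution x = [1, 2]
A = [[2.0, 0.0], [0.0, 4.0]]
b = [2.0, 8.0]
x = randomized_kaczmarz(A, b)
```

In the paper's setting, A would be the huge sparse projection matrix and the row projections would be interleaved with a proximal total-variation step; the sketch shows only the Kaczmarz half.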

  10. Study on the Detection of Moving Target in the Mining Method Based on Hybrid Algorithm for Sports Video Analysis

    Directory of Open Access Journals (Sweden)

    Huang Tian

    2014-10-01

    Full Text Available Moving object detection and tracking is a hot research direction in computer vision and image processing. Based on an analysis of commonly used moving-target detection and tracking algorithms, this paper focuses on tracking non-rigid targets in sports video. In sports video, non-rigid athletes often undergo physical deformation during movement, and the moving target may become occluded. The surge of media data makes fast search and query increasingly difficult, yet most users want to extract the content and implicit knowledge they are interested in (concepts, rules, patterns, models, and correlations) quickly from multimedia data, to retrieve and query it efficiently, and to obtain decision support for problem solving. Taking the moving objects in sports video as the object of study, this paper conducts systematic research at the theoretical level and in the technical framework, mining layer by layer from low-level motion features up to high-level motion-video semantics. This not only helps users find information quickly but can also provide decision support for solving the user's problem.

  11. Handling the data management needs of high-throughput sequencing data: SpeedGene, a compression algorithm for the efficient storage of genetic data

    Directory of Open Access Journals (Sweden)

    Qiao Dandi

    2012-05-01

    Full Text Available Abstract Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to its enormous size. This is, and will remain, a frequent problem encountered every day by researchers working on genetic data. Some options are available for compressing and storing such data, such as general-purpose compression software and the PBAT/PLINK binary format; however, these methods either do not offer sufficient compression rates or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that it gives a greater compression rate than commonly used compression methods and that the data-loading process takes less time. The C++ library also provides direct data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and analysis of next-generation sequencing data in current hardware environments, making system upgrades unnecessary.
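
The kind of fixed-ratio storage gain described above can be illustrated with a minimal 2-bit genotype packing sketch. This is hypothetical illustrative code, not the published SpeedGene implementation, whose actual storage formats are more elaborate.

```python
def pack_genotypes(genotypes):
    """Pack a list of genotype codes (0-3, e.g. minor-allele counts plus a
    missing marker) into a bytes object, four genotypes per byte."""
    packed = bytearray()
    for i in range(0, len(genotypes), 4):
        byte = 0
        for j, g in enumerate(genotypes[i:i + 4]):
            byte |= (g & 0b11) << (2 * j)          # 2 bits per genotype
        packed.append(byte)
    return bytes(packed)

def unpack_genotypes(packed, n):
    """Recover the first n genotype codes from the packed representation."""
    out = []
    for byte in packed:
        for j in range(4):
            out.append((byte >> (2 * j)) & 0b11)
    return out[:n]

# Round trip: a 4x reduction over one byte per genotype, before any
# further entropy coding.
genos = [0, 1, 2, 0, 2, 2, 1]
assert unpack_genotypes(pack_genotypes(genos), len(genos)) == genos
```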

  12. A Simple and Fast Iterative Soft-thresholding Algorithm for Tight Frames in Compressed Sensing Magnetic Resonance Imaging

    CERN Document Server

    Liu, Yunsong; Cai, Jian-Feng; Guo, Di; Chen, Zhong; Qu, Xiaobo

    2015-01-01

    Compressed sensing has shown great potential in accelerating magnetic resonance imaging. Fast image reconstruction and high image quality are two main issues faced by this new technology. It has been shown that redundant image representations, e.g. tight frames, can significantly improve image quality, but how to efficiently solve the reconstruction problem with these redundant representation systems is still challenging. This paper addresses the problem of applying the fast iterative soft-thresholding algorithm (FISTA) to tight-frame-based magnetic resonance image reconstruction. By introducing the canonical dual frame, we construct an orthogonal projection operator on the range of the analysis sparsity operator and propose a new algorithm, called the projected FISTA (pFISTA). We theoretically prove that pFISTA converges to the minimum of a function with a balanced tight-frame sparsity. One major advantage of pFISTA is that only one extra parameter, the step size, is introduced and the numerical...
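
The soft-thresholding step at the heart of FISTA-type algorithms such as pFISTA has a simple closed form. The sketch below is an illustrative plain-Python version applied to a made-up coefficient vector; in the paper it is applied to tight-frame coefficients of the image, which is not reproduced here.

```python
def soft_threshold(x, lam):
    """Elementwise soft-thresholding (the proximal operator of lam*||.||_1):
    shrink each entry toward zero by lam, zeroing entries of magnitude <= lam."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

# Coefficients below the threshold are set to zero, the rest are shrunk.
coeffs = [0.9, -0.2, 0.05, -1.4]
shrunk = soft_threshold(coeffs, 0.3)
```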

  13. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  14. A New RTL Design Approach for a DCT/IDCT-Based Image Compression Architecture using the mCBE Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2013-09-01

    Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the number of shifter-adders needed to substitute for multipliers. We call this new algorithm multiplication from Common Binary Expression (mCBE). Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers retain good compressed-image quality compared to the JPEG recommendations. These ideas lead to a design that is small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or 2-input adder). By using pipelining, we achieve a high-speed architecture with latency as a trade-off consideration. The design has been synthesized and can reach a critical path delay as low as 1.41 ns (709.22 MHz).

  15. Supplementary Material for: Compressing an Ensemble With Statistical Models: An Algorithm for Global 3D Spatio-Temporal Temperature

    KAUST Repository

    Castruccio, Stefano

    2016-01-01

    One of the main challenges when working with modern climate model ensembles is the increasingly large size of the data produced and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific datasets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and allows a nontrivial model to be fit to a dataset of 1 billion data points with a covariance matrix comprising 10^18 entries. Supplementary materials for this article are available online.

  16. A compressed sensing-based iterative algorithm for CT reconstruction and its possible application to phase contrast imaging

    Directory of Open Access Journals (Sweden)

    Li Xueli

    2011-08-01

    Full Text Available Abstract Background Computed Tomography (CT) is a technology that obtains the tomogram of the observed objects. In real-world applications, especially biomedical ones, lower radiation doses have been constantly pursued. To shorten scanning time and reduce radiation dose, one can decrease the X-ray exposure time at each projection view or decrease the number of projections. Until quite recently, the traditional filtered back projection (FBP) method has been commonly exploited in CT image reconstruction. Applying the FBP method requires a large amount of projection data. Especially when the exposure speed is limited by the mechanical characteristics of the imaging facilities, using the FBP method may prolong scanning time and accumulate a high radiation dose, consequently damaging the biological specimens. Methods In this paper, we present a compressed sensing-based (CS-based) iterative algorithm for CT reconstruction. The algorithm minimizes the l1-norm of the sparse image as the constraint factor for the iteration procedure. With this method, we can reconstruct images from substantially reduced projection data and reduce the impact of artifacts introduced into the CT reconstructed image by insufficient projection information. Results To validate and evaluate the performance of this CS-based iterative algorithm, we carried out quantitative evaluation studies in imaging of both a software Shepp-Logan phantom and a real polystyrene sample. The former is completely absorption based and the latter is imaged in phase contrast. The results show that the CS-based iterative algorithm can yield images with quality comparable to that obtained with the existing FBP and traditional algebraic reconstruction technique (ART) algorithms. Discussion Compared with the common reconstruction from 180 projection images, this algorithm completes CT reconstruction from only 60 projection images, cuts the scan time, and maintains the acceptable quality of the

  17. A new technique in reference based DNA sequence compression algorithm: Enabling partial decompression

    Science.gov (United States)

    Banerjee, Kakoli; Prasad, R. A.

    2014-10-01

    The whole gamut of genetic data is increasing exponentially. The human genome in its base format occupies almost thirty terabytes of data, and the data doubles in size every two and a half years. It is well known that computational resources are limited. The most important resource that genetic data requires for its collection, storage, and retrieval is storage space, and storage is limited. Computational performance also depends on storage and execution time. Transmission capabilities are likewise directly dependent on the size of the data. Hence, data compression techniques become an issue of utmost importance when we confront the task of handling such gigantic databases as GenBank. Decompression is also an issue when such huge databases are handled. This paper is intended not only to provide genetic data compression but also to enable partial decompression of the genetic sequences.

  18. A Multi-Frame Post-Processing Approach to Improved Decoding of H.264/AVC Video

    DEFF Research Database (Denmark)

    Huang, Xin; Li, Huiying; Forchhammer, Søren

    2007-01-01

    Video compression techniques may yield visually annoying artifacts for limited bitrate coding. In order to improve video quality, a multi-frame based motion compensated filtering algorithm is reported based on combining multiple pictures to form a single super-resolution picture and decimation...

  19. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    Science.gov (United States)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

  20. Design of binary patterns for speckle reduction in holographic display with compressive sensing and direct-binary search algorithm

    Science.gov (United States)

    Leportier, Thibault; Hwang, Do Kyung; Park, Min-Chul

    2017-08-01

    One problem common to imaging techniques based on coherent light is speckle noise. This phenomenon is caused mostly by random interference of light scattered by rough surfaces. Speckle noise can be avoided by using advanced holographic imaging techniques such as optical scanning holography. A more widely known method is to capture several holograms of the same object and to perform an averaging operation so that the signal-to-noise ratio is improved. Several digital filters have also been proposed to reduce noise in the numerical reconstruction plane of holograms, even though they usually require a compromise between noise reduction and edge preservation. In this study, we used a digital filter based on a compressive sensing algorithm. This approach yields results equivalent to averaging multiple holograms, but only a single hologram is needed. Filters for speckle reduction are applied to numerical reconstructions of the hologram, not to the hologram itself, so optical reconstruction cannot be performed. We propose a method based on the direct-binary-search (DBS) algorithm to generate binary holograms that can be reconstructed optically after application of a speckle reduction filter. Since the optimization procedure of the DBS algorithm is performed in the image plane, speckle reduction techniques can be applied to the complex hologram and used as a reference to obtain a binary pattern in which the speckle noise generated during recording of the hologram has been filtered out.
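
The direct-binary-search idea can be sketched as a greedy bit-flipping loop. The version below is a deliberately simplified illustration that scores candidate patterns by direct pixelwise error against a target; the paper's actual error metric involves numerical propagation to the image plane and a speckle-filtered reference, neither of which is reproduced here.

```python
def dbs_binarize(target, passes=3):
    """Greedy DBS sketch: flip pixels of a binary pattern, keeping each flip
    only if it lowers the summed squared error against the target values."""
    binary = [0] * len(target)
    for _ in range(passes):
        improved = False
        for i, t in enumerate(target):
            # Error change if pixel i were flipped (1 - b replaces b).
            delta = ((1 - binary[i]) - t) ** 2 - (binary[i] - t) ** 2
            if delta < 0:
                binary[i] ^= 1                 # keep the improving flip
                improved = True
        if not improved:                       # converged: no flip helps
            break
    return binary

pattern = dbs_binarize([0.2, 0.8, 0.6, 0.1])
```

With this toy pixelwise metric the search reduces to thresholding; the point of DBS in the paper is that the same flip-and-test loop still works when the error is evaluated after a full propagation model.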

  1. Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters

    Science.gov (United States)

    Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott

    2013-01-01

    A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in-flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a planetary dust environment that is dynamic, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data of interest, reducing the amount of data handled by a factor of more than 10^3. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.

  2. Remote sensing image compression assessment based on multilevel distortions

    Science.gov (United States)

    Jiang, Hongxu; Yang, Kai; Liu, Tingshan; Zhang, Yongfei

    2014-01-01

    The measurement of visual quality is of fundamental importance to remote sensing image compression, especially for image quality assessment and compression algorithm optimization. We exploit the distortion features of optical remote sensing image compression and propose a full-reference image quality metric based on multilevel distortions (MLD), which assesses image quality by calculating distortions of three levels (such as pixel-level, contexture-level, and content-level) between original images and compressed images. Based on this, a multiscale MLD (MMLD) algorithm is designed and it outperforms the other current methods in our testing. In order to validate the performance of our algorithm, a special remote sensing image compression distortion (RICD) database is constructed, involving 250 remote sensing images compressed with different algorithms and various distortions. Experimental results on RICD and Laboratory for Image and Video Engineering databases show that the proposed MMLD algorithm has better consistency with subjective perception values than current state-of-the-art methods in remote sensing image compression assessment, and the objective assessment results can show the distortion features and visual quality of compressed image well. It is suitable to be the evaluation criteria for optical remote sensing image compression.

  3. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2017-05-01

    Full Text Available Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation based on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that can eliminate ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with other conventional RGB-D (Red-Green-Blue and Depth) algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.

  4. Double HEVC Compression Detection with the Same QPs Based on the PU Numbers

    Directory of Open Access Journals (Sweden)

    Jia Rui-Shi

    2016-01-01

    Full Text Available Double HEVC compression detection is of great importance in video forensics. However, effective detection algorithms for recompression with the same QPs are rarely reported. In this paper, a novel method based on the same QPs is applied to double HEVC compression detection. First, the number of PU blocks of size 4×4 in each I-frame is extracted during the codec procedure when the video is compressed with the same QPs. Then, the standard deviation of the 4×4 PU-block differences (SDoPU) before and after compression is calculated. Finally, an appropriate threshold is selected for classification according to the SDoPU statistical feature. Numerical experiments show that the proposed algorithm achieves high classification accuracy for detecting double HEVC compression.
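
The SDoPU feature itself reduces to a standard deviation of per-frame differences. The sketch below is a hedged illustration: the PU counts and the threshold are invented for demonstration, and extracting real counts requires instrumenting an HEVC codec, which is not shown.

```python
import statistics

def sdopu(pu_counts_before, pu_counts_after):
    """Standard deviation of per-I-frame 4x4 PU-count differences."""
    diffs = [a - b for a, b in zip(pu_counts_before, pu_counts_after)]
    return statistics.pstdev(diffs)

def looks_double_compressed(before, after, threshold=3.0):
    # Which side of the threshold indicates double compression is a
    # detail of the paper; here we only compute and compare the feature.
    return sdopu(before, after) > threshold

# Hypothetical 4x4 PU counts for three I-frames, before/after recompression.
feature = sdopu([120, 132, 118], [121, 129, 119])
```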

  5. DEFLATE Compression Algorithm Corrects for Overestimation of Phylogenetic Diversity by Grantham Approach to Single-Nucleotide Polymorphism Classification

    Directory of Open Access Journals (Sweden)

    Arran Schlosberg

    2014-05-01

    Full Text Available Improvements in the speed and cost of genome sequencing are resulting in increasing numbers of novel non-synonymous single-nucleotide polymorphisms (nsSNPs) in genes known to be associated with disease. The large number of nsSNPs makes laboratory-based classification infeasible, and familial co-segregation with disease is not always possible. In-silico methods for classification or triage are thus utilised. A popular tool based on multiple-species sequence alignments (MSAs) and work by Grantham, Align-GVGD, has been shown to underestimate deleterious effects, particularly as sequence numbers increase. We utilised the DEFLATE compression algorithm to account for expected variation across a number of species. With the adjusted Grantham measure we derived a means of quantitatively clustering known neutral and deleterious nsSNPs from the same gene; this was then used to assign novel variants to the most appropriate cluster as a means of binary classification. Scaling of clusters allows for inter-gene comparison of variants through a single pathogenicity score. The approach improves upon the classification accuracy of Align-GVGD while correcting for its sensitivity to large MSAs. Open-source code and a web server are made available at https://github.com/aschlosberg/CompressGV.
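
The core intuition, that DEFLATE-compressed size measures how much variation an alignment position carries, can be sketched with the standard zlib module. The alignment columns below are invented examples, not data from the paper, and the real CompressGV scoring combines this with the Grantham measure, which is omitted here.

```python
import zlib

def column_complexity(column):
    """DEFLATE-compressed size (bytes) of one MSA column given as a string:
    a proxy for how much cross-species variation the position exhibits."""
    return len(zlib.compress(column.encode("ascii"), 9))

# A perfectly conserved column compresses far better than a variable one.
conserved = "A" * 40                        # same residue in 40 species
variable = "ACDEFGHIKLMNPQRSTVWY" * 2       # highly variable position
assert column_complexity(conserved) < column_complexity(variable)
```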

  6. Fundamental Frequency Estimation of the Speech Signal Compressed by MP3 Algorithm Using PCC Interpolation

    Directory of Open Access Journals (Sweden)

    MILIVOJEVIC, Z. N.

    2010-02-01

    Full Text Available In this paper, the fundamental frequency estimation results for the MP3-modeled speech signal are analyzed. The fundamental frequency was estimated by the Picking-Peaks algorithm with implemented Parametric Cubic Convolution (PCC) interpolation. The efficiency of PCC was tested for the Catmull-Rom, Greville, and Greville two-parametric kernels. Depending on the MSE, the window that gives optimal results was chosen.

  7. Multiple Sparse Measurement Gradient Reconstruction Algorithm for DOA Estimation in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijian Si

    2015-01-01

    Full Text Available A novel direction-of-arrival (DOA) estimation method in compressed sensing (CS) is proposed, in which the DOA estimation problem is cast as joint sparse reconstruction from multiple measurement vectors (MMV). The proposed method is derived by transforming quadratically constrained linear programming (QCLP) into unconstrained convex optimization, which overcomes the drawback that the l1-norm is nondifferentiable when sparse sources are reconstructed by minimizing the l1-norm. The convergence rate and estimation performance of the proposed method are significantly improved, since the steepest-descent step and the Barzilai-Borwein step are alternately used as the search step in the unconstrained convex optimization. The proposed method obtains satisfactory performance especially in scenarios with low signal-to-noise ratio (SNR), a small number of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method compared with existing methods.
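
The Barzilai-Borwein step mentioned above has a compact closed form. A minimal sketch on an illustrative quadratic objective (not the MMV DOA formulation from the paper):

```python
def bb_step(x_prev, x_curr, g_prev, g_curr):
    """First Barzilai-Borwein step size: (s.s)/(s.y), with s = change in the
    iterate and y = change in the gradient between two iterations."""
    s = [a - b for a, b in zip(x_curr, x_prev)]
    y = [a - b for a, b in zip(g_curr, g_prev)]
    sy = sum(si * yi for si, yi in zip(s, y))
    return sum(si * si for si in s) / sy if sy else 1e-8   # guard s.y = 0

# For f(x) = 0.5 * ||x - c||^2 the gradient is x - c and the Hessian is the
# identity, so the BB step recovers the exact step length 1.
c = [3.0, 3.0]
grad = lambda x: [xi - ci for xi, ci in zip(x, c)]
step = bb_step([0.0, 0.0], [1.0, 2.0], grad([0.0, 0.0]), grad([1.0, 2.0]))
```

In the paper this step alternates with a steepest-descent step inside the unconstrained convex solver; only the step-size formula is shown here.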

  8. 3D Imaging Algorithm for Down-Looking MIMO Array SAR Based on Bayesian Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Full Text Available Down-looking MIMO array SAR can reconstruct 3D images of the observed area beneath the platform and has wide application prospects. In this paper, a new strategy based on Bayesian compressive sensing theory is proposed for down-looking MIMO array SAR imaging, which transforms the cross-track imaging process into a problem of sparse signal reconstruction from noisy measurements. Because the method accounts for the additive noise encountered in the measurement process, high-quality images can be achieved. Simulation results indicate that the proposed method provides better resolution and lower sidelobes than the conventional method.

  9. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    Science.gov (United States)

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated by the Genetic Algorithm, using pre-observed actual values for shot boundaries, and the types of shot transitions are classified by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations (generations) of the GA optimization process. The proposed system is compared with the latest techniques and yields better results in terms of the F1-score.
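
One ingredient of such a system, a triangular fuzzy membership function, is easy to sketch. In the paper's approach the breakpoints (a, b, c) would be tuned by the genetic algorithm rather than fixed by hand as in this illustrative snippet.

```python
def triangular(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set with support
    (a, c) and peak at b; returns a value in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)    # rising edge
    return (c - x) / (c - b)        # falling edge

# E.g. grade a frame-difference value against a hypothetical
# "gradual transition" fuzzy set with breakpoints (0.2, 0.5, 0.8).
membership = triangular(0.35, 0.2, 0.5, 0.8)
```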

  10. Multiobjective Optimization of a Vehicle Vibration Model Using the Improved Compressed-Objective Genetic Algorithm with Convergence Detection

    Directory of Open Access Journals (Sweden)

    Kittipong Boonlong

    2013-01-01

    Full Text Available The ride quality and road-holding capacity of a vehicle are significantly influenced by its suspension system. In the design process, a number of objective functions related to comfort and road-holding capacity are taken into consideration. In this paper, a five-degree-of-freedom vehicle vibration model with passive suspension is investigated. This multiobjective optimization problem consists of five objective functions. Based on these five design objectives, the paper formulates four two-objective optimization problems, considering four pairs of design objectives, and one five-objective optimization problem. The paper proposes the use of the improved compressed-objective genetic algorithm with convergence detection (COGA-II), which is intentionally designed for problems with many optimized objectives. Furthermore, the performance of COGA-II was benchmarked against the multiobjective uniform-diversity genetic algorithm (MUGA) utilized in a previous study. From the simulation results, with equal population sizes, COGA-II, which uses convergence detection to terminate the search, needs fewer generations for most sets of design objectives than MUGA, whose termination condition is a constant maximum number of generations. Moreover, the solutions obtained by COGA-II are clearly superior to those obtained by MUGA regardless of the set of design objectives.

  11. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  12. Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes

    Science.gov (United States)

    Dubreu, Christine; Manzanera, Antoine; Bohain, Eric

    2008-04-01

    As target tracking is arousing more and more interest, the necessity to reliably assess tracking algorithms in any conditions is becoming essential. The evaluation of such algorithms requires a database of sequences representative of the whole range of conditions in which the tracking system is likely to operate, together with its associated ground truth. However, building such a database with real sequences, and collecting the associated ground truth appears to be hardly possible and very time-consuming. Therefore, more and more often, synthetic sequences are generated by complex and heavy simulation platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed using simple synthetic sequences generated without such complex simulation platforms. These sequences are generated from a finite number of discriminating parameters, and are statistically representative, as regards these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used for low-level tracking algorithms evaluation in any operating conditions. The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for evaluation of tracking systems on complex-textured objects, and to show how the number of parameters can be increased to synthesize more elaborated scenes and deal with more complex dynamics, including occlusions and three-dimensional deformations.

  13. Compression embedding

    Science.gov (United States)

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
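    The underlying idea of encoding auxiliary data in adjacent integer indices can be illustrated with a minimal parity scheme (a simplified illustration, not the patented method itself):

```python
# Minimal illustration: hide one bit per quantization index by nudging
# each index to an adjacent value so that its parity encodes the bit.
# Each embedded index differs from the original by at most 1.

def embed(indices, bits):
    out = []
    for idx, bit in zip(indices, bits):
        out.append(idx if idx % 2 == bit else idx + 1)
    return out

def extract(indices):
    return [idx % 2 for idx in indices]

stego = embed([10, 13, 7, 22], [1, 1, 0, 0])
assert extract(stego) == [1, 1, 0, 0]
assert all(abs(a - b) <= 1 for a, b in zip(stego, [10, 13, 7, 22]))
```

    Because lossy codecs leave one unit of uncertainty in such indices anyway, the nudge is invisible in the decompressed data.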

  14. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual...... rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can...... be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...

  15. Feasibility of Radon projection acquisition for compressive imaging in MMW region based new video rate 16×16 GDD FPA camera

    Science.gov (United States)

    Levanon, Assaf; Konstantinovsky, Michael; Kopeika, Natan S.; Yitzhaky, Yitzhak; Stern, A.; Turak, Svetlana; Abramovich, Amir

    2015-05-01

    In this article we present preliminary results combining two fields of recent interest: 1) compressed imaging (CI), a joint sensing and compression process that exploits the large redundancy in typical images in order to capture fewer samples than usual; and 2) millimeter-wave (MMW) imaging. MMW-based imaging systems are required for a large variety of applications in many growing fields such as medical treatment, homeland security, concealed weapon detection, and space technology. Moreover, the possibility of reliable imaging in low-visibility conditions such as heavy cloud, smoke, fog and sandstorms in the MMW region generates strong interest from military groups. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A system based on Glow Discharge Detector (GDD) Focal Plane Arrays (FPA) can be very efficient for real-time imaging with significant results. The GDD is located in free space and can detect MMW radiation almost isotropically. In this article, we present a new approach to reconstructing MMW images by rotational scanning of the target. The collection process, based on Radon projections, allows implementation of compressive sensing principles in the MMW region. Feasibility of the concept was demonstrated with Radon line-imaging results. MMW imaging results with our recent sensor are also presented for the first time. The multiplexing frame rate of the 16×16 GDD FPA permits real-time video-rate imaging of 30 frames per second and comprehensive 3D MMW imaging. It uses commercial GDD lamps (3 mm diameter Ne indicator lamps) as pixel detectors. The combination of these two fields should bring significant improvement to MMW-region imaging research and open various new possibilities for compressive sensing techniques.
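    Axis-aligned discrete Radon projections, the building block of such a rotational collection process, can be sketched as follows (a pure-Python illustration; a real acquisition sweeps the target through many rotation angles):

```python
# Sketch of discrete Radon projections at 0 and 90 degrees (axis-aligned
# line integrals): each projection is a set of sums along parallel lines.

def radon_0(img):   # sum down each column
    return [sum(row[j] for row in img) for j in range(len(img[0]))]

def radon_90(img):  # sum across each row
    return [sum(row) for row in img]

img = [[0, 1, 0],
       [2, 3, 2],
       [0, 1, 0]]
print(radon_0(img))   # [2, 5, 2]
print(radon_90(img))  # [1, 7, 1]
```

    Every projection conserves the total image intensity, which is the consistency condition compressive reconstruction exploits.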

  16. A Time-Consistent Video Segmentation Algorithm Designed for Real-Time Implementation

    Directory of Open Access Journals (Sweden)

    M. El Hassani

    2008-01-01

    Temporal consistency of the segmentation is ensured by incorporating motion information through the use of an improved change-detection mask. This mask is designed using both illumination differences between frames and region segmentation of the previous frame. By considering both pixel and region levels, we obtain a particularly efficient algorithm at a low computational cost, allowing its implementation in real-time on the TriMedia processor for CIF image sequences.
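    The illumination-difference part of such a change-detection mask can be sketched minimally (the threshold and frames below are illustrative; the actual algorithm also incorporates the region segmentation of the previous frame):

```python
# Minimal change-detection mask: mark pixels whose inter-frame
# illumination difference exceeds a threshold.

def change_mask(prev, curr, thresh=20):
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[100, 100], [100, 100]]
curr = [[100, 160], [105, 100]]
print(change_mask(prev, curr))  # [[0, 1], [0, 0]]
```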

  17. a New Multi-Resolution Algorithm to Store and Transmit Compressed DTM

    Science.gov (United States)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2012-07-01

    WebGIS and virtual globes allow DTM distribution and three-dimensional representations to the Web users' community. In these applications, the database storage size represents a critical point. DTMs are obtained by sampling or interpolating the raw observations and typically are stored and distributed by data-based models, for example regular grids. A new approach to store and transmit DTMs is presented. The idea is to use multi-resolution bilinear spline functions to interpolate the observations and to model the terrain. In more detail, the algorithm performs the following actions. 1) The spatial distribution of the observations is investigated. Where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the observations; the interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The model guarantees a local resolution consistent with the data density and can be considered analytical, because the coefficients of a given function are stored instead of a set of heights. The approach is discussed and compared with the traditional techniques to interpolate, store and transmit DTMs, considering accuracy and storage requirements. It is also compared with another multi-resolution technique. The research has been funded by the INTERREG HELI-DEM (Helvetia Italy Digital Elevation Model) project.
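    The batch least-squares coefficient estimation of step 2 can be illustrated on a single bilinear patch (the basis and data here are deliberate simplifications of the multi-resolution spline model):

```python
import numpy as np

# Illustration: estimate coefficients of a bilinear patch
# z = c0 + c1*x + c2*y + c3*x*y by batch least squares from scattered
# observations; the multi-resolution scheme repeats this per spline level.

def fit_bilinear(xs, ys, zs):
    A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys])
    coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coef

x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
z = 2.0 + 3.0 * x - 1.0 * y + 0.5 * x * y   # exact bilinear surface
c = fit_bilinear(x, y, z)
# recovers (2.0, 3.0, -1.0, 0.5) up to numerical precision
```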

  18. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    OpenAIRE

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin; Wahid, Khan A.

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At ...

  19. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video

  20. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.

  1. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  2. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... shape layer is processed by a novel video shape coder. In intra mode, the DSLSC binary image coder presented in is used. This is extended here with an intermode utilizing temporal redundancies in shape image sequences. Then the opaque layer is compressed by a newly designed scheme which models...

  3. Advanced algorithms for information science

    Energy Technology Data Exchange (ETDEWEB)

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  4. Algorithms

    Indian Academy of Sciences (India)

    , i is referred to as the loop-index, 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  5. Energy efficient image/video data transmission on commercial multi-core processors.

    Science.gov (United States)

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-11-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between energy consumption and image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2 to 5 without compromising image/video quality.

  6. Lossless Compression of Double-Precision Floating-Point Data for Numerical Simulations: Highly Parallelizable Algorithms for GPU Computing

    National Research Council Canada - National Science Library

    OHARA, Mamoru; YAMAGUCHI, Takashi

    2012-01-01

    .... In addition, considering overheads for transferring data between the devices and host memories, it is preferable that the data is compressed in a part of parallel computation performed on the devices...

  7. Electromyography-based seizure detector: Preliminary results comparing a generalized tonic-clonic seizure detection algorithm to video-EEG recordings.

    Science.gov (United States)

    Szabó, Charles Ákos; Morgan, Lola C; Karkar, Kameel M; Leary, Linda D; Lie, Octavian V; Girouard, Michael; Cavazos, José E

    2015-09-01

    Automatic detection of generalized tonic-clonic seizures (GTCS) will facilitate patient monitoring and early intervention to prevent comorbidities, recurrent seizures, or death. Brain Sentinel (San Antonio, Texas, USA) developed a seizure-detection algorithm evaluating surface electromyography (sEMG) signals during GTCS. This study aims to validate the seizure-detection algorithm using inpatient video-electroencephalography (EEG) monitoring. sEMG was recorded unilaterally from the biceps/triceps muscles in 33 patients (17 white/16 male) with a mean age of 40 (range 14-64) years who were admitted for video-EEG monitoring. Maximum voluntary biceps contraction was measured in each patient to set up the baseline physiologic muscle threshold. The raw EMG signal was recorded using conventional amplifiers, sampling at 1,024 Hz, and filtered with a 60 Hz noise detection algorithm before it was processed with three band-pass filters at pass frequencies of 3-40, 130-240, and 300-400 Hz. A seizure-detection algorithm utilizing Hotelling's T-squared power analysis of compound muscle action potentials was used to identify GTCS and correlated with video-EEG recordings. In 1,399 h of continuous recording, there were 196 epileptic seizures (21 GTCS, 96 myoclonic, 28 tonic, 12 absence, and 42 focal seizures with or without loss of awareness) and 4 nonepileptic spells. During retrospective, offline evaluation of sEMG from the biceps alone, the algorithm detected 20 GTCS (95%) in 11 patients, averaging within 20 s of electroclinical onset of generalized tonic activity, as identified by video-EEG monitoring. Only one false-positive detection occurred, during the postictal period following a GTCS, but false alarms were not triggered by other seizure types or spells. Brain Sentinel's seizure detection algorithm demonstrated excellent sensitivity and specificity for identifying GTCS recorded in an epilepsy monitoring unit. Further studies are needed in larger patient groups, including
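    The detection principle, thresholding Hotelling's T-squared statistic on sEMG feature vectors, can be sketched as follows (an illustration only; the commercial algorithm's features and thresholds are proprietary, and the data here is synthetic):

```python
import numpy as np

# Sketch: Hotelling's T-squared distance of a feature vector from a
# baseline (resting-muscle) distribution; large values flag seizure-like
# activity.

def t_squared(x, mean, cov):
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 3))     # resting-muscle features
mu, S = baseline.mean(axis=0), np.cov(baseline, rowvar=False)

quiet = np.zeros(3)
burst = np.array([8.0, 8.0, 8.0])                  # tonic-phase-like burst
print(t_squared(quiet, mu, S) < t_squared(burst, mu, S))  # True
```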

  8. Encoder power consumption comparison of Distributed Video Codec and H.264/AVC in low-complexity mode

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Forchhammer, Søren

    2010-01-01

    This paper presents a power consumption comparison of a novel approach to video compression based on distributed video coding (DVC) and widely used video compression based on the H.264/AVC standard. We have used a low-complexity configuration for the H.264/AVC codec. It is well known that motion estimation...... (ME) and the CABAC entropy coder consume much power, so we eliminate ME from the codec and use CAVLC instead of CABAC. Some investigations show that low-complexity DVC outperforms other algorithms in terms of encoder-side energy consumption. However, estimations of power consumption for H.264/AVC and DVC...

  9. Delivering Diagnostic Quality Video over Mobile Wireless Networks for Telemedicine

    Directory of Open Access Journals (Sweden)

    Sira P. Rao

    2009-01-01

    Full Text Available In real-time remote diagnosis of emergency medical events, mobility can be enabled by wireless video communications. However, clinical use of this potential advance will depend on definitive and compelling demonstrations of the reliability of diagnostic quality video. Because the medical domain has its own fidelity criteria, it is important to incorporate diagnostic video quality criteria into any video compression system design. To this end, we used flexible algorithms for region-of-interest (ROI) video compression and obtained feedback from medical experts to develop criteria for diagnostically lossless (DL) quality. The design of the system occurred in three steps: measurement of the bit rate at which DL quality is achieved through evaluation of videos by medical experts; incorporation of that information into a flexible video encoder through the notion of encoder states; and an encoder-state update option based on a built-in quality criterion. Medical experts then evaluated our system for the diagnostic quality of the video, allowing us to verify that it is possible to realize DL quality in the ROI at practical communication data transfer rates, enabling mobile medical assessment over bit-rate limited wireless channels. This work lays the scientific foundation for additional validation through prototyped technology, field testing, and clinical trials.

  10. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    Science.gov (United States)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost-competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
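    The core DPCM loop such a codec builds on can be sketched along a scan line as follows (the step size and previous-pixel predictor here are illustrative simplifications; the actual codec's quantizer and predictor are more elaborate):

```python
# Minimal DPCM sketch: predict each pixel from the previous reconstructed
# pixel, quantize the prediction error, transmit the quantized residuals.
STEP = 4  # arbitrary illustrative quantizer step size

def dpcm_encode(pixels):
    pred, codes = 0, []
    for p in pixels:
        q = round((p - pred) / STEP)      # quantized prediction error
        codes.append(q)
        pred += q * STEP                  # track decoder's reconstruction
    return codes

def dpcm_decode(codes):
    pred, out = 0, []
    for q in codes:
        pred += q * STEP
        out.append(pred)
    return out

line = [100, 104, 103, 110, 120]
recon = dpcm_decode(dpcm_encode(line))
assert all(abs(a - b) <= STEP / 2 for a, b in zip(line, recon))
```

    Because neighboring pixels are highly correlated, the residuals cluster near zero and entropy-code far more compactly than the raw samples, which is what makes rates near 1.8 bits/pixel achievable.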

  11. Transcoding-Based Error-Resilient Video Adaptation for 3G Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dogan Safak

    2007-01-01

    Full Text Available Transcoding is an effective method to provide video adaptation for heterogeneous internetwork video access and communication environments, which require the tailoring (i.e., repurposing of coded video properties to channel conditions, terminal capabilities, and user preferences. This paper presents a video transcoding system that is capable of applying a suite of error resilience tools on the input compressed video streams while controlling the output rates to provide robust communications over error-prone and bandwidth-limited 3G wireless networks. The transcoder is also designed to employ a new adaptive intra-refresh algorithm, which is responsive to the detected scene activity inherently embedded into the video content and the reported time-varying channel error conditions of the wireless network. Comprehensive computer simulations demonstrate significant improvements in the received video quality performances using the new transcoding architecture without an extra computational cost.

  12. Transcoding-Based Error-Resilient Video Adaptation for 3G Wireless Networks

    Science.gov (United States)

    Eminsoy, Sertac; Dogan, Safak; Kondoz, Ahmet M.

    2007-12-01

    Transcoding is an effective method to provide video adaptation for heterogeneous internetwork video access and communication environments, which require the tailoring (i.e., repurposing) of coded video properties to channel conditions, terminal capabilities, and user preferences. This paper presents a video transcoding system that is capable of applying a suite of error resilience tools on the input compressed video streams while controlling the output rates to provide robust communications over error-prone and bandwidth-limited 3G wireless networks. The transcoder is also designed to employ a new adaptive intra-refresh algorithm, which is responsive to the detected scene activity inherently embedded into the video content and the reported time-varying channel error conditions of the wireless network. Comprehensive computer simulations demonstrate significant improvements in the received video quality performances using the new transcoding architecture without an extra computational cost.

  13. Hyperspectral data compression

    CERN Document Server

    Motta, Giovanni; Storer, James A

    2006-01-01

    Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.

  14. An overview of semantic compression

    Science.gov (United States)

    Schmalz, Mark S.

    2010-08-01

    We live in such perceptually rich natural and manmade environments that detection and recognition of objects is mediated cerebrally by attentional filtering, in order to separate objects of interest from background clutter. In computer models of the human visual system, attentional filtering is often restricted to early processing, where areas of interest (AOIs) are delineated around anomalies of interest, then the pixels within each AOI's subtense are isolated for later processing. In contrast, the human visual system concurrently detects many targets at multiple levels (e.g., retinal center-surround filters, ganglion layer feature detectors, post-retinal spatial filtering, and cortical detection / filtering of features and objects, to name but a few processes). Intracranial attentional filtering appears to play multiple roles, including clutter filtration at all levels of processing - thus, we process individual retinal cell responses, early filtering response, and so forth, on up to the filtering of objects at high levels of semantic complexity. Computationally, image compression techniques have progressed from emphasizing pixels, to considering regions of pixels as foci of computational interest. In more recent research, object-based compression has been investigated with varying rate-distortion performance and computational efficiency. Codecs have been developed for a wide variety of applications, although the majority of compression and decompression transforms continue to concentrate on region- and pixel-based processing, in part because of computational convenience. It is interesting to note that a growing body of research has emphasized the detection and representation of small features in relationship to their surrounding environment, which has occasionally been called semantic compression. In this paper, we overview different types of semantic compression approaches, with particular interest in high-level compression algorithms. Various algorithms and

  15. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the pro blem of summing the first M.

  16. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then, multiplication of A by B, denoted A × B, is defined by matrix C of dimension m × p where.
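    The multiplication just described can be written directly with two-dimensional lists:

```python
# C = A × B, where A is m×n and B is n×p, giving C of dimension m×p.

def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]        # 2×2
B = [[5, 6, 7],
     [8, 9, 10]]    # 2×3
print(mat_mul(A, B))  # [[21, 24, 27], [47, 54, 61]]
```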

  17. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mA·s), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here CS is a state-of-the-art mathematical theory for solving inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of the detectability, the contrast-to-noise ratio (CNR), and the slice-sensitivity profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • Compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.
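    For reference, the contrast-to-noise ratio used in such phantom evaluations can be computed as follows (the region-of-interest values below are illustrative, not from the study):

```python
import numpy as np

# CNR: contrast between signal and background ROIs, normalized by the
# background noise (standard deviation).

def cnr(signal_roi, background_roi):
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std()

sig = [110, 112, 108, 110]   # pixel values inside a phantom insert
bg = [100, 102, 98, 100]     # pixel values in a uniform background
print(round(cnr(sig, bg), 2))
```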

  18. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications.This vol

  19. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  20. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, degraded video containing blur and noise is enhanced using an iterative algorithm. In this algorithm we first estimate the clean data and the blur function using a Newton optimization method, and then improve the estimates using appropriate denoising methods. These noise-reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation followed by improvement of the estimate by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high load of information in images and processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops. These eight "for" loops utilize 60% of the total execution time of the entire program and so the runtime should be

  1. EFFICIENT BLOCK MATCHING ALGORITHMS FOR MOTION ESTIMATION IN H.264/AVC

    Directory of Open Access Journals (Sweden)

    P. Muralidhar

    2015-02-01

    Full Text Available In Scalable Video Coding (SVC), motion estimation and inter-layer prediction play an important role in the elimination of temporal and spatial redundancies between consecutive layers. This paper evaluates the performance of widely accepted block matching algorithms used in various video compression standards, with emphasis on the performance of the algorithms for a didactic scalable video codec. Many different implementations of fast motion estimation algorithms have been proposed to reduce motion estimation complexity. The block matching algorithms have been analyzed with emphasis on peak signal-to-noise ratio (PSNR) and computations using MATLAB. In addition to the above comparisons, a survey has been done on spiral search motion estimation algorithms for video coding. A New Modified Spiral Search (NMSS) motion estimation algorithm has been proposed with lower computational complexity. The proposed algorithm achieves a 72% reduction in computation with a minimal (<1 dB) reduction in PSNR. A brief introduction to the entire flow of H.264/SVC video compression is also presented in this paper.
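
All of the fast algorithms surveyed above (ARPS, spiral search, NMSS) visit a subset of the candidate displacements that an exhaustive full search would try; the underlying block-matching step is the same. A minimal sketch of that step, assuming a SAD cost and square blocks (the block size and search radius below are illustrative, not taken from the paper):

```python
import numpy as np

def full_search(ref, cur, block=8, radius=4):
    """Exhaustive block-matching motion estimation.

    For each block of the current frame, search a (2*radius+1)^2 window
    in the reference frame for the displacement minimising the sum of
    absolute differences (SAD).  Returns one (dy, dx) vector per block.
    """
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = cur[y:y + block, x:x + block].astype(int)
            best_sad, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[yy:yy + block, xx:xx + block].astype(int)
                    sad = int(np.abs(target - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors
```

Fast methods such as ARPS keep exactly this cost function but replace the two inner loops with a small, adaptively chosen rood-shaped set of candidate displacements.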

  2. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  3. Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm.

    Science.gov (United States)

    Precioso, Frederic; Barlaud, Michel; Blu, Thierry; Unser, Michael

    2005-07-01

    This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active-contour method based on B-spline interpolation has previously been proposed to greatly reduce the computational cost, but that method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved.

  4. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  5. Hardware-Algorithms Co-Design and Implementation of an Analog-to-Information Converter for Biosignals Based on Compressed Sensing.

    Science.gov (United States)

    Pareschi, Fabio; Albertini, Pierluigi; Frattini, Giovanni; Mangia, Mauro; Rovatti, Riccardo; Setti, Gianluca

    2016-02-01

    We report the design and implementation of an Analog-to-Information Converter (AIC) based on Compressed Sensing (CS). The system is realized in a CMOS 180 nm technology and targets the acquisition of bio-signals with Nyquist frequency up to 100 kHz. To maximize performance and reduce hardware complexity, we co-design hardware together with acquisition and reconstruction algorithms. The resulting AIC outperforms previously proposed solutions mainly thanks to two key features. First, we adopt a novel method to deal with saturations in the computation of CS measurements. This allows no loss in performance even when 60% of measurements saturate. Second, the system is able to adapt itself to the energy distribution of the input by exploiting the so-called rakeness to maximize the amount of information contained in the measurements. With this approach, the 16 measurement channels integrated into a single device are expected to allow the acquisition and the correct reconstruction of most biomedical signals. As a case study, measurements on real electrocardiograms (ECGs) and electromyograms (EMGs) show that these signals can be reconstructed without any noticeable degradation at compression rates of 8 and 10, respectively.

  6. Distributed Compressive Sensing

    Science.gov (United States)

    2009-01-01

    more powerful algorithms like SOMP can be used. The ACIE algorithm is similar in spirit to other iterative estimation algorithms, such as turbo...

  7. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm has been offered based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) and hierarchical mean-pyramid search (MP). All motion estimation algorithms have been implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were: speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal-to-noise ratio varies across video sequences, showing both better and worse results than known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it recommendable for telecommunication systems for multimedia data storage, transmission and processing.
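
The evaluation criteria used above (speed aside) are standard and easy to state precisely. A minimal sketch of the three quality metrics, assuming 8-bit frames:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two frames."""
    d = a.astype(float) - b.astype(float)
    return float(np.mean(d * d))

def mad(a, b):
    """Mean absolute deviation between two frames."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical frames)."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak * peak / m)
```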

  8. Research on key technologies in multiview video and interactive multiview video streaming

    OpenAIRE

    Xiu, Xiaoyu

    2011-01-01

    Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which makes it necessary to develop advanced technologies on multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a rate-distortion (RD) optimal manner by exploiting both temporal ...

  9. Evaluation of H.264 and H.265 full motion video encoding for small UAS platforms

    Science.gov (United States)

    McGuinness, Christopher D.; Walker, David; Taylor, Clark; Hill, Kerry; Hoffman, Marc

    2016-05-01

    Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data. Applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is mostly the case when comparing the performance of H.264, designed for standard definition data, to HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study focuses on evaluating how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source traditional software solutions: x264 and x265. These software-only comparisons allow us to establish a baseline of how much improvement can generally be expected of HEVC over H.264. Then, specific solutions leveraging different types of hardware are selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of the implementation, HEVC is found to provide similar quality video as H.264 at 40% lower data rates for video resolutions greater than 1280x720, roughly 1 Megapixel (MPx). For resolutions less than 1MPx, H.264 is an adequate solution though a small (roughly 20%) compression boost is earned by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS systems.

  10. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In encoding video streams with SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode-decision technique usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, at the drawback of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms fall into two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.
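
The interlayer prediction that the first category of algorithms exploits amounts to upsampling the reconstructed base-layer frame and coding only the enhancement-layer residual. A minimal sketch, with nearest-neighbour upsampling standing in for the actual H.264/SVC interpolation filters:

```python
import numpy as np

def upsample2x(base):
    """Nearest-neighbour 2x upsampling of a base-layer frame (a crude
    stand-in for the H.264/SVC normative interpolation filters)."""
    return np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)

def interlayer_residual(enh, base):
    """What the enhancement layer still has to encode once the
    upsampled base layer is used as an inter-layer prediction."""
    return enh.astype(int) - upsample2x(base).astype(int)
```

The closer the upsampled base layer predicts the enhancement frame, the smaller and cheaper to code this residual is, which is why exploiting the BL/EL mode-distribution correlation pays off.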

  11. Prediction of crack growth direction by Strain Energy Sih's Theory on specimens SEN under tension-compression biaxial loading employing Genetic Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-MartInez R; Lugo-Gonzalez E; Urriolagoitia-Calderon G; Urriolagoitia-Sosa G; Hernandez-Gomez L H; Romero-Angeles B; Torres-San Miguel Ch, E-mail: rrodriguezm@ipn.mx, E-mail: urrio332@hotmail.com, E-mail: guiurri@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: romerobeatriz98@hotmail.com, E-mail: napor@hotmail.com [INSTITUTO POLITECNICO NACIONAL Seccion de Estudios de Posgrado e Investigacion (SEPI), Escuela Superior de Ingenieria Mecanica y Electrica (ESIME), Edificio 5. 2do Piso, Unidad Profesional Adolfo Lopez Mateos ' Zacatenco' Col. Lindavista, C.P. 07738, Mexico, D.F. (Mexico)

    2011-07-19

    Crack growth direction has been studied in many ways. In particular, Sih's strain energy theory predicts that a fracture under a three-dimensional state of stress spreads in the direction of minimum strain energy density. In this work a study of the fracture growth angle was made, considering a biaxial stress state at the crack tip of SEN specimens. The stress state applied on a tension-compression SEN specimen is biaxial at the crack tip, as can be observed in figure 1. A solution method is proposed to obtain a mathematical model using genetic algorithms, which have demonstrated great capacity for the solution of many engineering problems. From the model given by Sih one can deduce the strain energy density stored per unit volume at the crack tip as dW = [1/(2E)(σx² + σy²) - (ν/E)(σxσy)]dV (1). From equation (1) a mathematical deduction to solve this case in terms of θ was developed employing genetic algorithms, where θ is the crack propagation direction in the x-y plane. The mechanical properties of steel and aluminium were employed for the modelled specimens, because they are two of the most widely used materials in engineering design. The obtained results show stable zones of fracture propagation, but only within a range of applied loading.
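
Equation (1) and a real-coded genetic algorithm of the kind used here can both be sketched compactly. The abstract does not give the crack-tip stresses as functions of the propagation angle θ, so the GA below is exercised on a stand-in objective, while the strain-energy density function follows equation (1) directly:

```python
import random

def strain_energy_density(sx, sy, E, nu):
    """dW/dV from equation (1): biaxial stresses sx, sy, Young's
    modulus E, Poisson's ratio nu."""
    return (sx ** 2 + sy ** 2) / (2 * E) - nu * sx * sy / E

def ga_minimize(f, lo, hi, pop_size=40, gens=60, mut=0.05, seed=1):
    """Minimal real-coded genetic algorithm: elitism, blend crossover,
    Gaussian mutation.  Minimises f over the interval [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=f)[: pop_size // 2]  # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = w * a + (1 - w) * b               # blend crossover
            child += rng.gauss(0.0, mut * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=f)
```

In the paper's setting, f(θ) would be the strain-energy density evaluated along the candidate direction θ using the crack-tip stress field, which the abstract does not reproduce.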

  12. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Science.gov (United States)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-01

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mA s), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here the CS is a state-of-the-art mathematical theory for solving the inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of the detectability, the contrast-to-noise ratio (CNR), and the slice-sensitive profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one.

  13. Depth assisted compression of full parallax light fields

    Science.gov (United States)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.

  14. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  15. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    Directory of Open Access Journals (Sweden)

    Rached Tourki

    2010-01-01

    Full Text Available In this paper, we propose a secure video codec based on the discrete wavelet transform (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with the DWT and encryption with AES are each well known; linking these two designs to achieve secure video coding, however, is novel. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding from JPEG and the DWT from JPEG2000. Furthermore, an improved motion estimation algorithm is proposed. Second, the encryption-decryption effects are achieved by the AES processor. AES is used to encrypt groups of LL bands. The prominent feature of this method is the encryption of LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of LL bands. Our approach provides considerable levels of security (key size, partial encryption, encryption mode) and has very limited adverse impact on compression efficiency. The proposed codec can provide up to 9 cipher schemes within a reasonable software cost. Latency, correlation, PSNR and compression rate results are analyzed and shown.
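
The LL bands that the AES processor encrypts are produced by the DWT stage. One level of a 2-D Haar transform (a simple stand-in; the codec's actual wavelet filters are not specified in this abstract) shows how the LL, LH, HL and HH subbands arise:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT (averaging normalisation);
    returns the (LL, LH, HL, HH) subbands.  Even dimensions assumed."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise lowpass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise highpass
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH
```

Encrypting only the LL band is attractive because it carries most of the image energy, which is the basis of the partial-encryption scheme described above.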

  16. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses computational complexity of High Efficiency Video Coding (HEVC) encoders with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools compression efficiency and computational complexity.  Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage from the flexibility of the frame partitioning structures allowed by the standard.  The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  17. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves the transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information a physician requires. Rather than standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content. This guides the semi-skilled person to acquire representative data from the patient. Advances in smartphone technology allow us to perform demanding medical image processing on a smartphone. In this paper we have developed an application (APP) for a smartphone which automatically detects the valid frames (those with clear organ visibility) in an ultrasound video, ignores the invalid frames (those with no organ visibility), and produces a video of compressed size. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.

  18. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Full Text Available Video sharing on social clouds is popular among users around the world. High-Definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them from the cloud to the client at high quality are significant problems for service providers. Social clouds compress videos to save storage and to stream them over slow networks while providing quality of service (QoS). Compression decreases the quality relative to the original video, and parameters are changed during online play as well as after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload and play videos online for users. The QoE was recorded using a questionnaire in which users reported their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds; however, Facebook delivered better quality in its compressed videos than Twitter. Accordingly, users assigned lower ratings to Twitter for online video quality than to Tumblr, which provided high-quality online play of videos with less compression.

  19. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app
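
Reconstructing a nearly sparse signal from far fewer observations than unknowns, as described above, is typically done with convex optimisation or a greedy pursuit. Orthogonal Matching Pursuit is one standard greedy choice (the book covers several); a minimal sketch:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with
    y ~= A @ x, where A is an (m, n) sensing matrix with m < n."""
    n = A.shape[1]
    residual = y.astype(float)
    support = []
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                  # never reselect an atom
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)  # re-fit on support
        residual = y - As @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

With a random Gaussian sensing matrix and a sufficiently sparse signal, the pursuit recovers the exact support with overwhelming probability, which is the practical face of the reconstruction guarantees the book discusses.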

  20. Hierarchical compression of Caenorhabditis elegans locomotion reveals phenotypic differences in the organization of behaviour

    Science.gov (United States)

    2016-01-01

    Regularities in animal behaviour offer insights into the underlying organizational and functional principles of nervous systems and automated tracking provides the opportunity to extract features of behaviour directly from large-scale video data. Yet how to effectively analyse such behavioural data remains an open question. Here, we explore whether a minimum description length principle can be exploited to identify meaningful behaviours and phenotypes. We apply a dictionary compression algorithm to behavioural sequences from the nematode worm Caenorhabditis elegans freely crawling on an agar plate both with and without food and during chemotaxis. We find that the motifs identified by the compression algorithm are rare but relevant for comparisons between worms in different environments, suggesting that hierarchical compression can be a useful step in behaviour analysis. We also use compressibility as a new quantitative phenotype and find that the behaviour of wild-isolated strains of C. elegans is more compressible than that of the laboratory strain N2 as well as the majority of mutant strains examined. Importantly, in distinction to more conventional phenotypes such as overall motor activity or aggregation behaviour, the increased compressibility of wild isolates is not explained by the loss of function of the gene npr-1, which suggests that erratic locomotion is a laboratory-derived trait with a novel genetic basis. Because hierarchical compression can be applied to any sequence, we anticipate that compressibility can offer insights into the organization of behaviour in other animals including humans. PMID:27581484

  1. Hierarchical compression of Caenorhabditis elegans locomotion reveals phenotypic differences in the organization of behaviour.

    Science.gov (United States)

    Gomez-Marin, Alex; Stephens, Greg J; Brown, André E X

    2016-08-01

    Regularities in animal behaviour offer insights into the underlying organizational and functional principles of nervous systems and automated tracking provides the opportunity to extract features of behaviour directly from large-scale video data. Yet how to effectively analyse such behavioural data remains an open question. Here, we explore whether a minimum description length principle can be exploited to identify meaningful behaviours and phenotypes. We apply a dictionary compression algorithm to behavioural sequences from the nematode worm Caenorhabditis elegans freely crawling on an agar plate both with and without food and during chemotaxis. We find that the motifs identified by the compression algorithm are rare but relevant for comparisons between worms in different environments, suggesting that hierarchical compression can be a useful step in behaviour analysis. We also use compressibility as a new quantitative phenotype and find that the behaviour of wild-isolated strains of C. elegans is more compressible than that of the laboratory strain N2 as well as the majority of mutant strains examined. Importantly, in distinction to more conventional phenotypes such as overall motor activity or aggregation behaviour, the increased compressibility of wild isolates is not explained by the loss of function of the gene npr-1, which suggests that erratic locomotion is a laboratory-derived trait with a novel genetic basis. Because hierarchical compression can be applied to any sequence, we anticipate that compressibility can offer insights into the organization of behaviour in other animals including humans. © 2016 The Authors.

  2. Reference Based Genome Compression

    CERN Document Server

    Chern, Bobbie; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy

    2012-01-01

    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
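
    The mapping stage can be sketched as a greedy match of the target against the reference, emitting copy and literal operations that an entropy coder would then compress. The function names and the naive quadratic search below are illustrative assumptions, not the paper's implementation.

```python
def map_to_reference(reference, target, min_match=4):
    """Greedy mapping of target against reference: emit ('copy', pos, length)
    for substrings found in the reference, ('lit', char) otherwise."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        # naive search for the longest reference match starting at target[i]
        for j in range(len(reference)):
            l = 0
            while (i + l < len(target) and j + l < len(reference)
                   and target[i + l] == reference[j + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len >= min_match:
            ops.append(('copy', best_pos, best_len))
            i += best_len
        else:
            ops.append(('lit', target[i]))
            i += 1
    return ops

def reconstruct(reference, ops):
    """Invert the mapping: expand copies from the reference, keep literals."""
    out = []
    for op in ops:
        if op[0] == 'copy':
            _, pos, length = op
            out.append(reference[pos:pos + length])
        else:
            out.append(op[1])
    return ''.join(out)
```

    When the target is close to the reference, almost everything becomes a handful of long copy operations, which is why the compressed mapping can be so much smaller than the genome itself.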

  3. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also support HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
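
    As a concrete example of the tone-mapping operators the abstract discusses, here is a sketch of the well-known Reinhard global operator (chosen for familiarity; it is not one of the operators evaluated in the paper). It scales scene luminance by its geometric mean ("key") and then compresses with L/(1+L), mapping the unbounded HDR range into [0, 1).

```python
import math

def log_average_luminance(lums, eps=1e-6):
    """Scene key: geometric mean of luminance (eps guards log(0))."""
    return math.exp(sum(math.log(eps + l) for l in lums) / len(lums))

def reinhard_tonemap(lums, key=0.18):
    """Reinhard global operator: normalise by the scene key, then
    compress with L/(1+L); output luminance lies in [0, 1)."""
    lw = log_average_luminance(lums)
    scaled = [key * l / lw for l in lums]
    return [l / (1.0 + l) for l in scaled]
```

    The operator is monotonic, so relative brightness ordering is preserved while the dynamic range is squeezed into what an LDR display can show.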

  4. Rate-distortion optimised video transmission using pyramid vector quantisation.

    Science.gov (United States)

    Bokhari, Syed; Nix, Andrew R; Bull, David R

    2012-08-01

    Conventional video compression relies on interframe prediction (motion estimation), intraframe prediction, and variable-length entropy encoding to achieve high compression ratios but, as a consequence, produces an encoded bitstream that is inherently sensitive to channel errors. In order to ensure reliable delivery over lossy channels, it is necessary to invoke various additional error detection and correction methods. In contrast, techniques such as Pyramid Vector Quantisation (PVQ) have the ability to prevent error propagation through the use of fixed-length codewords. This paper introduces an efficient rate-distortion optimisation algorithm for intra-mode PVQ which offers compression performance similar to intra H.264/AVC and Motion JPEG 2000 while offering inherent error resilience. The performance of our enhanced codec is evaluated for HD content in the context of a realistic (IEEE 802.11n) wireless environment. We show that PVQ provides high tolerance to corrupted data compared to the state of the art while obviating the need for complex encoding tools.
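
    The fixed-length property of PVQ comes from indexing a finite "pyramid" codebook: the set of integer vectors of dimension L with L1 norm exactly K. Its size obeys a standard recurrence (due to Fischer), sketched below; this is background for the technique, not code from the paper.

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def pvq_count(l, k):
    """Number of integer vectors of dimension l with L1 norm exactly k
    (the pyramid codebook size): N(l,k) = N(l-1,k) + N(l,k-1) + N(l-1,k-1),
    with N(l,0) = 1 and N(0,k) = 0 for k > 0."""
    if k == 0:
        return 1
    if l == 0:
        return 0
    return pvq_count(l - 1, k) + pvq_count(l, k - 1) + pvq_count(l - 1, k - 1)

def pvq_codeword_bits(l, k):
    """Fixed-length index size in bits: every codevector gets the same
    number of bits, so a bit error corrupts one vector only and cannot
    propagate, unlike variable-length entropy codes."""
    return ceil(log2(pvq_count(l, k)))
```

    For example, dimension 2 with norm 2 has the 8 vectors (±2,0), (0,±2), (±1,±1), so every codeword fits in exactly 3 bits.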

  5. New modulation-based watermarking technique for video

    Science.gov (United States)

    Lemma, Aweke; van der Veen, Michiel; Celik, Mehmet

    2006-02-01

    Successful watermarking algorithms have already been developed for various applications ranging from meta-data tagging to forensic tracking. Nevertheless, it is worthwhile to develop alternative watermarking techniques that provide a broader basis for meeting emerging services, usage models, and security threats. To this end, we propose a new multiplicative watermarking technique for video, which is based on the principles of our successful MASK audio watermark. Audio-MASK embeds the watermark by modulating the short-time envelope of the audio signal and performs detection using a simple envelope detector followed by a SPOMF (symmetrical phase-only matched filter). Video-MASK takes a similar approach and modulates the image luminance envelope. In addition, it incorporates a simple model to account for the luminance sensitivity of the HVS (human visual system). Preliminary tests show the algorithm's transparency and robustness to lossy compression.
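
    The multiplicative envelope modulation can be sketched in a few lines. Note the simplifications: detection here uses plain normalised correlation against a known reference level rather than the envelope detector and SPOMF described in the abstract, and the function names and alpha value are illustrative assumptions.

```python
def embed(luma, wm, alpha=0.1):
    """Multiplicative embedding: modulate the luminance envelope,
    y = x * (1 + alpha * w), with |alpha| small for transparency."""
    return [x * (1 + alpha * w) for x, w in zip(luma, wm)]

def detect(received, wm, reference_level):
    """Detection sketch: recover the modulation (y/ref - 1) and correlate
    it with the reference watermark sequence (+/-1 values assumed)."""
    residual = [y / reference_level - 1 for y in received]
    return sum(r * w for r, w in zip(residual, wm)) / len(wm)
```

    For a ±1 watermark, the correlator returns approximately alpha when the watermark is present and approximately zero when it is absent.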

  6. Watermarking in H.264/AVC compressed domain using Exp-Golomb code words mapping

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2011-09-01

    In this paper, a fast watermarking algorithm for the H.264/AVC compressed video using Exponential-Golomb (Exp-Golomb) code words mapping is proposed. During the embedding process, the eligible Exp-Golomb code words of reference frames are first identified, and then the mapping rules between these code words and the watermark bits are established. Watermark embedding is performed by modulating the corresponding Exp-Golomb code words, which is based on the established mapping rules. The watermark information can be extracted directly from the encoded stream without resorting to the original video, and merely requires parsing the Exp-Golomb code from bit stream rather than decoding the video. Experimental results show that the proposed watermarking scheme can effectively embed information with no bit rate increase and almost no quality degradation. The algorithm, however, is fragile and re-encoding at alternate bit rates or transcoding removes the watermark.

  7. Safety and efficacy of a potential treatment algorithm by using manual compression repair and ultrasound-guided thrombin injection for the management of iatrogenic femoral artery pseudoaneurysm in a large patient cohort.

    Science.gov (United States)

    Dzijan-Horn, Marijana; Langwieser, Nicolas; Groha, Philipp; Bradaric, Christian; Linhardt, Maryam; Böttiger, Corinna; Byrne, Robert A; Steppich, Birgit; Koppara, Tobias; Gödel, Julia; Hadamitzky, Martin; Ott, Ilka; von Beckerath, Nicolas; Kastrati, Adnan; Laugwitz, Karl-Ludwig; Ibrahim, Tareq

    2014-04-01

    Because of the risk of associated complications, femoral pseudoaneurysm (PSA) formation implies further treatment. Ultrasound-guided thrombin injection (UGTI) is becoming the accepted gold standard, but manual compression (MC) represents an established treatment option, including for PSAs not amenable to UGTI. This study assesses our experience in PSA treatment using MC or UGTI according to a potential algorithm based on morphological properties in a large patient cohort. Between January 2007 and January 2011, a total of 432 PSAs were diagnosed in 29091 consecutive patients (1.49%) undergoing femoral artery catheterization. Compressible, small PSAs were treated with manual compression therapy. Follow-up examinations were performed within 4 to 6 hours after UGTI or by the next morning and were available for 428 patients (99.1%). The overall success rate of our institutional therapeutic approach was 97.2%, achieved by 178 MC and 357 UGTI procedures, respectively. Procedural complications occurred in 5 cases (1.4%) after UGTI and in 3 cases (1.7%) after MC, respectively. The treatment algorithm was not successful in 12 patients: 2 PSAs (0.5%) were successfully excluded by implantation of a covered stent-graft, and 10 patients (2.3%) required surgical intervention, which was associated with a high complication rate (30%). The presented treatment algorithm facilitates effective and safe PSA elimination.

  8. Compression and Predictive Distributions for Large Alphabets

    Science.gov (United States)

    Yang, Xiao

    Data generated from large alphabets exist almost everywhere in our lives, for example, texts, images and videos. Traditional universal compression algorithms mostly involve small alphabets and implicitly assume an asymptotic condition under which the extra bits induced in the compression process vanish as the amount of data grows without bound. In this thesis, we put the main focus on compression and prediction for large alphabets, with the alphabet size comparable to or larger than the sample size. We first consider sequences of random variables independently and identically generated from a large alphabet. In particular, the size of the sample is allowed to be variable. A product distribution based on Poisson sampling and tiling is proposed as the coding distribution, which greatly simplifies the implementation and analysis through independence. Moreover, we characterize the behavior of the coding distribution through a condition on the tail sum of the ordered counts, and apply it to sequences satisfying this condition. Further, we apply this method to envelope classes. This coding distribution provides a convenient method to approximately compute Shtarkov's normalized maximum likelihood (NML) distribution, and the extra price paid for this convenience is small compared to the total cost. Furthermore, we find this coding distribution can also be used to calculate the NML distribution exactly, and this calculation remains simple due to the independence of the coding distribution. Finally, we consider a more realistic class, the Markov class, and in particular tree sources. A context-tree-based algorithm is designed to describe the dependencies among the contexts; it is a greedy algorithm that seeks the greatest savings in codelength when constructing the tree. Compression and prediction of the individual counts associated with the contexts use the same coding distribution as in the i.i.d. case. Combining these two procedures, we demonstrate a compression algorithm based

  9. A fast meteor detection algorithm

    Science.gov (United States)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
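
    The abstract gives only the name of the MTP technique. Assuming (my reading, not a specification) that MTP collapses a stack of video frames to the per-pixel maximum over time, a bright moving meteor leaves its entire streak in a single frame that can then be thresholded cheaply:

```python
import numpy as np

def max_temporal_pixel(frames):
    """Collapse a stack of frames (T, H, W) to the per-pixel maximum
    over time; a bright mover deposits its full streak in one image."""
    return np.max(frames, axis=0)

def detect_streak(frames, threshold):
    """Fast thresholding of the MTP image as a detection front-end."""
    mtp = max_temporal_pixel(frames)
    return mtp >= threshold
```

    This reduces T frames to one before any line detection runs, which is where the throughput saving would come from.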

  10. Distortion-Based Link Adaptation for Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Andrew Nix

    2008-06-01

    Full Text Available Wireless local area networks (WLANs) such as IEEE 802.11a/g utilise numerous transmission modes, each providing different throughputs and reliability levels. Most link adaptation algorithms proposed in the literature (i) maximise the error-free data throughput, (ii) do not take into account the content of the data stream, and (iii) rely strongly on the use of ARQ. Low-latency applications, such as real-time video transmission, do not permit large numbers of retransmissions. In this paper, a novel link adaptation scheme is presented that improves the quality of service (QoS) for video transmission. Rather than maximising the error-free throughput, our scheme minimises the video distortion of the received sequence. Using simple and local rate-distortion measures and end-to-end distortion models at the video encoder, the proposed scheme estimates the received video distortion at the current transmission rate, as well as at the adjacent lower and higher rates. This allows the system to select the link speed which offers the lowest distortion and to adapt to the channel conditions. Simulation results are presented using the H.264/MPEG-4 AVC video compression standard over IEEE 802.11g. The results show that the proposed system closely follows the optimum theoretic solution.
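
    The selection rule described above reduces to a small search: estimate distortion at the current rate and its immediate neighbours, then move to whichever is lowest. The sketch below assumes the distortion estimator is supplied as a callable; the function name is illustrative.

```python
def select_link_speed(rates, current_idx, estimate_distortion):
    """Distortion-based link adaptation: evaluate the estimated end-to-end
    video distortion at the current rate and its adjacent lower/higher
    rates, and return the index with the lowest distortion."""
    candidates = [i for i in (current_idx - 1, current_idx, current_idx + 1)
                  if 0 <= i < len(rates)]
    return min(candidates, key=lambda i: estimate_distortion(rates[i]))
```

    Restricting the search to adjacent rates keeps the per-decision cost constant while still letting the link walk toward the distortion-optimal mode over successive adaptations.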

  11. Very low bit rate video coding standards

    Science.gov (United States)

    Zhang, Ya-Qin

    1995-04-01

    Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, the ITU-T/SG15 has formed an expert group on low bit-rate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: near-term and long-term. The near-term standard H.32P/N, based on existing compression technologies, mainly addresses the issues related to visual telephony at below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. The ISO/SC29/WG11, after its highly visible and successful MPEG-1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard MPEG-4. With the recent change of direction, MPEG-4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these ongoing standards activities undertaken by ITU-T/LBC and ISO/MPEG-4 as of December 1994.

  12. Fast algorithm for the 3-D DCT-II

    OpenAIRE

    Boussakta, S.; Alshibami, H.O.

    2004-01-01

    Recently, many applications for three-dimensional (3-D) image and video compression have been proposed using 3-D discrete cosine transforms (3-D DCTs). Among different types of DCTs, the type-II DCT (DCT-II) is the most used. In order to use the 3-D DCTs in practical applications, fast 3-D algorithms are essential. Therefore, in this paper, the 3-D vector-radix decimation-in-frequency (3-D VR DIF) algorithm that calculates the 3-D DCT-II directly is introduced. The mathematical analysis an...
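
    The paper's fast 3-D VR DIF algorithm is not reproduced here; as background, the sketch below is the straightforward separable reference implementation of the (unnormalised) 3-D DCT-II, applying the 1-D transform along each axis in turn. This is the kind of baseline against which a fast algorithm would be validated.

```python
import numpy as np

def dct2_1d(x):
    """Direct 1-D DCT-II (unnormalised): X[k] = sum_n x[n] cos(pi*(2n+1)*k/(2N))."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                     for k in range(N)])

def dct2_3d(cube):
    """Separable 3-D DCT-II: because the 3-D kernel factors into three
    1-D cosine kernels, the transform is the 1-D DCT-II applied along
    each of the three axes."""
    out = np.asarray(cube, dtype=float)
    for axis in range(3):
        out = np.apply_along_axis(dct2_1d, axis, out)
    return out
```

    A quick sanity check: the DC coefficient (k = 0 on every axis) is the sum of all samples, since every cosine factor is 1 there.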

  13. Transmission of object based fine-granular-scalability video over networks

    Science.gov (United States)

    Shi, Xu-li; Jin, Zhi-cheng; Teng, Guo-wei; Zhang, Zhao-yang; An, Ping; Xiao, Guang

    2006-05-01

    How to transmit video streams over the Internet and wireless networks is a major focus of current research on video standards. One key method, supported by MPEG-4, is FGS (Fine Granular Scalability), which can continually adapt to varying network bandwidth, though at some sacrifice of coding efficiency. Object-based video coding, applicable to interactive video, was first included in the MPEG-4 standard. However, real-time segmentation of VOPs (video object planes) is difficult, which limits the application of MPEG-4 to interactive video. H.264/AVC is the most recent video coding standard; it enhances compression performance and provides a network-friendly video representation. In this paper, we propose a new Object-Based FGS (OBFGS) coding algorithm embedded in H.264/AVC that differs from the one in MPEG-4. After optimization of the H.264 encoder, the FGS first completes the base-layer coding and then extracts moving VOPs using the base-layer motion vectors and DCT coefficients. The sparse motion vector field of each P-frame, composed of 4×4, 4×8 and 8×4 blocks in the base layer, is interpolated, and the DCT coefficients of each I-frame are calculated from the spatial intra-prediction information. After forward-projecting each P-frame vector onto the immediately adjacent I-frame, the method extracts moving VOPs using a recursive 4×4 block classification process. Only the blocks that belong to the moving VOP, at 4×4-block accuracy, are coded to produce the enhancement-layer stream. Experimental results show that the proposed system obtains high quality for the VOPs of interest at the cost of somewhat lower coding efficiency.

  14. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing, and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains hard using currently available techniques. However, a wide range of video has inherent structure, so some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain greatly reduces processing time and enhances storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.

  15. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination across video frames, text appearing on complex backgrounds, and differing font sizes of the text. Using various image processing algorithms such as morphological operations, blob detection and histogram of oriented gradients, the character recognition of video subtitles is implemented. Segmentation, feature extraction and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.

  16. Binary video codec for data reduction in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSNs) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent replacement of batteries is usually not desirable, so both the processing and the communication energy consumption of the VSN need to be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduce the information amount in the change frame. But if the number of objects in the change frames is higher than a certain level, then the compression efficiency
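
    Change coding for bi-level frames really is just an exclusive-OR, which is why it is so cheap: the XOR of two frames has 1s only where pixels changed, and the same XOR on the decoder side reconstructs the current frame. A minimal sketch over flat bit lists:

```python
def change_code(prev, curr):
    """Change coding for bi-level frames: XOR leaves 1s only at changed
    pixels, so a mostly static scene yields a sparse change frame."""
    return [p ^ c for p, c in zip(prev, curr)]

def apply_change(prev, change):
    """Decoder side: XOR with the same change frame restores curr,
    since prev ^ (prev ^ curr) == curr."""
    return [p ^ d for p, d in zip(prev, change)]
```

    Any sparse-set coder (run-length, ROI coding as above) can then compress the change frame far below the cost of coding the full current frame.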

  17. Bridging analog and digital video in the surgical setting.

    Science.gov (United States)

    Miron, Hagai; Blumenthal, Eytan Z

    2003-10-01

    Editing surgical videos requires a basic understanding of key technical issues, especially when transforming from analog to digital media. These issues include an understanding of compression-decompression (eg, MPEGs), generation quality loss, video formats, and compression ratios. We introduce basic terminology and concepts related to analog and digital video, emphasizing the process of converting analog video to digital files. The choice of hardware, software, and formats is discussed, including advantages and drawbacks. Last, we provide an inexpensive hardware-software solution.

  18. Lossless compression of color sequences using optimal linear prediction theory.

    Science.gov (United States)

    Andriani, Stefano; Calvagno, Giancarlo

    2008-11-01

    In this paper, we present a novel technique that uses optimal linear prediction theory to exploit all the existing redundancies in a color video sequence for lossless compression purposes. The main idea is to introduce the spatial, spectral, and temporal correlations into the autocorrelation matrix estimate. In this way, we calculate the cross-correlations between adjacent frames and adjacent color components to improve the prediction, i.e., reduce the prediction error energy. The residual image is then coded using a context-based Golomb-Rice coder, where the error modeling is provided by a quantized version of the local prediction error variance. Experimental results show that the proposed algorithm achieves good compression ratios and is robust against the scene change problem.
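
    The residual coder mentioned above can be sketched with the standard Golomb-Rice construction: signed prediction errors are first mapped to non-negative integers, then coded as a unary quotient plus a k-bit remainder (in the paper, the context model chooses k from the quantized local error variance; that part is omitted here).

```python
def zigzag(e):
    """Map signed prediction errors to non-negative integers:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(n, k):
    """Golomb-Rice code of a non-negative integer: the quotient n >> k
    in unary ('1' repeated, terminated by '0'), then the k-bit binary
    remainder."""
    q = n >> k
    bits = '1' * q + '0'
    if k:
        bits += format(n & ((1 << k) - 1), '0{}b'.format(k))
    return bits
```

    Small k suits low-variance residuals (short remainders), large k suits noisy regions (short unary parts), which is exactly what driving k from the local variance buys.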

  19. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and huffman coding

    Science.gov (United States)

    Hakim, P. R.; Permala, R.

    2017-01-01

    LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either lossy or lossless compression methods. This paper analyzes the Differential Pulse Code Modulation (DPCM) method and the Huffman coding that are used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses that have been done, the current LAPAN-IPB lossless compression algorithm has moderate performance. There are several aspects that can be improved from the current configuration: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used for the DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with a sub-image approach could also increase the performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
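
    The two-neighbour DPCM finding can be illustrated with a small sketch: predict each pixel from the mean of its left and top neighbours, and compare the Shannon entropy of the residuals (a lower bound on Huffman code length) against that of the raw pixels. The specific predictor is an assumption for illustration, not the satellite's actual code.

```python
import math
from collections import Counter

def dpcm_residuals(img):
    """DPCM with a two-neighbour predictor: predict each pixel as the
    mean of its left and top neighbours (border pixels fall back to
    whichever neighbour exists, or zero at the origin)."""
    res = []
    for y in range(len(img)):
        for x in range(len(img[0])):
            left = img[y][x - 1] if x else None
            top = img[y - 1][x] if y else None
            if left is not None and top is not None:
                pred = (left + top) // 2
            else:
                pred = left if left is not None else (top if top is not None else 0)
            res.append(img[y][x] - pred)
    return res

def entropy(values):
    """Shannon entropy in bits/symbol: a lower bound on the average
    Huffman code length for these symbols."""
    counts = Counter(values)
    total = len(values)
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

    On a smooth gradient image the residuals collapse to a couple of values, so their entropy is far below that of the raw pixels, which is the compression gain Huffman coding then realises.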

  20. Non-US data compression and coding research. FASAC Technical Assessment Report

    Energy Technology Data Exchange (ETDEWEB)

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  1. Fingerprints in Compressed Strings

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries...
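
    The Karp-Rabin fingerprint is a polynomial hash, and the property that makes it useful over compressed strings is composability: the fingerprint of a concatenation can be computed from the fingerprints of the parts without touching the characters. A minimal sketch (base and modulus chosen arbitrarily here):

```python
def karp_rabin(s, base=256, mod=1_000_000_007):
    """Karp-Rabin fingerprint: the string read as a base-`base`
    polynomial, reduced modulo a large prime."""
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h

def concat_fingerprint(h1, h2, len2, base=256, mod=1_000_000_007):
    """Composition rule phi(xy) = phi(x) * base^|y| + phi(y) (mod m):
    this is what lets a data structure over a grammar-compressed string
    answer substring fingerprint queries without decompressing."""
    return (h1 * pow(base, len2, mod) + h2) % mod
```

    In a grammar of size n, each rule's fingerprint is composed from its two children's, so all rule fingerprints are computable in O(n) total.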

  2. Compressive light field displays.

    Science.gov (United States)

    Wetzstein, Gordon; Lanman, Douglas; Hirsch, Matthew; Heidrich, Wolfgang; Raskar, Ramesh

    2012-01-01

    Light fields are the multiview extension of stereo image pairs: a collection of images showing a 3D scene from slightly different perspectives. Depicting high-resolution light fields usually requires an excessively large display bandwidth; compressive light field displays are enabled by the codesign of optical elements and computational-processing algorithms. Rather than pursuing a direct "optical" solution (for example, adding one more pixel to support the emission of one additional light ray), compressive displays aim to create flexible optical systems that can synthesize a compressed target light field. In effect, each pixel emits a superposition of light rays. Through compression and tailored optical designs, fewer display pixels are necessary to emit a given light field than a direct optical solution would require.

  3. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  4. Astronomical context coder for image compression

    Science.gov (United States)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

    Recent lossless still image compression formats are powerful tools for compression of all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the coded image's properties. Astronomical data form a special class of images; they have, among general image properties, some specific characteristics which are unique. If a new coder is able to correctly exploit these special properties, this should lead to superior performance on this specific class of images, at least in terms of compression ratio. In this work, a novel lossless astronomical image data compression method will be presented. The achievable compression ratio of this new coder will be compared to the theoretical lossless compression limit and also to the recent compression standards of astronomy and general multimedia.

  5. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    D. Ebenezer

    2008-10-01

    Full Text Available A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error [MSE], peak-signal-to-noise ratio [PSNR], and image enhancement factor [IEF] and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.
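
    The detect-then-estimate structure described above can be sketched as follows. Assumptions made for illustration: corrupted pixels are identified as salt-and-pepper extremes (0 or 255), the adaptive window grows from 3×3 to 5×5, and the fallback to mean filtering fires when no uncorrupted neighbour exists even in the largest window.

```python
import statistics

def decision_based_filter(img, noise_vals=(0, 255)):
    """Decision-based filtering sketch: only pixels flagged as corrupted
    are replaced. The window grows from 3x3 up to 5x5 until uncorrupted
    neighbours are found; with excessive noise (no clean neighbour at
    all) the filter switches to the mean of the window."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] not in noise_vals:
                continue                      # detection step: keep clean pixels
            for r in (1, 2):                  # 3x3 window, then 5x5
                win = [img[j][i]
                       for j in range(max(0, y - r), min(h, y + r + 1))
                       for i in range(max(0, x - r), min(w, x + r + 1))]
                good = [v for v in win if v not in noise_vals]
                if good:
                    out[y][x] = int(statistics.median(good))
                    break
            else:
                out[y][x] = sum(win) // len(win)   # excessive noise: mean
    return out
```

    Because clean pixels are never touched, edges and details survive, which is the key advantage the abstract claims over filters that process every pixel.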

  6. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    Manikandan S

    2008-01-01

    Full Text Available A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error [MSE], peak-signal-to-noise ratio [PSNR], and image enhancement factor [IEF] and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.

  7. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    Directory of Open Access Journals (Sweden)

    Li Houqiang

    2007-01-01

Full Text Available With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.
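The quantization-parameter adjustment idea can be illustrated with a toy budget-balancing rule (Python; the abstract does not specify the actual adjustment, so `base_qp`, `delta` and the compensation rule below are assumptions):

```python
def assign_qp(attention_mask, base_qp=30, delta=4):
    """Give attention-area macroblocks a finer quantizer (lower QP) and
    coarsen the rest so the average QP (a rough bit-budget proxy) stays
    near base_qp. attention_mask is one bool per macroblock."""
    n, n_att = len(attention_mask), sum(attention_mask)
    # compensate outside the attention area to keep the mean QP constant
    comp = delta * n_att / (n - n_att) if 0 < n_att < n else 0
    return [base_qp - delta if a else base_qp + comp for a in attention_mask]
```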

  8. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats.

  9. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  10. Adaptive learning compressive tracking based on Markov location prediction

    Science.gov (United States)

    Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan

    2017-03-01

Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, under object occlusion, abrupt motion and blur, similar objects, and scale changes, CT suffers from tracking drift. We propose Markov object location prediction to get the initial position of the object. Then CT is used to locate the object accurately, and a classifier-parameter adaptive updating strategy is given based on the confidence map. At the same time, scale features are extracted according to the object location, which deals with object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
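The Markov location-prediction step can be sketched as a first-order model over quantized frame-to-frame displacements (Python; the paper's actual state definition and quantization are not given in the abstract, so this is an assumed minimal variant):

```python
from collections import Counter, defaultdict

def predict_next(track):
    """Predict the next object location from a first-order Markov model
    over the displacements observed in the track so far."""
    moves = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    trans = defaultdict(Counter)          # P(next move | last move), as counts
    for a, b in zip(moves, moves[1:]):
        trans[a][b] += 1
    last = moves[-1]
    if trans[last]:
        dx, dy = trans[last].most_common(1)[0][0]
    else:                                 # unseen state: assume constant velocity
        dx, dy = last
    x, y = track[-1]
    return (x + dx, y + dy)
```

The predicted position would then seed the CT search window instead of the previous frame's position.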

  11. Distributed source coding of video with non-stationary side-information

    NARCIS (Netherlands)

    Meyer, P.F.A.; Westerlaken, R.P.; Klein Gunnewiek, R.; Lagendijk, R.L.

    2005-01-01

    In distributed video coding, the complexity of the video encoder is reduced at the cost of a more complex video decoder. Using the principles of Slepian andWolf, video compression is then carried out using channel coding principles, under the assumption that the video decoder can temporally predict

  12. Compressed Subsequence Matching and Packed Tree Coloring

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2017-01-01

We present a new algorithm for subsequence matching in grammar compressed strings. Given a grammar of size n compressing a string of size N and a pattern string of size m over an alphabet of size σ, our algorithm uses O(n + nσ/w) space and O(n + nσ/w + m log N...

  13. Real-time heart rate measurement for multi-people using compressive tracking

    Science.gov (United States)

    Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng

    2017-09-01

The rise of the aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy (IPPG) aids in the development of these solutions by allowing for the extraction of physiological signals from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously was proposed and realized using the open computer vision (OpenCV) library. It consists of getting multiple subjects' facial video automatically through a webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate with our improved Adaboost algorithm, reducing MA with our improved compressive tracking (CT) algorithm, a wavelet noise-suppression algorithm for denoising, and multi-threading for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and the processing of every frame almost reaches real-time: experiments with video recordings of ten subjects at a pixel resolution of 600 × 800 show that the average HR detection speed for the 10 subjects was about 17 frames per second (fps).
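The core IPPG step, recovering a heart rate as the dominant frequency of a facial brightness signal, can be sketched with a naive DFT (Python; the physiological band limits and the plain periodogram peak-picking are assumptions, the paper adds wavelet denoising and tracking on top):

```python
import math

def heart_rate_bpm(signal, fps, lo=0.7, hi=4.0):
    """Estimate heart rate as the dominant frequency of the mean-removed
    ROI brightness signal, searched in the band lo..hi Hz (42-240 BPM)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_f, best_p = 0.0, -1.0
    k = 1
    while k * fps / n <= hi:                 # scan DFT bins up to hi Hz
        f = k * fps / n
        if f >= lo:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p = re * re + im * im            # periodogram power at bin k
            if p > best_p:
                best_f, best_p = f, p
        k += 1
    return 60.0 * best_f
```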

  14. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm

    NARCIS (Netherlands)

    Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K.

    2011-01-01

A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm

  15. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Science.gov (United States)

    Saponara, Sergio; Denolf, Kristof; Lafruit, Gauthier; Blanch, Carolina; Bormans, Jan

    2004-12-01

    The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  16. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Saponara Sergio

    2004-01-01

Full Text Available The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  17. Lazy Management for Frequency Table on Hardware-Based Stream Lossless Data Compression

    Directory of Open Access Journals (Sweden)

    Koichi Marumo

    2016-10-01

Full Text Available The demand for communicating large amounts of data in real time has raised new challenges in implementing high-speed communication paths for high-definition video and sensory data, requiring the implementation of high-speed data paths in hardware. The implementation difficulties have to be addressed by applying new techniques based on data-oriented algorithms. This paper focuses on a solution to this problem by applying a lossless data compression mechanism on the communication data path. The new lossless data compression mechanism, called LCA-DLT, provides dynamic histogram management for the symbol lookup tables used in the compression and decompression operations. When the histogram memory is fully used, the management algorithm needs to find the least used entries and invalidate them. These invalidation operations block the compression and decompression data streams. This paper proposes novel techniques to eliminate blocking by introducing a dynamic invalidation mechanism, which allows high-throughput data compression to be achieved.
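The lazy-invalidation idea, aging counters instead of stalling the stream to scan for least-used entries, can be sketched as follows (Python; LCA-DLT's actual table layout and eviction policy are hardware-specific, so this `LazyFreqTable` is an illustrative software analogue):

```python
class LazyFreqTable:
    """Bounded symbol frequency table; when full, lazily age the counts
    and drop entries that decay to zero instead of blocking on a full
    least-used scan for every insertion."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.counts = {}

    def touch(self, symbol):
        if symbol in self.counts:
            self.counts[symbol] += 1
            return
        if len(self.counts) >= self.capacity:
            self._age()
        self.counts[symbol] = 1

    def _age(self):
        # halve every counter; entries reaching zero are invalidated
        self.counts = {s: c // 2 for s, c in self.counts.items() if c // 2 > 0}
        if len(self.counts) >= self.capacity:   # still full: drop the minimum
            m = min(self.counts.values())
            self.counts = {s: c for s, c in self.counts.items() if c > m}
```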

  18. Robust video transmission with distributed source coded auxiliary channel.

    Science.gov (United States)

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  19. Video Data Compression Study for Remote Sensors

    Science.gov (United States)

    1976-02-01

instrumentation required to implement these frame-to-frame techniques, the project monitor requested we terminate the emphasis on frame-to-frame methods...dimensional PCM processing. In the following discussion the system operation is outlined; assume that, without loss of generality, the ...

  20. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonably priced, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  1. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding the behavior of a single target and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms that analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  2. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  3. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
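The pulse-matching idea, scoring each time point by how well the local Replay activity matches a rectangular pulse of fixed width, can be sketched as follows (Python; the threshold and the peak-picking rule are assumptions, the paper uses a correlation coefficient against a reference pulse):

```python
def interesting_segments(replays, width=3, min_score=4):
    """Score each second of a video by summing nearby Replay counts under
    a rectangular pulse of fixed width; local maxima above a threshold
    mark candidate segments of interest (start, end)."""
    n = len(replays)
    half = width // 2
    score = [sum(replays[max(0, t - half): t + half + 1]) for t in range(n)]
    segs = []
    for t in range(n):
        neighborhood = score[max(0, t - width): t + width + 1]
        if score[t] >= min_score and score[t] == max(neighborhood):
            segs.append((max(0, t - half), min(n - 1, t + half)))
    return segs
```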

  4. Super-Resolution Still and Video Reconstruction from MPEG Coded Video

    National Research Council Canada - National Science Library

    Altunbasak, Yucel

    2004-01-01

    Transform coding is a popular and effective compression method for both still images and video sequences, as is evident from its widespread use in international media coding standards such as MPEG, H.263 and JPEG...

  5. Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm [v1; ref status: indexed, http://f1000r.es/2qo]

    Directory of Open Access Journals (Sweden)

    Christopher R Madan

    2014-01-01

Full Text Available When studying animal behaviour within an open environment, movement-related data are often important for behavioural analyses. Therefore, simple and efficient techniques are needed to present and analyze the data of such movements. However, it is challenging to present both spatial and temporal information of movements within a two-dimensional image representation. To address this challenge, we developed the spectral time-lapse (STL) algorithm that re-codes an animal’s position at every time point with a time-specific color, and overlays it with a reference frame of the video, to produce a summary image. We additionally incorporated automated motion tracking, such that the animal’s position can be extracted and summary statistics such as path length and duration can be calculated, as well as instantaneous velocity and acceleration. Here we describe the STL algorithm and offer a freely available MATLAB toolbox that implements the algorithm and allows for a large degree of end-user control and flexibility.
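The summary statistics and time-specific coloring described above can be sketched as follows (Python rather than the toolbox's MATLAB; the blue-to-red hue sweep and this particular feature set are assumptions based on the abstract):

```python
import colorsys

def stl_summary(track, fps):
    """From a tracked (x, y) position per frame: total path length,
    clip duration, and a time-specific RGB color per sample (hue sweeps
    from blue at the start to red at the end, spectral-time-lapse style)."""
    length = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                 for (x1, y1), (x2, y2) in zip(track, track[1:]))
    duration = (len(track) - 1) / fps
    last = max(1, len(track) - 1)
    colors = [colorsys.hsv_to_rgb(0.66 * (1 - i / last), 1, 1)
              for i in range(len(track))]
    return length, duration, colors
```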

  6. Nonlinear Frequency Compression

    Science.gov (United States)

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
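A common form of the NFC mapping, leaving frequencies below the cutoff untouched and compressing log-spaced distances above it, can be sketched as follows (Python; actual hearing-aid implementations differ in the exact curve, so this is an illustrative variant, not the studied devices' formula):

```python
def nfc_map(freq_hz, cutoff_hz=2000.0, ratio=2.0):
    """Nonlinear frequency compression: below the cutoff nothing changes;
    above it, the octave distance from the cutoff is divided by the
    compression ratio, squeezing high frequencies toward the cutoff."""
    if freq_hz <= cutoff_hz:
        return freq_hz
    return cutoff_hz * (freq_hz / cutoff_hz) ** (1.0 / ratio)
```

With the defaults above, 8 kHz (two octaves above the cutoff) lands one octave above it, at 4 kHz; raising the cutoff or lowering the ratio leaves more of the spectrum untouched, which matches the abstract's finding that the cutoff dominates perceived quality.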

  7. Algoritmi selektivnog šifrovanja - pregled sa ocenom performansi / Selective encryption algorithms: Overview with performance evaluation

    Directory of Open Access Journals (Sweden)

    Boriša Ž. Jovanović

    2010-10-01

Full Text Available Digital multimedia content is becoming widely used and increasingly exchanged over computer networks and public channels (satellite links, wireless networks, the Internet, etc.), which are insecure transmission media for information of sensitive content. Mechanisms for the cryptographic protection of image and video data are becoming more and more significant. Traditional cryptographic techniques can guarantee a high level of security, but at the cost of expensive implementation and significant transmission delays. These shortcomings can be overcome using selective encryption algorithms. Introduction In traditional image and video content protection schemes, called fully layered, the whole content is first compressed. Then, the compressed bitstream is entirely encrypted using a standard cipher (DES - Data Encryption Standard, IDEA - International Data Encryption Algorithm, AES - Advanced Encryption Standard, etc.). The specific characteristics of this kind of data, a high transmission rate with limited bandwidth, make standard encryption algorithms inadequate. Another limitation of traditional systems is that they alter the whole bitstream syntax, which may disable some codec functionalities on the delivery-site coder and the decoder on the receiving site. Selective encryption is a new trend in image and video content protection. As its
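The fully-layered vs. selective distinction can be illustrated with a toy scheme that encrypts only the leading bytes of each block (Python; the XOR keystream and the `fraction` parameter are illustrative assumptions, not a secure or standard construction):

```python
from hashlib import blake2b

def selective_encrypt(blocks, key, fraction=0.25):
    """Encrypt only the leading fraction of each block (a stand-in for the
    perceptually critical bytes, e.g. headers or DC coefficients) by XOR
    with a keyed per-block keystream; applying it twice decrypts."""
    out = []
    for i, blk in enumerate(blocks):
        n = min(64, max(1, int(len(blk) * fraction)))  # blake2b digest cap
        ks = blake2b(key + i.to_bytes(4, 'big'), digest_size=n).digest()
        out.append(bytes(b ^ k for b, k in zip(blk[:n], ks)) + blk[n:])
    return out
```

The point of the sketch is the cost model: only a quarter of each block passes through the cipher, while the untouched tail keeps the bitstream largely parseable.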

  8. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  9. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

Video capture by non-professionals often leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos: a stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points are identified in each frame of the input video and processed, followed by optimization and stabilization of the video; the optimization governs the quality of the stabilization. The method has shown good results in terms of stabilization and removes distortion from output videos recorded in different circumstances.
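The smoothing stage of such a stabilizer, correcting each frame toward a moving-average camera path estimated from the matched feature points, can be sketched in one dimension (Python; the averaging radius and the 1-D simplification are assumptions):

```python
def stabilize(dxs, radius=2):
    """Given per-frame camera displacements (from feature-point matching),
    accumulate the camera trajectory, smooth it with a moving average, and
    return the per-frame correction (smoothed minus original) to warp by."""
    traj, acc = [], 0.0
    for d in dxs:
        acc += d
        traj.append(acc)                    # accumulated camera path
    corrections = []
    for i in range(len(traj)):
        window = traj[max(0, i - radius): i + radius + 1]
        corrections.append(sum(window) / len(window) - traj[i])
    return corrections
```

Applying each correction as a translation of the corresponding frame cancels the high-frequency jitter while preserving intentional camera motion.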

  10. Character superimposition inpainting in surveillance video

    Science.gov (United States)

    Jia, Lili; Tao, Junjie; You, Ying

    2016-01-01

Video surveillance systems play an important role in crime scene investigation, and digital surveillance systems usually subject the superimposed video data to compression. The purpose of this paper is to study the use of inpainting techniques to remove superimposed characters and inpaint the target region. We present an efficient framework comprising extraction of the character-superimposition mask, removal of the superimposition, and inpainting of the blanks. The character region is located through manual ROI selection and a varying-text extractor (e.g., for the time stamp). The superimposed characters usually have colors distinct from the original background, so the edges are easily detected; we use the Canny operator to obtain the edge image. The missing information that affects the structure of the original image is reconstructed using a structure-propagating algorithm. The experiments were implemented in C/C++ in the VS2010 IDE. The framework presented in this paper is effective for recreating the character-superimposition region and helpful for crime scene investigation.

  11. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at a resolution of 640x480 under different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. A key-frame selection algorithm is flexible to changes in a video, but in these methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we aimed at adding a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the original video without significant loss of content between the original video and the received video, fixed sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. The best video transmission was also investigated using the Sequential Distortion Minimization (SEDIM) method and without SEDIM. The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison of the proposed method showed a good performance. A USRP board was used as the RF front-end at 2.2 GHz.

  12. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  13. Search in Real-Time Video Games

    OpenAIRE

    Cowling, Peter I.; Buro, Michael; Bida, Michal; Botea, Adi; Bouzy, Bruno; Butz, Martin V.; Hingston, Philip; Muñoz-Avila, Hector; Nau, Dana; Sipper, Moshe

    2013-01-01

    This chapter arises from the discussions of an experienced international group of researchers interested in the potential for creative application of algorithms for searching finite discrete graphs, which have been highly successful in a wide range of application areas, to address a broad range of problems arising in video games. The chapter first summarises the state of the art in search algorithms for games. It then considers the challenges in implementing these algorithms in video games (p...

  14. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web-publishing, many popular contents are being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious, and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
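
The signature idea described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: frames are represented here as small feature vectors, and the seed images, distance measure, and dimensions are arbitrary choices for the sketch.

```python
import random

def frame_distance(f1, f2):
    # Euclidean distance between two frame feature vectors.
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5

def video_signature(frames, seeds):
    # For each seed image, keep the frame of the video that is closest to it.
    return [min(frames, key=lambda f: frame_distance(f, s)) for s in seeds]

def signature_similarity(sig_a, sig_b):
    # Mean distance between corresponding signature frames (lower = more similar).
    return sum(frame_distance(a, b) for a, b in zip(sig_a, sig_b)) / len(sig_a)

random.seed(0)
seeds = [[random.random() for _ in range(4)] for _ in range(3)]
video = [[random.random() for _ in range(4)] for _ in range(10)]
other = [[random.random() for _ in range(4)] for _ in range(10)]

sig_v = video_signature(video, seeds)
sig_o = video_signature(other, seeds)
print(signature_similarity(sig_v, sig_v))  # 0.0: a video is a perfect duplicate of itself
```

Comparing two signatures costs time proportional to the signature length rather than the video length, which is what makes this kind of scheme practical at web scale.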

  15. An Adaptive Fair-Distributed Scheduling Algorithm to Guarantee QoS for Both VBR and CBR Video Traffics on IEEE 802.11e WLANs

    Directory of Open Access Journals (Sweden)

    Reza Berangi

    2008-07-01

    Full Text Available Most centralized QoS mechanisms for the WLAN MAC layer can only guarantee QoS parameters effectively for CBR video traffic. On the other hand, the existing distributed QoS mechanisms are only able to differentiate between various traffic streams without being able to guarantee QoS. This paper addresses these deficiencies by proposing a new distributed QoS scheme that guarantees QoS parameters such as delay and throughput for both CBR and VBR video traffic. The proposed scheme is also fair to all streams and can adapt to varying network conditions. To achieve this, three fields are added to the RTS/CTS frames, whose combination with the pre-existing duration field of the RTS/CTS frames guarantees periodic, fair, adaptive access of a station to the channel. The performance of the proposed method has been evaluated with NS-2. The results showed that it outperforms IEEE 802.11e HCCA.

  16. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
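
The core compression step, projecting high-dimensional image features through a very sparse random measurement matrix, can be sketched as follows. This is an illustrative sketch under assumed parameters: the sparsity level `s`, dimensions, and lack of scaling are placeholders, not the paper's exact choices.

```python
import random

def sparse_measurement_matrix(n_rows, n_cols, s=3):
    # Very sparse random projection: each entry is +1 or -1 with probability
    # 1/(2s) each, and 0 otherwise, so most multiplications can be skipped.
    def entry():
        r = random.random()
        if r < 1.0 / (2 * s):
            return 1.0
        if r < 1.0 / s:
            return -1.0
        return 0.0
    return [[entry() for _ in range(n_cols)] for _ in range(n_rows)]

def project(matrix, x):
    # Compress a high-dimensional feature vector x into len(matrix) features;
    # zero entries are skipped, which is where the efficiency comes from.
    return [sum(m * v for m, v in zip(row, x) if m) for row in matrix]

random.seed(1)
R = sparse_measurement_matrix(10, 1000)   # 1000-D image features -> 10-D
x = [random.random() for _ in range(1000)]
z = project(R, x)
print(len(z))  # 10 compressed features would feed the naive Bayes classifier
```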

  17. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.

  18. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  19. High-quality lossy compression: current and future trends

    Science.gov (United States)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework where each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.

  20. Robust Shot Boundary Detection from Video Using Dynamic Texture

    Directory of Open Access Journals (Sweden)

    Peng Taile

    2014-03-01

    Full Text Available Video boundary detection is a fundamental topic in computer vision and is important for video analysis and video understanding. Existing video boundary detection methods are typically effective only for certain types of video data and have relatively low generalization ability. We present a novel shot boundary detection algorithm based on video dynamic texture. First, two adjacent frames are read from a given video and normalized to the same size. Second, we divide these frames into sub-domains by the same standard, calculate the average gradient direction of each sub-domain, and form the dynamic texture. Finally, the dynamic textures of adjacent frames are compared. We have run experiments on different types of video data. These experimental results show that our method has high generalization ability: for different types of videos, our algorithm achieves higher average precision and average recall relative to comparable algorithms.
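
The per-block average gradient direction can be sketched as follows. The block size, decision threshold, and toy frames are illustrative assumptions, not the paper's settings.

```python
import math

def block_gradient_direction(frame, x0, y0, size):
    # Average gradient direction (radians) over one sub-domain of a grayscale frame.
    gx = gy = 0.0
    for y in range(y0, y0 + size - 1):
        for x in range(x0, x0 + size - 1):
            gx += frame[y][x + 1] - frame[y][x]
            gy += frame[y + 1][x] - frame[y][x]
    return math.atan2(gy, gx)

def dynamic_texture(frame, block=4):
    # Vector of per-block average gradient directions (the "dynamic texture").
    h, w = len(frame), len(frame[0])
    return [block_gradient_direction(frame, x, y, block)
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]

def is_shot_boundary(f1, f2, threshold=1.0):
    # Declare a cut when the mean angular change between textures is large.
    t1, t2 = dynamic_texture(f1), dynamic_texture(f2)
    diff = sum(abs(a - b) for a, b in zip(t1, t2)) / len(t1)
    return diff > threshold

# Two 8x8 frames: a horizontal ramp vs. a vertical ramp have very different
# gradient fields, mimicking a cut between unrelated shots.
ramp_h = [[x for x in range(8)] for _ in range(8)]
ramp_v = [[y for x in range(8)] for y in range(8)]
print(is_shot_boundary(ramp_h, ramp_h))  # False
print(is_shot_boundary(ramp_h, ramp_v))  # True
```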

  1. Designing experiments through compressed sensing.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  2. Indexing Motion Detection Data for Surveillance Video

    DEFF Research Database (Denmark)

    Vind, Søren Juhl; Bille, Philip; Gørtz, Inge Li

    2014-01-01

    We show how to compactly index video data to support fast motion detection queries. A query specifies a time interval T, an area A in the video and two thresholds v and p. The answer to a query is a list of timestamps in T where ≥ p% of A has changed by ≥ v values. Our results show that by building...... a small index, we can support queries with a speedup of two to three orders of magnitude compared to motion detection without an index. For high resolution video, the index size is about 20% of the compressed video size....
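
Without the index, such a query can be answered by a direct scan, which is the baseline the reported two-to-three-orders-of-magnitude speedup is measured against. A naive baseline sketch (the frame data and thresholds are hypothetical):

```python
def motion_query(frames, t_range, area, v, p):
    # Return timestamps t in t_range where at least p% of the pixels inside
    # `area` (x0, y0, x1, y1) changed by at least v since frame t-1.
    x0, y0, x1, y1 = area
    total = (x1 - x0) * (y1 - y0)
    hits = []
    for t in range(max(t_range[0], 1), t_range[1] + 1):
        changed = sum(1
                      for y in range(y0, y1)
                      for x in range(x0, x1)
                      if abs(frames[t][y][x] - frames[t - 1][y][x]) >= v)
        if 100 * changed >= p * total:
            hits.append(t)
    return hits

# Three 4x4 frames; frame 2 brightens the top-left 2x2 region by 50.
f0 = [[0] * 4 for _ in range(4)]
f1 = [[0] * 4 for _ in range(4)]
f2 = [[50 if x < 2 and y < 2 else 0 for x in range(4)] for y in range(4)]
print(motion_query([f0, f1, f2], (0, 2), (0, 0, 2, 2), 10, 100))  # [2]
```

The scan touches every pixel of every frame in T; the paper's contribution is an index that avoids exactly this work.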

  3. Diversity-Aware Multi-Video Summarization

    Science.gov (United States)

    Panda, Rameswar; Mithun, Niluthpol Chowdhury; Roy-Chowdhury, Amit K.

    2017-10-01

    Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark dataset, Tour20, that contains 140 videos with multiple human created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 dataset and several other multi-view datasets, we show that the proposed approach clearly outperforms the state-of-the-art methods on the two problems-topic-oriented video summarization and multi-view video summarization in a camera network.

  4. Ontology based approach for video transmission over the network

    OpenAIRE

    Rachit Mohan Garg; Yamini Sood; Neha Tyagi

    2011-01-01

    With the increase in bandwidth and transmission speed over the internet, transmission of multimedia objects like video, audio, and images has become easier. In this paper we provide an approach that can be useful for transmission of video objects over the internet without much fuss. The approach provides an ontology-based framework that is used to establish an automatic deployment of a video transmission system. Further, the video is compressed using the structural flow mechanism tha...

  5. Evaluating and Implementing JPEG XR Optimized for Video Surveillance

    OpenAIRE

    Yu, Lang

    2010-01-01

    This report describes both evaluation and implementation of the new coming image compression standard JPEG XR. The intention is to determine if JPEG XR is an appropriate standard for IP based video surveillance purposes. Video surveillance, especially IP based video surveillance, currently has an increasing role in the security market. To be a good standard for surveillance, the video stream generated by the camera is required to be low bit-rate, low latency on the network and at the same tim...

  6. Color image and video enhancement

    CERN Document Server

    Lecca, Michela; Smolka, Bogdan

    2015-01-01

    This text covers state-of-the-art color image and video enhancement techniques. The book examines the multivariate nature of color image/video data as it pertains to contrast enhancement, color correction (equalization, harmonization, normalization, balancing, constancy, etc.), noise removal and smoothing. This book also discusses color and contrast enhancement in vision sensors and applications of image and video enhancement. It focuses on enhancement of color images/video, addresses algorithms for enhancing color images and video, and presents coverage of super resolution, restoration, inpainting, and colorization.

  7. Region of interest video coding for low bit-rate transmission of carotid ultrasound videos over 3G wireless networks.

    Science.gov (United States)

    Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos

    2007-01-01

    Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region-of-interest-based ultrasound video compression study which shows that a significant reduction in the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.

  8. Perceived Quality of Full HD Video - Subjective Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2016-01-01

    Full Text Available In recent years, interest in multimedia services has become a global trend, and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfection and the efficiency of compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on perceived video quality. The evaluation is done for four full HD sequences that differ in scene content; the distinction is based on the Spatial Information (SI) and Temporal Information (TI) indices of the test sequences. Finally, experimental results show up to 30% bitrate reduction for H.265 and VP9 compared with the reference H.264.

  9. Image compression with Iris-C

    Science.gov (United States)

    Gains, David

    2009-05-01

    Iris-C is an image codec designed for streaming video applications that demand low bit rate, low latency, lossless image compression. To achieve compression and low latency the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video from both the YUV and YCOCG colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low delay syntax codec which is typically regarded as the state-of-the-art low latency, lossless video compressor.

  10. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, it is shown in the results of simulation experiments that the proposed methods can simultaneously gain higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in the situation of low bitrate transmission.

  11. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  12. Design Optimization of a Transonic-Fan Rotor Using Numerical Computations of the Full Compressible Navier-Stokes Equations and Simplex Algorithm

    Directory of Open Access Journals (Sweden)

    M. A. Aziz

    2014-01-01

    Full Text Available The design of a transonic-fan rotor is optimized using numerical computations of the full three-dimensional Navier-Stokes equations. The CFDRC-ACE multiphysics module, which is a pressure-based solver, is used for the numerical simulation. The code is coupled with simplex optimization algorithm. The optimization process is started from a suitable design point obtained using low fidelity analytical methods that is based on experimental correlations for the pressure losses and blade deviation angle. The fan blade shape is defined by its stacking line and airfoil shape which are considered the optimization parameters. The stacking line is defined by lean, sweep, and skews, while blade airfoil shape is modified considering the thickness and camber distributions. The optimization has been performed to maximize the rotor total pressure ratio while keeping the rotor efficiency and surge margin above certain required values. The results obtained are verified with the experimental data of Rotor 67. In addition, the results of the optimized fan indicate that the optimum design is found to be leaned in the direction of rotation and has a forward sweep from the hub to mean section and backward sweep to the tip. The pressure ratio increases from 1.427 to 1.627 at the design speed and mass flow rate.

  13. No Reference Prediction of Quality Metrics for H.264 Compressed Infrared Image Sequences for UAV Applications

    DEFF Research Database (Denmark)

    Hossain, Kabir; Mantel, Claire; Forchhammer, Søren

    2018-01-01

    The framework for this research work is the acquisition of Infrared (IR) images from Unmanned Aerial Vehicles (UAV). In this paper we consider the No-Reference (NR) prediction of Full Reference Quality Metrics for Infrared (IR) video sequences which are compressed and thus distorted by an H.264...... and temporal perceptual information. Those features are then mapped, using a machine learning (ML) algorithm, the Support Vector Regression (SVR), to the quality scores of Full Reference (FR) quality metrics. The novelty of this work is to design a NR framework for the prediction of quality metrics by applying...... with the true FR quality metrics scores of four images metrics: PSNR, NQM, SSIM and UQI and one video metric: VQM. Results show that our technique achieves a fairly reasonable performance. The improved performance obtained in SROCC and LCC is up to 0.99 and the RMSE is reduced to as little as 0.01 between...

  14. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    Science.gov (United States)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross-correlation analysis of digitised grey-scale patterns is based on at least two images which are compared to each other. Comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surroundings of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications that could be used for advancing homeland security. Because the cross-correlation algorithm in some ways imitates some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
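
A minimal version of the underlying operation, an exhaustive 2D cross-correlation search locating the displacement of a pattern between a reference and a comparison image, might look like this. The toy images and search range are assumptions for illustration; microDAC/nanoDAC-style tools use normalized correlation on grey-scale submatrices with subpixel refinement.

```python
def correlate_at(ref, img, dx, dy, size):
    # Correlation score: sum of products between a size x size window of the
    # reference and the comparison image shifted by (dx, dy).
    return sum(ref[y][x] * img[y + dy][x + dx]
               for y in range(size) for x in range(size))

def find_displacement(ref, img, size, max_shift):
    # Exhaustive search for the shift that maximizes the correlation score.
    shifts = [(dx, dy) for dy in range(max_shift + 1) for dx in range(max_shift + 1)]
    return max(shifts, key=lambda s: correlate_at(ref, img, s[0], s[1], size))

# The reference holds a small pattern at (3, 3); the comparison image contains
# the same pattern displaced by (+2, +1).
ref = [[0.0] * 8 for _ in range(8)]
img = [[0.0] * 8 for _ in range(8)]
ref[3][3], ref[3][4] = 1.0, 0.5
img[4][5], img[4][6] = 1.0, 0.5
print(find_displacement(ref, img, size=6, max_shift=2))  # (2, 1)
```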

  15. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  16. Economical Video Monitoring of Traffic

    Science.gov (United States)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. Lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  17. Interactive streaming of stored multiview video using redundant frame structures.

    Science.gov (United States)

    Cheung, Gene; Ortega, Antonio; Cheung, Ngai-Man

    2011-03-01

    While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients can interactively switch views during video playback. Thus, as a client is playing back successive frames (in time) for a given view, it can send a request to the server to switch to a different view while continuing uninterrupted temporal playback. Noting that standard tools for random access (i.e., I-frame insertion) can be bandwidth-inefficient for this application, we propose a redundant representation of I-, P-, and "merge" frames, where each original picture can be encoded into multiple versions, appropriately trading off expected transmission rate with storage, to facilitate view switching. We first present ad hoc frame structures with good performance when the view-switching probabilities are either very large or very small. We then present optimization algorithms that generate more general frame structures with better overall performance for the general case. We show in our experiments that we can generate redundant frame structures offering a range of tradeoff points between transmission and storage, e.g., outperforming simple I-frame insertion structures by up to 45% in terms of bandwidth efficiency at twice the storage cost.

  18. Compressive beamforming

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Mosegaard, Klaus

    2014-01-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  19. Incremental data compression -extended abstract-

    NARCIS (Netherlands)

    Jeuring, J.T.

    1992-01-01

    Data may be compressed using textual substitution. Textual substitution identifies repeated substrings and replaces some or all substrings by pointers to another copy. We construct an incremental algorithm for a specific textual substitution method: coding a text with respect to a dictionary. With

  20. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC)

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high-performance network requires an accurate characterization and modeling of network traffic. The modeling of video frame sizes is normally applied in simulation studies and mathematical analysis, and in generating streams for testing and compliance purposes. Moreover, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as an input for performance modeling of networks. This paper identifies the theoretical distribution that best matches the video trace in terms of its statistical properties, finding the best fit using both a graphical method and a hypothesis test. The data set used in this article consists of layered video traces generated with the Scalable Video Codec (SVC video compression technique from three different movies.
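
As an illustration of fitting a candidate distribution to frame sizes and quantifying the fit, the following sketch fits a lognormal by the method of moments and computes the Kolmogorov-Smirnov-style statistic behind both the graphical comparison and the hypothesis test. The frame sizes and the choice of lognormal are assumptions for illustration, not the paper's finding.

```python
import math

def fit_lognormal(sizes):
    # Method-of-moments fit of a lognormal distribution to frame sizes.
    logs = [math.log(s) for s in sizes]
    mu = sum(logs) / len(logs)
    sigma = (sum((l - mu) ** 2 for l in logs) / len(logs)) ** 0.5
    return mu, sigma

def lognormal_cdf(x, mu, sigma):
    # CDF of the lognormal, written via the error function.
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def ks_statistic(sizes, mu, sigma):
    # Largest gap between the empirical CDF and the fitted CDF.
    xs = sorted(sizes)
    n = len(xs)
    return max(abs((i + 1) / n - lognormal_cdf(x, mu, sigma))
               for i, x in enumerate(xs))

# Hypothetical VBR frame sizes in bytes (large I-frames, smaller P/B-frames).
sizes = [12000, 3000, 1500, 1600, 3100, 12500, 2900, 1400, 1550, 3050]
mu, sigma = fit_lognormal(sizes)
print(0.0 < ks_statistic(sizes, mu, sigma) <= 1.0)  # True
```

In a study like this one, the statistic would be compared across several candidate distributions, with the smallest value (subject to the hypothesis test) declared the best fit.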

  1. Algorithms and Algorithmic Languages.

    Science.gov (United States)

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  2. Analysis by compression

    DEFF Research Database (Denmark)

    Meredith, David

    MEL is a geometric music encoding language designed to allow for musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern discovery algorithm to allow for compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects.

  3. ADAPTIVE STREAMING OVER HTTP (DASH UNTUK APLIKASI VIDEO STREAMING

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper aims to analyze an Internet-based streaming video service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) over an internet network, adapting to the Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the video source to lower bit rates using the H.264 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a Media Presentation Description (MPD) format, known as MPEG-DASH. The MPEG-DASH streaming video runs on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, which gives rise to scalability of the streaming video service on the client side. The main target of the mechanism is a smooth MPEG-DASH streaming video display on the client. The simulation results show that the scalable video streaming scheme based on MPEG-DASH is able to improve the quality of the image display on the client side, where video buffering can be made constant and smooth for the duration of video viewing.

  4. ALGORITHMS FOR AUTOMATIC RUNWAY DETECTION ON VIDEO SEQUENCES

    Directory of Open Access Journals (Sweden)

    A. I. Logvin

    2015-01-01

    Full Text Available The article discusses an algorithm for automatic runway detection in video sequences. The main stages of the algorithm are presented. Some methods to increase the reliability of recognition are described.

  5. Recovery of Compressively Sampled Sparse Signals using Cyclic Matching Pursuit

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Christensen, Mads Græsbøll; Gribonval, Rémi

    We empirically show how applying a pure greedy algorithm cyclically can recover compressively sampled sparse signals as well as other more computationally complex approaches, such as orthogonal greedy algorithms, iterative thresholding, and $\ell_1$-minimization.
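
A pure greedy pursuit and its cyclic refinement can be sketched as follows. The tiny dictionary and signal are hypothetical, and real compressive recovery operates on random measurement matrices rather than this toy overcomplete basis; the sketch only illustrates the cyclic re-fitting step.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(y, D, iters):
    # Pure greedy MP: repeatedly subtract the best-matching unit-norm atom.
    residual = list(y)
    coeffs = [0.0] * len(D)
    for _ in range(iters):
        k = max(range(len(D)), key=lambda j: abs(dot(residual, D[j])))
        c = dot(residual, D[k])
        coeffs[k] += c
        residual = [r - c * d for r, d in zip(residual, D[k])]
    return coeffs, residual

def cyclic_refine(y, D, coeffs, cycles=3):
    # Cyclic MP: revisit each selected atom in turn and re-fit its coefficient
    # against the current residual, which never increases the residual energy.
    residual = [yi - sum(coeffs[j] * D[j][i] for j in range(len(D)))
                for i, yi in enumerate(y)]
    for _ in range(cycles):
        for j in range(len(D)):
            if coeffs[j] == 0.0:
                continue                      # atom was never selected
            residual = [r + coeffs[j] * d for r, d in zip(residual, D[j])]
            coeffs[j] = dot(residual, D[j])   # optimal re-fit for a unit-norm atom
            residual = [r - coeffs[j] * d for r, d in zip(residual, D[j])]
    return coeffs, residual

# Overcomplete dictionary of unit-norm atoms in R^2.
D = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
y = [1.1, 0.8]
coeffs, res = matching_pursuit(y, D, iters=2)
energy_mp = sum(r * r for r in res)
coeffs, res = cyclic_refine(y, D, coeffs)
energy_cmp = sum(r * r for r in res)
print(energy_cmp <= energy_mp)  # True: cyclic re-fitting never hurts
```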

  6. Performance optimization for pedestrian detection on degraded video using natural scene statistics

    Science.gov (United States)

    Winterlich, Anthony; Denny, Patrick; Kilmartin, Liam; Glavin, Martin; Jones, Edward

    2014-11-01

    We evaluate the effects of transmission artifacts such as JPEG compression and additive white Gaussian noise on the performance of a state-of-the-art pedestrian detection algorithm, which is based on integral channel features. Integral channel features combine the diversity of information obtained from multiple image channels with the computational efficiency of the Viola and Jones detection framework. We utilize "quality aware" spatial image statistics to blindly categorize distorted video frames by distortion type and level without the use of an explicit reference. We combine quality statistics with a multiclassifier detection framework for optimal pedestrian detection performance across varying image quality. Our detection method provides statistically significant improvements over current approaches based on single classifiers, on two large pedestrian databases containing a wide variety of artificially added distortion. The improvement in detection performance is further demonstrated on real video data captured from multiple cameras containing varying levels of sensor noise and compression. The results of our research have the potential to be used in real-time in-vehicle networks to improve pedestrian detection performance across a wide range of image and video quality.

  7. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
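The linear prediction model underlying these coders fits coefficients so that each sample is predicted from its predecessors. A textbook Levinson-Durbin recursion over the autocorrelation sequence (a generic sketch, not any particular standard's analysis stage) looks like:

```python
import numpy as np

def lpc(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations.
    Returns polynomial coefficients a with a[0] = 1; the predictor
    coefficients are -a[1:], and err is the final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err              # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err

# Exact autocorrelation of an AR(1) process with coefficient 0.9:
r = 0.9 ** np.arange(3)
a, err = lpc(r, order=2)   # -> a ≈ [1, -0.9, 0]
```

On this synthetic AR(1) autocorrelation the recursion recovers the generating coefficient exactly; speech coders apply the same machinery per frame to short-time autocorrelation estimates.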

  8. SVC VIDEO STREAM ALLOCATION AND ADAPTATION IN HETEROGENEOUS NETWORK

    Directory of Open Access Journals (Sweden)

    E. A. Pakulova

    2016-07-01

    Full Text Available The paper deals with video data transmission in the H.264/SVC format subject to QoS requirements. The Sender-Side Path Scheduling (SSPS) algorithm and the Sender-Side Video Adaptation (SSVA) algorithm were developed. The SSPS algorithm allocates video traffic among several interfaces, while the SSVA algorithm dynamically changes the quality of the video sequence according to the QoS requirements. It was shown that using the two algorithms together makes it possible to aggregate the throughput of the access networks, improve Quality of Experience parameters, and decrease losses in comparison with the Round Robin algorithm. For the evaluation of the proposed solution, a test set-up was made. Trace files with the throughput of existing public networks were used in the experiments; based on this information, the network throughputs were limited and the path losses were set. The results of this research may be used for the study and transmission of video data in heterogeneous wireless networks.
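A toy version of sender-side path scheduling, splitting packets across interfaces in proportion to path throughput (a hypothetical weighted scheme shown only to contrast with plain Round Robin; the paper's SSPS algorithm is more sophisticated):

```python
def allocate(packets, path_rates):
    """Assign each packet to the path with the largest accumulated
    credit, where credit grows in proportion to path throughput.
    With equal rates this degenerates to plain Round Robin."""
    total = float(sum(path_rates))
    shares = [r / total for r in path_rates]
    queues = [[] for _ in path_rates]
    credit = [0.0] * len(path_rates)
    for pkt in packets:
        credit = [c + s for c, s in zip(credit, shares)]
        best = max(range(len(credit)), key=lambda p: credit[p])
        credit[best] -= 1.0          # one packet's worth of service
        queues[best].append(pkt)
    return queues

# Two interfaces with a 3:1 throughput ratio:
print(allocate(list(range(10)), [3, 1]))
```

The credit counters keep the long-run packet split close to the throughput ratio while preserving packet order within each path.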

  9. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

  10. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitization and the internet, however, new opportunities and challenges have emerged for communicating and distributing research results to different target audiences via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to the subject under study, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion around different positioning options: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...

  11. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The workshops were run on December 4, 2016, in Cancun, Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  12. Encoding Concept Prototypes for Video Event Detection and Summarization

    NARCIS (Netherlands)

    Mazloom, M.; Habibian, A.; Liu, D.; Snoek, C.G.M.; Chang, S.F.

    2015-01-01

    This paper proposes a new semantic video representation for few and zero example event detection and unsupervised video event summarization. Different from existing works, which obtain a semantic representation by training concepts over images or entire video clips, we propose an algorithm that

  13. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
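The level-versus-CPU tradeoff for DEFLATE can be observed directly with Python's zlib (an illustrative measurement on synthetic record-like data, not the paper's ROOT benchmark):

```python
import time
import zlib

# Repetitive, record-like payload: highly compressible, like many event fields.
data = b"event:42,px:0.173,py:-1.20,pz:7.77;" * 20000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = (time.perf_counter() - t0) * 1e3
    print(f"level {level}: {len(out):7d} bytes, {dt:.1f} ms")
```

Higher levels never decompress slower in DEFLATE, which is why read-optimized storage sometimes prefers paying more CPU once at write time.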

  14. MRI Images Compression Using Curvelets Transforms

    Science.gov (United States)

    Beladgham, M.; Hacene, I. Boucli; Taleb-Ahmed, A.; Khélif, M.

    2008-06-01

    In the field of medical diagnostics, practitioners have turned increasingly to medical imaging. It is well established that the accuracy and completeness of a diagnosis are initially tied to image quality, and image quality itself depends on a number of factors, primarily the processing an image must undergo to enhance it. We are interested in MRI medical image compression by Curvelets, for which we propose in this paper a compression algorithm based on the FDCT using the wrapping method. To assess the FDCT-based compression algorithm, we compare its results with those obtained using the wavelet and Ridgelet transforms. The results are very satisfactory with respect to compression ratio, computation time, and the quality of the compressed image compared to traditional methods.

  15. Compressed Sensing Of Complex Sinusoids Off The Grid

    Science.gov (United States)

    Ping, Cheng; Liu, Shi; Jiaqun, Zhao

    2015-07-01

    To solve the off-grid problem in compressed sensing, a new reconstruction algorithm for complex sinusoids is proposed. The compressed sensing reconstruction problem is transformed into a joint optimization problem. Based on a coordinate descent approach and a linear estimator, a new iterative algorithm is proposed. Experimental results verify the effectiveness of the proposed method.

  16. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Today, transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing that achieves high resolution transient imaging with a capture process of a few seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window is given a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we can modulate all the frames of data with binary random patterns and later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements corrupted by Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time; it is difficult to reconstruct a high resolution image with a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluate the reconstruction methods using both clean measurements and measurements corrupted by Poisson noise. The results show how integration over the layers influences image quality, and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  17. A new display stream compression standard under development in VESA

    Science.gov (United States)

    Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James

    2017-09-01

    The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trials results are provided using the ISO 29170-2 testing protocol.

  18. Dynamical Functional Theory for Compressed Sensing

    DEFF Research Database (Denmark)

    Cakmak, Burak; Opper, Manfred; Winther, Ole

    2017-01-01

    We introduce a theoretical approach for designing generalizations of the approximate message passing (AMP) algorithm for compressed sensing which are valid for large observation matrices that are drawn from an invariant random matrix ensemble. By design, the fixed points of the algorithm obey...

  19. HEVC performance and complexity for 4K video

    OpenAIRE

    Bross, Benjamin; George, Valeri; Álvarez-Mesa, Mauricio; Mayer, Tobias; Chi, Chi Ching; Brandenburg, Jens; Schierl, Thomas; Marpe, Detlev; Juurlink, Ben

    2013-01-01

    The recently finalized High-Efficiency Video Coding (HEVC) standard was jointly developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to improve the compression performance of current video coding standards by 50%. Especially when it comes to transmitting high resolution video such as 4K over the internet or in broadcast, the 50% bitrate reduction is essential. This paper shows that real-time decoding of 4K video with a frame-level parallel deco...

  20. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  1. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  2. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  3. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    Full Text Available This paper presents effective quality-of-service renegotiating schemes for streaming video. Conventional networks supporting quality of service generally allow a negotiation only at call setup. However, this is not efficient for video applications, since compressed video traffic is statistically nonstationary. Thus, we consider networks that support quality-of-service renegotiation during data transmission and study effective renegotiating schemes for streaming video. The token bucket model, whose parameters are the token filling rate and the token bucket size, is adopted as the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of the compressed video traffic. Two renegotiating approaches are examined: a fixed renegotiating interval and a variable renegotiating interval. Finally, experimental results are provided to show the performance of the proposed schemes.
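The token bucket model above is parameterized by exactly the two quantities the paper renegotiates: the token filling rate and the bucket size. A generic conformance-check sketch:

```python
def conforms(arrivals, rate, bucket_size):
    """Token-bucket conformance check. Tokens accrue at `rate` per unit
    time, capped at `bucket_size`; each arriving unit of traffic must
    find enough tokens. `arrivals` is a list of (time, need) pairs,
    sorted by time; the bucket starts full."""
    tokens, last_t = float(bucket_size), 0.0
    ok = []
    for t, need in arrivals:
        tokens = min(bucket_size, tokens + (t - last_t) * rate)
        last_t = t
        if need <= tokens:
            tokens -= need
            ok.append(True)      # conforming traffic
        else:
            ok.append(False)     # would require a renegotiation or be dropped
    return ok
```

Renegotiation in the paper's sense amounts to changing `rate` and `bucket_size` mid-stream so that the nonstationary video traffic stays conforming without over-reserving.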

  4. DSP Implementation of Image Compression by Multiresolutional Analysis

    Directory of Open Access Journals (Sweden)

    K. Vlcek

    1998-04-01

    Full Text Available Wavelet algorithms allow considerably higher compression rates than Fourier-transform-based methods. The most important property of wavelet transforms is that most of an image's information is captured in a few wavelet coefficients. This has led to successful applications in the compression of single images and of image sequences, in both the space and the time dimensions. Compression algorithms exploit the multi-scale nature of the wavelet transform.
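The energy-compaction idea can be illustrated with a one-level Haar transform and simple coefficient thresholding (a minimal sketch, not the codec used in the paper):

```python
def haar_1d(x):
    """One level of the 1-D Haar transform: pairwise averages and details."""
    avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return avg, det

def compress(x, threshold):
    """Zero out small detail coefficients -- smooth signals concentrate
    their information in the averages, so little is lost."""
    avg, det = haar_1d(x)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    return avg, det

def reconstruct(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out
```

Real codecs iterate the transform on the averages to get the multi-scale decomposition, then entropy-code the sparse detail coefficients.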

  5. Digital Video of Live-Scan Fingerprint Data

    Science.gov (United States)

    NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase)   NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.

  6. General Video Game AI: Learning from Screen Capture

    OpenAIRE

    Kunanusont, Kamolwan; Lucas, Simon M.; Perez-Liebana, Diego

    2017-01-01

    General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for the General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, th...

  7. Discrete cosine and sine transforms general properties, fast algorithms and integer approximations

    CERN Document Server

    Britanak, Vladimir; Rao, K R; Rao, K R

    2006-01-01

    The Discrete Cosine Transform (DCT) is used in many applications by the scientific, engineering and research communities and in data compression in particular. Fast algorithms and applications of the DCT Type II (DCT-II) have become the heart of many established international image/video coding standards. Since then other forms of the DCT and Discrete Sine Transform (DST) have been investigated in detail. This new edition presents the complete set of DCT and DST discrete trigonometric transforms, including their definitions, general mathematical properties, and relations to the optimal Karhune

  8. Algorithmic Relative Complexity

    OpenAIRE

    Daniele Cerra; Mihai Datcu

    2011-01-01

    Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity ...

  9. Compact video synopsis via global spatiotemporal optimization.

    Science.gov (United States)

    Nie, Yongwei; Xiao, Chunxia; Sun, Hanqiu; Li, Ping

    2013-10-01

    Video synopsis aims at providing condensed representations of the video data sets that can easily be captured by digital cameras nowadays, especially daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects if the video is compressed much. In this paper, we propose a novel approach to compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain a high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results show that the compact video synopses we produced can be browsed quickly, preserve relative spatiotemporal relationships, and avoid motion collisions.
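The temporal-shift idea can be illustrated with a greedy one-dimensional sketch (far simpler than the paper's global spatiotemporal optimization, and shown only to make the collision constraint concrete):

```python
def make_synopsis(objects):
    """Greedily shift each object tube to the earliest start time that
    avoids temporal overlap with already-placed tubes occupying the
    same spatial region. `objects` is a list of (length, bbox) tubes;
    returns placed (start, length, bbox) triples."""
    placed = []
    for length, bbox in objects:
        start = 0
        # Advance until no placed tube with the same bbox overlaps in time.
        while any(b == bbox and start < s + l and s < start + length
                  for s, l, b in placed):
            start += 1
        placed.append((start, length, bbox))
    return placed

# Two objects sharing region 'A' must be serialized; 'B' can run in parallel.
print(make_synopsis([(3, 'A'), (2, 'A'), (2, 'B')]))
```

Because every tube is pulled toward time zero, the synopsis length is the latest end time, typically far shorter than the original recording.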

  10. Implementing a hardware-friendly wavelet entropy codec for scalable video

    Science.gov (United States)

    Eeckhaut, Hendrik; Christiaens, Mark; Devos, Harald; Stroobandt, Dirk

    2005-11-01

    In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of implementing scalable multimedia applications in reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and varying power restrictions. The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy-encoded parts of the video stream into wavelet-transformed frames. These frames are decoded bitlayer by bitlayer; the more bitlayers are decoded, the higher the image quality (scalability in image quality). Resolution scalability is obtained as an inherent property of the IDWT. Finally, framerate scalability is achieved through hierarchical motion compensation. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular, we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression, while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.

  11. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management and communication systems, require efficient representation of multimedia content, so semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves segmenting video into key frames, which are generated using scene change detection techniques as well as camera/object motion; video features can then be extracted from the key frames. However, most such research performs off-line video processing, in which the whole video is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done on on-line video processing, which is crucial in video communication applications such as on-line collaboration and news broadcasts. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering, such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
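A common baseline for scene change detection, shown here only for illustration, compares color histograms of consecutive frames (the paper's algorithms operate on-line on compressed multicast bitstreams, which this pixel-domain sketch does not attempt):

```python
def detect_cuts(frames, threshold=0.5):
    """Flag a scene change at frame i when the normalized absolute
    histogram difference from frame i-1 exceeds `threshold`.
    `frames` is a list of color histograms with equal total counts."""
    cuts = []
    for i in range(1, len(frames)):
        a, b = frames[i - 1], frames[i]
        diff = sum(abs(x - y) for x, y in zip(a, b)) / sum(a)
        if diff > threshold:
            cuts.append(i)
    return cuts

# Two identical frames, then one with all mass in a different bin:
print(detect_cuts([[8, 0, 0, 0], [8, 0, 0, 0], [0, 0, 8, 0]]))  # -> [2]
```

The first frame of each detected shot can then serve as its key frame; an on-line system would maintain only the previous histogram rather than the full frame list.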

  12. Spatiotemporal super-resolution for low bitrate H.264 video

    OpenAIRE

    Anantrasirichai, N; Canagarajah, CN

    2010-01-01

    Super-resolution and frame interpolation enhance low-resolution, low-framerate videos. Such techniques are especially important for limited-bandwidth communications. This paper proposes a novel technique to up-scale videos compressed with H.264 at low bit-rates in both the spatial and temporal dimensions. A quantisation noise model designed for low bitrate video is used in the super-resolution estimator, and a weighting map for decreasing the inaccuracy of motion estimation is proposed. Results show ...

  13. Area and power efficient DCT architecture for image compression

    Science.gov (United States)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect; the limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones and requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves image compression performance comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it enables efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
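The multiplierless principle — a transform matrix with entries in {0, ±1} evaluated with additions and subtractions only — can be illustrated with a 4-point Walsh-Hadamard butterfly (an analogy to show the idea; this is not the paper's actual 8 × 8 matrix):

```python
def wht4(x):
    """4-point Walsh-Hadamard transform computed with 8 additions or
    subtractions and no multiplications, via a two-stage butterfly."""
    a, b = x[0] + x[1], x[0] - x[1]   # stage 1: pairwise sums/differences
    c, d = x[2] + x[3], x[2] - x[3]
    return [a + c, b + d, a - c, b - d]   # stage 2: combine the pairs

print(wht4([1, 1, 1, 1]))  # -> [4, 0, 0, 0]: a flat block compacts into one coefficient
```

Like the proposed approximate DCT, the butterfly structure reuses partial sums, which is exactly what makes such transforms cheap in adder-only hardware.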

  14. The effects of wireless channel errors on the quality of real time ultrasound video transmission.

    Science.gov (United States)

    Hernández, Carolina; Alesanco, Alvaro; Abadia, Violeta; García, José

    2006-01-01

    In this paper, the effect of wireless channel conditions on real-time ultrasound video transmission is studied. To simulate transmission through a wireless channel, the Gilbert-Elliot model is used, and the influence of its parameters on transmitted video quality is evaluated. In addition, the efficiency of using both UDP and UDP-Lite as transport protocols has been studied, and the effect of using different video compression rates with the XviD codec is analyzed. The results show that the choice of video compression rate depends on the bit error rate (BER) of the channel, since choosing a high compression bit rate for video transmission through a channel with a high BER can degrade the video quality more than using a lower compression rate. It is also observed that better results are obtained in all the studied cases when UDP is used as the transport protocol.

  15. Video time encoding machines.

    Science.gov (United States)

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.

  16. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped...... with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth...... by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H...

  17. Feature representation and compression for content-based retrieval

    Science.gov (United States)

    Xie, Hua; Ortega, Antonio

    2000-12-01

    In semantic content-based image/video browsing and navigation systems, efficient mechanisms to represent and manage a large collection of digital images/videos are needed. Traditional keyword-based indexing describes the content of multimedia data through annotations such as text or keywords extracted manually by the user from a controlled vocabulary. This textual indexing technique lacks the flexibility of satisfying various kinds of queries requested by database users and also requires huge amount of work for updating the information. Current content-based retrieval systems often extract a set of features such as color, texture, shape motion, speed, and position from the raw multimedia data automatically and store them as content descriptors. This content-based metadata differs from text-based metadata in that it supports wider varieties of queries and can be extracted automatically, thus providing a promising approach for efficient database access and management. When the raw data volume grows very large, explicitly extracting the content-information and storing it as metadata along with the images will improve querying performance since metadata requires much less storage than the raw image data and thus will be easier to manipulate. In this paper we maintain that storing metadata together with images will enable effective information management and efficient remote query. We also show, using a texture classification example, that this side information can be compressed while guaranteeing that the desired query accuracy is satisfied. We argue that the compact representation of the image contents not only reduces significantly the storage and transmission rate requirement, but also facilitates certain types of queries. Algorithms are developed for optimized compression of this texture feature metadata given that the goal is to maximize the classification performance for a given rate budget.

  18. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  20. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

The task of compressing measurement data remains urgent for information-measurement systems. This paper offers the basic principles necessary for designing highly effective systems for the compression of telemetric information. The basis of the offered principles is the representation of a telemetry frame as a whole information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing the offered principles are described. The compression ratio of the offered compression algorithm is about 1.8 times higher than that of a classic algorithm. Thus, the results of this research into the methods and algorithms show their good prospects.
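The frame-as-information-space idea can be illustrated with a minimal sketch (a generic per-channel delta coder, not the authors' algorithm): when channels in a telemetry frame vary slowly, differencing each sample against its predecessor leaves small residuals that a downstream entropy coder can pack far more tightly than the raw samples.

```python
def delta_encode(frame):
    """Per-channel delta encoding of a telemetry frame.

    frame: list of channels, each a list of integer samples.
    Slowly varying (correlated) channels yield small residuals,
    which an entropy coder downstream can pack into fewer bits.
    """
    encoded = []
    for channel in frame:
        prev, deltas = 0, []
        for sample in channel:
            deltas.append(sample - prev)
            prev = sample
        encoded.append(deltas)
    return encoded

def delta_decode(encoded):
    """Exact inverse of delta_encode (lossless round trip)."""
    frame = []
    for deltas in encoded:
        prev, channel = 0, []
        for d in deltas:
            prev += d
            channel.append(prev)
        frame.append(channel)
    return frame
```

The round trip is exactly lossless, which is the property telemetry systems usually require.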

  1. JPEG compression history estimation for color images.

    Science.gov (United States)

    Neelamani, Ramesh; de Queiroz, Ricardo; Fan, Zhigang; Dash, Sanjeeb; Baraniuk, Richard G

    2006-06-01

    We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings-termed its JPEG compression history (CH)-are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG Compression History Estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise-ratio) and simultaneously achieve a small file-size.
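The lattice structure induced by quantization can already be seen in one dimension: each dequantized DCT coefficient is an integer multiple of its quantization step. A toy estimator (an illustration of the principle only, not the paper's dictionary-based or lattice-basis algorithms) recovers the step as the largest candidate that explains every observed coefficient within a rounding tolerance:

```python
def estimate_quant_step(coeffs, max_q=32, tol=0.5):
    """Estimate a 1-D quantization step from dequantized coefficients.

    Searches from the largest candidate step down; returns the largest
    step q such that every coefficient lies within `tol` of a lattice
    point q * k. Falls back to 1 (i.e., no detectable quantization).
    """
    for q in range(max_q, 0, -1):
        if all(abs(c - q * round(c / q)) <= tol for c in coeffs):
            return q
    return 1
```

In the real CHEst problem the lattice is three-dimensional (one step per color plane) and the data are noisy, which is why the paper needs statistical and lattice-basis machinery rather than this brute-force scan.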

  2. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprising a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
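The feedback loop can be sketched as a simple sizing policy (hypothetical parameters, not the paper's actual protocol): the client reports an observed symbol-loss rate, and the server chooses the number of Reed-Solomon parity symbols for the next blocks with some safety headroom, capped by the code's limit.

```python
import math

def parity_count(observed_loss_rate, block_symbols=255, max_parity=32, headroom=1.5):
    """Choose n-k parity symbols for the next RS blocks.

    An RS(n, k) erasure code recovers up to n-k erased symbols per
    block, so the parity budget tracks the expected per-block losses
    (scaled by a safety headroom), capped at the configured maximum.
    """
    expected_losses = observed_loss_rate * block_symbols
    return min(max_parity, math.ceil(headroom * expected_losses))
```

With feedback, a clean channel costs no parity overhead at all, while a lossy one ramps up protection until the cap is reached.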

  3. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera and the different lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities, such as fog and low and high density, in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour and texture patterns of the smoke. We validated our algorithm using experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by labeling every frame as a smoke or non-smoke frame. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e., correctly classified frames) is around 84%, with the sensitivity (i.e., correctly detected smoke frames) and the specificity (i.e., correctly detected non-smoke frames) at 89% and 80%, respectively.
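The motion/colour/texture idea can be sketched in miniature (three hypothetical scalar statistics, not the paper's actual feature set): smoke tends to lower colour saturation, blur spatial gradients, and raise inter-frame differences, and a vector of such statistics per frame is the kind of input that would be fed to an SVM.

```python
def smoke_features(frame, prev_frame):
    """Per-frame feature vector: (mean saturation, mean |gradient|, mean |motion|).

    frame, prev_frame: 2-D lists of (r, g, b) tuples in 0..255.
    Smoke tends to give low saturation, low gradient, high motion.
    """
    n = sat = grad = motion = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            n += 1
            sat += max(r, g, b) - min(r, g, b)           # crude saturation
            lum = (r + g + b) / 3
            if x > 0:
                pr, pg, pb = row[x - 1]
                grad += abs(lum - (pr + pg + pb) / 3)    # horizontal texture
            qr, qg, qb = prev_frame[y][x]
            motion += abs(lum - (qr + qg + qb) / 3)      # temporal change
    return (sat / n, grad / n, motion / n)
```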

  4. Real-time video codec using reversible wavelets

    Science.gov (United States)

    Huang, Gen Dow; Chiang, David J.; Huang, Yi-En; Cheng, Allen

    2003-04-01

This paper describes the hardware implementation of a real-time video codec using reversible wavelets. The TechSoft (TS) real-time video system employs wavelet differencing for inter-frame compression, based on the independent Embedded Block Coding with Optimized Truncation (EBCOT) of the embedded bit stream. This high-performance scalable image compression using EBCOT has been selected as part of the new ISO image compression standard, JPEG2000. The TS real-time video system can process up to 30 frames per second (fps) of the DVD format. In addition, audio signals are processed by the same design for cost reduction. Reversible wavelets are used not only for cost reduction, but also for lossless applications. Design and implementation issues of the TS real-time video system are discussed.
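"Reversible" here means an integer-to-integer wavelet with an exact inverse, which is what makes lossless operation possible. The simplest example is the S-transform (integer Haar via lifting); a one-level, one-dimensional sketch (not the TS system's actual filter bank):

```python
def haar_fwd(x):
    """Reversible integer Haar (S-transform) via lifting; len(x) even.

    Returns (approximation, detail) lists of integers.
    """
    approx, detail = [], []
    for a, b in zip(x[::2], x[1::2]):
        d = b - a            # detail: difference
        s = a + d // 2       # approximation: floor mean
        approx.append(s)
        detail.append(d)
    return approx, detail

def haar_inv(approx, detail):
    """Exact integer inverse of haar_fwd."""
    x = []
    for s, d in zip(approx, detail):
        a = s - d // 2
        x += [a, a + d]
    return x
```

Because every step uses integer arithmetic with an exact inverse, no rounding error accumulates: the inverse reproduces the input bit for bit.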

  5. Compression limits in cascaded quadratic soliton compression

    DEFF Research Database (Denmark)

    Bache, Morten; Bang, Ole; Krolikowski, Wieslaw

    2008-01-01

Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.

  6. Research on Agricultural Surveillance Video of Intelligent Tracking

    Science.gov (United States)

    Cai, Lecai; Xu, Jijia; Liangping, Jin; He, Zhiyong

Intelligent video tracking technology is an important application field of digital video processing and analysis, with a wide range of applications in both civilian and military defense domains. In this paper, we present a systematic study of intelligent tracking in agricultural surveillance video, focusing in particular on the problems of target detection and tracking. We study, respectively, a moving-target detection and tracking algorithm for video sequences with a static background, a rapid detection and tracking algorithm for agricultural production targets, and a Mean Shift-based algorithm for tracking target translation and rotation. Experimental results show that the system can effectively and accurately track targets in surveillance video. Therefore, the study of intelligent video surveillance tracking for agriculture is very meaningful, whether from the point of view of environmental protection, social security, or economic efficiency.
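For the static-background case mentioned above, moving-target detection reduces to background subtraction; a minimal grayscale sketch (illustrative thresholding only, not the paper's detector):

```python
def moving_mask(frame, background, thresh=25):
    """Binary mask of moving pixels against a static background model.

    frame, background: 2-D lists of grayscale values; a pixel is marked
    as moving when its absolute deviation from the model exceeds thresh.
    """
    return [[1 if abs(f - b) > thresh else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

In practice the mask would be cleaned with morphological filtering before handing connected components to a tracker such as Mean Shift.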

  7. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

Full Text Available Video inpainting, or completion, is a vital video improvement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method makes it possible to remove dynamic objects or restore missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used for detection of scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace parts of the frame occupied by objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and a moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method allows restoring missing blocks and removing text from scenes in videos.
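One iteration of the loop described above can be sketched as follows (a simplified stand-in for the paper's texture-and-structure reconstruction): refresh a running-average scene model from the clean pixels, then fill the masked pixels from the model.

```python
def inpaint_step(frame, mask, background, alpha=0.05):
    """One iteration of a simple video-completion loop.

    frame, background: 2-D lists of floats; mask: 2-D lists of 0/1.
    Clean pixels update an exponential running-average scene model;
    masked (tainted) pixels are replaced by the model's value.
    Returns (completed_frame, updated_background).
    """
    out, new_bg = [], []
    for frow, mrow, brow in zip(frame, mask, background):
        orow, brow2 = [], []
        for f, m, b in zip(frow, mrow, brow):
            if m:                               # masked: fill from scene model
                orow.append(b)
                brow2.append(b)
            else:                               # clean: refresh the model
                orow.append(f)
                brow2.append((1 - alpha) * b + alpha * f)
        out.append(orow)
        new_bg.append(brow2)
    return out, new_bg
```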

  8. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

Whether it's photography, computer graphics, publishing, or video, each medium has a defined color space, or gamut, which defines the extent to which a given set of RGB colors can be mixed. When converting from one medium to another, an image must go through some form of conversion which maps colors into the destination color space. The conversion process isn't always straightforward, easy, or reversible. In video, two common analog composite color spaces are Y'UV (used in PAL) and Y'IQ (used in NTSC). These two color spaces have been around since the beginning of color television, and are primarily used in video transmission. Another analog scheme used in broadcast studios is Y', R'-Y', B'-Y' (used in Betacam and MII), which is a component format. Y', R'-Y', B'-Y' maintains the color information of RGB but in less space. From this, the digital component video specification, ITU-R Rec. 601-4 (formerly CCIR Rec. 601), was derived. The color space for Rec. 601 is symbolized as Y'CbCr. Digital video formats such as DV, D1, Digital-S, etc., use Rec. 601 to define their color gamut. Digital composite video (for D2 tape) is digitized analog Y'UV and is seeing decreased use. Because so much information is contained in video, segments of any significant length usually require some form of data compression. All of the above-mentioned analog video formats are a means of reducing the bandwidth of RGB video. Video bulk storage devices, such as digital disk recorders, usually store frames in Y'CbCr format, even if no other compression method is used. Computer graphics and computer animations originate in RGB format because RGB must be used to calculate lighting and shadows. But storage of long animations in RGB format is usually cost prohibitive, and a 30 frame-per-second data rate of uncompressed RGB is beyond most computers. By taking advantage of certain aspects of the human visual system, true-color 24-bit RGB video images can be compressed with minimal loss of visual information.
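The Rec. 601 Y'CbCr representation mentioned above is a fixed linear map from gamma-corrected R'G'B'; a sketch of the standard 8-bit studio-swing conversion:

```python
def rgb_to_ycbcr601(r, g, b):
    """Rec. 601 8-bit studio-swing conversion.

    r, g, b are gamma-corrected values in [0.0, 1.0]; returns (Y', Cb, Cr)
    with Y' in [16, 235] and Cb/Cr in [16, 240], centred at 128.
    """
    y  = 16  +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return round(y), round(cb), round(cr)
```

Black maps to (16, 128, 128) and white to (235, 128, 128); for any neutral grey the colour-difference channels sit exactly at 128, which is one reason Cb/Cr tolerate aggressive subsampling and compression.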

  9. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...... would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show...... this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...

  10. Speech Enhancement Based on Compressed Sensing Technology

    Directory of Open Access Journals (Sweden)

    Huiyan Xu

    2014-10-01

Full Text Available Compressed sensing (CS) is a sampling approach based on signal sparsity that can effectively extract the information contained in a signal. This paper presents a new noisy-speech enhancement method based on the CS process. The algorithm uses the sparsity of speech in the discrete fast Fourier transform (FFT) domain; a complex-domain observation matrix is designed, compressed measurement and de-noising of the noisy speech are performed by soft thresholding, and the speech signal is sparsely reconstructed using the Sparse Reconstruction by Separable Approximation (SpaRSA) algorithm, improving speech enhancement. Experimental results show that the algorithm performs de-noising, compression, and reconstruction of the noisy signal, the SNR margin is greatly improved, and background noise is suppressed more effectively.
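The de-noising step rests on the soft-threshold (shrinkage) operator, which zeroes small transform coefficients and shrinks the rest toward zero; a minimal sketch of the operator used inside SpaRSA-style reconstruction:

```python
import math

def soft_threshold(coeffs, t):
    """Shrinkage operator: sign(x) * max(|x| - t, 0) per coefficient.

    Coefficients below the threshold t (mostly noise, under a sparsity
    assumption) collapse to zero; the rest shrink toward zero by t.
    """
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in coeffs]
```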

  11. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali

    2013-09-22

Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration, based on L2-norm minimization of the misfit function, tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied a compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  12. Design and Implementation of Video Shot Detection on Field Programmable Gate Arrays

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-09-01

Full Text Available Video has become an interactive medium of communication in everyday life. The sheer volume of video makes it extremely difficult to browse through and find the required data. Hence extraction of key frames from the video, which represent the abstract of the entire video, becomes necessary. The aim of video shot detection is to find the positions of the shot boundaries, so that key frames can be selected from each shot for subsequent processing such as video summarization, indexing, etc. For most surveillance applications like video summary, face recognition, etc., the hardware (real-time) implementation of these algorithms becomes necessary. Here in this paper we present the architecture for simultaneous accessing of consecutive frames, which are then used for the implementation of various video shot detection algorithms. We also present the real-time implementation of three video shot detection algorithms using the above-mentioned architecture on FPGAs (Field Programmable Gate Arrays).
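A common software baseline for shot (cut) detection, useful as a reference point for such hardware designs, is the histogram difference between consecutive frames (a generic sketch, not the paper's FPGA architecture):

```python
def shot_boundaries(frames, bins=16, threshold=0.5):
    """Cut positions via L1 distance between normalized grayscale
    histograms of consecutive frames.

    frames: list of frames, each a flat list of pixel values in 0..255.
    A frame index i is reported when the histogram distance between
    frame i-1 and frame i exceeds the threshold.
    """
    def hist(f):
        h = [0] * bins
        for p in f:
            h[p * bins // 256] += 1
        return [c / len(f) for c in h]

    cuts, prev = [], hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = hist(f)
        if sum(abs(a - b) for a, b in zip(prev, h)) / 2 > threshold:
            cuts.append(i)
        prev = h
    return cuts
```

Histograms are insensitive to motion within a shot but jump sharply at a cut, which is what makes this cheap statistic effective.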

  13. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

Full Text Available Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important for education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges in video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework, which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  14. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.
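The vertical-histogram partitioning step can be sketched as follows (illustrative only; the validation of orientations against light-source regions is omitted): project the binary object mask onto its columns and split the object at the deepest interior valley, which typically separates a body from an attached cast shadow.

```python
def vertical_histogram(mask):
    """Column-wise foreground pixel counts of a binary object mask."""
    return [sum(col) for col in zip(*mask)]

def split_column(mask):
    """Index of the deepest interior valley of the vertical histogram,
    where the foreground object is partitioned."""
    hist = vertical_histogram(mask)
    interior = hist[1:-1]            # ignore the outermost columns
    return 1 + interior.index(min(interior))
```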

  15. Data Partitioning Technique for Improved Video Prioritization

    Directory of Open Access Journals (Sweden)

    Ismail Amin Ali

    2017-07-01

Full Text Available A compressed video bitstream can be partitioned according to the coding priority of the data, allowing prioritized wireless communication or selective dropping in a congested channel. Building on data partitioning in the H.264/Advanced Video Coding (AVC) codec, this paper introduces a further sub-partition of one of the codec's three data partitions. Results show a 5 dB improvement in Peak Signal-to-Noise Ratio (PSNR) through this innovation. In particular, the data partition containing intra-coded residuals is sub-divided into data from those macroblocks (MBs) naturally intra-coded and those MBs forcibly inserted for non-periodic intra-refresh. Interactive user-to-user video streaming can benefit, since there HTTP adaptive streaming is inappropriate and the High Efficiency Video Coding (HEVC) codec is too energy-demanding.

  16. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to obtain the fused image. The simulations demonstrate that the combined sparsifying transforms achieve better results, in terms of both subjective visual effect and objective evaluation indices, than a single sparsifying transform for compressive image fusion.

  17. Proxy-based Video Transmission: Error Resiliency, Resource Allocation, and Dynamic Caching

    OpenAIRE

    Tu, Wei

    2009-01-01

    In this dissertation, several approaches are proposed to improve the quality of video transmission over wired and wireless networks. To improve the robustness of video transmission over error-prone mobile networks, a proxy-based reference picture selection scheme is proposed. In the second part of the dissertation, rate-distortion optimized rate adaptation algorithms are proposed for video applications over congested network nodes. A segment-based proxy caching algorithm for video-on-demand a...

  18. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  19. Display device-adapted video quality-of-experience assessment

    Science.gov (United States)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.

  20. Algorithms Introduction to Algorithms

    Indian Academy of Sciences (India)

R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 20-27. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/01/0020-0027

  1. Double HEVC Compression Detection with Different Bitrates Based on Co-occurrence Matrix of PU Types and DCT Coefficients

    Directory of Open Access Journals (Sweden)

    Li Zhao-Hong

    2017-01-01

Full Text Available Detection of double video compression is of particular importance in video forensics, as it partly reveals the video processing history. In this paper, a double compression detection method is proposed for HEVC, the latest video coding standard. Firstly, four 5×5 co-occurrence matrices are derived from DCT coefficients along four directions, i.e., horizontal, vertical, main diagonal and minor diagonal. Then four 4×4 co-occurrence matrices are derived from PU types, which are innovative features of HEVC rarely utilized by researchers. Finally, these two feature sets are combined and sent to a support vector machine (SVM) to detect re-compressed videos. In order to reduce the feature dimension, only the co-occurrence matrices of DCT coefficients and PU types in the horizontal direction are adopted to identify whether the video has undergone double compression. Experimental results show the effectiveness of the proposed scheme and its robustness against frame deletion.
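The co-occurrence construction can be sketched generically (the paper applies it to HEVC DCT coefficients and, analogously, to PU-type maps; this is not the paper's exact feature): clip each value to a small range, then count ordered pairs of horizontally adjacent values.

```python
def cooccurrence_h(row, radius=2):
    """(2*radius+1) x (2*radius+1) co-occurrence matrix of horizontally
    adjacent values, each clipped to [-radius, radius] before counting.

    With radius=2 this yields the 5x5 matrix used for DCT coefficients.
    """
    size = 2 * radius + 1
    mat = [[0] * size for _ in range(size)]
    clip = lambda v: max(-radius, min(radius, v)) + radius  # shift to 0..size-1
    for a, b in zip(row, row[1:]):
        mat[clip(a)][clip(b)] += 1
    return mat
```

Flattening such matrices (here 25 entries per direction) gives the fixed-length vectors handed to the SVM.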

  2. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),

  3. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on the power-distortion trade-off. We proposed an approach......Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify...... for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing decreased power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including...

  4. New architecture for MPEG video streaming system with backward playback support.

    Science.gov (United States)

    Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi

    2007-09-01

MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions such as backward playback in MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation. A straightforward implementation of backward playback is to transmit and decode the whole group-of-pictures (GOP), store all the decoded frames in the decoder buffer, and play the decoded frames in reverse order. This approach requires a significant buffer in the decoder, which depends on the GOP size, to store the decoded frames, and may not be feasible under severely constrained memory requirements. Another alternative is to decode the GOP up to the current frame to be displayed, and then go back to decode the GOP again up to the next frame to be displayed. This approach does not need the huge buffer, but requires much higher network bandwidth and decoder complexity. In this paper, we propose a macroblock-based algorithm for an efficient implementation of an MPEG video streaming system that provides backward playback over a network with minimal requirements on the network bandwidth and the decoder complexity. The proposed algorithm classifies macroblocks in the requested frame into backward macroblocks (BMBs) and forward/backward macroblocks (FBMBs). Two macroblock-based techniques are used to manipulate different types of macroblocks in the compressed domain, and the server then sends the processed macroblocks to the client machine. For BMBs, a VLC-domain technique is adopted to reduce the number of macroblocks that need to be decoded by the decoder and the number of bits that need to be sent over the network in the backward-play operation. We then propose a newly mixed VLC/DCT-domain technique to handle FBMBs in order to further reduce the computational complexity of the decoder. With these compressed-domain techniques, the

  5. Fast mode decision for the H.264/AVC video coding standard based on frequency domain motion estimation

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-07-01

The H.264 video coding standard achieves high compression performance and image quality at the expense of increased encoding complexity. Consequently, several fast mode decision and motion estimation techniques have been developed to reduce the computational cost. These approaches reduce the computation time at the expense of reduced image quality and/or increased bitrate. In this paper we propose a novel fast mode decision and motion estimation technique. The algorithm utilizes preprocessing frequency-domain motion estimation in order to accurately predict the best mode and the search range. Experimental results show that the proposed algorithm significantly reduces the motion estimation time, by up to 97%, while maintaining similar rate-distortion performance when compared to the Joint Model software.

  6. A Novel Quantum Video Steganography Protocol with Large Payload Based on MCQI Quantum Video

    Science.gov (United States)

    Qu, Zhiguo; Chen, Siyi; Ji, Sai

    2017-11-01

    As one of the important multimedia forms in quantum networks, quantum video is attracting more and more attention from experts and scholars around the world. A secure quantum video steganography protocol with large payload, based on the video strip encoding method called MCQI (Multi-Channel Quantum Images), is proposed in this paper. The new protocol randomly embeds the secret information, itself in the form of quantum video, into a quantum carrier video on the basis of unique features of the video frames. Because it embeds quantum video as the secret information for covert communication, its capacity is greatly expanded compared with previous quantum steganography schemes. Meanwhile, the new protocol also achieves good security and imperceptibility by virtue of the randomization of embedding positions and efficient use of redundant frames. Furthermore, the receiver can extract the secret information from the stego video without retaining the original carrier video, and can afterwards restore the original quantum video. Simulation and experimental results show that the protocol not only has good imperceptibility and high security, but also a large payload.

  7. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... NEI YouTube Videos > NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration ... Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: ...

  9. NEI You Tube Videos: Amblyopia

    Science.gov (United States)

    ... YouTube Videos > NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia ... of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: Amblyopia ...

  10. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Mario Vento

    2010-01-01

    Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.

  11. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Conte Donatello

    2010-01-01

    Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.

  12. A Method for Counting Moving People in Video Surveillance Videos

    Science.gov (United States)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.
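
    The regression step, mapping a scene feature (here, the number of detected interest points) to a people count, can be illustrated without an SVM library; the least-squares fit below is a simplified stand-in for the paper's ϵ-SVR on SURF features, and the calibration numbers are invented for illustration.

```python
import numpy as np

def fit_count_model(feature_counts, people_counts):
    # The paper trains an epsilon-SVR on SURF-based features; as a
    # library-free stand-in, fit a least-squares line mapping the
    # number of detected interest points to the number of people.
    A = np.vstack([feature_counts, np.ones(len(feature_counts))]).T
    slope, intercept = np.linalg.lstsq(A, np.asarray(people_counts, dtype=float),
                                       rcond=None)[0]
    return lambda n: slope * n + intercept

# Toy calibration data: (interest points detected, ground-truth people count).
model = fit_count_model([20, 40, 60, 80], [1, 2, 3, 4])
```

The real method additionally weights features to compensate for perspective and partial occlusion, which a plain linear fit ignores.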

  13. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five videos was designed ... Activity Role of Body Weight in Osteoarthritis Educational Videos for Patients Rheumatoid Arthritis Educational Video Series Psoriatic ...

  14. Jailed - Video

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on each unique individual, and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  15. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  16. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces excellent compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftones. Rissanen's algorithm `Context' is presented in a new...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that by utilizing the physiological process of writing one can synthesize cursive...
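
    The context (template) modelling idea behind tree coders such as JBIG can be illustrated with a minimal adaptive binary model; the sketch below estimates the ideal code length of a bi-level image under a three-pixel causal template with a Krichevsky-Trofimov estimator. It is an illustration of context modelling, not the JBIG or `Context' algorithm itself.

```python
import numpy as np
from collections import defaultdict

def context_code_length(img):
    # Adaptive binary context model with a 3-pixel causal template
    # (west, north, north-west); pixels outside the image count as 0.
    h, w = img.shape
    counts = defaultdict(lambda: [0.5, 0.5])  # Krichevsky-Trofimov prior
    bits = 0.0
    for y in range(h):
        for x in range(w):
            ctx = (img[y, x - 1] if x else 0,
                   img[y - 1, x] if y else 0,
                   img[y - 1, x - 1] if x and y else 0)
            c = counts[ctx]
            p = c[img[y, x]] / (c[0] + c[1])  # predicted probability
            bits += -np.log2(p)               # ideal (arithmetic) code length
            c[img[y, x]] += 1                 # update the model
    return bits
```

A blank page compresses to a handful of bits under this model, while incompressible noise costs roughly one bit per pixel, which is the gap a tree coder exploits by choosing a good template per image.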

  17. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  18. A low-light-level video recursive filtering technology based on the three-dimensional coefficients

    Science.gov (United States)

    Fu, Rongguo; Feng, Shu; Shen, Tianyu; Luo, Hao; Wei, Yifang; Yang, Qi

    2017-08-01

    Low-light-level video is an important means of observation under low illumination conditions, but its SNR is low and the resulting observations are poor, so noise reduction must be carried out. Low-light-level video noise mainly includes Gaussian noise, Poisson noise, impulse noise, fixed-pattern noise and dark-current noise. In order to remove the noise in low-light-level video effectively and improve its quality, this paper presents an improved time-domain recursive filtering algorithm with three-dimensional filtering coefficients. The algorithm exploits the temporal correlation of the video sequence: using motion estimation, it adaptively adjusts the local window filtering coefficients in space and time, so that different weighting coefficients are used for different pixels within the same frame. This reduces motion trailing while preserving the noise reduction effect. Before noise reduction, a preprocessing step based on a box filter is used to reduce the complexity of the algorithm and improve its speed. To enhance the visual quality of low-light-level video, an image enhancement algorithm based on the guided image filter is used to sharpen edge details. Experimental results show that the hybrid algorithm removes the noise in low-light-level video effectively, enhances edge features and improves the visual quality of the video.
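
    The temporal core of such a filter can be sketched in a few lines of NumPy; the two-level motion-adaptive coefficient below is a simplified stand-in for the paper's three-dimensional coefficients, and the thresholds are illustrative.

```python
import numpy as np

def recursive_denoise(frames, alpha_min=0.2, alpha_max=1.0, thresh=0.1):
    # Temporal IIR filter: out[t] = alpha * x[t] + (1 - alpha) * out[t-1].
    # Per-pixel alpha rises toward 1 where the frame difference suggests
    # motion, so moving edges are not smeared (less trailing), while
    # static regions average heavily over time and suppress noise.
    out = frames[0].astype(float)
    result = [out.copy()]
    for frame in frames[1:]:
        frame = frame.astype(float)
        motion = np.abs(frame - out)
        alpha = np.where(motion > thresh, alpha_max, alpha_min)
        out = alpha * frame + (1.0 - alpha) * out
        result.append(out.copy())
    return result
```

On a static scene the steady-state noise variance drops to roughly alpha/(2 - alpha) of the input variance, which is the attraction of temporal recursion over purely spatial smoothing.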

  19. Summarization of human activity videos via low-rank approximation

    OpenAIRE

    Mademlis, Ioannis; Tefas, Anastasios; Nikolaidis, Nikos; Pitas, Ioannis

    2017-01-01

    Summarization of videos depicting human activities is a timely problem with important applications, e.g., in the domains of surveillance or film/TV production, that steadily becomes more relevant. Research on video summarization has mainly relied on global clustering or local (frame-by-frame) saliency methods to provide automated algorithmic solutions for key-frame extraction. This work presents a method based on selecting as key-frames video frames able to optimally reconstruct the entire vi...
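
    The underlying idea, choosing key-frames whose span best reconstructs all frames, can be sketched as a greedy column-selection procedure over the frame matrix; this illustrates the general low-rank approach, not the authors' exact method.

```python
import numpy as np

def select_keyframes(frames, k):
    # Greedily pick k frames (columns) that minimise the residual of
    # projecting every frame onto the span of the selected ones.
    X = np.stack([f.ravel().astype(float) for f in frames], axis=1)
    R = X.copy()
    selected = []
    for _ in range(k):
        energies = np.sum(R * R, axis=0)
        j = int(np.argmax(energies))        # column with most unexplained energy
        if energies[j] < 1e-12:             # remaining frames already explained
            break
        selected.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)             # deflate the chosen direction
    return sorted(selected)
```

With two visually distinct shots, one representative frame per shot is selected, since each deflation removes everything the chosen frame can explain.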

  20. The Research and Improvement of SDT Algorithm for Historical Data in SCADA

    Directory of Open Access Journals (Sweden)

    Xu Xu-Dong

    2017-01-01

    Full Text Available With the rapid development of the Internet of Things and big data technology, the amount of data collected by SCADA (Supervisory Control And Data Acquisition) systems is growing exponentially, and the traditional SDT algorithm can no longer meet the requirements of SCADA systems for historical data compression. In this paper, the ASDT (Advanced SDT) algorithm, based on the SDT algorithm, is proposed and implemented in the Java language, building on a thorough study of data compression methods, especially Swinging Door Trending. The ASDT algorithm achieves data compression by fitting the data with sine curves, and achieves better compression results than the traditional SDT algorithm. The experimental results show that, compared with the traditional SDT algorithm, the ASDT algorithm can improve the compression ratio without a significant increase in compression error; the compression ratio is increased by nearly 50%.
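
    The Swinging Door Trending step that ASDT builds on can be sketched as follows; this is a textbook SDT routine (store a point only when the bounding "doors" around the last stored point close), not the paper's sine-fitting ASDT variant.

```python
def sdt_compress(times, values, dev):
    # Swinging Door Trending: between stored points, all samples must fit
    # inside a corridor of half-width `dev` around a straight line. The
    # upper/lower door slopes tighten as samples arrive; when they cross,
    # the corridor has collapsed and the previous point must be stored.
    kept = [0]
    t0, v0 = times[0], values[0]
    max_slope, min_slope = float("-inf"), float("inf")
    for i in range(1, len(times)):
        dt = times[i] - t0
        max_slope = max(max_slope, (values[i] - v0 - dev) / dt)
        min_slope = min(min_slope, (values[i] - v0 + dev) / dt)
        if max_slope > min_slope:
            # Doors closed: store the previous point and restart from it.
            kept.append(i - 1)
            t0, v0 = times[i - 1], values[i - 1]
            max_slope = (values[i] - v0 - dev) / (times[i] - t0)
            min_slope = (values[i] - v0 + dev) / (times[i] - t0)
    kept.append(len(times) - 1)
    return kept
```

A perfectly linear trend compresses to its two endpoints regardless of length, which is why SDT-family algorithms suit slowly varying SCADA process data.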