WorldWideScience

Sample records for video coding algorithms

  1. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.
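
    The paper defines its own specific situations relating macroblock motion characteristics to the mode selection and inter-view prediction processes; as a rough illustration of the general idea only (the function names and threshold below are invented, not taken from the paper), a motion-homogeneity test might compare the spread of neighbouring motion vectors against a threshold and skip the costly inter-view search when motion is uniform:

```python
# Hypothetical sketch of a motion-homogeneity test (names and threshold are
# illustrative, not from the paper): if the motion vectors of a macroblock's
# neighbours are near-identical, the region is treated as homogeneous and the
# inter-view prediction search can be skipped.

def is_motion_homogeneous(neighbour_mvs, threshold=1.0):
    """neighbour_mvs: list of (mv_x, mv_y) from already-coded neighbours."""
    if len(neighbour_mvs) < 2:
        return False
    mean_x = sum(mv[0] for mv in neighbour_mvs) / len(neighbour_mvs)
    mean_y = sum(mv[1] for mv in neighbour_mvs) / len(neighbour_mvs)
    # Average Euclidean spread of the MVs around their mean.
    spread = sum(((mv[0] - mean_x) ** 2 + (mv[1] - mean_y) ** 2) ** 0.5
                 for mv in neighbour_mvs) / len(neighbour_mvs)
    return spread < threshold

def select_prediction(neighbour_mvs):
    # Homogeneous motion -> temporal prediction only; otherwise also
    # search the inter-view reference.
    if is_motion_homogeneous(neighbour_mvs):
        return ["temporal"]
    return ["temporal", "inter-view"]
```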

  2. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

The joint collaborative team on video coding (JCT-VC) is developing the next-generation video coding standard, called high efficiency video coding (HEVC). In HEVC, the block structure comprises three units: the coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, analogous to the macroblock (MB). Each CU is recursively split into four equal-sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares rate-distortion (RD) costs and determines the depth using the compared information. Moreover, to speed up encoding, an efficient merge-SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM 10.0) reference software. Compared to the HM 10.0 encoder, only a small BD-rate loss of 0.17% is observed, without significant loss of image quality.
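
    As a hedged sketch of the general early-termination idea (not the paper's actual decision rules), a CU depth decision can stop recursing as soon as splitting no longer lowers an estimated RD cost:

```python
# Illustrative sketch of an early CU-depth decision: recursion into four
# sub-CUs stops as soon as the rate-distortion (RD) cost of coding the whole
# block is not improved by splitting. The rd_cost callable stands in for the
# encoder's RD evaluation and is an invented abstraction.

def decide_depth(rd_cost, depth=0, max_depth=3):
    """rd_cost(depth) -> estimated RD cost of coding the block at this depth.
    Returns the chosen quad-tree depth."""
    if depth == max_depth:
        return depth
    here = rd_cost(depth)
    deeper = rd_cost(depth + 1)
    if deeper >= here:          # splitting does not pay off: stop early
        return depth
    return decide_depth(rd_cost, depth + 1, max_depth)
```

In the real encoder the cost of the next depth is only known after evaluating it, so the saving comes from pruning the depths below the early-termination point.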

  3. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

Multiview Video Coding (MVC) is an extension of the H.264/MPEG-4 AVC video compression standard, developed jointly by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras in a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies; however, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers, the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  4. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  5. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, complexity is measured in terms of encoding time. First, the complexity target is mapped to a set of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, an optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (down to 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed over 18 sequences when the target complexity is around 40%.
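
    A minimal sketch of complexity control by mode pruning, with invented mode names and relative cost figures rather than the paper's measured statistics: each candidate prediction mode carries an estimated time cost, and only the cheapest combination that fits the remaining budget is evaluated.

```python
# Hedged sketch of complexity control by adaptive mode pruning. The mode list
# and cost numbers are invented for illustration; a real encoder would derive
# them from offline statistics as the paper describes.

MODES = [("SKIP", 1.0), ("Merge", 2.0), ("Inter_2Nx2N", 5.0),
         ("Inter_NxN", 9.0), ("Intra", 4.0)]  # (mode, relative time cost)

def select_modes(budget):
    """Greedily admit modes in increasing cost order until the budget is spent."""
    chosen, spent = [], 0.0
    for mode, cost in sorted(MODES, key=lambda m: m[1]):
        if spent + cost <= budget:
            chosen.append(mode)
            spent += cost
    return chosen
```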

  6. 3D video coding for embedded devices energy efficient algorithms and architectures

    CERN Document Server

    Zatt, Bruno; Bampi, Sergio; Henkel, Jörg

    2013-01-01

This book shows readers how to develop energy-efficient algorithms and hardware architectures to enable high-definition 3D video coding on resource-constrained embedded devices. Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D video-specific coding tools to increase compression efficiency at the cost of increased computational complexity and, consequently, energy consumption. This book enables readers to reduce multiview video coding energy consumption by jointly considering the algorithmic and architectural levels. Coverage includes an introduction to 3D video and an extensive discussion of the current state of the art of 3D video coding, as well as energy-efficient algorithms and hardware architectures for 3D video coding. Discusses challenges related to performance and power in 3D video coding for embedded devices; describes energy-efficient algorithms for reduci...

  7. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementations as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of proof-of-concept instances through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
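
    The classic three-step search mentioned above can be sketched as follows (a straightforward textbook variant of the algorithm, not the authors' VLSI implementation):

```python
# Three-step search (TSS) block motion estimation: probe nine points around
# the current centre, move to the best one, halve the step, and repeat.

def sad(block, frame, x, y):
    """Sum of absolute differences between block and the frame patch at (x, y)."""
    h, w = len(block), len(block[0])
    return sum(abs(block[i][j] - frame[y + i][x + j])
               for i in range(h) for j in range(w))

def three_step_search(block, frame, x0, y0, step=4):
    """Returns the displacement (dx, dy) from (x0, y0) with minimum SAD."""
    h, w = len(block), len(block[0])
    fh, fw = len(frame), len(frame[0])
    bx, by = x0, y0
    while step >= 1:
        best_cost, best_x, best_y = sad(block, frame, bx, by), bx, by
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = bx + dx, by + dy
                if 0 <= x <= fw - w and 0 <= y <= fh - h:
                    c = sad(block, frame, x, y)
                    if c < best_cost:
                        best_cost, best_x, best_y = c, x, y
        bx, by = best_x, best_y
        step //= 2
    return bx - x0, by - y0
```

With the initial step of 4, three refinement rounds (steps 4, 2, 1) reach any displacement up to ±7 while evaluating far fewer candidates than a full search.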

  8. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

A low-complexity saliency detection algorithm for perceptual video coding is proposed, in which low-level encoding information is adopted as the basis of visual perception analysis. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis; it also performs better in saliency detection for video and realizes fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit rates or combined with other algorithms in fast video coding.
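
    A rough, hypothetical illustration of the temporal-saliency step (a 1-D stand-in for the 2-D MV field, with an invented threshold): median filtering suppresses isolated MV noise before the magnitudes are thresholded.

```python
# Hypothetical 1-D sketch of MV-based temporal saliency detection: median-
# filter motion-vector magnitudes to remove isolated noise spikes, then mark
# positions whose filtered magnitude exceeds a threshold as salient. The
# window size and threshold are invented for illustration.

def temporal_saliency(mv_mags, window=3, threshold=2.0):
    half = window // 2
    salient = []
    for i in range(len(mv_mags)):
        lo, hi = max(0, i - half), min(len(mv_mags), i + half + 1)
        med = sorted(mv_mags[lo:hi])[(hi - lo) // 2]
        salient.append(med > threshold)
    return salient
```

Note how the isolated spike of magnitude 9 below is rejected as MV noise, while the sustained run of large magnitudes is kept as salient motion.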

  9. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra-high-definition videos. One of the important features of HEVC is the adoption of a quad-tree based video coding structure, in which each incoming frame is represented as a set of non-overlapped coding tree blocks (CTBs) through a variable-block-size prediction and coding process. To do this, each CTB is recursively partitioned into coding units (CUs), prediction units (PUs) and transform units (TUs) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of quad-tree partitioning for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the maximum possible depth of quad-tree partitioning of the CTB. With the constrained partition depth, the proposed method can save considerable encoding time. Experimental results with HM10.1 show that the average time saving is about 13.4% for a BD-rate increase of only 0.02%, a smaller performance degradation than that of other similar methods.
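
    The Sobel-based constraint can be sketched as follows; the threshold values mapping edge strength to a maximum depth are invented for illustration and are not those of the paper:

```python
# Sketch of constraining the quad-tree partition depth by edge strength:
# flat CTBs get depth 0 (no splitting), strongly edged CTBs keep the full
# depth. Thresholds are illustrative, not the paper's.

def sobel_edge_strength(ctb):
    """Mean Sobel gradient magnitude over the interior of a CTB (2-D list)."""
    h, w = len(ctb), len(ctb[0])
    total, count = 0.0, 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (ctb[i-1][j+1] + 2*ctb[i][j+1] + ctb[i+1][j+1]
                  - ctb[i-1][j-1] - 2*ctb[i][j-1] - ctb[i+1][j-1])
            gy = (ctb[i+1][j-1] + 2*ctb[i+1][j] + ctb[i+1][j+1]
                  - ctb[i-1][j-1] - 2*ctb[i-1][j] - ctb[i-1][j+1])
            total += (gx * gx + gy * gy) ** 0.5
            count += 1
    return total / count if count else 0.0

def max_partition_depth(ctb, thresholds=(2.0, 10.0, 40.0)):
    """Map edge strength to an allowed maximum quad-tree depth (0..3)."""
    strength = sobel_edge_strength(ctb)
    depth = 0
    for t in thresholds:
        if strength >= t:
            depth += 1
    return depth
```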

  10. A Fast PDE Algorithm Using Adaptive Scan and Search for Video Coding

    Science.gov (United States)

    Kim, Jong-Nam

In this paper, we propose an algorithm that reduces unnecessary computations while keeping the same prediction quality as the full search algorithm. In the proposed algorithm, unnecessary computations are reduced efficiently by calculating an initial matching error point from the first 1/N partial errors, which increases the probability of hitting the minimum error point as early as possible. Our algorithm decreases the computational load by about 20% relative to the conventional PDE algorithm without any degradation of prediction quality. Our algorithm should be useful in real-time video coding applications using the MPEG-2/4 AVC standards.
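
    The core PDE idea, accumulating the matching error row by row and aborting a candidate once its running sum exceeds the best full error found so far, can be sketched as follows (a generic PDE illustration, not the paper's adaptive scan-and-search order):

```python
# Partial distortion elimination (PDE): compute SAD incrementally and reject
# a candidate as soon as its partial sum already exceeds the current best.
# The final minimum is identical to that of a plain full search.

def pde_sad(block, cand, best_so_far):
    """Row-wise SAD with early abort against the best full SAD so far."""
    partial = 0
    for brow, crow in zip(block, cand):
        partial += sum(abs(a - b) for a, b in zip(brow, crow))
        if partial >= best_so_far:
            return None          # candidate rejected early
    return partial

def pde_full_search(block, candidates):
    """Full search over candidate blocks; returns (best index, best SAD)."""
    best_cost, best_idx = float("inf"), -1
    for k, cand in enumerate(candidates):
        cost = pde_sad(block, cand, best_cost)
        if cost is not None:
            best_cost, best_idx = cost, k
    return best_idx, best_cost
```

The paper's contribution is to make the early `best_so_far` as tight as possible (via the first 1/N partial errors and an adaptive scan) so that more candidates are rejected after only a few rows.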

  11. A high-efficient significant coefficient scanning algorithm for 3-D embedded wavelet video coding

    Science.gov (United States)

    Song, Haohao; Yu, Songyu; Song, Li; Xiong, Hongkai

    2005-07-01

3-D embedded wavelet video coding (3-D EWVC) algorithms have become a vital scheme for state-of-the-art scalable video coding. A major objective in a progressive transmission scheme is to transmit first the most important information, which yields the largest distortion reduction, so traditional 3-D EWVC algorithms scan coefficients in bit-plane order. For significant bit information within the same bit-plane, however, these algorithms neglect that coefficients in different subbands contribute differently to distortion. In this paper, we analyze the differing contributions to distortion of significant information bits of the same bit-plane in different subbands and propose a highly efficient significant coefficient scanning algorithm. Experimental results with 3-D SPIHT and 3-D SPECK show that the proposed scanning algorithm improves the compression ability of traditional 3-D EWVC algorithms and yields reconstructed videos with higher PSNR and better visual quality at the same bit rate, compared to the original significant coefficient scanning algorithms.
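
    As an illustrative sketch of subband-aware scanning (the subband labels and weights below are invented, not the paper's): within one bit-plane, significant coefficients can be emitted in order of their subband's assumed distortion impact instead of a fixed subband order.

```python
# Hypothetical sketch of subband-weighted scanning: coefficients found
# significant in the current bit-plane are transmitted in decreasing order of
# their subband's distortion weight, so the largest distortion reductions
# arrive first. Labels and weights are invented for illustration.

SUBBAND_WEIGHT = {"LLL": 8.0, "LLH": 4.0, "LHL": 4.0, "HLL": 2.0, "HHH": 1.0}

def scan_order(significant, weights=SUBBAND_WEIGHT):
    """significant: list of (subband, coefficient_id) pairs found significant
    in the current bit-plane; returns them sorted by decreasing weight."""
    return sorted(significant, key=lambda s: -weights[s[0]])
```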

  12. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  13. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both the motion vectors and the motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences, using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  14. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  15. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

This book covers both the algorithms and the technologies of interactive video, so that businesses in IT and data management, scientists and software engineers in video processing and computer vision, coaches and instructors who use video technology in teaching, and end-users will all greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents is presented. The third part tackles the more challenging level of automatic video re-structuring, filtering of the video stream by extracting highlights, events, and meaningf...

  16. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  17. Coding Transparency in Object-Based Video

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation...... are exploited. Coding results show the superior efficiency of the proposed scheme compared with MPEG-4...

  18. Scalable Video Coding

    NARCIS (Netherlands)

    Choupani, R.

    2017-01-01

    With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when

  19. Video Coding Technique using MPEG Compression Standards

    Directory of Open Access Journals (Sweden)

    A. J. Falade

    2013-06-01

Digital video compression technologies have become part of everyday life, in the way visual information is created, communicated and consumed. Some application areas of video compression focus on the problem of optimizing storage space and transmission bandwidth (BW). The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, and is used in the Moving Picture Experts Group (MPEG) encoding standards. Thus, several video compression algorithms have been developed to reduce the data quantity while providing an acceptable quality standard. In the proposed study, a Matlab Simulink Model (MSM) has been used for video coding/compression. The approach is modern and reduces image distortion caused by transmission errors.

  20. Video Coding for ESL.

    Science.gov (United States)

    King, Kevin

    1992-01-01

    Coding tasks, a valuable technique for teaching English as a Second Language, are presented that enable students to look at patterns and structures of marital communication as well as objectively evaluate the degree of happiness or distress in the marriage. (seven references) (JL)

  1. Adaptive subband coding of full motion video

    Science.gov (United States)

    Sharifi, Kamran; Xiao, Leping; Leon-Garcia, Alberto

    1993-10-01

    In this paper a new algorithm for digital video coding is presented that is suitable for digital storage and video transmission applications in the range of 5 to 10 Mbps. The scheme is based on frame differencing and, unlike recent proposals, does not employ motion estimation and compensation. A novel adaptive grouping structure is used to segment the video sequence into groups of frames of variable sizes. Within each group, the frame difference is taken in a closed loop Differential Pulse Code Modulation (DPCM) structure and then decomposed into different frequency subbands. The important subbands are transformed using the Discrete Cosine Transform (DCT) and the resulting coefficients are adaptively quantized and runlength coded. The adaptation is based on the variance of sample values in each subband. To reduce the computation load, a very simple and efficient way has been used to estimate the variance of the subbands. It is shown that for many types of sequences, the performance of the proposed coder is comparable to that of coding methods which use motion parameters.
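
    A minimal sketch of the variance-driven adaptation described above (the mapping from subband variance to quantizer step size is invented for illustration; the paper uses its own simplified variance estimate):

```python
# Illustrative variance-adaptive quantization: subbands with higher sample
# variance carry more signal energy and are quantized with a finer step.
# The base step and the variance-to-step mapping are invented.

def subband_variance(samples):
    n = len(samples)
    mean = sum(samples) / n
    return sum((s - mean) ** 2 for s in samples) / n

def quantizer_step(samples, base_step=16.0):
    """Finer quantization step for high-variance subbands."""
    std = subband_variance(samples) ** 0.5
    return base_step / max(1.0, std)
```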

  2. Very low bit rate video coding standards

    Science.gov (United States)

    Zhang, Ya-Qin

    1995-04-01

Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, the ITU-T/SG15 has formed an expert group on low bitrate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: near-term and long-term. The near-term standard H.32P/N, based on existing compression technologies, mainly addresses issues related to visual telephony at below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. The ISO/IEC SC29/WG11, after its highly visible and successful MPEG-1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard MPEG-4. With the recent change of direction, MPEG-4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these ongoing standards activities undertaken by ITU-T/LBC and ISO/MPEG-4 as of December 1994.

  3. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding allows an encoded video sequence to be adapted to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams has only recently been emerging. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how applications that use scalable bitstreams can be modeled by means of definitions built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. This way, we demonstrate that it is possible to define an abstract model for scalable bitstreams that can be used as a tool for reasoning about such bitstreams and related applications.
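
    A toy illustration of the basic adaptation operation on a scalable bitstream (layer names and sizes are invented, and this is far simpler than the paper's mathematical model): a receiver-side adapter keeps the longest prefix of layers that fits its capacity.

```python
# Sketch of scalable-bitstream adaptation by layer dropping: the base layer
# comes first, enhancement layers follow, and a constrained receiver simply
# truncates the layer list to fit its bit budget.

def adapt_bitstream(layers, capacity):
    """layers: list of (name, bits), base layer first; returns the names of
    the longest prefix whose cumulative size fits the capacity."""
    kept, total = [], 0
    for name, bits in layers:
        if total + bits > capacity:
            break
        kept.append(name)
        total += bits
    return kept
```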

  4. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    at the decoder side offering such benefits for these applications. Although there have been some advanced improvement techniques, improving the DVC coding efficiency is still challenging. The thesis addresses this challenge by proposing several iterative algorithms at different working levels, e.g. bitplane......, band, and frame levels. In order to show the information theoretic basis, theoretical foundations of DVC are introduced. The first proposed algorithm applies parallel iterative decoding using multiple LDPC decoders to utilize cross bitplane correlation. To improve Side Information (SI) generation...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...

  5. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  6. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm that exploits the source statistics at the decoder based on the availability of the Side Information (SI). Stereo sequences are constituted by two views to give the user an illusion of depth. In this paper, we present a DVC decoder...

  7. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  8. Layered Wyner-Ziv video coding.

    Science.gov (United States)

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  9. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

    As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. The performance of DVC highly depends on the quality of side information. With a better side...... information generation method, fewer bits will be requested from the encoder and more reliable decoded frames will be obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. This algorithm...... consists of a variable block size based Y, U and V component motion estimation and an adaptive weighted overlapped block motion compensation (OBMC). The proposal is tested and compared with the results of an executable DVC codec released by DISCOVER group (DIStributed COding for Video sERvices). RD...

  10. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  11. Scalable video transmission over Rayleigh fading channels using LDPC codes

    Science.gov (United States)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
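
    The Lagrangian selection of source and channel coding rates described above can be sketched generically; the operating points and multiplier below are hypothetical illustrations, not values from the paper.

```python
# Hedged sketch of Lagrangian rate-distortion optimization: among candidate
# (distortion, rate) operating points, pick the one minimizing J = D + lambda*R.
# The candidate points and the multiplier are illustrative, not from the paper.
def select_operating_point(candidates, lam):
    """candidates: iterable of (distortion, rate) pairs; lam: Lagrange multiplier."""
    return min(candidates, key=lambda dr: dr[0] + lam * dr[1])

# Hypothetical (MSE, kbps) operating points for one source/channel rate choice.
points = [(40.0, 100.0), (20.0, 250.0), (12.0, 600.0)]
best = select_operating_point(points, lam=0.05)  # -> (20.0, 250.0)
```

    A larger multiplier penalizes rate more heavily, trading decoded quality for bandwidth; rate-distortion optimized systems sweep or tune this multiplier to meet a rate budget.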

  12. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation, targeting to improve the side information of a single...

  13. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, with the imperfection of increasing transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  14. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane... The shape layer is processed by a novel video shape coder. In intra mode, the DSLSC binary image coder is used; this is extended here with an inter mode utilizing temporal redundancies in shape image sequences. Then the opaque layer is compressed by a newly designed scheme which models...

  15. Error Transmission in Video Coding with Gaussian Noise

    Directory of Open Access Journals (Sweden)

    A Purwadi

    2015-06-01

    Full Text Available In video transmission, there is a possibility of packet loss and large variation in the available bandwidth. These are sources of network congestion, which can disrupt the communication data rate. The coding system used is a video coding standard, either MPEG-2 or H.263 with SNR scalability. The algorithms used for motion compensation and for exploiting temporal and spatial redundancy are the Discrete Cosine Transform (DCT) and quantization. The transmission error is simulated by adding Gaussian noise (errors) to the motion vectors. The simulation results show that the SNR and Peak Signal-to-Noise Ratio (PSNR) of the noisy video frames decline by an average of 3 dB, while the Mean Square Error (MSE) of the received video frames increases.
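
    The PSNR and MSE figures reported above follow the standard definitions, which can be sketched as follows (a generic computation with a synthetic frame, not the paper's simulator):

```python
import numpy as np

# Standard MSE/PSNR definitions for 8-bit frames, as used to report results
# like those above. The frame and noise level below are synthetic examples.
def mse(ref, deg):
    return float(np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2))

def psnr(ref, deg, peak=255.0):
    m = mse(ref, deg)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = frame + rng.normal(0.0, 5.0, size=frame.shape)  # additive Gaussian noise
quality = psnr(frame, noisy)  # roughly 34 dB for sigma = 5
```

    Since the MSE of additive Gaussian noise approaches its variance, the PSNR is predictable from the noise level: about 10*log10(255^2 / sigma^2).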

  16. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
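
    As a toy illustration of the backward-adaptive idea, least-square prediction weights can be re-estimated from already-decoded samples, so nothing extra is transmitted. This 1-D sketch is a simplification of the paper's 2-D spatio-temporal formulation; the helper names are hypothetical.

```python
import numpy as np

# Toy sketch of backward-adaptive least-square prediction: fit the weights
# from the causal (already-decoded) history, so the decoder can recompute
# them and no side information needs to be sent.
def ls_weights(history, order=2):
    """Fit linear prediction weights from a 1-D causal history."""
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict(recent, w):
    return float(np.dot(recent, w))

hist = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # linear ramp: perfectly predictable
w = ls_weights(hist)
pred = predict(hist[-2:], w)             # predicted next sample -> 7.0
```

    The same fit is repeated as each new sample is decoded, which is what makes the adaptation "backward": encoder and decoder stay synchronized without overhead.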

  17. Context based Coding of Quantized Alpha Planes for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2002-01-01

    In object based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties. Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4.

  18. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  19. Low bit rate video coding

    African Journals Online (AJOL)

    eobe

    The first step of the bottom-up merging procedure is to find the motion vectors of the current frame by any kind of motion estimation algorithm. Once the motion vectors are available to the motion compensation module, the bottom-up merging process is implemented in two steps. Firstly, the VBMC merges macro-blocks into bigger blocks, and ...

  20. Motion estimation techniques for digital video coding

    CERN Document Server

    Metkar, Shilpa

    2013-01-01

    The book deals with the development of a methodology to estimate the motion field between two frames for video coding applications. This book proposes an exhaustive study of the motion estimation process in the framework of a general video coder. The conceptual explanations are discussed in a simple language and with the use of suitable figures. The book will serve as a guide for new researchers working in the field of motion estimation techniques.

  1. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  2. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application to audio signal coding. The BTC algorithm was originally designed for black-and-white image coding. Since black-and-white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing quantizers more adequate for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
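
    For reference, the classic moment-preserving BTC quantizer that the paper starts from can be sketched as follows; the audio-specific quantizer designs from the paper are not reproduced here.

```python
import numpy as np

# Minimal sketch of classic two-level Block Truncation Coding for one block:
# each sample is replaced by one of two levels (a, b) chosen so the block's
# mean and standard deviation are preserved. The modified audio quantizers
# from the paper are not reproduced.
def btc_encode(block):
    m, s = block.mean(), block.std()
    bitmap = block >= m            # 1 bit per sample
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                # flat block: a single level suffices
        return bitmap, m, m
    a = m - s * np.sqrt(q / (n - q))        # low reconstruction level
    b = m + s * np.sqrt((n - q) / q)        # high reconstruction level
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    return np.where(bitmap, b, a)

block = np.array([[1.0, 2.0], [8.0, 9.0]])  # toy 2x2 block
bitmap, a, b = btc_encode(block)
recon = btc_decode(bitmap, a, b)            # mean and std are preserved
```

    Only the bitmap plus the two levels are stored per block, which is where the compression comes from.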

  3. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify the unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, a RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high resolution sequences, for which video coding efficiency is crucial in real applications.

  4. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frames independently, performing relatively simple operations. Therefore, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application of the DVC principles to camera networks. Thanks to its reversed coding paradigm, M-DVC enables the exploitation of inter-camera redundancy without inter-camera communication, because the frames are encoded independently. One of the key elements in DVC is the Side Information (SI), which is an estimation...

  5. Overview of MPEG internet video coding

    Science.gov (United States)

    Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.

    2015-09-01

    MPEG has produced standards that have provided the industry with the best video compression technologies. In order to address the diversified needs of the Internet, MPEG issued the Call for Proposals (CfP) for internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free-of-charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three different codecs responded to the CfP: WVC, VCB and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom, RIM and others; it is in fact AVC Baseline. VCB was proposed by Google, and it is in fact VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University, Korea Aerospace University and others), and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB and AVC High Profile.

  6. Robust video transmission with distributed source coded auxiliary channel.

    Science.gov (United States)

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  7. Wyner-Ziv Bayer-pattern video coding

    OpenAIRE

    Chen, Hu

    2013-01-01

    This thesis addresses the problem of Bayer-pattern video communications using Wyner-Ziv video coding. There are three major contributions. Firstly, a state-of-the-art Wyner-Ziv video codec using turbo codes is optimized and its functionality is extended. Secondly, it is studied how to realize joint source-channel coding using Wyner-Ziv video coding. The motivation is to achieve high error resiliency for wireless video transmission. Thirdly, a new color space transform method is proposed speci...

  8. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 sec/frame, respectively.

  9. Multiresolution coding for video-based service applications

    Science.gov (United States)

    Gharavi, Hami

    1995-12-01

    The video coding and distribution approach presented in this paper has two key characteristics that make it ideal for integration of video communication services over common broadband digital networks. The modular multi-resolution nature of the coding scheme provides the necessary flexibility to accommodate future advances in video technology as well as robust distribution over various network environments. This paper will present an efficient and scalable coding scheme for video communications. The scheme is capable of encoding and decoding video signals in a hierarchical, multilayer fashion to provide video at differing quality grades. Subsequently, the utilization of this approach to enable efficient bandwidth sharing and robust distribution of video signals in multipoint communications is presented. Coding and distribution architectures are discussed which include multi-party communications in a multi-window fashion within ATM environments. Furthermore, under the limited capabilities typical of wideband/broadband access networks, this architecture accommodates important video-based service applications such as Interactive Distance Learning.

  10. Low Bit Rate Video Coding | Mishra | Nigerian Journal of Technology

    African Journals Online (AJOL)

    ... length bit rate (VLBR) broadly encompasses video coding which mandates a temporal frequency of 10 frames per second (fps) or less. Object-based video coding represents a very promising option for VLBR coding, though the problems of object identification and segmentation need to be addressed by further research.

  11. Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API

    OpenAIRE

    Hosseini, Hossein; Xiao, Baicen; Clark, Andrew; Poovendran, Radha

    2017-01-01

    Due to the growth of video data on Internet, automatic video analysis has gained a lot of attention from academia as well as companies such as Facebook, Twitter and Google. In this paper, we examine the robustness of video analysis algorithms in adversarial settings. Specifically, we propose targeted attacks on two fundamental classes of video analysis algorithms, namely video classification and shot detection. We show that an adversary can subtly manipulate a video in such a way that a human...

  12. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  13. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct...

  14. Error resilience technique for video coding using concealment

    Science.gov (United States)

    Li, Rong; Yu, Sheng-sheng; Zhu, Li

    2009-10-01

    Traditional error resilience techniques have been widely used in video coding. Many studies have shown that, with their help, the video coding bit stream can be protected and the quality of the reconstructed image greatly improved. In this paper, we review error resilience for video coding and present experiments with this technology. These techniques are based on coding simultaneously for synchronization and error protection or detection. We apply the techniques to improve the performance of the multiplexing protocol and also to improve the robustness of the coded video. The techniques proposed for the video also have the advantage of simple transcoding of bit streams compliant with H.263.

  15. Patent landscape for royalty-free video coding

    Science.gov (United States)

    Reader, Cliff

    2016-09-01

    Digital video coding is over 60 years old and the first major video coding standard - H.261 - is over 25 years old, yet today there are more patents than ever related to, or evaluated as essential to video coding standards. This paper examines the historical development of video coding standards, from the perspective of when the significant contributions for video coding technology were made, what performance can be attributed to those contributions and when original patents were filed for those contributions. These patents have now expired, so the main video coding tools, which provide the significant majority of coding performance, are now royalty-free. The deployment of video coding tools in a standard involves several related developments. The tools themselves have evolved over time to become more adaptive, taking advantage of the increased complexity afforded by advances in semiconductor technology. In most cases, the improvement in performance for any given tool has been incremental, although significant improvement has occurred in aggregate across all tools. The adaptivity must be mirrored by the encoder and decoder, and advances have been made in reducing the overhead of signaling adaptive modes and parameters. Efficient syntax has been developed to provide such signaling. Furthermore, efficient ways of implementing the tools with limited precision, simple mathematical operators have been developed. Correspondingly, categories of patents related to video coding can be defined. Without discussing active patents, this paper provides the timeline of the developments of video coding and lays out the landscape of patents related to video coding. This provides a foundation on which royalty free video codec design can take place.

  16. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and compensation techniques. In a multiview scenario, the correlation between views can also be exploited to further enhance the overall Rate-Distortion (RD) performance. Thus, to generate SI in a multiview distributed coding scenario, a joint disparity and motion estimation technique is proposed, based on optical flow. The proposed SI generation algorithm allows for RD improvements up to 10% (Bjøntegaard) in bit-rate savings, when compared with block-based SI generation algorithms leveraging temporal and inter-view redundancies.

  17. Expressing Youth Voice through Video Games and Coding

    Science.gov (United States)

    Martin, Crystle

    2017-01-01

    A growing body of research focuses on the impact of video games and coding on learning. The research often elevates learning the technical skills associated with video games and coding or the importance of problem solving and computational thinking, which are, of course, necessary and relevant. However, the literature less often explores how young…

  18. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time consum...

  19. P2P Video Streaming Strategies based on Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    F.A. López-Fuentes

    2015-02-01

    Full Text Available Video streaming over the Internet has gained significant popularity during the last years, and academia and industry have devoted great research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard to provide more functionality to video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated video quality to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality across different videos from different sources. The obtained results show that our proposed strategies enable a system to improve its performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.

  20. Joint distributed source-channel coding for 3D videos

    Science.gov (United States)

    Palma, Veronica; Cancellaro, Michela; Neri, Alessandro

    2011-03-01

    This paper presents a distributed joint source-channel 3D video coding system. Our aim is the design of an efficient coding scheme for stereoscopic video communication over noisy channels that preserves the perceived visual quality while guaranteeing a low computational complexity. The drawback of using stereo sequences is the increased amount of data to be transmitted. Several methods are used in the literature for encoding stereoscopic video. A significantly different approach with respect to traditional video coding is represented by Distributed Video Coding (DVC), which introduces a flexible architecture with the design of low-complexity video encoders. In this paper we propose a novel method for joint source-channel coding in a distributed approach. We choose turbo codes for our application and study the new setting of distributed joint source-channel coding of a video. Turbo codes allow sending the minimum amount of data while guaranteeing near-channel-capacity error-correcting performance. In this contribution, the mathematical framework is fully detailed, and the tradeoff between redundancy, perceived quality and quality of experience is analyzed with the aid of numerical experiments.

  1. Video over DSL with LDGM Codes for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2016-05-01

    Full Text Available Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity-check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.

  2. Super-Resolution Still and Video Reconstruction from MPEG Coded Video

    National Research Council Canada - National Science Library

    Altunbasak, Yucel

    2004-01-01

    Transform coding is a popular and effective compression method for both still images and video sequences, as is evident from its widespread use in international media coding standards such as MPEG, H.263 and JPEG...

  3. Water cycle algorithm: A detailed standard code

    Science.gov (United States)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by observation of the water cycle and the movement of rivers and streams toward the sea, a population-based metaheuristic, the water cycle algorithm (WCA), has recently been proposed. Lately, an increasing number of WCA applications have appeared, and the WCA has been utilized in different optimization fields. This paper provides detailed open-source code for the WCA, whose performance and efficiency have been demonstrated in solving optimization problems. The WCA is built on an interesting and simple concept, and this paper uses its source code to provide a step-by-step explanation of the process it follows.
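As a rough illustration of the WCA concept (not the paper's published source code), the sketch below moves streams toward their assigned rivers and rivers toward the sea (the current best solution), with a much-simplified evaporation/raining step; the population sizes, the constant C, and the reset threshold are all assumptions.

```python
import random

def wca_minimize(f, dim, n_pop=30, n_rivers=4, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Simplified Water Cycle Algorithm sketch: streams flow toward rivers,
    rivers flow toward the sea. A river that collapses onto the sea is
    re-randomized ("raining"), a crude stand-in for the evaporation step."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    C = 2.0
    for _ in range(iters):
        pop.sort(key=f)                       # best first; sea is pop[0]
        sea, rivers = pop[0], pop[1:1 + n_rivers]
        for i in range(1 + n_rivers, n_pop):  # streams move toward rivers
            guide = rivers[i % n_rivers]
            pop[i] = [x + rng.random() * C * (g - x) for x, g in zip(pop[i], guide)]
        for i in range(1, 1 + n_rivers):      # rivers move toward the sea
            pop[i] = [x + rng.random() * C * (s - x) for x, s in zip(pop[i], sea)]
            if sum((a - b) ** 2 for a, b in zip(pop[i], sea)) < 1e-8:
                pop[i] = [rng.uniform(lo, hi) for _ in range(dim)]  # rain
    return min(pop, key=f)

sphere = lambda v: sum(x * x for x in v)
best = wca_minimize(sphere, dim=2)
```

The sea is never perturbed inside an iteration, so the best-so-far cost is monotone non-increasing, mirroring the elitism of the original algorithm.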

  4. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features, such as low-complexity encoding, by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented, based on extending a lossy Transform-Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% in bits compared to JPEG-LS and H.264 intra-frame lossless coding, while remaining scalable-to-lossless.
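The property that makes lossless coding possible here — a transform that maps integers to integers with an exact inverse — can be seen in miniature with a lifting-based integer Haar pair; this is a stand-in for the reversible integer DCT the codec actually uses.

```python
def haar_fwd(x, y):
    """Integer (lifting) Haar step: exactly invertible, so the transform
    itself introduces no loss -- the key property behind scalable-to-lossless
    coding with a reversible integer transform."""
    d = x - y           # detail coefficient
    a = y + (d >> 1)    # approximation (floor of the average)
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd for all integer inputs."""
    y = a - (d >> 1)
    x = d + y
    return x, y
```

Because `>>` is floor division in Python, the forward and inverse steps cancel exactly for every integer pair, including negatives.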

  5. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

    The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, which is used ... Park, 1989). MPEG-1 systems and MPEG-2 video have been developed collaboratively with the International. Telecommunications Union- (ITU-T). The DVB selected. MPEG-2 added specifications ...

  6. Video Coding Technique using MPEG Compression Standards ...

    African Journals Online (AJOL)

    Digital video compression technologies have become part of life, in the way visual information is created, communicated and consumed. Some application areas of video compression focused on the problem of optimizing storage space and transmission bandwidth (BW). The two dimensional discrete cosine transform (2-D ...

  7. Improvement of Parallel Algorithm for MATRA Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    A feasibility study to parallelize the MATRA code was conducted at KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to decrease the considerable computing time required to solve large problems, such as a whole-core pin-by-pin problem of a typical PWR, and to improve the overall performance of multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For 1/8-core and whole-core problems of the SMART reactor, a speedup of about 10 was evaluated when 25 processors were used. However, it was also shown that the performance deteriorated as the number of axial nodes increased. In this paper, the procedure for communication between processors is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It was shown that the speedup was improved and remained stable regardless of the number of axial nodes.

  8. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
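A minimal sketch of the first idea — recovering the dominant (camera) motion directly from the compressed-domain motion vector field — assuming a purely translational pan model and using a component-wise median, which is robust against foreground objects moving differently from the camera:

```python
from statistics import median

def estimate_pan(mv_field):
    """Estimate dominant camera motion as the component-wise median of the
    block motion vectors; outlier vectors from foreground objects have
    little effect on the median."""
    dxs = [dx for dx, dy in mv_field]
    dys = [dy for dx, dy in mv_field]
    return median(dxs), median(dys)

# Mostly a 2-pixel rightward pan, with two foreground outliers.
mvs = [(2, 0)] * 8 + [(-5, 3), (7, -4)]
```

Real MPEG-2 analysis must additionally handle intra-coded macroblocks (which carry no vector) and the sign/scale conventions of the bitstream; those details are omitted here.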

  9. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet-switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between codec and network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  10. Basic prediction techniques in modern video coding standards

    CERN Document Server

    Kim, Byung-Gyu

    2016-01-01

    This book discusses in detail the basic algorithms of video compression that are widely used in modern video codecs. The authors dissect complicated specifications and present material in a way that gets readers quickly up to speed by describing video compression algorithms succinctly, without going into the mathematical details and technical specifications. For accelerated learning, the hybrid codec structure and the inter- and intra-prediction techniques in MPEG-4, H.264/AVC, and HEVC are discussed together. In addition, the latest research on fast encoder design for HEVC and H.264/AVC is also included.

  11. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC), along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on a traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code with only 128 bits.
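The pipeline described above — covariance signature per face track, then binarization to a short code — can be sketched as follows. The paper learns each bit in a max-margin framework; here random hyperplanes stand in for the learned projections, and the feature dimension and function name are assumptions for illustration.

```python
import numpy as np

def compact_video_code(frame_features, n_bits=128, seed=0):
    """CVC-style sketch: model a face track by the covariance matrix of its
    per-frame features, vectorize it, then binarize with one hyperplane per
    bit (random here; supervised in the paper)."""
    X = np.asarray(frame_features)          # (n_frames, d) feature matrix
    cov = np.cov(X, rowvar=False)           # (d, d) track signature
    v = cov[np.triu_indices_from(cov)]      # upper triangle as a vector
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_bits, v.size))
    return (W @ v > 0).astype(np.uint8)     # one bit per hyperplane sign
```

Matching two tracks then reduces to Hamming distance between two 128-bit vectors, which is what gives the method its low retrieval cost.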

  12. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  13. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    Science.gov (United States)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, for compensating temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead, so WP parameter prediction is crucial to research and applications related to WP. Prior work has suggested further improving WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms — enhanced implicit WP parameters, enhanced direct WP parameter derivation, and interlayer WP parameters — to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2x spatial scalability, respectively.
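The WP model predicts each sample as weight x reference + offset. One common way to obtain the two parameters is a least-squares fit against the reference frame; the sketch below illustrates that idea on flattened pixel lists and is not the HEVC reference encoder's estimator.

```python
def estimate_wp_params(cur, ref):
    """Least-squares weight/offset for weighted prediction:
    w = cov(cur, ref) / var(ref),  o = mean(cur) - w * mean(ref)."""
    n = len(cur)
    mc = sum(cur) / n
    mr = sum(ref) / n
    var_r = sum((r - mr) ** 2 for r in ref)
    cov = sum((c - mc) * (r - mr) for c, r in zip(cur, ref))
    w = cov / var_r
    o = mc - w * mr
    return w, o
```

For a pure illumination change (a global gain and bias between frames), this recovers the gain and bias exactly, which is the case WP is designed to compensate.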

  14. Distributed video streaming using multiple description coding and unequal error protection.

    Science.gov (United States)

    Kim, Joohee; Mersereau, Russell M; Altunbasak, Yucel

    2005-07-01

    This paper presents a distributed video streaming framework using unbalanced multiple description coding (MDC) and unequal error protection. In the proposed video streaming framework, two senders simultaneously stream complementary descriptions to a single receiver over different paths. To minimize the overall distortion and exploit the benefits of multipath transport when the characteristics of each path differ, an unbalanced MDC method for wavelet-based coders combined with a TCP-friendly rate allocation algorithm is proposed. The proposed rate allocation algorithm adjusts the transmission rates and the channel coding rates for all senders in a coordinated fashion to minimize the overall distortion. Simulation results show that the proposed unbalanced MDC combined with our rate allocation algorithm achieves about 1-6 dB higher peak signal-to-noise ratio compared to conventional balanced MDC when the available bandwidths along the two paths differ under time-varying network conditions.

  15. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions of interest are received by an end-user device before the surrounding area, and preferably in higher quality. In this paper, novel algorithms are presented that make it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region of interest is translated, thus following the objects within it. Furthermore, the proposed algorithms allow adequate resizing of the region of interest. By using the information already available from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis of the algorithms is given, proving their low complexity and usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within the MPEG-4 fine-granularity scalability codec. Tests on several video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision of up to 96.4%.
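The translation step described above — move the region of interest by the motion its blocks report — can be sketched as follows, assuming a block grid of 16x16 pixels and a motion vector per block; resizing and the codec integration are omitted.

```python
def track_roi(roi, mv_field, block=16):
    """Translate a region-of-interest (x, y, w, h) by the average motion
    vector of the blocks whose top-left corner falls inside it, as in
    compressed-domain object tracking from encoder motion vectors."""
    x, y, w, h = roi
    inside = [(dx, dy) for (bx, by), (dx, dy) in mv_field.items()
              if x <= bx * block < x + w and y <= by * block < y + h]
    if not inside:
        return roi                       # no vectors: leave the ROI in place
    adx = round(sum(dx for dx, _ in inside) / len(inside))
    ady = round(sum(dy for _, dy in inside) / len(inside))
    return (x + adx, y + ady, w, h)
```

Because the vectors are read straight from the bitstream, the cost per frame is proportional to the number of blocks in the ROI, which is what makes the approach lightweight.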


  17. EZBC video streaming with channel coding and error concealment

    Science.gov (United States)

    Bajic, Ivan V.; Woods, John W.

    2003-06-01

    In this text we present a system for streaming video content encoded using the motion-compensated Embedded Zero Block Coder (EZBC). The system incorporates unequal loss protection in the form of multiple description FEC (MD-FEC) coding, which provides adequate protection for the embedded video bitstream when the loss process is not very bursty. The adverse effects of burst losses are reduced using a novel motion-compensated error concealment method.

  18. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

    Mobile video streaming is one of the multimedia services that have developed most rapidly. Bandwidth utilization for wireless transmission is currently the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as an attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The ITU Joint Scalable Video Model (JSVM) standard is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in bit rate utilization at a given layer.

  19. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

    In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically, the impact on the received signal of different power allocation ratios. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.

  20. A Scalable Multiple Description Scheme for 3D Video Coding Based on the Interlayer Prediction Structure

    Directory of Open Access Journals (Sweden)

    Lorenzo Favalli

    2010-01-01

    The most recent literature indicates multiple description coding (MDC) as a promising coding approach to handle the problem of video transmission over unreliable networks with different quality and bandwidth constraints. Furthermore, following the recent commercial availability of autostereoscopic 3D displays that allow 3D visual data to be viewed without the use of special headgear or glasses, it is anticipated that applications of 3D video will increase rapidly in the near future. Moving from the concept of spatial MDC, in this paper we introduce efficient algorithms to obtain 3D substreams that also exploit some form of scalability. These algorithms are applied both to coded stereo sequences and to depth image-based rendering (DIBR). In these algorithms, we first generate four 3D subsequences by subsampling, and then two of these subsequences are jointly used to form each of the two descriptions. For each description, one of the original subsequences is predicted from the other via scalable algorithms, focusing on the interlayer prediction scheme. The proposed algorithms can be implemented as pre- and postprocessing around the standard H.264/SVC coder, which remains fully compatible with any standard coder. The experimental results presented show that these algorithms perform very well.
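The subsampling step that produces the four subsequences can be sketched per frame with a polyphase decomposition; pairing the two diagonal phases in each description (one plausible arrangement, assumed here for illustration) means either description alone still covers the whole frame area at half resolution.

```python
import numpy as np

def make_descriptions(frame):
    """Spatial MDC sketch: split a frame into its four 2x2 polyphase
    components and group the diagonal phases into two descriptions, so a
    single received description still spans the full picture."""
    p = {(i, j): frame[i::2, j::2] for i in (0, 1) for j in (0, 1)}
    desc1 = (p[(0, 0)], p[(1, 1)])   # even-even and odd-odd phases
    desc2 = (p[(0, 1)], p[(1, 0)])   # the two cross phases
    return desc1, desc2
```

If one description is lost, the missing phases can be interpolated from the surviving ones, which is the basic error-resilience argument behind spatial MDC.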

  1. Distributed source coding of video with non-stationary side-information

    NARCIS (Netherlands)

    Meyer, P.F.A.; Westerlaken, R.P.; Klein Gunnewiek, R.; Lagendijk, R.L.

    2005-01-01

    In distributed video coding, the complexity of the video encoder is reduced at the cost of a more complex video decoder. Using the principles of Slepian andWolf, video compression is then carried out using channel coding principles, under the assumption that the video decoder can temporally predict

  2. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  3. Two-description distributed video coding for robust transmission

    Directory of Open Access Journals (Sweden)

    Zhao Yao

    2011-01-01

    In this article, a two-description distributed video coding (2D-DVC) scheme is proposed to address robust video transmission from low-power capture devices. Odd/even frame splitting partitions a video into two sub-sequences to produce two descriptions. Each description consists of two parts, where part 1 is a zero-motion-based H.264-coded bitstream of one sub-sequence and part 2 is a Wyner-Ziv (WZ)-coded bitstream of the other sub-sequence. As the redundant part, the WZ-coded bitstream guarantees that the lost sub-sequence is recovered when one description is lost. On the other hand, the redundancy degrades the rate-distortion performance when no loss occurs. A residual 2D-DVC scheme is employed to further improve the rate-distortion performance, where the difference between the two sub-sequences is WZ-encoded to generate part 2 of each description. Furthermore, an optimization method is applied to control an appropriate amount of redundancy and thereby facilitate tuning of the central/side distortion tradeoff. The experimental results show that the proposed schemes achieve better performance than the reference scheme, especially for low-motion videos. Moreover, our schemes retain the low-complexity encoding property.

  4. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay, and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer of the scalable video, without the need for overhead. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y-component Peak Signal-to-Noise Ratio (PSNR).

  5. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have made surveillance video a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating effective policies and applying useful methods to the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori (MAP) estimation, and projection onto convex sets (POCS) to develop a super-resolution reconstruction method that improves the quality of surveillance video. With this method, we can make optimal use of the information contained in the LR video frames while also preserving clear image edges and ensuring convergence of the algorithm. Finally, we suggest how to adapt the algorithm by analyzing the prior information of the target image.
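The core consistency idea shared by MAP- and POCS-style super-resolution — keep correcting the high-resolution estimate until downsampling it reproduces the observed low-resolution image — can be illustrated with single-frame iterative back-projection; the paper's full pipeline with registration and priors is much richer than this sketch.

```python
import numpy as np

def iterative_backprojection(lr, scale=2, iters=20):
    """Minimal iterative back-projection sketch: simulate the LR image from
    the current HR estimate (block averaging), then back-project the
    observation error onto the HR grid."""
    hr = np.kron(lr, np.ones((scale, scale)))        # nearest-neighbor start
    for _ in range(iters):
        sim_lr = hr.reshape(lr.shape[0], scale,
                            lr.shape[1], scale).mean(axis=(1, 3))
        err = lr - sim_lr                            # mismatch with observation
        hr += np.kron(err, np.ones((scale, scale)))  # distribute the correction
    return hr
```

On convergence the HR estimate is consistent with the LR observation by construction; priors (as in MAP) or extra constraint sets (as in POCS) are what then select a visually plausible solution among all consistent ones.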

  6. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype, which achieves SCM using a standard 802.16-based testbed for scalable video transmission. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  7. Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes

    OpenAIRE

    Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi

    2007-01-01

    A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on a gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
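A minimal single-flip GDBF sketch follows: working with bipolar (+1/-1) values, each bit's "inversion function" sums its agreement with the channel value and the products of the checks it participates in, and the bit with the smallest value is flipped. The tiny parity-check matrix is an illustrative stand-in for a real LDPC code.

```python
from math import prod

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]   # parity checks of a small (7,4) code

def gdbf_decode(y, H, max_iters=20):
    """Single-flip gradient descent bit flipping: x is the bipolar hard
    decision on y; each step flips the bit whose inversion function
    (channel term plus attached check products) is smallest."""
    x = [1 if v >= 0 else -1 for v in y]
    checks = [[j for j, h in enumerate(row) if h] for row in H]
    for _ in range(max_iters):
        s = [prod(x[j] for j in c) for c in checks]   # check-node products
        if all(si == 1 for si in s):                  # all checks satisfied
            break
        delta = [x[k] * y[k] + sum(s[i] for i, c in enumerate(checks) if k in c)
                 for k in range(len(x))]
        x[delta.index(min(delta))] *= -1              # steepest-descent flip
    return [(1 - v) // 2 for v in x]                  # back to 0/1 bits
```

Flipping the minimizing bit maximally increases the objective, so the loop is literally a coordinate-wise gradient ascent on the non-linear objective the abstract mentions.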

  8. PM1 steganographic algorithm using ternary Hamming Code

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2015-12-01

    The PM1 algorithm is a modification of the well-known LSB steganographic algorithm. It has increased resistance to selected steganalytic attacks and increased embedding efficiency. Due to its uniqueness, the PM1 algorithm allows the use of a larger alphabet of symbols, making it possible to further increase steganographic capacity. In this paper, we present a modified PM1 algorithm which utilizes so-called syndrome coding and a ternary Hamming code. The modified algorithm has increased embedding efficiency, which means fewer changes introduced to the carrier, and increased capacity. Keywords: steganography, linear codes, PM1, LSB, ternary Hamming code
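Syndrome coding (matrix embedding) with a Hamming code is the mechanism behind the efficiency gain: the embedder changes at most one cover symbol per block, and the extractor just computes a syndrome. The sketch below uses the binary [7,4] Hamming code to keep the arithmetic simple; the paper's scheme uses a ternary Hamming code over +/-1 changes.

```python
import numpy as np

# Parity-check matrix whose j-th column is the binary expansion of j+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover, msg):
    """Matrix embedding: hide 3 message bits in 7 cover bits by flipping
    at most one bit, chosen so that the syndrome equals the message."""
    c = np.array(cover)
    syn = (H @ c + np.array(msg)) % 2          # which column must be added
    pos = int("".join(str(int(b)) for b in syn), 2)  # 0 => already matches
    if pos:
        c[pos - 1] ^= 1
    return c.tolist()

def extract(stego):
    """The hidden bits are simply the syndrome of the stego block."""
    return [int(b) for b in (H @ np.array(stego)) % 2]
```

One changed bit per 7 carries 3 message bits, versus 1 bit per change for plain LSB replacement; the ternary version of the paper pushes this efficiency further.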

  9. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    In distributed video coding, signal prediction is shifted to the decoder side, therefore placing most of the computational burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error-resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (the DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion-based evaluation of the effects of transmission errors on both DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, both without error protection and with simple FEC error protection. Our evaluations highlight, in all cases, a strong dependence of each codec's behavior on the content of the video sequence. In particular, PRISM seems particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  10. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose, a special software program for subjective quality assessment of all tested video sequences was developed in accordance with Recommendation ITU-T P.910, which is suitable for testing multimedia applications. The obtained results show that with the proposed selective intra prediction and optimized inter prediction algorithm there is only a small difference in picture quality (signal-to-noise ratio) between the decoded original and modified video sequences.

  11. Scene-aware joint global and local homographic video coding

    Science.gov (United States)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
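The split between globally coded camera parameters and per-block plane parameters rests on the plane-induced homography H = K (R - t n^T / d) K^{-1}: given the frame-level intrinsics/extrinsics, three plane parameters per block determine its full perspective motion. A sketch of that relation (illustrative parameter values, not the paper's coder):

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Plane-induced homography H = K (R - t n^T / d) K^{-1}: camera motion
    (R, t) and intrinsics K are global; the plane normal n and depth d are
    the per-block local parameters."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

# Example intrinsics; with no camera motion the homography is the identity.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
Hmat = plane_homography(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 1.0]), d=2.0)
```

A block's prediction then warps each pixel by the projective mapping x' ~ Hmat x, which is what an eight-parameter homography model approximates per region at much higher signaling cost.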

  12. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm; we combine it with BP to obtain the advantages of both. The flipping function used in WBF is borrowed to determine the priority of scheduling. Simulation results show that the approach provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
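As a concrete illustration of the bit-flipping idea that underlies WBF, the sketch below implements a plain hard-decision bit-flipping decoder on a tiny (7,4) Hamming code; the actual WBF algorithm weights each check by soft channel reliabilities, which is omitted here:

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit flipping: repeatedly flip the bit involved in
    the largest number of failed parity checks (WBF would weight the
    checks by channel reliabilities instead of just counting them)."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                      # all parity checks satisfied
        fails = syndrome @ H           # failed-check count per bit
        x[np.argmax(fails)] ^= 1       # flip the most suspect bit
    return x

# (7,4) Hamming code parity-check matrix and a single-bit error.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.zeros(7, dtype=int)      # the all-zero codeword
received = codeword.copy()
received[2] ^= 1                       # channel flips bit 2
decoded = bit_flip_decode(H, received)
```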

  13. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...

  14. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper investigates an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, using 26 original video sequences as the research background. It first extracts classification features from training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, which is then used to test whether a video sample contains suspicious information. The experimental results show that the VSA based on LSB matching can practically recover secret information embedded in all frames of the carrier video as well as in local frames. In addition, the VSA adopts a frame-by-frame method with strong robustness against attacks in the corresponding time domain.

  15. Coding B-Frames of Color Videos with Fuzzy Transforms

    Directory of Open Access Journals (Sweden)

    Ferdinando Di Martino

    2013-01-01

    Full Text Available We use a new method based on discrete fuzzy transforms for coding/decoding frames of color videos, in which we determine the GOP sequences dynamically. Frames are differentiated into intraframes, predictive frames, and bidirectional frames, and we consider particular frames, called Δ-frames (resp., R-frames), for coding P-frames (resp., B-frames) by using two similarity measures based on the Łukasiewicz t-norm; moreover, a preprocessing phase is proposed to determine similarity thresholds for classifying the above types of frame. The proposed method provides acceptable results in terms of the quality of the reconstructed videos, to a certain extent comparable with the classical F-transform-based method and the standard MPEG-4.
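For readers unfamiliar with the underlying tool, a one-dimensional discrete F-transform (direct and inverse, with a uniform triangular fuzzy partition) can be sketched as follows; this is the generic transform, not the paper's video codec:

```python
import numpy as np

def triangular_partition(n_points, n_nodes):
    """Uniform triangular fuzzy partition A_1..A_k over [0, n_points-1]."""
    xs = np.arange(n_points)
    nodes = np.linspace(0, n_points - 1, n_nodes)
    h = nodes[1] - nodes[0]
    return np.clip(1.0 - np.abs(xs[None, :] - nodes[:, None]) / h, 0.0, 1.0)

def f_transform(f, A):
    """Direct F-transform: component F_k is the A_k-weighted mean of f."""
    return (A @ f) / A.sum(axis=1)

def inverse_f_transform(F, A):
    """Inverse F-transform: blend the components back with the partition."""
    return (F @ A) / A.sum(axis=0)

f = np.sin(np.linspace(0.0, np.pi, 32))   # a smooth test signal
A = triangular_partition(32, 8)
F = f_transform(f, A)                     # 8 components compress 32 samples
f_rec = inverse_f_transform(F, A)         # coarse but faithful reconstruction
```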

  16. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during decoding. A residual refinement step is also introduced to take advantage of the correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC, and for GOP=2, bit-rate savings of up to 35% on WZ frames are achieved compared with DISCOVER.

  17. Performance evaluation of nonscalable MPEG-2 video coding

    Science.gov (United States)

    Schmidt, Robert L.; Puri, Atul; Haskell, Barry G.

    1994-09-01

    The second phase of the ISO Moving Picture Experts Group audio-visual coding standard (MPEG-2) is nearly complete, and this standard is expected to be used in a wide range of applications at a variety of bitrates. While the standard specifies the syntax of the compressed bitstream and the semantics of the decoding process, it allows considerable flexibility in the choice of encoding parameters and options, enabling appropriate tradeoffs in performance versus complexity as might be suitable for an application. First, we present a review of the profile and level structure in MPEG-2, which is the key to enabling use of coding tools in MPEG-2. Next, we include a brief review of tools for nonscalable coding within the MPEG-2 standard. Finally, we investigate via simulations the tradeoffs in coding performance with the choice of various parameters and options so that, within the encoder complexity that can be afforded, an encoder design with good performance tradeoffs can be accomplished. Simulations are performed on standard TV and HDTV resolution video of various formats and at many bitrates using the nonscalable (single layer) video coding tools of the MPEG-2 standard.

  18. Distributed Coding/Decoding Complexity in Video Sensor Networks

    Science.gov (United States)

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  19. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a video coding technique to decrease the power consumption at a standard video decoder. Coupled with a small dedicated video internal memory cache on the decoder, the technique can substantially decrease the amount of data traffic to the decoder's external memory. A decrease in data traffic to the external memory yields multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion-compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders, for power savings with the best quality-power cost tradeoff. Our simulation results show that, with a relatively small dedicated video internal memory cache, the technique may decrease the traffic between the CPU and external memory by over 50%.

  20. A Fast and Efficient Topological Coding Algorithm for Compound Images

    Directory of Open Access Journals (Sweden)

    Xin Li

    2003-11-01

    Full Text Available We present a fast and efficient coding algorithm for compound images. Unlike popular mixed raster content (MRC) based approaches, we propose to attack the compound image coding problem from the perspective of modeling the location uncertainty of image singularities. We suggest that a computationally simple two-class segmentation strategy is sufficient for the coding of compound images, and argue that jointly exploiting the topological properties of the image source in the classification and coding stages benefits the robustness of compound image coding systems. Experimental results justify the effectiveness and robustness of the proposed topological coding algorithm.

  1. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    Science.gov (United States)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    The lack of available wideband digital links as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.
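The DPCM core described above (a fixed, non-adaptive previous-sample predictor plus a non-uniform quantizer with fine steps near zero) can be sketched as follows; the quantizer levels here are illustrative, not the levels of the NASA design:

```python
import numpy as np

# Illustrative non-uniform quantizer: fine steps near zero, where
# prediction errors cluster, and coarse steps for rare large errors.
LEVELS = np.array([-48.0, -24.0, -10.0, -3.0, 0.0, 3.0, 10.0, 24.0, 48.0])

def dpcm_encode(samples, start=0.0):
    """DPCM with a fixed (non-adaptive) previous-sample predictor."""
    indices, pred = [], start
    for s in samples:
        i = int(np.argmin(np.abs(LEVELS - (s - pred))))  # nearest level
        indices.append(i)
        pred += LEVELS[i]          # track the decoder's reconstruction
    return indices

def dpcm_decode(indices, start=0.0):
    out, pred = [], start
    for i in indices:
        pred += LEVELS[i]
        out.append(pred)
    return np.array(out)

x = np.array([10.0, 12.0, 15.0, 40.0, 42.0, 41.0])
idx = dpcm_encode(x)
x_hat = dpcm_decode(idx)
```

In the full scheme the level indices would then feed the multilevel Huffman coder; here they are left without entropy coding.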

  2. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    Science.gov (United States)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard is used to code the first frame, i.e., the intra frame of a video, to obtain good coding efficiency compared to the previous series of video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. To code the intra frame, the existing rate-distortion optimization (RDO) method is used. This method increases computational complexity, increases the bit rate, and reduces picture quality, making it difficult to implement in real-time applications, so many researchers have developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in the H.264 standard using fast mode decision intra prediction algorithms based on different techniques suffered from increased bit rate and degraded picture quality (PSNR) for different quantization parameters; many earlier fast mode decision approaches achieved only a reduction of computational complexity, or a saving in encoding time, at the cost of increased bit rate and loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, a better approach was developed. This paper develops that approach, i.e., a Gaussian pulse for intra frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bitrate. In the proposed method, the Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse at the macroblock level scales the information of the coefficients in a reversible manner; the frequency samples are altered in a known and controllable manner without intermixing of coefficients, which avoids...
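The reversible coefficient scaling described above can be illustrated with a small sketch: a 4x4 Gaussian weight matrix multiplies a 4x4 block of transform coefficients elementwise before quantization, and (ignoring quantization error) dividing by the same weights restores the block. The sigma value and stand-in coefficients are arbitrary choices for illustration:

```python
import numpy as np

def gaussian_pulse_4x4(sigma=2.0):
    """4x4 Gaussian weights over the (u, v) frequency indices.
    All weights are strictly positive, so the elementwise scaling
    is exactly invertible."""
    u, v = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    return np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma ** 2))

coeffs = np.arange(16, dtype=float).reshape(4, 4)  # stand-in transform block
G = gaussian_pulse_4x4()
scaled = coeffs * G        # applied before quantization
restored = scaled / G      # decoder-side inverse scaling
```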

  3. Interlayer Simplified Depth Coding for Quality Scalability on 3D High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available A quality-scalable extension design is proposed for upcoming 3D video on the emerging High Efficiency Video Coding (HEVC) standard. A novel interlayer simplified depth coding (SDC) prediction tool is added to reduce the number of bits for depth map representation by exploiting the correlation between coding layers. To further improve the coding performance, the coded prediction quadtree and texture data from corresponding SDC-coded blocks in the base layer can be used in interlayer simplified depth coding. In the proposed design, the multiloop decoder solution is also extended into the proposed scalable scenario for texture views and depth maps, achieved through the interlayer texture prediction method. The experimental results indicate that an average Bjøntegaard delta bitrate decrease of 54.4% can be gained with the interlayer SDC prediction tool on the multiloop decoder solution compared with simulcast. These significant rate savings confirm that the proposed method achieves better performance.

  4. An efficient video dehazing algorithm based on spectral clustering

    Science.gov (United States)

    Zhao, Fan; Yao, Zao; Song, XiaoFang; Yao, Yi

    2017-07-01

    Image and video dehazing is a popular topic in the field of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block effects, and the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on customized spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes so that the same target receives the same transmission value. Assuming that dehazed edge images have richer detail than before restoration, an edge cost function is added to the transmission model. The experimental results demonstrate that the proposed method provides higher dehazing quality and lower time complexity than the previous technique.
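The transmission model referred to above is the standard haze imaging equation I = J*t + A*(1 - t): once a transmission map t and atmospheric light A are estimated (per region via spectral clustering in the paper), the scene radiance J is recovered by inverting it. A minimal sketch on a synthetic image follows; the clustering-based estimation itself is not reproduced:

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert I = J*t + A*(1 - t) per pixel; clamping t away from zero
    avoids amplifying noise where the transmission is very small."""
    t = np.maximum(t, t_min)
    return (I - A) / t[..., None] + A

# Synthetic round trip: haze a random scene, then undo it.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, (8, 8, 3))          # true scene radiance
A = np.array([0.9, 0.9, 0.9])                 # atmospheric light
t = np.full((8, 8), 0.6)                      # uniform transmission map
I = J * t[..., None] + A * (1.0 - t[..., None])
J_hat = recover_scene(I, t, A)
```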

  5. Voronoi Particle Merging Algorithm for PIC Codes

    CERN Document Server

    Luu, Phuc T

    2016-01-01

    We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, each consisting only of particles in close proximity to one another in the phase space. We show the performance of our algorithm in the case of a magnetic shower.
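The merging step at the heart of such algorithms collapses the particles of one phase-space cell into a single macro-particle while conserving bulk quantities. A minimal sketch of merging one cell, conserving total weight and the weighted phase-space mean (the full algorithm also deals with energy conservation, omitted here):

```python
import numpy as np

def merge_cell(weights, phase_coords):
    """Collapse one cell's particles into a single macro-particle,
    conserving the total weight and the weighted phase-space mean
    (so total charge and momentum are preserved)."""
    W = weights.sum()
    centroid = (weights[:, None] * phase_coords).sum(axis=0) / W
    return W, centroid

# Two nearby particles in a 1D-1V phase space (x, px):
w = np.array([1.0, 3.0])
coords = np.array([[0.0, 2.0],
                   [4.0, 6.0]])
W, c = merge_cell(w, coords)   # total momentum is conserved by construction
```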

  6. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical-flow-driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion; the three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB, respectively. For a GOP size of 2...

  7. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

    Full Text Available The algorithm presented in this paper is comprised of three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation; (2) change detection having as reference a fixed frame, an appropriately selected frame, or a displaced frame; and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on colour similarity. Video object segmentation results are shown using the COST 211 data set.
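Stage (3) above relies on region growing from a seed under a similarity criterion. A minimal sketch on a grayscale grid follows; the paper uses local colour features, for which a simple intensity threshold stands in here:

```python
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity is within `thresh` of the seed's intensity."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - img[sy][sx]) <= thresh):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [[10, 10, 80],
       [10, 12, 80],
       [80, 80, 80]]
obj = region_grow(img, (0, 0), thresh=5)   # the dark top-left object
```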

  8. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
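The Laplacian pyramid decomposition that the paper builds on can be sketched in one dimension: the base layer is a downsampled signal and the enhancement layer is the residual against its upsampled version, which gives perfect reconstruction by construction. The filters here are deliberately crude (pair averaging and nearest-neighbour expansion), not the SVC filters:

```python
import numpy as np

def downsample(x):
    """Average adjacent pairs (a simple, non-ideal lowpass + decimation)."""
    return x.reshape(-1, 2).mean(axis=1)

def upsample(x):
    """Nearest-neighbour expansion back to full resolution."""
    return np.repeat(x, 2)

def lp_analyze(x):
    base = downsample(x)            # base layer (lower resolution)
    enh = x - upsample(base)        # enhancement layer (detail residual)
    return base, enh

def lp_synthesize(base, enh):
    return upsample(base) + enh     # perfect reconstruction

x = np.array([1.0, 3.0, 2.0, 6.0, 4.0, 4.0, 0.0, 2.0])
base, enh = lp_analyze(x)
x_rec = lp_synthesize(base, enh)
```

The interlayer correlation the paper removes shows up here as low-frequency content leaking into `enh` whenever the filters are not biorthogonal.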

  9. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm, which mainly exploits the source statistics at the decoder based on the availability of decoder side information. One approach to DVC is feedback channel based Transform Domain Wyner–Ziv (TDWZ) video coding. The efficiency of current TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of the side information, inaccurate noise modeling, and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects of TDWZ coding. Experimental results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  10. Application-adapted mobile 3D video coding and streaming — A survey

    Science.gov (United States)

    Liu, Yanwei; Ci, Song; Tang, Hui; Ye, Yun

    2012-03-01

    3D video technologies have gradually matured to the point of moving onto mobile platforms. In mobile environments, the specific characteristics of wireless networks and mobile devices present great challenges for 3D video coding and streaming, and application-adapted mobile 3D video coding and streaming technologies are urgently needed. Based on the mobile 3D video application framework, this paper reviews the state-of-the-art technologies of mobile 3D video coding and streaming. Specifically, the mobile 3D video formats and the corresponding coding methods are first reviewed, and then the streaming adaptation technologies, including 3D video transcoding, 3D video rate control, and cross-layer optimized 3D video streaming, are surveyed.

  11. The analysis of convolutional codes via the extended Smith algorithm

    Science.gov (United States)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  12. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random, relatively sparse, and use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially a...
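Decoding a random linear code ultimately amounts to solving a linear system in the coding coefficients, e.g. by Gaussian elimination over the finite field. A minimal sketch over GF(2) with hypothetical packets follows; the paper's contribution concerns doing this with fewer operations, which is not reproduced here:

```python
import numpy as np

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gauss-Jordan elimination.
    A: k x k coding-coefficient matrix (assumed full rank over GF(2)),
    B: k x n matrix of coded payload bits."""
    A, B = A.copy() % 2, B.copy() % 2
    k = A.shape[0]
    for col in range(k):
        pivot = next(r for r in range(col, k) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]      # bring a pivot row up
        B[[col, pivot]] = B[[pivot, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]                 # XOR-eliminate the column
                B[r] ^= B[col]
    return B

rng = np.random.default_rng(1)
data = rng.integers(0, 2, (3, 8))                  # 3 source packets, 8 bits
C = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1]])    # full-rank coefficients
coded = (C @ data) % 2                             # network-coded packets
decoded = gf2_solve(C, coded)
```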

  13. Algorithms for coding scanned halftone pictures

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Forchhammer, Morten

    1988-01-01

    A data structure and related algorithms for handling the digital screen without restrictions on the screen parameters are presented. Data compression rates above 20 are obtained for the halftone pictures. The algorithms are suited for implementation with fast dedicated hardware. The rescreening can also...

  14. A baseline algorithm for face detection and tracking in video

    Science.gov (United States)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms have considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a feature set that is Haar-like and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm reflecting the state-of-the-art which makes it an appropriate choice as the baseline algorithm for the problem.
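The greedy matching step used for tracking can be sketched directly: detections in consecutive frames are linked closest-centroid-first. The box coordinates below are invented for illustration:

```python
def centroid(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def greedy_match(prev_boxes, curr_boxes):
    """Greedily link detections across two frames by the distance
    between bounding-box centroids, closest pairs first."""
    pairs = []
    for i, a in enumerate(prev_boxes):
        for j, b in enumerate(curr_boxes):
            ca, cb = centroid(a), centroid(b)
            d = ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5
            pairs.append((d, i, j))
    pairs.sort()
    used_i, used_j, links = set(), set(), {}
    for d, i, j in pairs:
        if i not in used_i and j not in used_j:
            links[i] = j
            used_i.add(i)
            used_j.add(j)
    return links

# Boxes are (x, y, w, h); the two objects swap list order between frames.
prev = [(0, 0, 10, 10), (100, 100, 10, 10)]
curr = [(98, 103, 10, 10), (2, 1, 10, 10)]
links = greedy_match(prev, curr)
```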

  15. Source and Channel Adaptive Rate Control for Multicast Layered Video Transmission Based on a Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Viéron

    2004-03-01

    Full Text Available This paper introduces source-channel adaptive rate control (SARC, a new congestion control algorithm for layered video transmission in large multicast groups. In order to solve the well-known feedback implosion problem in large multicast groups, we first present a mechanism for filtering RTCP receiver reports sent from receivers to the whole session. The proposed filtering mechanism provides a classification of receivers according to a predefined similarity measure. An end-to-end source and FEC rate control based on this distributed feedback aggregation mechanism coupled with a video layered coding system is then described. The number of layers, their rate, and their levels of protection are adapted dynamically to aggregated feedbacks. The algorithms have been validated with the NS2 network simulator.

  16. A Survey of Linear Network Coding and Network Error Correction Code Constructions and Algorithms

    Directory of Open Access Journals (Sweden)

    Michele Sanna

    2011-01-01

    Full Text Available Network coding was introduced by Ahlswede et al. in a pioneering work in 2000. This paradigm encompasses coding and retransmission of messages at the intermediate nodes of the network. In contrast with traditional store-and-forward networking, network coding increases the throughput and the robustness of the transmission. Linear network coding is a practical implementation of this new paradigm covered by several research works that include rate characterization, error-protection coding, and construction of codes. Especially determining the coding characteristics has its importance in providing the premise for an efficient transmission. In this paper, we review the recent breakthroughs in linear network coding for acyclic networks with a survey of code constructions literature. Deterministic construction algorithms and randomized procedures are presented for traditional network coding and for network-control network coding.

  17. 3-D model-based frame interpolation for distributed video coding of static scenes.

    Science.gov (United States)

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.

  18. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptable reduced memory requirements, no additional CPU complexity, and avoiding jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.

  19. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Science.gov (United States)

    Parisot, Christophe; Antonini, Marc; Barlaud, Michel

    2003-12-01

    Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptable reduced memory requirements, no additional CPU complexity, and avoiding jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.

  20. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Full Text Available The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  1. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
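    As a toy illustration of the assignment problem (not the authors' ILP model), the sketch below exhaustively searches MCS assignments for two SVC layers under a time-resource budget. The layer rates, MCS efficiencies, and per-MCS user support sets are invented for the example.

```python
# Exhaustive search over MCS assignments for SVC layers, a stand-in for
# the ILP of the paper. All numbers below are illustrative assumptions.
from itertools import product

MCS = {'A': {'eff': 1.0, 'users': {0, 1, 2}},   # robust, low rate
       'B': {'eff': 2.0, 'users': {2}}}         # fast, needs a good channel
LAYER_RATE = [1.0, 1.0]   # base layer, enhancement layer (bits)
BUDGET = 1.5              # available time resource

def utility(assign):
    """Each user scores 1 per layer it can decode; SVC layers are nested,
    so a layer counts only if all lower layers are decodable too."""
    total = 0
    for u in {0, 1, 2}:
        for m in assign:
            if u not in MCS[m]['users']:
                break
            total += 1
    return total

feasible = (a for a in product(MCS, repeat=len(LAYER_RATE))
            if sum(r / MCS[m]['eff'] for r, m in zip(LAYER_RATE, a)) <= BUDGET)
best = max(feasible, key=utility)
print(best, utility(best))   # ('A', 'B') scores 4: all users get the base
                             # layer, the well-placed user gets both
```

    A real deployment would hand the same objective and budget constraint to an ILP solver; brute force is used here only because the example has four candidate assignments.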

  2. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video coding does not match that of conventional video coding yet, mainly due to the quality of side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes successively update the estimated noise residue for noise modeling and the side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate-Distortion (RD) performance of a state-of-the-art Wyner-Ziv video codec for the set of test sequences.

  3. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate for personal identification. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilations than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a fast new criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  4. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    OpenAIRE

    Parisot Christophe; Antonini Marc; Barlaud Michel

    2003-01-01

    Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during v...

  5. Genetic algorithms with permutation coding for multiple sequence alignment.

    Science.gov (United States)

    Ben Othman, Mohamed Tahar; Abdel-Azim, Gamil

    2013-08-01

    Multiple sequence alignment (MSA) is one of the most seriously researched topics in bioinformatics. It is known to be an NP-complete problem and is considered one of the most important and daunting tasks in computational biology. Accordingly, a wide range of heuristic algorithms have been proposed to find optimal alignments. Among these heuristic algorithms are genetic algorithms (GA). The GA has two major weaknesses: it is time consuming and can become trapped in local minima. One of the significant aspects of the GA process in MSA is to maximize the similarities between sequences by adding and shuffling the gaps of the Solution Coding (SC). Several ways of performing SC have been introduced; one of them is Permutation Coding (PC). We propose a hybrid algorithm based on genetic algorithms (GAs) with a PC and the 2-opt algorithm. The PC helps to code the MSA solution, which maximizes the gain of resources, reliability, and diversity of the GA. The use of the PC opens the area by applying all functions over permutations for MSA. Thus, we suggest an algorithm to calculate the scoring function for multiple alignments based on PC, which is used as the fitness function. The time complexity of the GA is reduced by using this algorithm. Our GA is implemented with different selection strategies and different crossovers. The probability of crossover and mutation is set as one strategy. Relevant patents have been probed in the topic.

  6. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Ramzan Naeem

    2007-01-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  7. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Naeem Ramzan

    2007-03-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  8. Standards-based approaches to 3D and multiview video coding

    Science.gov (United States)

    Sullivan, Gary J.

    2009-08-01

    The extension of video applications to enable 3D perception, which typically is considered to include a stereo viewing experience, is emerging as a mass market phenomenon, as is evident from the recent prevalence of 3D major cinema title releases. For high quality 3D video to become a commonplace user experience beyond limited cinema distribution, adoption of an interoperable coded 3D digital video format will be needed. Stereo-view video can also be studied as a special case of the more general technologies of multiview and "free-viewpoint" video systems. The history of standardization work on this topic is actually richer than people may typically realize. The ISO/IEC Moving Picture Experts Group (MPEG), in particular, has been developing interoperability standards to specify various such coding schemes since the advent of digital video as we know it. More recently, the ITU-T Visual Coding Experts Group (VCEG) has been involved as well in the Joint Video Team (JVT) work on development of 3D features for H.264/14496-10 Advanced Video Coding, including Multiview Video Coding (MVC) extensions. This paper surveys the prior, ongoing, and anticipated future standardization efforts on this subject to provide an overview and historical perspective on feasible approaches to 3D and multiview video coding.

  9. Scene-library-based video coding scheme exploiting long-term temporal correlation

    Science.gov (United States)

    Zuo, Xuguang; Yu, Lu; Yu, Hualong; Mao, Jue; Zhao, Yin

    2017-07-01

    In movies and TV shows, it is common that several scenes repeat alternately. These videos are characterized with the long-term temporal correlation, which can be exploited to improve video coding efficiency. However, in applications supporting random access (RA), a video is typically divided into a number of RA segments (RASs) by RA points (RAPs), and different RASs are coded independently. In such a way, the long-term temporal correlation among RASs with similar scenes cannot be used. We present a scene-library-based video coding scheme for the coding of videos with repeated scenes. First, a compact scene library is built by clustering similar scenes and extracting representative frames in encoding video. Then, the video is coded using a layered scene-library-based coding structure, in which the library frames serve as long-term reference frames. The scene library is not cleared by RAPs so that the long-term temporal correlation between RASs from similar scenes can be exploited. Furthermore, the RAP frames are coded as interframes by only referencing library frames so as to improve coding efficiency while maintaining RA property. Experimental results show that the coding scheme can achieve significant coding gain over state-of-the-art methods.

  10. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, it is shown in the results of simulation experiments that the proposed methods can simultaneously gain higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in the situation of low bitrate transmission.

  11. Emerging technologies for 3D video creation, coding, transmission and rendering

    CERN Document Server

    Dufaux, Frederic; Cagnazzo, Marco

    2013-01-01

    With the expectation of greatly enhanced user experience, 3D video is widely perceived as the next major advancement in video technology. In order to fulfil the expectation of enhanced user experience, 3D video calls for new technologies addressing efficient content creation, representation/coding, transmission and display. Emerging Technologies for 3D Video will deal with all aspects involved in 3D video systems and services, including content acquisition and creation, data representation and coding, transmission, view synthesis, rendering, display technologies, and human perception.

  12. Multiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter bitplane correlation to enhance the bitplane decoding performance. Experimental results...

  13. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    Science.gov (United States)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach of using depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that have been developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of coded video data by an average delta bit rate reduction of 15%, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, and this work resulted in even higher compression efficiency, bringing about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering the significant gains, the coding approach proposed in this paper can be beneficial for the development of new 3D video coding standards.

  14. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information, exploiting the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...

  15. Real coded genetic algorithm for fuzzy time series prediction

    Science.gov (United States)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    Genetic Algorithm (GA) forms a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (AI). Some variants of GA are binary GA, real GA, messy GA, micro GA, sawtooth GA, and differential evolution GA. This research article presents a real-coded GA for predicting enrollments of the University of Alabama. The enrollment data of the University of Alabama form a fuzzy time series. Here, fuzzy logic is used to predict enrollments of the University of Alabama, and a genetic algorithm optimizes the fuzzy intervals. Results are compared with other published work and found satisfactory, indicating that real-coded GAs are fast and accurate.
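    A minimal real-coded GA is sketched below under the assumption of arithmetic crossover, Gaussian mutation, and tournament selection (the article does not specify its operators). Chromosomes are vectors of floating-point genes; a toy quadratic objective stands in for the fuzzy-interval tuning problem.

```python
# Minimal real-coded genetic algorithm: floating-point chromosomes,
# arithmetic (blend) crossover, Gaussian mutation, tournament selection.
# The sphere function stands in for the article's fuzzy-interval objective.
import random

random.seed(0)

def fitness(x):                      # minimize the sum of squares
    return sum(v * v for v in x)

def tournament(pop):                 # best of a random sample of 3
    return min(random.sample(pop, 3), key=fitness)

def crossover(a, b):                 # arithmetic (blend) crossover
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(x, sigma=0.1, rate=0.2):  # per-gene Gaussian perturbation
    return [v + random.gauss(0, sigma) if random.random() < rate else v
            for v in x]

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(40)]
for _ in range(200):                 # generational loop with elitism
    elite = min(pop, key=fitness)
    pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                     for _ in range(len(pop) - 1)]
best = min(pop, key=fitness)
print(fitness(best))                 # small value near 0
```

    The contrast with a binary-coded GA is that no encoding/decoding of bit strings is needed: the operators act directly on the real-valued genes, which is the speed advantage the article reports.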

  16. Some algorithmic problems of plotting codes for unstructured grids

    Science.gov (United States)

    Loehner, Rainald; Parikh, Paresh; Gumbert, Clyde

    1989-01-01

    Some algorithmic problems encountered during the development of unstructured grid plotting codes are described. Chief among them are the interpolation of three-dimensional data on planes, the plotting of a three-dimensional surface with a constant value for a given unknown, and the calculation of particle and oil-flow paths. Some special features of the unstructured grid plotting code, FEPLOT3D, are also described.

  17. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    National Research Council Canada - National Science Library

    Yan Feng; Shengmei Luo; Yumin Tian; Shuo Deng; Haihong Zheng

    2014-01-01

    .... Then, the algorithms were implemented and tested using different videos with ground truth, such as baseline, dynamic background, camera jitter, and intermittent object motion and shadow scenarios...

  18. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Science.gov (United States)

    Saponara, Sergio; Denolf, Kristof; Lafruit, Gauthier; Blanch, Carolina; Bormans, Jan

    2004-12-01

    The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  19. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Saponara Sergio

    2004-01-01

    Full Text Available The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  20. Application of Enhanced Hadamard Error Correcting Code in Video Watermarking and Its Comparison to Reed-Solomon Code

    Directory of Open Access Journals (Sweden)

    Dziech Andrzej

    2017-01-01

    Full Text Available Error correcting codes play a very important role in video watermarking technology. Because of the very high compression rate (about 1:200 normally), watermarks can barely survive such massive attacks, despite very sophisticated embedding strategies; watermarking can only work with a sufficient error correcting method. In this paper, the authors compare the newly developed Enhanced Hadamard Error Correcting Code (EHC) with the well known Reed-Solomon code regarding its ability to preserve watermarks in the embedded video. The main idea of this new multidimensional Enhanced Hadamard Error Correcting Code is to map the 2D basis images into a collection of one-dimensional rows and to apply a 1D Hadamard decoding procedure on them. After this, the image is reassembled, and the 2D decoding procedure can be applied more efficiently. With this approach, it is possible to overcome the theoretical limit on error correcting capability of (d-1)/2 bits, where d is the Hamming distance. Even better results could be achieved by expanding the 2D EHC to 3D. To prove the efficiency and practicability of this new Enhanced Hadamard code, the method was applied to a video watermarking coding scheme. The video watermarking embedding procedure decomposes the initial video through a multi-level interframe wavelet transform. The low-pass filtered part of the video stream is used for embedding the watermarks, which are protected by the Enhanced Hadamard or Reed-Solomon correcting code, respectively. The experimental results show that the EHC performs much better than the RS code and seems to be very robust against strong MPEG compression.
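    The 1D Hadamard decoding step the authors build on can be sketched with a classical [8,3] Hadamard code: a 3-bit message selects a row of a Sylvester Hadamard matrix, and decoding correlates the received ±1 vector against all rows. This is standard Hadamard coding, not the enhanced 2D/3D variant of the paper.

```python
# Classical 1D Hadamard (Walsh) code: encode a message index as a row of
# the Sylvester Hadamard matrix; decode by maximum correlation. With H8
# rows at Hamming distance 4, a single flipped chip is always corrected.

def hadamard(n):                       # Sylvester construction, n a power of 2
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

H8 = hadamard(8)

def encode(msg):                       # msg in 0..7 -> one +/-1 row
    return list(H8[msg])

def decode(recv):                      # pick the row with maximal correlation
    scores = [sum(a * b for a, b in zip(recv, row)) for row in H8]
    return scores.index(max(scores))

word = encode(5)
word[2] = -word[2]                     # flip one chip (channel error)
print(decode(word))                    # -> 5, the error is corrected
```

    Because the rows are mutually orthogonal, the correct row correlates at 8 - 2·(errors) while every other row stays near zero, which is what makes the simple argmax decoder work.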

  1. A comparative study of scalable video coding schemes utilizing wavelet technology

    Science.gov (United States)

    Schelkens, Peter; Andreopoulos, Yiannis; Barbarien, Joeri; Clerckx, Tom; Verdicchio, Fabio; Munteanu, Adrian; van der Schaar, Mihaela

    2004-02-01

    Video transmission over variable-bandwidth networks requires instantaneous bit-rate adaptation at the server site to provide an acceptable decoding quality. For this purpose, recent developments in video coding aim at providing a fully embedded bit-stream with seamless adaptation capabilities in bit-rate, frame-rate and resolution. A new promising technology in this context is wavelet-based video coding. Wavelets have already demonstrated their potential for quality and resolution scalability in still-image coding. This led to the investigation of various schemes for the compression of video, exploiting similar principles to generate embedded bit-streams. In this paper we present scalable wavelet-based video-coding technology with competitive rate-distortion behavior compared to standardized non-scalable technology.

  2. Decoding Reed–Solomon Codes Using Euclid's Algorithm

    Indian Academy of Sciences (India)

    Decoding Reed–Solomon Codes Using Euclid's Algorithm. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 12, Issue 4. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.
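    As a concrete companion to the article, here is a toy single-error-correcting Reed–Solomon decoder over GF(7) whose key equation is solved by the extended Euclidean (Sugiyama) algorithm; the field size, primitive element, and code parameters are chosen purely for illustration.

```python
# Toy Reed-Solomon decoder over GF(7): syndromes, key equation via the
# extended Euclidean algorithm, Chien search, Forney's formula.
# n = 6, two parity symbols, corrects one symbol error.

P, ALPHA = 7, 3                     # prime field GF(7), primitive element

def trim(a):                        # drop leading zero coefficients
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):                     # polynomial product, low-to-high coeffs
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def psub(a, b):
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def pdivmod(a, b):                  # long division over GF(P)
    a, inv = a[:], pow(b[-1], P - 2, P)      # Fermat inverse of lead coeff
    q = [0] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        d, c = len(a) - len(b), a[-1] * inv % P
        q[d] = c
        for i, y in enumerate(b):
            a[i + d] = (a[i + d] - c * y) % P
        trim(a)
    return trim(q), trim(a)

def peval(poly, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

def decode(recv, t=1):
    synd = trim([peval(recv, pow(ALPHA, j, P)) for j in range(1, 2 * t + 1)])
    if not any(synd):
        return recv[:]
    r0, r1 = [0] * (2 * t) + [1], synd      # Euclid on x^{2t} and S(x)
    v0, v1 = [0], [1]
    while len(r1) - 1 >= t:                 # stop when deg(remainder) < t
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        v0, v1 = v1, psub(v0, pmul(q, v1))
    s = pow(v1[0], P - 2, P)                # scale so locator(0) = 1
    loc = [c * s % P for c in v1]           # error locator Lambda(x)
    omg = [c * s % P for c in r1]           # error evaluator Omega(x)
    dloc = [c * k % P for k, c in enumerate(loc)][1:]   # Lambda'(x)
    out = recv[:]
    for i in range(len(recv)):              # Chien search over positions
        xi = pow(pow(ALPHA, i, P), P - 2, P)
        if peval(loc, xi) == 0:             # Forney's formula for the value
            e = -peval(omg, xi) * pow(peval(dloc, xi), P - 2, P) % P
            out[i] = (out[i] - e) % P
    return out

code = [6, 2, 1, 0, 0, 0]                   # codeword: g(x) = (x-3)(x-2) mod 7
recv = code[:]; recv[3] = (recv[3] + 5) % P # corrupt one symbol
print(decode(recv) == code)                 # -> True
```

    The while-loop is the article's point: the same iteration that computes a polynomial gcd is simply halted early, when the remainder degree drops below t, and at that moment the Bézout coefficient is the error locator.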

  3. A New Algorithm of Shape Boundaries Based on Chain Coding

    Directory of Open Access Journals (Sweden)

    Zhao Xin

    2017-01-01

    Full Text Available A new method to obtain connected components in binary images is presented. The method uses a deterministic finite automaton (DFA) to obtain the chain code and label the component boundary. It is theoretically shown that the algorithm improves image encoding efficiency, approaching the lowest achievable time consumption.
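    Freeman chain coding itself (independent of the paper's DFA-based labeling, which is not reproduced here) can be demonstrated with a round trip between a boundary point list and its 8-direction chain code:

```python
# Freeman 8-direction chain code: encode a closed boundary as direction
# symbols between consecutive pixels, then decode it back. Coordinates
# are (x, y) with y growing downward, as in image rasters.

DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1),   # 0..3: E, NE, N, NW
        (-1, 0), (-1, 1), (0, 1), (1, 1)]     # 4..7: W, SW, S, SE

def encode(points):
    """Chain code of a closed contour given as an ordered pixel list."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

def decode(start, code):
    """Rebuild the contour from its start pixel and chain code."""
    pts, (x, y) = [start], start
    for c in code[:-1]:                 # the last step returns to the start
        dx, dy = DIRS[c]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]     # clockwise unit square
cc = encode(square)
print(cc)                                      # -> [0, 6, 4, 2]
assert decode((0, 0), cc) == square
```

    Storing three bits per boundary step instead of full coordinates is the compression the paper's encoder builds on.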

  4. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is increasingly necessary in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…
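    One common building block for such detectors, a token n-gram fingerprint compared with Jaccard similarity, can be sketched as follows; the paper's own algorithm is not specified here, so this is a generic illustration with a crude hypothetical lexer.

```python
# Generic source-code similarity via token n-grams and Jaccard overlap.
# Renamed identifiers are collapsed to a common token, so trivially
# modified copies still fingerprint alike. Illustrative only.
import re

def tokens(src):
    """Crude lexer: keep keywords/operators, collapse identifiers/numbers."""
    KEYWORDS = {'if', 'else', 'for', 'while', 'return', 'int', 'def'}
    out = []
    for tok in re.findall(r'[A-Za-z_]\w*|\d+|\S', src):
        if tok in KEYWORDS:
            out.append(tok)
        elif tok[0].isalpha() or tok[0] == '_':
            out.append('ID')                  # any identifier
        elif tok[0].isdigit():
            out.append('NUM')                 # any numeric literal
        else:
            out.append(tok)
    return out

def ngrams(seq, n=3):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def similarity(a, b, n=3):
    ga, gb = ngrams(tokens(a), n), ngrams(tokens(b), n)
    return len(ga & gb) / len(ga | gb)

original = "int sum = 0; for (int i = 0; i < n; i++) sum += a[i];"
renamed  = "int acc = 0; for (int k = 0; k < m; k++) acc += b[k];"
print(similarity(original, renamed))          # -> 1.0: identical structure
```

    Because every identifier is normalized to `ID`, the rename-only plagiarism trick described above leaves the fingerprint unchanged.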

  5. Source mask optimization using real-coded genetic algorithms

    Science.gov (United States)

    Yang, Chaoxing; Wang, Xiangzhao; Li, Sikun; Erdmann, Andreas

    2013-04-01

    Source mask optimization (SMO) is considered to be one of the technologies to push conventional 193 nm lithography to its ultimate limits. In comparison with other SMO methods that use an inverse problem formulation, SMO based on a genetic algorithm (GA) requires very little knowledge of the process and has the advantage of flexible problem formulation. Recent publications on SMO using a GA employ a binary-coded GA. In general, the performance of a GA depends not only on the merit or fitness function, but also on the parameters, operators, and their algorithmic implementation. In this paper, we propose an SMO method using a real-coded GA, where the source and mask solutions are represented by floating-point strings instead of bit strings. Beyond that, the selection, crossover, and mutation operators are replaced by corresponding floating-point versions. Both binary-coded and real-coded genetic algorithms were implemented in two versions of SMO and compared in numerical experiments, where the target patterns are staggered contact holes and a logic pattern with critical dimensions of 100 nm, respectively. The results demonstrate the performance improvement of the real-coded GA in comparison to the binary-coded version. Specifically, these improvements can be seen in better convergence behavior. For example, the numerical experiments for the logic pattern showed that the average number of generations to converge to a proper fitness of 6.0 using the real-coded method is 61.8% (100 generations) less than that using the binary-coded method.

  6. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method using the real-coded quantum-inspired genetic algorithm (RQGA) to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm easily fall into local optima during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. Therefore, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to a solution conforming to the constraint conditions.

  7. A BitTorrent-Based Dynamic Bandwidth Adaptation Algorithm for Video Streaming

    Science.gov (United States)

    Hsu, Tz-Heng; Liang, You-Sheng; Chiang, Meng-Shu

    In this paper, we propose a BitTorrent-based dynamic bandwidth adaptation algorithm for video streaming. Two mechanisms to improve the original BitTorrent protocol are proposed: (1) the decoding order frame first (DOFF) frame selection algorithm and (2) the rarest I frame first (RIFF) frame selection algorithm. With the proposed algorithms, a peer can periodically check the number of downloaded frames in the buffer and then adaptively allocate the available bandwidth for video streaming. As a result, users enjoy a smooth video playout experience.
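    The rarest-I-frame-first idea can be illustrated with a small selection routine; the frame and peer-availability data below are invented, and real protocol details (piece maps, choking, bandwidth accounting) are omitted.

```python
# Sketch of a "rarest I frame first" piece picker: prefer I frames,
# rarest across peers first; otherwise fall back to decoding order.
# Frame and availability data are invented for illustration.

frames = [  # (index, frame type, copies available among connected peers)
    (0, 'I', 3), (1, 'P', 5), (2, 'B', 6),
    (3, 'I', 1), (4, 'P', 4), (5, 'B', 6),
]

def next_frame(frames, downloaded):
    candidates = [f for f in frames if f[0] not in downloaded]
    i_frames = [f for f in candidates if f[1] == 'I']
    if i_frames:                          # rarest I frame first (RIFF)
        return min(i_frames, key=lambda f: f[2])[0]
    return min(candidates)[0]             # decoding order first (DOFF)

order, have = [], set()
while len(have) < len(frames):
    nxt = next_frame(frames, have)
    order.append(nxt)
    have.add(nxt)
print(order)                              # -> [3, 0, 1, 2, 4, 5]
```

    Fetching scarce I frames early matters because P and B frames are undecodable without them, so losing a rare I frame stalls playback for every dependent frame.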

  8. The algorithm of malicious code detection based on data mining

    Science.gov (United States)

    Yang, Yubo; Zhao, Yang; Liu, Xiabi

    2017-08-01

    Traditional malicious code detection technology has low accuracy and insufficient detection capability for new variants. Data-mining-based malicious code detection, in turn, suffers from insufficiently accurate indicators and relatively low classification-detection efficiency. This paper proposes an N-gram-based information gain ratio indicator for signature selection; this indicator accurately reflects the detection weight of each signature, and a C4.5 decision tree is used to improve the classification-detection algorithm.
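The gain-ratio indicator mentioned above (the C4.5 splitting criterion applied to n-gram presence) can be illustrated on toy data. The feature representation and helper names below are assumptions, not the paper's code:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def gain_ratio(samples, labels, ngram):
    """Information gain ratio of one n-gram feature: the drop in label
    entropy from splitting on presence of `ngram`, normalised by the
    entropy of the split itself (as in C4.5)."""
    has = [y for s, y in zip(samples, labels) if ngram in s]
    not_has = [y for s, y in zip(samples, labels) if ngram not in s]
    n = len(labels)
    cond = sum(len(part) / n * entropy(part) for part in (has, not_has) if part)
    gain = entropy(labels) - cond
    split_info = entropy(['in'] * len(has) + ['out'] * len(not_has))
    return gain / split_info if split_info > 0 else 0.0

# Toy "binaries" as n-gram strings, labelled malicious/benign.
samples = ['ab cd ef', 'ab cd', 'zz yy', 'zz xw']
labels = ['mal', 'mal', 'ben', 'ben']
score = gain_ratio(samples, labels, 'ab')   # 'ab' perfectly separates classes
```

Ranking candidate n-grams by this score and keeping the top-scoring ones is the usual way such an indicator drives signature selection before decision-tree training.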

  9. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...
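For context, the classical Blahut-Arimoto iteration for the ordinary rate-distortion function (without the action-dependent extension developed in the paper) can be sketched as follows; the parameterisation by a Lagrange slope `beta` is a standard choice, not necessarily the paper's:

```python
import math

def blahut_arimoto(p, dist, beta, iters=200):
    """Blahut-Arimoto iteration for the rate-distortion function at
    Lagrange slope `beta`.  `p` is the source distribution and
    `dist[x][y]` the distortion between source letter x and
    reproduction letter y.  Returns (rate_bits, expected_distortion)."""
    nx, ny = len(p), len(dist[0])
    q = [1.0 / ny] * ny                       # reproduction marginal
    for _ in range(iters):
        # conditional that minimises the Lagrangian for the current q
        cond = []
        for x in range(nx):
            w = [q[y] * math.exp(-beta * dist[x][y]) for y in range(ny)]
            z = sum(w)
            cond.append([wy / z for wy in w])
        # re-estimate the reproduction marginal
        q = [sum(p[x] * cond[x][y] for x in range(nx)) for y in range(ny)]
    rate = sum(p[x] * cond[x][y] * math.log2(cond[x][y] / q[y])
               for x in range(nx) for y in range(ny) if cond[x][y] > 0)
    distortion = sum(p[x] * cond[x][y] * dist[x][y]
                     for x in range(nx) for y in range(ny))
    return rate, distortion

# Binary symmetric source with Hamming distortion, where R(D) = 1 - h(D).
R, D = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], beta=2.0)
```

Sweeping `beta` traces out the whole rate-distortion curve; the action-dependent variant in the paper adds a cost term and side-information structure on top of this alternating-minimisation skeleton.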

  10. Optimal Merging Algorithms for Lossless Codes with Generalized Criteria

    CERN Document Server

    Charalambous, Themistoklis; Rezaei, Farzad

    2011-01-01

    This paper presents lossless prefix codes optimized with respect to a pay-off criterion consisting of a convex combination of maximum codeword length and average codeword length. The optimal codeword lengths obtained are based on a new coding algorithm which transforms the initial source probability vector into a new probability vector according to a merging rule. The coding algorithm is equivalent to a partition of the source alphabet into disjoint sets on which a new transformed probability vector is defined as a function of the initial source probability vector and a scalar parameter. The pay-off criterion considered encompasses a trade-off between maximum and average codeword length; it is related to a pay-off criterion consisting of a convex combination of average codeword length and average of an exponential function of the codeword length, and to an average codeword length pay-off criterion subject to a limited length constraint. A special case of the first related pay-off is connected to coding proble...

  11. Resource allocation for error resilient video coding over AWGN using optimization approach.

    Science.gov (United States)

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and physical layers, with automatic repeat request and a rate-compatible punctured convolutional code over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.

  12. Depth-based Multi-View 3D Video Coding

    DEFF Research Database (Denmark)

    Zamarin, Marco

    on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge-preservation. Another solution proposes a new intra coding mode targeted...... to depth blocks featuring arbitrarily-shaped edges. Edge information is encoded exploiting previously coded edge blocks. Integrated in H.264/AVC, the proposed mode allows significant bit rate savings compared with a number of state-of-the-art depth codecs. View synthesis performances are also improved......, both in terms of objective and visual evaluations. Depth coding based on standard H.264/AVC is explored for multi-view plus depth image coding. A single depth map is used to disparity-compensate multiple views and allow more efficient coding than H.264 MVC at low bit rates. Lossless coding of depth...

  13. MPEG-2 video coding with an adaptive selection of scanning path and picture structure

    Science.gov (United States)

    Zhou, Minhua; De Lameillieure, Jan L.; Schaefer, Ralf

    1996-09-01

    In MPEG-2 video coding, an interlaced frame can be encoded either as a frame-picture or as two field-pictures. The selection of picture structure (frame/field) has a strong impact on picture quality. In order to achieve the best possible picture quality, an adaptive scheme is proposed in this paper to select the optimal picture structure on a frame-by-frame basis. The selection of picture structure is performed in connection with that of the optimal scanning path. First, the scanning path (zig-zag scan/alternate scan) is chosen based on a post-analysis of DCT coefficients. Secondly, the optimal picture structure is selected for the next frame according to the chosen scanning path, i.e. a zig-zag scan corresponds to frame picture structure, while an alternate scan corresponds to field picture structure. Furthermore, the TM5 buffer control algorithm is extended to support coding with adaptive frame/field picture structure. Finally, simulation results verify the proposed adaptive scheme.
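One plausible reading of the DCT-coefficient post-analysis is an energy comparison between high vertical and high horizontal frequencies: field-like (interlaced) motion concentrates energy in high vertical frequencies, which the alternate scan visits earlier than the zig-zag scan. The regions and threshold below are assumptions for illustration, not the paper's exact criterion:

```python
def choose_scan(block):
    """Pick a scanning path for one 8x8 DCT coefficient block by
    comparing high-vertical-frequency energy (rows 4-7, cols 0-3)
    against high-horizontal-frequency energy (rows 0-3, cols 4-7).
    A real encoder would aggregate this decision over a whole frame."""
    vertical = sum(abs(block[u][v]) for u in range(4, 8) for v in range(4))
    horizontal = sum(abs(block[u][v]) for u in range(4) for v in range(4, 8))
    return 'alternate' if vertical > horizontal else 'zigzag'

# A block whose energy sits in high vertical frequencies (field-like).
field_like = [[0] * 8 for _ in range(8)]
for u in range(4, 8):
    for v in range(4):
        field_like[u][v] = 9
scan = choose_scan(field_like)   # 'alternate'
```

Under the scheme described in the abstract, the chosen scan then also fixes the next frame's picture structure (zig-zag → frame picture, alternate → field pictures).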

  14. Real-time transmission of digital video using variable-length coding

    Science.gov (United States)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
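A compact sketch of the combination described above: differential coding of a smooth signal followed by Huffman coding of the differences. This is toy data and a generic table construction, not the hardware implementation the paper describes:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix code: frequent symbols get short codewords."""
    heap = [[f, i, {s: ''}] for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                    # unique id so dicts are never compared
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

# Differences along a smooth scan line cluster near zero, which is what
# makes variable-length coding of the differential signal effective.
line = [100, 101, 101, 102, 101, 101, 100, 100]
diffs = [line[0]] + [b - a for a, b in zip(line, line[1:])]
code = huffman_code(Counter(diffs))
bitstream = ''.join(code[d] for d in diffs)
```

Because the code is prefix-free, the decoder can parse the bitstream unambiguously; the rate-conversion and synchronization issues the paper addresses arise when this variable-rate stream must cross a constant-rate channel.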

  15. Video coding for 3D-HEVC based on saliency information

    Science.gov (United States)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

    As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new-generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions: salient area, middle area and non-salient area. Afterwards, different quantization parameters are assigned to the different regions to conduct low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to 38% encoding time reduction without subjective quality loss in compression or rendering.

  16. Single-layer HDR video coding with SDR backward compatibility

    Science.gov (United States)

    Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.

    2016-09-01

    The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase of picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion, generating a backward-compatible SDR video with dynamic side metadata. The resulting SDR video is then compressed, distributed and decoded using standard-compliant decoders (e.g. HEVC Main 10 compliant). The decoded SDR video can be directly rendered on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic. Compression performance and SDR quality are shown to be solidly improved compared to the non-backward-compatible and backward-compatible approaches, respectively using the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) Opto-Electronic Transfer Functions (OETF).

  17. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used for applications such as video-telephony, video-conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works conceal manually corrupted video frames given as input. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the corrupted frames are evaluated with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures than the Block Matching algorithm, with 48% higher PSNR and 94% higher SSIM.
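The PSNR figure used for objective comparison above is computed as follows; this is a minimal sketch for flat lists of 8-bit pixels (SSIM, the second metric, is omitted here):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size 8-bit
    frames given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float('inf')       # identical frames
    return 10.0 * math.log10(peak * peak / mse)

ref = [50, 60, 70, 80]
out = [52, 58, 70, 81]            # e.g. a concealed block
quality = psnr(ref, out)
```

In an EC study, `ref` would be the original frame and `out` the concealed one; a higher PSNR means the concealment reconstructed the lost region more faithfully.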

  18. Compact Hilbert Curve Index Algorithm Based on Gray Code

    Directory of Open Access Journals (Sweden)

    CAO Xuefeng

    2016-12-01

    Full Text Available The Hilbert curve has the best clustering among the various kinds of space-filling curves and has been an important tool in discrete global grid spatial index design. But there is much redundancy in the standard Hilbert curve index when the data set has large differences between dimensions. In this paper, the construction features of the Hilbert curve are analyzed based on Gray code, and a compact Hilbert curve index algorithm is put forward in which the redundancy problem is avoided while the Hilbert curve's clustering is preserved. Finally, experimental results show that the compact Hilbert curve index outperforms the standard Hilbert index: their computational complexity is nearly equivalent, but tests on real data sets show that coding time and storage space decrease by 40%, and sorting speed is improved by a factor of nearly 4.3.
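The Gray-code building block that Hilbert curve construction relies on can be sketched as below (the Hilbert-specific rotation/reflection logic and the compaction step are not shown):

```python
def to_gray(n):
    """Binary-reflected Gray code of n: consecutive values differ in
    exactly one bit, the property Hilbert curve construction exploits."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by propagating the XOR prefix."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

path = [to_gray(i) for i in range(8)]   # [0, 1, 3, 2, 6, 7, 5, 4]
```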

  19. An Algorithm of Extracting I-Frame in Compressed Video

    Directory of Open Access Journals (Sweden)

    Zhu Yaling

    2015-01-01

    Full Text Available MPEG video data includes three types of frames: I-frames, P-frames and B-frames. The I-frame records the main information of the video data, while the P-frame and the B-frame are motion-compensated predictions relative to it. This paper presents an approach which analyzes the MPEG video stream in the compressed domain and finds the key frames of the MPEG video stream by extracting the I-frames. Experiments indicate that this method can be carried out automatically on compressed MPEG video, laying a foundation for further video processing.
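A minimal sketch of compressed-domain I-frame extraction by scanning for MPEG picture headers. It assumes an MPEG-1/2 video elementary stream and ignores slice-level corner cases; the paper's actual parser is not shown:

```python
def iframe_offsets(data):
    """Return byte offsets of I-pictures in an MPEG-1/2 video elementary
    stream by scanning for picture headers (start code 00 00 01 00).
    The 3-bit picture_coding_type (1 = I, 2 = P, 3 = B) follows the
    10-bit temporal_reference, so it sits in bits 2-4 of the second
    header payload byte."""
    offsets = []
    i = 0
    while True:
        i = data.find(b'\x00\x00\x01\x00', i)
        if i < 0 or i + 5 >= len(data):
            break
        if (data[i + 5] >> 3) & 0x07 == 1:   # picture_coding_type == I
            offsets.append(i)
        i += 4
    return offsets

# Two picture headers: an I-picture followed by a P-picture.
stream = (b'\x00\x00\x01\x00\x00\x08' + b'\xff' * 4 +
          b'\x00\x00\x01\x00\x00\x10' + b'\xff' * 4)
keys = iframe_offsets(stream)   # -> [0]
```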

  20. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    Science.gov (United States)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

    This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It attempts to take advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, by merely modifying the scan order of the intra frame scalable coding framework of H.264, we can obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus, image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.

  1. Interactive Consistency Algorithms Based on Voting and Error-Correcting Codes

    NARCIS (Netherlands)

    Krol, Th.

    1995-01-01

    This paper presents a new class of synchronous deterministic non-authenticated algorithms for reaching interactive consistency (Byzantine agreement). The algorithms are based on voting and error-correcting codes and require considerably less data communication than the original algorithm, whereas

  2. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

    Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself; hence, this new approach reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.

  3. CowLog – Cross-Platform Application for Coding Behaviours from Video

    Directory of Open Access Journals (Sweden)

    Matti Pastell

    2016-04-01

    Full Text Available CowLog is a cross-platform application to code behaviours from video recordings for use in behavioural research. The software has been used in several studies, e.g. to study sleep in dairy calves, emotions in goats, and mind wandering related to computer use during lectures. CowLog 3 is implemented in JavaScript and HTML using the Electron framework. The framework allows the development of packaged cross-platform applications using features from a web browser (Chromium) as well as server-side JavaScript from Node.js. The program supports using multiple videos simultaneously and both HTML5 and VLC video players. CowLog can be used for any project that requires coding the time of events from digital video. It is released under the GNU GPL v2, making it possible for users to modify the application for their own needs. The software is available through its website http://cowlog.org.

  4. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    We demonstrate that, by jointly optimizing video coding and radio-over-fibre transmission, we extend the reach of 60-GHz wireless distribution of high-quality high-definition video satisfying low complexity and low delay constraints, while preserving superb video quality.

  5. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Andrey Norkin

    2007-02-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented as a coarse sequence approximation (the shaper), included in both descriptions, and a residual sequence (the details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by a 3D-DCT or a hybrid LOT+DCT 3D transform. The coding scheme is targeted at mobile devices: it has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

  6. Transform Domain Unidirectional Distributed Video Coding Using Dynamic Parity Allocation

    Science.gov (United States)

    Badem, Murat B.; Fernando, Anil; Weerakkody, Rajitha; Arachchi, Hemantha K.; Kondoz, Ahmet M.

    DVC-based video codecs proposed in the literature generally include a reverse (feedback) channel between the encoder and the decoder. This channel is used to communicate dynamic parity bit request messages from the decoder to the encoder, resulting in an optimal dynamic variable rate control implementation. However, this dynamic feedback mechanism is a practical hindrance in a number of consumer electronics applications. In this paper we propose a novel transform domain Unidirectional Distributed Video Codec (UDVC) without a feedback channel. First, all Wyner-Ziv frames are divided into rectangular macroblocks. A simple metric is used for each block to represent the correlation between the corresponding blocks in the adjacent key frame and the Wyner-Ziv frame. Based on the value of this metric, parity is allocated dynamically for each block. These parities are either stored for offline processing or transmitted to the DVC decoder for online processing. Simulation results show that the proposed codec outperforms existing UDVC solutions by a significant margin.

  7. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

    Full Text Available The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast, suitable for real-time video sequences, and invariant to large scale and pose variations. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 second on a 3.2 GHz P4 machine.

  8. SBASIC video coding and its 3D-DCT extension for MPEG-4 multimedia

    Science.gov (United States)

    Puri, Atul; Schmidt, Robert L.; Haskell, Barry G.

    1996-02-01

    Due to the need to interchange video data in a seamless and cost-effective manner, interoperability between applications, terminals and services has become increasingly important. The ISO Moving Picture Experts Group (MPEG) has developed the MPEG-1 and MPEG-2 audio-visual coding standards to meet these challenges; these standards allow a range of applications at bitrates from 1 Mbit/s to 100 Mbit/s. However, in the meantime, a new breed of applications has arisen which demands higher compression, more interactivity and increased error resilience. These applications are expected to be addressed by the next-phase standard, called MPEG-4, which is currently in progress. We discuss the various functionalities expected to be offered by the MPEG-4 standard along with the development plan and the framework used for evaluation of video coding proposals in the recent first evaluation tests. Having clarified the requirements, functionalities and the development process of MPEG-4, we propose a generalized approach for video coding referred to as adaptive scalable interframe coding (ASIC) for MPEG-4. Using this generalized approach we develop a video coding scheme suitable for MPEG-4 based multimedia applications in the bitrate range of 320 kbit/s to 1024 kbit/s. The proposed scheme is referred to as source and bandwidth adaptive scalable interframe coding (SBASIC) and builds not only on the proven framework of motion-compensated DCT coding and scalability but also introduces several new concepts. The SNR and MPEG-4 subjective evaluation results are presented to show the good performance achieved by SBASIC. Next, the extension of SBASIC by motion-compensated 3D-DCT coding is discussed. It is envisaged that this extension, when complete, will further improve the coding efficiency of SBASIC.

  9. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2010-01-01

    Full Text Available We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies from human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of the extracted ROIs indicate that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over 20∼30% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by 0.46∼0.61 dB at the cost of imperceptible image quality degradation of the background.

  11. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    Science.gov (United States)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
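The emulation-prevention-byte removal that the BsPU accelerates can be sketched in software as a straightforward byte-wise pass (this is the conventional approach the paper improves upon, not its delay-free method): inside an H.264 NAL unit the encoder inserts 0x03 after every 0x00 0x00 pair, so the decoder must drop each 0x03 that follows two zero bytes before entropy decoding.

```python
def remove_epb(nal_payload):
    """Strip H.264 emulation prevention bytes from a NAL unit payload:
    drop every 0x03 that immediately follows two 0x00 bytes."""
    out = bytearray()
    zeros = 0
    for b in nal_payload:
        if zeros >= 2 and b == 0x03:
            zeros = 0          # the 0x03 also terminates the zero run
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

payload = bytes([0x00, 0x00, 0x03, 0x01, 0xAB, 0x00, 0x00, 0x03, 0x00])
clean = remove_epb(payload)    # -> 00 00 01 AB 00 00 00
```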

  13. Fast mode decision for the H.264/AVC video coding standard based on frequency domain motion estimation

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-07-01

    The H.264 video coding standard achieves high performance compression and image quality at the expense of increased encoding complexity. Consequently, several fast mode decision and motion estimation techniques have been developed to reduce the computational cost. These approaches successfully reduce the computational time by reducing the image quality and/or increasing the bitrate. In this paper we propose a novel fast mode decision and motion estimation technique. The algorithm utilizes preprocessing frequency domain motion estimation in order to accurately predict the best mode and the search range. Experimental results show that the proposed algorithm significantly reduces the motion estimation time by up to 97%, while maintaining similar rate distortion performance when compared to the Joint Model software.

  14. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    to have the various views available simultaneously. However, in multiview DVC (M-DVC), the decoder can still exploit the redundancy between views, avoiding the need for inter-camera communication. The key element of every DVC decoder is the side information (SI), which can be generated by leveraging intra......-view or inter-view redundancy for multiview video data. In this paper, a novel learning-based fusion technique is proposed, which is able to robustly fuse an inter-view SI and an intra-view (temporal) SI. An inter-view SI generation method capable of identifying occluded areas is proposed and is coupled...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....

  15. Bidirectional Fano Algorithm for Lattice Coded MIMO Channels

    KAUST Repository

    Al-Quwaiee, Hessa

    2013-05-08

    Recently, lattices - a mathematical representation of infinite discrete points in Euclidean space - have become an effective way to describe and analyze communication systems, especially systems that can be modeled as a linear Gaussian vector channel. Channel codes based on lattices are preferred due to three facts: lattice codes have a simple structure, they can achieve the limits of the channel, and they can be decoded efficiently using lattice decoders, which can be considered as the Closest Lattice Point Search (CLPS). Since lattice codes were introduced to the Multiple Input Multiple Output (MIMO) channel, the Sphere Decoder (SD) has been an efficient way to implement lattice decoders. The sphere decoder offers optimal performance at the expense of high decoding complexity, especially for low signal-to-noise ratios (SNR) and for high-dimensional systems. On the other hand, linear and non-linear receivers, Minimum Mean Square Error (MMSE), and MMSE Decision-Feedback Equalization (DFE), provide the lowest decoding complexity but unfortunately with poor performance. Several studies have been conducted in recent years to address the problem of designing low-complexity decoders for the MIMO channel that can achieve near-optimal performance. It was found that sequential decoders using backward tree search can bridge the gap between SD and MMSE. The sequential decoder provides an interesting performance-complexity trade-off using a bias term. Yet, the sequential decoder still suffers from high complexity for mid-to-high SNR values. In this work, we propose a new algorithm for a Bidirectional Fano sequential Decoder (BFD) in order to reduce the mid-to-high SNR complexity. Our algorithm consists of first constructing a unidirectional sequential decoder based on forward search using the QL decomposition. After that, BFD incorporates two searches, forward and backward, working simultaneously until they merge and find the closest lattice point to the

  16. BioCode: two biologically compatible Algorithms for embedding data in non-coding and coding regions of DNA.

    Science.gov (United States)

    Haughton, David; Balado, Félix

    2013-04-09

    In recent times, the application of deoxyribonucleic acid (DNA) has diversified with the emergence of fields such as DNA computing and DNA data embedding. DNA data embedding, also known as DNA watermarking or DNA steganography, aims to develop robust algorithms for encoding non-genetic information in DNA. Inherently DNA is a digital medium whereby the nucleotide bases act as digital symbols, a fact which underpins all bioinformatics techniques, and which also makes trivial information encoding using DNA straightforward. However, the situation is more complex in methods which aim at embedding information in the genomes of living organisms. DNA is susceptible to mutations, which act as a noisy channel from the point of view of information encoded using DNA. This means that the DNA data embedding field is closely related to digital communications. Moreover it is a particularly unique digital communications area, because important biological constraints must be observed by all methods. Many DNA data embedding algorithms have been presented to date, all of which operate in one of two regions: non-coding DNA (ncDNA) or protein-coding DNA (pcDNA). This paper proposes two novel DNA data embedding algorithms jointly called BioCode, which operate in ncDNA and pcDNA, respectively, and which comply fully with stricter biological restrictions. Existing methods comply with some elementary biological constraints, such as preserving protein translation in pcDNA. However there exist further biological restrictions which no DNA data embedding methods to date account for. Observing these constraints is key to increasing the biocompatibility and in turn, the robustness of information encoded in DNA. The algorithms encode information in near optimal ways from a coding point of view, as we demonstrate by means of theoretical and empirical (in silico) analyses. Also, they are shown to encode information in a robust way, such that mutations have isolated effects. Furthermore, the
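    The abstract notes that, absent biological constraints, encoding arbitrary bits in DNA is straightforward since each of the four nucleotide bases carries two bits. A minimal sketch of that baseline mapping (the bit-to-base convention is an illustrative assumption, not BioCode itself, which additionally enforces biological restrictions):

```python
# Baseline 2-bits-per-base mapping (an illustrative convention, not BioCode's)
BASE_FOR = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR = {base: bits for bits, base in BASE_FOR.items()}

def bits_to_dna(bits: str) -> str:
    assert len(bits) % 2 == 0, "pad the bitstring to an even length first"
    return "".join(BASE_FOR[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq: str) -> str:
    return "".join(BITS_FOR[base] for base in seq)

message = "0100100001101001"   # ASCII "Hi"
strand = bits_to_dna(message)
print(strand)                  # CAGACGGC
assert dna_to_bits(strand) == message
```

A single substitution mutation in the strand corrupts exactly one 2-bit symbol, which is the "noisy channel" view the abstract describes.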

  17. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper...... proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion...... re-estimation (MORE) are integrated in the SING TDWZ codec, which uses side information and noise learning. For Wyner-Ziv frames using GOP size 2, the MORE codec significantly improves the TDWZ coding efficiency with an average (Bjøntegaard) PSNR improvement of 2.5 dB and up to 6 dB improvement...

  18. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Science.gov (United States)

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to lots of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references to develop new BS algorithm for remote scene IR video sequence, and some of them are not only limited to remote scene or IR video sequence but also generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
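    One common pixel-wise criterion for scoring a BS algorithm against ground-truth foreground masks is the F-measure. A minimal sketch (the dataset's own evaluation protocol may differ in detail):

```python
import numpy as np

def f_measure(gt_mask: np.ndarray, det_mask: np.ndarray) -> float:
    """Pixel-wise F-measure between ground-truth and detected foreground masks.

    Both masks are boolean arrays of the same shape (True = foreground).
    """
    tp = np.logical_and(gt_mask, det_mask).sum()
    fp = np.logical_and(~gt_mask, det_mask).sum()
    fn = np.logical_and(gt_mask, ~det_mask).sum()
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gt = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
det = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(f_measure(gt, det), 3))  # 0.667
```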

  19. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Directory of Open Access Journals (Sweden)

    Guangle Yao

    2017-08-01

    Full Text Available Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to lots of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references to develop new BS algorithm for remote scene IR video sequence, and some of them are not only limited to remote scene or IR video sequence but also generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR.

  20. Region-of-interest based rate control for UAV video coding

    Science.gov (United States)

    Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying

    2016-05-01

    To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAV) with low bandwidth, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to an encoder based on the latest high efficiency video coding (HEVC) standard to generate an ROI map. Then, by using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level, to avoid inaccurate bit allocation caused by camera movement. Finally, a more robust R-λ model is used to calculate the quantization parameter (QP) for each LCU. The experimental results show that the proposed RC method achieves a lower bitrate error and a higher quality for the reconstructed video by choosing appropriate pixel weights on the HEVC platform.
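    The R-λ model referred to above maps a target bit budget (bits per pixel) to a Lagrange multiplier and then to a QP. A minimal sketch using the commonly published HM constants (the α, β defaults and the QP-λ relation are assumptions taken from the HM rate-control description; the real encoder adapts α and β online per frame/LCU):

```python
import math

def lambda_from_bpp(bpp: float, alpha: float = 3.2003, beta: float = -1.367) -> float:
    # R-lambda model: lambda = alpha * bpp^beta (alpha, beta are HM defaults,
    # updated adaptively during encoding in the real rate controller)
    return alpha * (bpp ** beta)

def qp_from_lambda(lam: float) -> int:
    # QP-lambda relation used in HM rate control, clipped to the HEVC QP range
    qp = 4.2005 * math.log(lam) + 13.7122
    return max(0, min(51, round(qp)))

lam = lambda_from_bpp(0.05)   # a tight 0.05 bpp budget
print(qp_from_lambda(lam))    # 36
```

An ROI-aware controller would lower the bpp target (and hence the QP) inside the ROI map and raise it elsewhere.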

  1. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  2. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band.

    Science.gov (United States)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta; Yu, Xianbin; Ukhanova, Anna; Llorente, Roberto; Monroy, Idelfonso Tafur; Forchhammer, Søren

    2011-12-12

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with the low-complexity envelope detection solution is investigated. We present both experimental studies and simulations of high-quality, high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding we satisfy low complexity and low delay constraints, while preserving superb video quality over a significantly extended wireless distance. © 2011 Optical Society of America

  3. Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video

    Science.gov (United States)

    Boyce, Jill; Xu, Qian

    2017-09-01

    Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360° ×180° scene, can be encoded using conventional video compression standards, once it has been projection mapped to a 2D rectangular format. Equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding using HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experiment results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15 and 11.9% gain using JEM6.0, and an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
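    The WS-PSNR metric cited above weights each pixel's squared error by the spherical area it represents in the equirectangular projection, so the heavily warped pole rows count for less. A sketch under the standard cosine-of-latitude row weighting (the exact weight formula is an assumption here):

```python
import math
import numpy as np

def ws_psnr(ref: np.ndarray, rec: np.ndarray, max_val: float = 255.0) -> float:
    """Weighted-to-spherically-uniform PSNR for equirectangular frames.

    Row j is weighted by the cosine of its latitude, i.e. the area that row
    covers on the sphere; pole rows (top/bottom) get near-zero weight.
    """
    h, w = ref.shape
    j = np.arange(h)
    row_w = np.cos((j + 0.5 - h / 2) * math.pi / h)   # latitude weight per row
    weights = np.tile(row_w[:, None], (1, w))
    err = (ref.astype(np.float64) - rec.astype(np.float64)) ** 2
    wmse = (weights * err).sum() / weights.sum()
    return 10 * math.log10(max_val ** 2 / wmse)
```

Note the sketch assumes a nonzero weighted MSE; identical frames would need a guard against division by zero.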

  4. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    Science.gov (United States)

    Chu, Tianli; Xiong, Zixiang

    2003-12-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.

  5. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
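    The TV-regularized model above minimizes a data-fidelity term plus the total variation of the estimate. A much-simplified denoising sketch using plain gradient descent on a smoothed TV functional (the paper itself uses fixed-point iteration with preconditioning; the λ, step size, smoothing ε, and boundary handling here are illustrative assumptions):

```python
import numpy as np

def tv_denoise(y: np.ndarray, lam: float = 0.1, step: float = 0.2,
               iters: int = 100, eps: float = 1e-3) -> np.ndarray:
    """Gradient descent on  0.5*||x - y||^2 + lam * TV_eps(x),
    where TV_eps uses the smoothed magnitude sqrt(dx^2 + dy^2 + eps^2)."""
    x = y.copy().astype(np.float64)
    for _ in range(iters):
        dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
        px, py = dx / mag, dy / mag                  # normalized gradient field
        # divergence of (px, py); gradient of TV term is -div(grad x / |grad x|)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)
    return x
```

Applied to an image with an isolated spike, the TV term shrinks the outlier while the fidelity term keeps the rest close to the input.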

  6. General Video Game Evaluation Using Relative Algorithm Performance Profiles

    DEFF Research Database (Denmark)

    Nielsen, Thorbjørn; Barros, Gabriella; Togelius, Julian

    2015-01-01

    In order to generate complete games through evolution we need generic and reliable evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea through...... investigating the relative performance of different general game-playing algorithms. Seven game-playing algorithms were used to play several hand-designed, mutated and randomly generated VGDL game descriptions. The results discussed appear to support the conjecture that well-designed games have, on average, a higher...... performance difference between better and worse game-playing algorithms....
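    The relative-performance idea above can be made concrete: score each controller on a game and use the spread between the best and worst average score as a design-quality signal. A toy sketch (the game, controllers, and scores are invented for illustration; the paper uses seven real general game-playing algorithms on VGDL games):

```python
import random

def performance_profile(game, controllers, plays=20, seed=0):
    """Average score of each controller on one game; a well-designed game is
    expected to separate strong controllers from weak ones."""
    rng = random.Random(seed)
    return [sum(game(c, rng) for _ in range(plays)) / plays for c in controllers]

def design_quality(profile):
    # Spread between best and worst controller (the conjecture in the paper:
    # well-designed games show a larger spread)
    return max(profile) - min(profile)

# Toy game: a "smart" controller wins 80% of plays, a "random" one 30%.
def toy_game(controller, rng):
    win_prob = {"smart": 0.8, "random": 0.3}[controller]
    return 1.0 if rng.random() < win_prob else 0.0

profile = performance_profile(toy_game, ["smart", "random"], plays=200)
quality = design_quality(profile)   # a large spread suggests the game rewards skill
```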

  7. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Distributed video coding (DVC) is a new paradigm for video compression based on the information theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion but it is still inferior to the Inter No Motion and Inter Motion modes.

  8. Video coding standards AVS China, H.264/MPEG-4 PART 10, HEVC, VP6, DIRAC and VC-1

    CERN Document Server

    Rao, K R; Hwang, Jae Jeong

    2014-01-01

    Review by Ashraf A. Kassim, Professor, Department of Electrical & Computer Engineering, and Associate Dean, School of Engineering, National University of Singapore.     The book consists of eight chapters of which the first two provide an overview of various video & image coding standards, and video formats. The next four chapters present in detail the Audio & video standard (AVS) of China, the H.264/MPEG-4 Advanced video coding (AVC) standard, High efficiency video coding (HEVC) standard and the VP6 video coding standard (now VP10) respectively. The performance of the wavelet based Dirac video codec is compared with H.264/MPEG-4 AVC in chapter 7. Finally in chapter 8, the VC-1 video coding standard is presented together with VC-2 which is based on the intra frame coding of Dirac and an outline of a H.264/AVC to VC-1 transcoder.   The authors also present and discuss relevant research literature such as those which document improved methods & techniques, and also point to other related reso...

  9. Delta modulation. [overshoot suppression algorithm for video data transmission

    Science.gov (United States)

    Schilling, D. L.

    1973-01-01

    The overshoot suppression algorithm has been more extensively studied. Computer generated test-pictures show a radical improvement due to the overshoot suppression algorithm. Considering the delta modulator link as a nonlinear digital filter, a formula that relates the minimum rise time that can be handled for given filter parameters and voltage swings has been developed. The settling time has been calculated for the case of overshoot suppression as well as when no suppression is employed. The results indicate a significant decrease in settling time when overshoot suppression is used. An algorithm for correcting channel errors has been developed. It is shown that pulse stuffing PCM words in the DM bit stream results in a significant reduction in error length.
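    For reference, a plain linear delta modulator, the baseline on which overshoot suppression operates: one bit per sample, and the running estimate moves by a fixed ±Δ step, which overshoots on steep edges. A minimal sketch (this is the textbook DM, not the record's suppression algorithm; the step size and signal are illustrative):

```python
def delta_modulate(samples, delta=1.0):
    """Basic linear delta modulation: emit 1 if the input is at or above the
    running estimate (step up by delta), else 0 (step down by delta)."""
    bits, estimate = [], 0.0
    for s in samples:
        bit = 1 if s >= estimate else 0
        estimate += delta if bit else -delta
        bits.append(bit)
    return bits

def delta_demodulate(bits, delta=1.0):
    """Rebuild the staircase approximation from the bit stream."""
    estimate, out = 0.0, []
    for bit in bits:
        estimate += delta if bit else -delta
        out.append(estimate)
    return out

bits = delta_modulate([0.5, 1.5, 2.5, 2.5, 2.5])
print(bits)                    # the staircase tracks the ramp, then hunts
print(delta_demodulate(bits))  # oscillates around the flat 2.5 level
```

On a flat segment the estimate alternates around the input (granular noise); on a step edge it lags and then overshoots, which is what the suppression algorithm in the record targets.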

  10. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    Science.gov (United States)

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that is barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than in EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.

  11. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  13. A Complete Video Coding Chain Based on Multi-Dimensional Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2010-09-01

    Full Text Available The paper deals with a video compression method based on the multi-dimensional discrete cosine transform. In the text, the encoder and decoder architectures, including the definitions of all mathematical operations such as the forward and inverse 3-D DCT, quantization, and thresholding, are presented. According to the particular number of currently processed pictures, new quantization tables and entropy code dictionaries are proposed in the paper. The practical properties of the 3-D DCT coding chain compared with modern video compression methods (such as H.264 and WebM) and the computing complexity are presented as well. It is shown that the best compression properties are achieved by the complex H.264 codec. On the other hand, the computing complexity - especially on the encoding side - is lower for the 3-D DCT method.
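    The forward 3-D DCT above is separable: it can be computed by applying an orthonormal 1-D DCT-II along each axis of a group of frames in turn. A NumPy-only sketch (the block shape is illustrative; the record's codec adds quantization and entropy coding on top of this transform):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct3(cube: np.ndarray) -> np.ndarray:
    """Separable 3-D DCT of a (frames, rows, cols) block: apply the 1-D DCT
    along each of the three axes in turn."""
    out = cube.astype(np.float64)
    for axis in range(3):
        m = dct_matrix(out.shape[axis])
        out = np.moveaxis(np.tensordot(m, np.moveaxis(out, axis, 0), axes=1), 0, axis)
    return out

def idct3(coef: np.ndarray) -> np.ndarray:
    out = coef.astype(np.float64)
    for axis in range(3):
        m = dct_matrix(out.shape[axis]).T   # orthonormal => inverse is transpose
        out = np.moveaxis(np.tensordot(m, np.moveaxis(out, axis, 0), axes=1), 0, axis)
    return out

cube = np.ones((2, 4, 4))   # a static, flat group of two frames
coef = dct3(cube)           # all energy compacts into the single DC coefficient
```

For static content the temporal axis concentrates energy into few coefficients, which is where the 3-D DCT gains its compression over per-frame 2-D DCT.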

  14. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers...... side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...... information for (LDPCA) decoding. The proposed decoder side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4dB improvement...

  15. Bridging Inter-flow and Intra-flow Network Coding for Video Applications

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2013-01-01

    enhance reliability, common of the former, while maintaining an efficient spectrum usage, typical of the latter. This paper uses the intuition provided in [1] to propose a practical implementation of the protocol leveraging Random Linear Network Coding (RLNC) for intra-flow coding, a credit based packet...... transmission approach to decide how much and when to send redundancy in the network, and a minimalistic feedback mechanism to guarantee delivery of generations of the different flows. Given the delay constraints of video applications, we proposed a simple yet effective coding mechanism, Block Coding On The Fly...... (BCFly), that allows a block encoder to be fed on-the-fly, thus reducing the delay to accumulate enough packets that is introduced by typical generation based NC techniques. Our measurements and comparison to forwarding and COPE show that CORE not only outperforms these schemes even for small packet...

  16. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01


  17. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Full Text Available Distributed video coding (DVC) is a video coding paradigm allowing low complexity encoding for emerging applications such as wireless video surveillance. Side information (SI) generation is a key function in the DVC decoder, and plays a key role in determining the performance of the codec. This paper proposes an improved SI generation method for DVC, which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ) frames, based on initial SI obtained by motion-compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealments alone.

  18. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability-only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.
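    The bit rate variability of a layer stream is commonly summarized by the coefficient of variation of per-frame sizes. A minimal sketch (the frame sizes below are made-up numbers, not data from the study):

```python
import statistics

def coefficient_of_variation(frame_sizes):
    """CoV = population standard deviation / mean of per-frame sizes;
    higher values mean burstier traffic that is harder to transport."""
    mean = statistics.fmean(frame_sizes)
    return statistics.pstdev(frame_sizes) / mean

smooth = [1000, 1010, 990, 1000]   # near-constant frame sizes
bursty = [3000, 200, 200, 600]     # same mean, highly variable
print(coefficient_of_variation(smooth) < coefficient_of_variation(bursty))  # True
```

Both example streams have the same mean rate; only the second would stress a network buffer, which is the distinction the study quantifies for SVC layers.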

  19. Impact of scan conversion methods on the performance of scalable video coding

    Science.gov (United States)

    Dubois, Eric; Baaziz, Nadia; Matta, Marwan

    1995-04-01

    The ability to flexibly access coded video data at different resolutions or bit rates is referred to as scalability. We are concerned here with the class of methods referred to as pyramidal embedded coding in which specific subsets of the binary data can be used to decode lower-resolution versions of the video sequence. Two key techniques in such a pyramidal coder are the scan-conversion operations of down-conversion and up-conversion. Down-conversion is required to produce the smaller, lower-resolution versions of the image sequence. Up-conversion is used to perform conditional coding, whereby the coded lower-resolution image is interpolated to the same resolution as the next higher image and used to assist in the encoding of that level. The coding efficiency depends on the accuracy of this up-conversion process. In this paper techniques for down-conversion and up-conversion are addressed in the context of a two-level pyramidal representation. We first present the pyramidal technique for spatial scalability and review the methods used in MPEG-2. We then discuss some enhanced methods for down- and up-conversion, and evaluate their performance in the context of the two-level scalable system.
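    The down-/up-conversion pair described above can be sketched as a two-level pyramid: the base layer is a down-converted frame, and the enhancement layer codes the residual left after up-converting the base. The 2×2 averaging and nearest-neighbour interpolator below are deliberately simple stand-ins for the scan-conversion methods the paper actually evaluates:

```python
import numpy as np

def down_convert(img: np.ndarray) -> np.ndarray:
    # 2x2 block averaging (assumes even dimensions)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up_convert(img: np.ndarray) -> np.ndarray:
    # Nearest-neighbour 2x up-conversion (real coders use better interpolators;
    # a better up-converter means a smaller residual and higher coding efficiency)
    return img.repeat(2, axis=0).repeat(2, axis=1)

frame = np.arange(16, dtype=np.float64).reshape(4, 4)
base = down_convert(frame)              # lower-resolution layer
residual = frame - up_convert(base)     # enhancement layer codes this residual
reconstructed = up_convert(base) + residual
print(np.allclose(reconstructed, frame))  # True: lossless two-level decomposition
```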

  20. The Frontiers of Real-coded Genetic Algorithms

    Science.gov (United States)

    Kobayashi, Shigenobu

    Real-coded genetic algorithms (RCGA) are expected to solve efficiently real parameter optimization problems of multimodality, parameter dependency, and ill-scale. Multi-parental crossovers such as the simplex crossover (SPX) and the UNDX-m, as extensions of the unimodal normal distribution crossover (UNDX), show relatively good performance for RCGA. The minimal generation gap (MGG) is used widely as a generation alternation model for RCGA. However, the MGG is not suited for multi-parental crossovers. Both the SPX and the UNDX-m have their own drawbacks. Therefore, RCGA composed of them cannot be applied to highly dimensional problems, because their hidden faults appear. This paper presents a new and robust framework for RCGA. First, we propose a generation alternation model called JGG (just generation gap) suited for multi-parental crossovers. The JGG replaces parents with children completely every generation. To solve the asymmetry and bias of the children distribution generated by the SPX and the UNDX-m, an enhanced SPX (e-SPX) and an enhanced UNDX (e-UNDX) are proposed. Moreover, we propose a crossover called REX(φ,n+k) as a generalization of the e-UNDX, where φ and n+k denote some probability distribution and the number of parents, respectively. A concept of the globally descent direction (GDD) is introduced to handle the situations where the population does not cover any optimum. The GDD can be used under the big valley structure. Then, we propose REXstar as an extension of REX(φ,n+k) that can generate children toward the GDD efficiently. Several experiments show excellent performance and robustness of REXstar. Finally, future work is discussed.
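As background for the multi-parental crossovers discussed above, a minimal sketch of the basic simplex crossover (SPX) follows. It uses one common formulation (centroid expansion with rate ε = √(n+2)); the details are illustrative and this is the plain SPX, not the enhanced e-SPX of the paper:

```python
import random

def spx(parents, eps=None):
    """Simplex crossover (SPX): generate one child from n + 1 parents
    in an n-dimensional search space. Each parent is expanded away from
    the centroid by eps, and the child is drawn from the expanded
    simplex via a chain of random convex combinations."""
    m = len(parents)          # m = n + 1 parents
    n = m - 1
    if eps is None:
        eps = (n + 2) ** 0.5  # commonly recommended expansion rate
    dim = len(parents[0])
    g = [sum(p[d] for p in parents) / m for d in range(dim)]  # centroid
    x = [[g[d] + eps * (p[d] - g[d]) for d in range(dim)] for p in parents]
    c = [0.0] * dim
    for k in range(1, m):
        r = random.random() ** (1.0 / (k + 1))
        c = [r * (x[k - 1][d] - x[k][d] + c[d]) for d in range(dim)]
    return [x[m - 1][d] + c[d] for d in range(dim)]

random.seed(1)
parents = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # n + 1 = 3 parents for n = 2
child = spx(parents)
print(child)
```

Under the JGG model proposed in the paper, many such children would replace their parents completely in every generation.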

  1. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of an algorithm for the recognition of a moving object to be detected by several cameras. The images obtained by the different cameras will be processed, and parameters of motion identified, to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists of the assessment of the degree of complexity of the camera placement algorithm, the identification of cases of inaccurate algorithm implementation, and the formulation of supplementary requirements and input data by means of intersecting the sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  2. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  3. Code Synchronization Algorithm Based on Segment Correlation in Spread Spectrum Communication

    Directory of Open Access Journals (Sweden)

    Aohan Li

    2015-10-01

    Full Text Available Spread Spectrum (SPSP) communication is the theoretical basis of Direct Sequence Spread Spectrum (DSSS) transceiver technology. Spreading code, modulation, demodulation, carrier synchronization and code synchronization in SPSP communications are the core parts of DSSS transceivers. This paper focuses on the code synchronization problem in SPSP communications. A novel code synchronization algorithm based on segment correlation is proposed. The proposed algorithm can effectively deal with the informational misjudgment caused by unreasonable data acquisition times. This misjudgment may lead to an inability of DSSS receivers to restore transmitted signals. Simulation results show the feasibility of a DSSS transceiver design based on the proposed code synchronization algorithm. Finally, the communication functions of the DSSS transceiver based on the proposed code synchronization algorithm are implemented on a Field Programmable Gate Array (FPGA).
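The segment-correlation idea can be sketched as follows. The LFSR taps, segment length, noise level and the specific metric (sum of absolute segment correlations, which keeps a data-bit sign flip inside the window from cancelling the peak) are illustrative assumptions, not the algorithm of the paper:

```python
import random

def pn_sequence(length, seed=0xACE1):
    """Toy 16-bit Fibonacci LFSR spreading code as +/-1 chips
    (taps are illustrative, not a standardized spreading code)."""
    state = seed
    chips = []
    for _ in range(length):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        chips.append(1 if state & 1 else -1)
    return chips

def acquire(received, code, seg_len):
    """Segmented correlation acquisition: for each trial code phase,
    sum the absolute correlations of short segments and return the
    phase with the largest metric."""
    n = len(code)
    best_phase, best_metric = 0, -1.0
    for phase in range(n):
        metric = 0.0
        for s in range(0, n, seg_len):
            seg = sum(received[(phase + i) % n] * code[i]
                      for i in range(s, min(s + seg_len, n)))
            metric += abs(seg)
        if metric > best_metric:
            best_phase, best_metric = phase, metric
    return best_phase

code = pn_sequence(64)
true_offset = 23
random.seed(7)
noisy = [code[(i - true_offset) % 64] + random.gauss(0, 0.3) for i in range(64)]
print(acquire(noisy, code, seg_len=16))
```

In a real DSSS receiver the same correlation would run on sampled baseband chips after carrier synchronization, and the detected phase would align the local despreading code.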

  4. High-threshold decoding algorithms for the gauge color code

    Science.gov (United States)

    Zeng, William; Brown, Benjamin

    Gauge color codes are topological quantum error correcting codes on three-dimensional lattices. They have garnered recent interest due to two important properties: (1) they admit a universal transversal gate set, and (2) their structure allows reliable error correction using syndrome data obtained from a measurement circuit of constant depth. Both of these properties make gauge color codes intriguing candidates for low overhead fault-tolerant quantum computation. Recent work by Brown et al. calculated a threshold of 0.31% for a particular gauge color code lattice using a simple clustering decoder and phenomenological noise. We show that we can achieve improved threshold error rates using the efficient Markov-chain Monte Carlo (MCMC) decoder of Wootton and Loss. In the case of the surface code, the MCMC decoder produced a threshold close to that code's upper bound. While no upper bound is known for gauge color codes, the thresholds we present here may give a better estimate.

  5. Prediction accuracy in estimating joint angle trajectories using a video posture coding method for sagittal lifting tasks.

    Science.gov (United States)

    Chang, Chien-Chi; McGorry, Raymond W; Lin, Jia-Hua; Xu, Xu; Hsiang, Simon M

    2010-08-01

    This study investigated the prediction accuracy of a video posture coding method for lifting joint trajectory estimation. From three filming angles, the coder selected four key snapshots, identified joint angles and then a prediction program estimated the joint trajectories over the course of a lift. Results revealed a limited range of differences in joint angles (elbow, shoulder, hip, knee, ankle) between the manual coding method and the electromagnetic motion tracking system approach. Lifting range significantly affected estimate accuracy for all joints, and camcorder filming angle had a significant effect on all joints but the hip. Joint trajectory predictions were more accurate for knuckle-to-shoulder lifts than for floor-to-shoulder or floor-to-knuckle lifts, with average root mean square errors (RMSE) of 8.65 degrees, 11.15 degrees and 11.93 degrees, respectively. Accuracy was also greater for the filming angle orthogonal to the participant's sagittal plane (RMSE = 9.97 degrees) as compared to filming angles of 45 degrees (RMSE = 11.01 degrees) or 135 degrees (RMSE = 10.71 degrees). The effects of lifting speed and loading conditions were minimal. To further increase prediction accuracy, improved prediction algorithms and/or better posture matching methods should be investigated. STATEMENT OF RELEVANCE: Observation and classification of postures are common steps in risk assessment of manual materials handling tasks. The ability to accurately predict lifting patterns through video coding can provide ergonomists with greater resolution in characterising or assessing lifting tasks than evaluation based solely on sampling with a single lifting posture event.
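The RMSE figures reported above are straightforward to reproduce for any predicted-versus-measured trajectory pair; the angle values in this sketch are hypothetical, not the study's data:

```python
def rmse(predicted, measured):
    """Root mean square error between a predicted joint-angle
    trajectory and the motion-tracking reference, in degrees."""
    n = len(predicted)
    return (sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n) ** 0.5

predicted = [10.0, 25.0, 40.0, 55.0]  # hypothetical knee angles (degrees)
measured = [12.0, 24.0, 43.0, 50.0]   # hypothetical tracking-system reference
print(round(rmse(predicted, measured), 2))
```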

  6. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant colored pixels to total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared performances of our algorithm and the standard twin-comparison method.
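The twin-comparison idea can be shown in a simplified sketch: two thresholds over a sequence of frame-difference values (for instance the dominant-color-ratio differences used above). The threshold values and the difference sequence here are illustrative:

```python
def twin_comparison(diffs, t_high, t_low):
    """Simplified twin-comparison shot boundary detection over a
    sequence of consecutive frame-difference values. A single
    difference above t_high marks a cut; a run of differences above
    t_low whose accumulated sum exceeds t_high marks a gradual
    transition. Returns (cuts, gradual) with frame indices."""
    cuts, gradual = [], []
    acc, start = 0.0, None
    for i, d in enumerate(diffs):
        if d >= t_high:
            cuts.append(i)
            acc, start = 0.0, None
        elif d >= t_low:
            if start is None:
                start, acc = i, 0.0
            acc += d
            if acc >= t_high:
                gradual.append((start, i))
                acc, start = 0.0, None
        else:
            acc, start = 0.0, None
    return cuts, gradual

# A cut at frame 3 and a dissolve spanning frames 7-10.
diffs = [0.1, 0.2, 0.1, 5.0, 0.1, 0.2, 0.1, 0.9, 1.0, 1.1, 1.2, 0.1]
print(twin_comparison(diffs, t_high=4.0, t_low=0.5))
```

The full twin-comparison method accumulates differences against the first frame of the candidate transition rather than summing consecutive differences; this sketch keeps only the two-threshold logic.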

  7. The Development of Video Learning to Deliver a Basic Algorithm Learning

    Directory of Open Access Journals (Sweden)

    slamet kurniawan fahrurozi

    2017-12-01

    Full Text Available The world of education is currently entering the era of media, where learning activities demand a reduction of lecture methods, which should be replaced by the use of many media. In relation to the function of instructional media, it can be emphasized as follows: as a tool to make learning more effective, to accelerate the teaching and learning process, and to improve the quality of that process. This research aimed to develop a learning video on the basic programming material of algorithms that is appropriate to be applied as a learning resource in class X SMK. This study also aimed to assess the feasibility of the learning video media developed. The research method used was research and development, following the development model of Alessi and Trollip (2001). The development model was divided into 3 stages, namely Planning, Design, and Development. Data collection techniques used the interview method, literature method and instrument method. In the next stage, the learning video was validated and evaluated by material experts, media experts and users, and implemented with 30 learners. The results showed that the learning video was successfully produced for basic programming subjects and consists of 8 video scenes. Based on the learning video validation results, the percentage of the learning video's eligibility is 90.5% from material experts, 95.9% from media experts, and 84% from users or learners. From the testing results, the learning videos that have been developed can be used as learning resources or instructional media for the basic programming material of algorithms.

  8. Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.

    Science.gov (United States)

    Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David

    2017-04-12

    Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict the error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.

  9. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.

  10. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study.

    Science.gov (United States)

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for the augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented that allowed false positive selections to be suppressed, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed.

  11. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    Directory of Open Access Journals (Sweden)

    Tianli Chu

    2003-01-01

    Full Text Available This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.

  12. Adaptive mode decision with residual motion compensation for distributed video coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren; Slowack, Jurgen

    2013-01-01

    Distributed video coding (DVC) is a coding paradigm that entails low complexity encoding by exploiting the source statistics at the decoder. To improve the DVC coding efficiency, this paper proposes a novel adaptive technique for mode decision to control and take advantage of skip mode and intra mode in DVC. The adaptive mode decision is based not only on the quality of key frames but also on the rate of Wyner-Ziv (WZ) frames. To improve noise distribution estimation for a more accurate mode decision, a residual motion compensation is proposed to estimate a current noise residue based on a previously decoded frame. The experimental results show that the proposed adaptive mode decision DVC significantly improves the rate distortion performance without increasing the encoding complexity. For a GOP size of 2 on the set of test sequences, the average bitrate saving of the proposed codec is 35.5% on WZ...

  14. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Yan Feng

    2014-08-01

    Full Text Available Background subtraction techniques are the basis for moving target detection and tracking in the domain of video surveillance, while robust and reliable detection and tracking in complex environments remains a challenging subject, so evaluations of various background subtraction algorithms are of great significance. Nine state-of-the-art methods, ranging from simple to sophisticated ones, are discussed. The algorithms were implemented and tested using different videos with ground truth, covering baseline, dynamic background, camera jitter, intermittent object motion and shadow scenarios. The best suited background modeling methods for each scenario are given by comprehensive analysis of three parameters: recall, precision and F-measure, which facilitates more accurate target detection and tracking.
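The evaluation pipeline described above can be sketched with one of the simplest methods in such surveys, a running-average background model, scored against a ground-truth mask with recall, precision and F-measure. The pixel values, threshold and learning rate below are illustrative:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model; alpha is the learning rate."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30.0):
    """Mark a pixel as foreground when it deviates from the model."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

def evaluate(mask, truth):
    """Recall, precision and F-measure against a ground-truth mask."""
    tp = sum(m and t for m, t in zip(mask, truth))
    fp = sum(m and not t for m, t in zip(mask, truth))
    fn = sum(t and not m for m, t in zip(mask, truth))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return recall, precision, f

bg = [100.0] * 8                                  # flat background model
frame = [100, 100, 180, 190, 185, 100, 100, 140]  # object at pixels 2-4, noise at 7
truth = [False, False, True, True, True, False, False, False]
mask = foreground_mask(bg, frame)
print(evaluate(mask, truth))
bg = update_background(bg, frame)  # adapt the model for the next frame
```

The noise pixel shows why precision alone is not enough, which is exactly why such comparisons report all three metrics per scenario.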

  15. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

    Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high resolution digital videos, together with its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm which increases the ME quality when compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reached a complexity reduction higher than 45 times when compared to FS. The quality gains relative to DS caused an expected increase in the DMPDS complexity, which uses 6.4 times more calculations than DS. The DMPDS architecture was designed focused on high performance and low cost, targeting real-time processing of Quad Full High Definition (QFHD) videos at 30 frames per second. The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, reaching the real-time requirements. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operation frequency and a low number of cycles needed to process each block.
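For context, the baseline Diamond Search that DMPDS improves on can be sketched as follows. This shows only the classic DS (large diamond iterated until the best candidate stays at the centre, then one small-diamond refinement), not the dynamic extra candidate points of DMPDS; frame sizes and block contents are illustrative:

```python
def sad(cur, ref, bx, by, dx, dy, n, w, h):
    """Sum of absolute differences between the n x n block at (bx, by)
    in the current frame and the block displaced by (dx, dy) in the
    reference frame; out-of-frame candidates get an infinite cost."""
    if not (0 <= bx + dx <= w - n and 0 <= by + dy <= h - n):
        return float("inf")
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def diamond_search(cur, ref, bx, by, n=4):
    """Classic Diamond Search motion estimation for one block:
    iterate the large diamond pattern, then refine with the small one."""
    h, w = len(cur), len(cur[0])
    ldsp = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    sdsp = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    mx = my = 0
    while True:
        costs = [(sad(cur, ref, bx, by, mx + dx, my + dy, n, w, h), dx, dy)
                 for dx, dy in ldsp]
        best = min(costs)
        if (best[1], best[2]) == (0, 0):
            break
        mx, my = mx + best[1], my + best[2]
    costs = [(sad(cur, ref, bx, by, mx + dx, my + dy, n, w, h), dx, dy)
             for dx, dy in sdsp]
    best = min(costs)
    return mx + best[1], my + best[2]

# Reference frame with a bright 4x4 patch at (8, 6); in the current
# frame the block at (5, 3) holds the patch, so the true motion
# vector back into the reference is (+3, +3).
ref = [[0] * 24 for _ in range(16)]
for y in range(6, 10):
    for x in range(8, 12):
        ref[y][x] = 200
cur = [[0] * 24 for _ in range(16)]
for y in range(3, 7):
    for x in range(5, 9):
        cur[y][x] = 200
print(diamond_search(cur, ref, 5, 3))
```

The local-minima falls mentioned in the abstract happen when this greedy descent locks onto a wrong cost valley; DMPDS counters that by dynamically adding candidate points.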

  16. Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform

    Directory of Open Access Journals (Sweden)

    Llopis Rafael Peset

    2006-01-01

    Full Text Available Two approaches are presented in this paper to improve the quality of digital images over the sensor resolution using super-resolution techniques: iterative super-resolution (ISR and noniterative super-resolution (NISR algorithms. The results show important improvements in the image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available at the input images. These super-resolution algorithms have been implemented over a codesign video compression platform developed by Philips Research, performing minimal changes on the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources encountered in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to any other video encoder architectures. Finally a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the final impact of the sampling process on the quality of the super-resolved (SR image, are also presented.

  17. The study of Genetic Algorithm by Hierarchical Coded for the MMRCPSP

    Science.gov (United States)

    Shi-man, Xie; Chen, Jian-wei; Xuan, Zhao-yan

    In order to solve the Multi-Mode Resource Constrained Project Scheduling Problem (MMRCPSP), this paper suggests a Genetic Algorithm (GA) with hierarchical coding. In the first layer, the chromosomes are used to choose the activity sequence. In the second layer, the chromosomes are used to decide the combination of activity modes. The chromosomes produced by the Activities Resource Competition Relation (ARCR) are coded in binary code, so that the subsequent operations can be improved by mature algorithms including selection, crossover and mutation. Finally, programming experiments using the PSPLIB standard data set show that this algorithm is feasible.

  18. Reduced Complexity Iterative Decoding of 3D-Product Block Codes Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Abdeslam Ahmadi

    2012-01-01

    Full Text Available Two iterative decoding algorithms for 3D product block codes (3D-PBC) based on genetic algorithms (GAs) are presented. The first algorithm uses the Chase-Pyndiah SISO, and the second one uses the list-based SISO decoding algorithm (LBDA) based on order-reprocessing. We applied these algorithms over the AWGN channel to symmetric 3D-PBC constructed from BCH codes. The simulation results show that the first algorithm outperforms the Chase-Pyndiah one and is only 1.38 dB away from the Shannon capacity limit at a BER of 10⁻⁵ for BCH(31, 21, 5)³ and 1.4 dB for BCH(16, 11, 4)³. The simulations of the LBDA-based GA on BCH(16, 11, 4)³ show that its performance outperforms the first algorithm and is about 1.33 dB from the Shannon limit. Furthermore, these algorithms can be applied to any arbitrary 3D binary product block codes, without the need of a hard-in hard-out decoder. We show also that the two proposed decoders are less complex than both the Chase-Pyndiah algorithm for codes with large correction capacity and the LBDA for large parameters. Those features make the decoders based on genetic algorithms efficient and attractive.

  19. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tied to the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  20. Research on the improved vector coding algorithm for two dimensional Discrete Fourier Transform

    Science.gov (United States)

    Zhang, Hao; Chen, Zhaodou; Yang, Jin

    2017-03-01

    Discrete Fourier Transform (DFT) plays a crucial role in signal processing. In this paper, a new fast algorithm is presented for the two dimensional DFT with different lengths. This algorithm is derived using a technique for multidimensional integral points called `vector coding'. The new algorithm significantly reduces the multiplications and recursive stages compared with the row-column algorithm, and also skips the data transposition. The algorithm can be extended to the multidimensional DFT. For the two dimensional case, compared with the row-column algorithm, the vector coding algorithm requires the same number of additions, but about three-quarters of the multiplications, and halves the recursive stages. This original article was incorrectly published with pages 5 and 6 missing. At the request of the Proceedings Editor, a corrected version of this article was published online 24 March 2017. This article has also been corrected in the printed version of the volume.

  1. Screen Codes: Efficient Data Transfer from Video Displays to Mobile Devices

    OpenAIRE

    Collomosse, J.; Kindberg, T.

    2007-01-01

    We present ‘Screen codes’ - a space- and time-efficient, aesthetically compelling method for transferring data from a display to a camera-equipped mobile device. Screen codes encode data as a grid of luminosity fluctuations within an arbitrary image, displayed on the video screen and decoded on a mobile device. These ‘twinkling’ images are a form of ‘visual hyperlink’, by which users can move dynamically generated content to and from their mobile devices. They help bridge the ‘content divide’...

  2. CowLog – Cross-Platform Application for Coding Behaviours from Video

    OpenAIRE

    Pastell, Matti

    2016-01-01

    CowLog is a cross-platform application to code behaviours from video recordings for use in behavioural research. The software has been used in several studies e.g. to study sleep in dairy calves, emotions in goats and the mind wandering related to computer use during lectures. CowLog 3 is implemented using JavaScript and HTML using the Electron framework. The framework allows the development of packaged cross-platform applications using features from web browser (Chromium) as well as server s...

  3. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    Science.gov (United States)

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.
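The consensus-ensemble rule described above (label a record only when at least 6 of the 9 algorithms agree, otherwise leave it unclassified) can be sketched as follows. The class names, vote patterns and record set here are hypothetical, not the study's data:

```python
def consensus_label(votes, min_agree=6):
    """Return the majority class only when at least `min_agree`
    algorithms agree on it; otherwise return None (unclassified)."""
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    label, n = max(counts.items(), key=lambda kv: kv[1])
    return label if n >= min_agree else None

def coverage_recall_precision(records, min_agree=6):
    """Coverage: share of records the ensemble labels at all.
    Recall/precision for the 'public health' (PH) class are computed
    against the manual labels, over covered records only."""
    covered = tp = fp = fn = 0
    for votes, truth in records:
        label = consensus_label(votes, min_agree)
        if label is None:
            continue
        covered += 1
        if label == "PH" and truth == "PH":
            tp += 1
        elif label == "PH":
            fp += 1
        elif truth == "PH":
            fn += 1
    coverage = covered / len(records)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return coverage, recall, precision

# Hypothetical records: 9 algorithm votes each, plus the manual label.
records = [
    (["PH"] * 9, "PH"),
    (["PH"] * 7 + ["NOT"] * 2, "PH"),
    (["NOT"] * 8 + ["PH"], "NOT"),
    (["PH"] * 6 + ["NOT"] * 3, "NOT"),
    (["PH"] * 5 + ["NOT"] * 4, "PH"),   # no consensus -> unclassified
]
print(coverage_recall_precision(records))
```

Raising `min_agree` trades coverage for recall and precision, which is the tuning knob behind the 90% recall at 93% coverage reported above.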

  4. A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes

    National Research Council Canada - National Science Library

    Wang, Sichun; Patenaude, François

    2006-01-01

    .... The existence of a relatively large number of similar but different modified BCJR MAP algorithms, derived using the Markov chain properties of convolutional codes, naturally leads to the following questions...

  5. A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Patenaude François

    2006-01-01

    Full Text Available Since Berrou, Glavieux and Thitimajshima published their landmark paper in 1993, different modified BCJR MAP algorithms have appeared in the literature. The existence of a relatively large number of similar but different modified BCJR MAP algorithms, derived using the Markov chain properties of convolutional codes, naturally leads to the following questions. What is the relationship among the different modified BCJR MAP algorithms? What are their relative performance, computational complexities, and memory requirements? In this paper, we answer these questions. We derive systematically four major modified BCJR MAP algorithms from the BCJR MAP algorithm using simple mathematical transformations. The connections between the original and the four modified BCJR MAP algorithms are established. A detailed analysis of the different modified BCJR MAP algorithms shows that they have identical computational complexities and memory requirements. Computer simulations demonstrate that the four modified BCJR MAP algorithms all have identical performance to the BCJR MAP algorithm.
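All BCJR variants share the same forward-backward (alpha-beta) core, which exploits exactly the Markov chain properties mentioned above. The sketch below implements that core for a generic finite-state Markov chain with per-step observation likelihoods (the "gamma" branch metrics); it is a didactic simplification, not any of the four modified variants derived in the paper.

```python
def forward_backward(init, trans, gammas):
    """Posterior state probabilities of a Markov chain given per-step
    observation weights (the 'gamma' branch metrics of BCJR).

    init:   prior probabilities over S states
    trans:  S x S matrix, trans[i][j] = P(s' = j | s = i)
    gammas: one length-S list of observation likelihoods per time step
    Returns one normalized posterior distribution per time step.
    """
    S, T = len(init), len(gammas)
    # Forward pass: alpha[t][j] ∝ P(s_t = j, observations up to t).
    alpha = [[init[j] * gammas[0][j] for j in range(S)]]
    for t in range(1, T):
        alpha.append([gammas[t][j] *
                      sum(alpha[t - 1][i] * trans[i][j] for i in range(S))
                      for j in range(S)])
    # Backward pass: beta[t][i] ∝ P(observations after t | s_t = i).
    beta = [[1.0] * S for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(trans[i][j] * gammas[t + 1][j] * beta[t + 1][j]
                       for j in range(S)) for i in range(S)]
    post = []
    for t in range(T):
        p = [alpha[t][j] * beta[t][j] for j in range(S)]
        z = sum(p)
        post.append([x / z for x in p])
    return post
```

A useful sanity check is to compare the posteriors against brute-force enumeration of all state paths, which is exactly the marginalization the MAP criterion performs.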

  6. Mean-Adaptive Real-Coding Genetic Algorithm and its Applications to Electromagnetic Optimization (Part One)

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2007-09-01

    Full Text Available In this paper, a novel instance of the real-coding steady-state genetic algorithm, called the Mean-adaptive real-coding genetic algorithm, is put forward. Three novel implementations of evolution operators are incorporated: a recombination operator and two mutation operators. All of the evolution operators are designed to provide strong explorative power. Moreover, one of the mutation operators exhibits self-adaptive behavior and the other adaptive behavior, allowing the algorithm to control its own mutability as the search advances. The algorithm also employs population-elitist selection, adopted from evolution strategies, as its replacement policy. The purpose of this first part of the paper is to provide the theoretical foundations of a robust and advanced instance of the real-coding genetic algorithm with strong potential for successful application to electromagnetic optimization.
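As an illustration of the ingredients named above (real coding, self-adaptive mutation, population-elitist replacement), here is a generic sketch of a real-coded GA in which each individual carries its own mutation step size. The operators are deliberately simple stand-ins, not the paper's mean-adaptive operators, and all parameter values are illustrative assumptions.

```python
import math
import random

def evolve(fitness, dim, pop_size=20, generations=200, seed=1):
    """Minimize `fitness` with a real-coded GA.  Each individual is a pair
    (genes, step): the mutation step size evolves with the individual
    (self-adaptation), and replacement is population-elitist: parents and
    offspring compete and the best `pop_size` survive."""
    rng = random.Random(seed)
    pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0)
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            (xa, sa), (xb, sb) = rng.sample(pop, 2)
            # Arithmetic (blend) recombination of genes and step sizes.
            w = rng.random()
            x = [w * a + (1 - w) * b for a, b in zip(xa, xb)]
            s = math.sqrt(sa * sb)
            # Self-adaptation: mutate the step size first, then the genes.
            s *= math.exp(0.2 * rng.gauss(0, 1))
            x = [g + s * rng.gauss(0, 1) for g in x]
            offspring.append((x, s))
        # Population-elitist replacement (mu + lambda style).
        pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]))[:pop_size]
    return pop[0]

def sphere(x):
    return sum(g * g for g in x)
```

Because replacement is elitist, the best fitness in the population can never get worse from one generation to the next.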

  7. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al.1, who analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. To cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with fine-grained and accurate image annotations. We thereby identify approaches that can benefit from fused sensor signals under camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  8. Region of interest video coding for low bit-rate transmission of carotid ultrasound videos over 3G wireless networks.

    Science.gov (United States)

    Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos

    2007-01-01

    Efficient medical video transmission over 3G wireless networks is of great importance for fast diagnosis and on-site medical staff training. In this paper we present a region-of-interest-based ultrasound video compression study which shows that a significant reduction of the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.

  9. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on moving-target detection and tracking algorithms in aerial monitoring. The study covers moving-target detection, behavioral analysis of moving targets, and automatic target tracking. For detection, the paper considers the characteristics of background subtraction and the frame-difference method and uses a background reconstruction method to accurately locate moving targets. For behavioral analysis, the detection area is shown in a binary image using MATLAB, and the algorithm determines whether a moving object has intruded and, if so, the direction of intrusion. For automatic tracking, a video tracking algorithm based on Kalman-filter prediction of object centroids is proposed.
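The centroid-prediction step can be sketched with a textbook constant-velocity Kalman filter applied per image coordinate. This is a generic filter, not the paper's exact formulation, and the noise parameters below are illustrative assumptions.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a centroid.
    State: [position, velocity]; measurement: position only."""

    def __init__(self, pos, dt=1.0, q=1e-4, r=1.0):
        self.x = [pos, 0.0]                    # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # estimate covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        """Propagate with F = [[1, dt], [0, 1]]; return predicted position."""
        dt = self.dt
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        P = self.P
        # P <- F P F^T + Q, with process noise Q simplified to q * I.
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        """Incorporate a measured centroid position (H = [1, 0])."""
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        innov = z - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        p00, p01 = (1 - k0) * self.P[0][0], (1 - k0) * self.P[0][1]
        p10 = self.P[1][0] - k1 * self.P[0][0]
        p11 = self.P[1][1] - k1 * self.P[0][1]
        self.P = [[p00, p01], [p10, p11]]
```

For an object moving at constant velocity, the one-step-ahead prediction error shrinks as the filter's velocity estimate converges, which is what makes the predicted centroid useful for associating detections across frames.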

  10. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine-detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing them. Its main advantage is that an appropriate filter is chosen for replacing each corrupted pixel based on an estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine-detail preservation even at high mixed-noise density. Both spatial and temporal filtering are performed over the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms both visually and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
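The detection-then-replacement idea (alter a pixel only when it is judged corrupted, taking the estimate from its filtering window) can be sketched for the impulse-noise case. This minimal stand-in uses a median-deviation test with an assumed threshold; it omits the paper's noise-variance-based filter selection and its temporal stage.

```python
def adaptive_decision_filter(img, threshold=40):
    """Replace only pixels judged to be impulse-corrupted (value far from
    the median of their 3x3 window) with that median; leave clean pixels
    untouched, which preserves edges and fine detail."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
            window.sort()
            med = window[len(window) // 2]
            if abs(img[y][x] - med) > threshold:   # decision step
                out[y][x] = med
    return out
```

On a flat region a salt or pepper impulse is pulled back to the local median, while an uncorrupted gradient passes through unchanged — the behavior a plain median filter cannot offer.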

  11. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper, four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations, and an optimum one ...

  12. High pressure humidification columns: Design equations, algorithm, and computer code

    Energy Technology Data Exchange (ETDEWEB)

    Enick, R.M. [Pittsburgh Univ., PA (United States). Dept. of Chemical and Petroleum Engineering; Klara, S.M. [USDOE Pittsburgh Energy Technology Center, PA (United States); Marano, J.J. [Burns and Roe Services Corp., Pittsburgh, PA (United States)

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed-tower humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple-stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.

  13. A hybrid N-body code incorporating algorithmic regularization and post-Newtonian forces

    NARCIS (Netherlands)

    Harfst, S.; Gualandris, A.; Merritt, D.; Mikkola, S.

    2008-01-01

    We describe a novel N-body code designed for simulations of the central regions of galaxies containing massive black holes. The code incorporates Mikkola's 'algorithmic' chain regularization scheme including post-Newtonian terms up to PN2.5 order. Stars moving beyond the chain are advanced using a

  14. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with associated libraries and DirectShow filters.

  15. A general-purpose contact detection algorithm for nonlinear structural analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.

    1993-05-01

    A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
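The two-part structure described above (a global search to find nodes in proximity, followed by a detailed contact check) is the classic broad-phase/narrow-phase split. The sketch below illustrates it for point proximity using a uniform grid; it is an illustrative simplification, not the report's slave-node/master-surface algorithm.

```python
from collections import defaultdict
from itertools import product
from math import dist  # Python 3.8+

def candidate_pairs(points, cell):
    """Broad phase ('global search'): hash points into a uniform grid and
    return only pairs lying in the same or adjacent cells."""
    grid = defaultdict(list)
    for idx, (px, py) in enumerate(points):
        grid[(int(px // cell), int(py // cell))].append(idx)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for i in members:
                for j in grid.get((cx + dx, cy + dy), ()):
                    if i < j:
                        pairs.add((i, j))
    return pairs

def contacts(points, radius):
    """Narrow phase ('detailed check'): exact distance test on the
    surviving candidates only."""
    return {(i, j) for i, j in candidate_pairs(points, cell=radius)
            if dist(points[i], points[j]) <= radius}
```

Choosing the cell size no smaller than the contact radius guarantees the broad phase never discards a true contact, so the result matches a brute-force all-pairs check at a fraction of the cost.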

  16. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    Science.gov (United States)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme of high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether the prediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine if biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Through experimental results, the proposed method showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in view of encoding time, number of function calls, and memory access.

  17. The future of 3D and video coding in mobile and the internet

    Science.gov (United States)

    Bivolarski, Lazar

    2013-09-01

    The current Internet success has already changed our social and economic world and is still continuing to revolutionize information exchange. The exponential increase in the amount and types of data exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status of, and trends in, the design of solutions and research activities for the future Internet, from the point of view of managing the growth of bandwidth requirements and the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the expected near-future arrival of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling the new media architectures as well as the new applications and services. Common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.

  18. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides a larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos, and the extracted secret image is similar to the original secret image.
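The embedding idea (writing the payload into the lowest significant bit planes of the wavelet coefficients) can be sketched as plain LSB substitution on integer coefficients. The coefficient values below are arbitrary illustrations, and the sketch omits the Tier-1/Tier-2 bookkeeping and the OPAP adjustment described in the paper.

```python
def embed_bits(coeffs, bits, planes=1):
    """Hide `bits` in the lowest `planes` bit planes of integer wavelet
    coefficients.  Embedding into the magnitude preserves the sign,
    mirroring sign/magnitude bit-plane coding."""
    out, k = [], 0
    for c in coeffs:
        sign, mag = (-1 if c < 0 else 1), abs(c)
        for p in range(planes):
            if k < len(bits):
                mag = (mag & ~(1 << p)) | (bits[k] << p)
                k += 1
        out.append(sign * mag)
    return out

def extract_bits(coeffs, nbits, planes=1):
    """Recover the first `nbits` hidden bits in embedding order."""
    bits = []
    for c in coeffs:
        for p in range(planes):
            if len(bits) < nbits:
                bits.append((abs(c) >> p) & 1)
    return bits
```

With one bit plane, each coefficient magnitude changes by at most 1, which is why embedding in the lowest planes keeps the stego image visually close to the cover.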

  19. A new neutron energy spectrum unfolding code using a two steps genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Shahabinejad, H., E-mail: shahabinejad1367@yahoo.com; Hosseini, S.A.; Sohrabpour, M.

    2016-03-01

    A new neutron spectrum unfolding code, TGASU (Two-Step Genetic Algorithm Spectrum Unfolding), has been developed to unfold the neutron spectrum from a pulse-height distribution calculated using the MCNPX-ESUT computational Monte Carlo code. To perform the unfolding process, the response matrices were generated using the MCNPX-ESUT computational code. Both one-step (common GA) and two-step GAs have been implemented to unfold the neutron spectra. According to the obtained results, the new two-step GA code shows a closer match in all energy regions, particularly in the high-energy regions. The results of the TGASU code have been compared with those of the standard spectra, the LSQR method and the GAMCD code, and have been demonstrated to be more accurate than those of the existing computational codes for both under-determined and over-determined problems.

  20. Dictionary Learning for Sparse Coding: Algorithms and Convergence Analysis.

    Science.gov (United States)

    Bao, Chenglong; Ji, Hui; Quan, Yuhui; Shen, Zuowei

    2016-07-01

    In recent years, sparse coding has been widely used in many applications ranging from image processing to pattern recognition. Most existing sparse coding based applications require solving a class of challenging non-smooth and non-convex optimization problems. Despite the fact that many numerical methods have been developed for solving these problems, it remains an open problem to find a numerical method which is not only empirically fast, but also has mathematically guaranteed strong convergence. In this paper, we propose an alternating iteration scheme for solving such problems. A rigorous convergence analysis shows that the proposed method satisfies the global convergence property: the whole sequence of iterates is convergent and converges to a critical point. Besides the theoretical soundness, the practical benefit of the proposed method is validated in applications including image restoration and recognition. Experiments show that the proposed method achieves similar results with less computation when compared to widely used methods such as K-SVD.

  1. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    Directory of Open Access Journals (Sweden)

    Modestino James W

    2004-01-01

    Full Text Available Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and its effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.

  2. Code mapping algorithm for custom instructions on reconfigurable instruction set processors

    Science.gov (United States)

    Zhang, Huizhen; Chen, Yonghong

    2015-01-01

    Instruction set extension with custom instructions has become common to speed up the execution of applications, and a crucial problem is generating new optimised code with custom instructions through retargetable tool suites. This article proposes a code mapping algorithm, which maps static code of programs to the dynamic hot paths corresponding to custom instructions. The algorithm uses parameterised polynomial representations and invokes a string matching method. It can perform both fine-grained and coarse-grained mapping and find nested matching optimisations in the representations. Correctness of the algorithm is formally proved, and its time complexity is analysed. Furthermore, experiments are conducted with programs from NetBench and MiBench. The results show that the algorithm achieves high accuracy and notable performance improvement.

  3. Transform extension for block-based hybrid video codec with decoupling transform sizes from prediction sizes and coding sizes

    Science.gov (United States)

    Chen, Jing; Li, Ge; Fan, Kui; Guo, Xiaoqiang

    2017-09-01

    In the block-based hybrid video coding framework, a transform is applied to the residual signal resulting from intra/inter prediction. Thus, in most video codecs, the transform block (TB) size is equal to the prediction block (PB) size. To further improve coding efficiency, recent video coding techniques support decoupling transform sizes from prediction sizes. By splitting one prediction block into small transform blocks, the Residual Quad-tree (RQT) structure attempts to find the best transform size. However, in the current RQT, the transform size cannot be larger than the prediction block size. In this paper, we introduce a transform extension method that decouples transform sizes from both prediction sizes and coding sizes. In addition to choosing the transform block within the current PB partition, we combine multiple adjacent PBs to form a larger TB and select the best block size accordingly. In our experiments on top of the newest reference software (ITM17.0) of the MPEG Internet Video Coding (IVC) standard, consistent coding performance gains are obtained.

  4. Algorithms for High-speed Generating CRC Error Detection Coding in Separated Ultra-precision Measurement

    Science.gov (United States)

    Zhi, Z.; Tan, J. B.; Huang, X. D.; Chen, F. F.

    2006-10-01

    In order to resolve the conflict between error detection capability, transmission rate and system resources in the data transmission of ultra-precision measurement, an algorithm for high-speed generation of CRC codes is put forward in this paper. Theoretical formulae for calculating the CRC code of 16-bit segmented data are obtained by derivation. On the basis of the 16-bit segmented-data formulae, an optimized algorithm for 32-bit segmented-data CRC coding is obtained, which resolves the trade-off between memory occupancy and coding speed. Data coding experiments were conducted successfully on a high-speed ARM embedded system. The results show that this method combines high error-detection ability and high speed with low use of system resources, improving the real-time performance and reliability of measurement data communication.
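The segmented, precomputed approach to high-speed CRC generation can be sketched with the common byte-at-a-time table formulation: 256 partial remainders are precomputed so that each input byte costs one table lookup instead of eight shift-and-XOR steps. The CRC-16/CCITT-FALSE parameters below are a standard choice used purely for illustration; the paper's own 16/32-bit segmented formulae are not reproduced here.

```python
def make_crc16_table(poly=0x1021):
    """Precompute the 256-entry lookup table for byte-at-a-time CRC-16."""
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
        table.append(crc)
    return table

_TABLE = make_crc16_table()

def crc16(data, crc=0xFFFF):
    """CRC-16/CCITT-FALSE: one table lookup per byte instead of 8 bit shifts."""
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ _TABLE[((crc >> 8) ^ b) & 0xFF]
    return crc
```

Any CRC built on a generator polynomial detects all single-bit errors, which is the kind of payload corruption the measurement link must catch.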

  5. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which enables the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  6. Jointly optimized spatial prediction and block transform for video and image coding.

    Science.gov (United States)

    Han, Jingning; Saxena, Ankur; Melkote, Vinay; Rose, Kenneth

    2012-04-01

    This paper proposes a novel approach to jointly optimize spatial prediction and the choice of the subsequent transform in video and image compression. Under the assumption of a separable first-order Gauss-Markov model for the image signal, it is shown that the optimal Karhunen-Loeve Transform, given available partial boundary information, is well approximated by a close relative of the discrete sine transform (DST), with basis vectors that tend to vanish at the known boundary and maximize energy at the unknown boundary. The overall intraframe coding scheme thus switches between this variant of the DST named asymmetric DST (ADST), and traditional discrete cosine transform (DCT), depending on prediction direction and boundary information. The ADST is first compared with DCT in terms of coding gain under ideal model conditions and is demonstrated to provide significantly improved compression efficiency. The proposed adaptive prediction and transform scheme is then implemented within the H.264/AVC intra-mode framework and is experimentally shown to significantly outperform the standard intra coding mode. As an added benefit, it achieves substantial reduction in blocking artifacts due to the fact that the transform now adapts to the statistics of block edges. An integer version of this ADST is also proposed.
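The "close relative of the DST" described above, with basis vectors that vanish at the known boundary and maximize energy at the unknown one, corresponds to the DST-VII family. The small sketch below generates such a basis so its properties can be checked numerically; the exact normalization constant is my assumption for illustration, not taken from the paper.

```python
from math import pi, sin, sqrt

def dst7_matrix(n):
    """DST-VII basis: entry (k, i) is c * sin(pi*(2k+1)*(i+1)/(2n+1)),
    with row k the k-th frequency and column i the spatial position.
    The lowest-frequency vector rises from ~0 at the predicted (known)
    boundary (i = 0) to its maximum at the far (unknown) boundary."""
    c = 2.0 / sqrt(2 * n + 1)
    return [[c * sin(pi * (2 * k + 1) * (i + 1) / (2 * n + 1))
             for i in range(n)] for k in range(n)]
```

With this normalization the basis is orthonormal, and the monotone rise of the first basis vector away from the known boundary is exactly the energy profile the abstract attributes to the ADST.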

  7. An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes

    CERN Document Server

    Karimi, Mehdi

    2011-01-01

    This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or to be part of the apparatus to design LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-...

  8. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    Science.gov (United States)

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3 which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
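A ranking algorithm "based largely on the percentage of matches" can be sketched as below. The tooth codes and records here are hypothetical placeholders, and real systems such as WinID3 use far richer coding schemes and tie-breaking rules.

```python
def match_score(postmortem, antemortem):
    """Fraction of comparable teeth whose codes match exactly.
    Teeth coded None (no information) are skipped on either side."""
    comparable = [(p, a) for p, a in zip(postmortem, antemortem)
                  if p is not None and a is not None]
    if not comparable:
        return 0.0
    return sum(p == a for p, a in comparable) / len(comparable)

def rank_candidates(postmortem, antemortem_records):
    """Return antemortem record IDs sorted by descending match percentage,
    i.e. a ranked list of best possible matches for review."""
    scores = {rid: match_score(postmortem, rec)
              for rid, rec in antemortem_records.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Skipping uninformative teeth rather than counting them as mismatches is one way a percentage-based scheme stays robust to body fragmentation and incomplete records.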

  9. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for the rate, distortion and slope of the R-D curve for inter- and intra-frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  10. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  11. Rate Allocation in predictive video coding using a Convex Optimization Framework.

    Science.gov (United States)

    Fiengo, Aniello; Chierchia, Giovanni; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-10-26

    Optimal rate allocation is among the most challenging tasks in predictive video coding, because of the dependencies between frames induced by motion compensation. In this paper, using a recursive rate-distortion model that explicitly takes these dependencies into account, we approach frame-level rate allocation as a convex optimization problem. The technique is integrated into the recent HEVC encoder and tested on several standard sequences. Experiments indicate that the proposed rate allocation ensures better performance (in the rate-distortion sense) than the standard HEVC rate control, with only a small loss w.r.t. an optimal exhaustive search that is largely compensated by a much shorter execution time.
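To make the convex-optimization view concrete, here is a sketch for the much simpler case of independent frames under the classical exponential model D_i(r_i) = a_i·2^(−2·r_i) — an assumption for illustration only, since the paper's model is recursive and accounts for inter-frame dependencies. The KKT conditions then yield a water-filling-style solution, found here by bisection on the Lagrange multiplier.

```python
from math import log2

def allocate_rates(weights, total_rate, tol=1e-9):
    """Frame-level rate allocation minimizing sum_i a_i * 2**(-2*r_i)
    subject to sum(r_i) = total_rate and r_i >= 0.  Up to a constant
    rescaling of the multiplier, the KKT conditions give
    r_i(lmb) = max(0, 0.5 * log2(2*a_i / lmb)), monotone in lmb."""
    def rates(lmb):
        return [max(0.0, 0.5 * log2(2 * a / lmb)) for a in weights]

    lo, hi = 1e-12, 2 * max(weights)   # rates(hi) sums to 0; rates(lo) is large
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(rates(mid)) > total_rate:
            lo = mid    # multiplier too small: spending too much rate
        else:
            hi = mid
    return rates((lo + hi) / 2)
```

Frames with larger distortion weights receive proportionally more rate, and frames whose weight falls below the water level get none — the basic shape any convex frame-level allocator produces.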

  12. Low-Complexity Hierarchical Mode Decision Algorithms Targeting VLSI Architecture Design for the H.264/AVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Guilherme Corrêa

    2012-01-01

    Full Text Available In H.264/AVC, the encoding process can occur according to one of the 13 intraframe coding modes or according to one of the 8 available interframe block sizes, besides the SKIP mode. In the Joint Model reference software, the choice of the best mode is performed through exhaustive executions of the entire encoding process, which significantly increases the encoder's computational complexity and sometimes even forbids its use in real-time applications. Considering this context, this work proposes a set of heuristic algorithms targeting hardware architectures that lead to an earlier selection of one encoding mode. The number of repetitions of the encoding process is reduced by a factor of 47, at the cost of a relatively small loss in compression performance. When compared to other works, the fast hierarchical mode decision results are considerably more satisfactory in terms of computational complexity reduction, quality, and bit rate. The proposed low-complexity mode decision architecture is thus a very good option for real-time coding of high-resolution videos. The solution is especially interesting for embedded and mobile applications with support for multimedia systems, since it yields good compression rates and image quality with a very large reduction in encoder complexity.
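
    The early-selection idea can be illustrated with a toy decision routine: check the cheap SKIP mode first and only evaluate the expensive modes when the early check fails. Mode names, costs, and the threshold below are hypothetical, not the paper's trained heuristics.

```python
def decide_mode(costs, skip_threshold=50.0):
    """Early-terminating mode decision sketch.

    costs maps a mode name to a zero-argument function returning its
    rate-distortion cost, so expensive modes are only evaluated when
    the cheap SKIP check does not terminate the search early."""
    skip_cost = costs["SKIP"]()
    if skip_cost < skip_threshold:
        return "SKIP", skip_cost  # early exit: likely a homogeneous region
    evaluated = {"SKIP": skip_cost}
    evaluated.update({m: f() for m, f in costs.items() if m != "SKIP"})
    best = min(evaluated, key=evaluated.get)
    return best, evaluated[best]

modes = {"SKIP": lambda: 120.0, "16x16": lambda: 40.0, "8x8": lambda: 65.0}
print(decide_mode(modes))  # ('16x16', 40.0)
```

    In hardware the same structure maps naturally onto a short pipeline: the SKIP test gates whether the remaining mode evaluations are ever issued.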

  13. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    Science.gov (United States)

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.
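
    The two validation metrics quoted above follow directly from the chart-review confusion counts. A minimal sketch, with made-up counts chosen to land near the reported 42.2% PPV and 55.6% sensitivity:

```python
def ppv(true_pos: int, false_pos: int) -> float:
    """Positive predictive value: fraction of algorithm-flagged cases
    that are confirmed on medical record review."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of confirmed cases that the algorithm actually flags."""
    return true_pos / (true_pos + false_neg)

# Hypothetical review of 150 flagged charts: 63 confirmed, 87 refuted,
# and 50 additional confirmed cases the algorithm missed.
print(round(ppv(63, 87), 3))          # 0.42
print(round(sensitivity(63, 50), 3))  # 0.558
```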

  14. Real-time distributed video coding for 1K-pixel visual sensor networks

    Science.gov (United States)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  15. Coding algorithms for identifying patients with cirrhosis and hepatitis B or C virus using administrative data.

    Science.gov (United States)

    Niu, Bolin; Forde, Kimberly A; Goldberg, David S

    2015-01-01

    Despite the use of administrative data to perform epidemiological and cost-effectiveness research on patients with hepatitis B or C virus (HBV, HCV), there are no data outside of the Veterans Health Administration validating whether International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes can accurately identify cirrhotic patients with HBV or HCV. The validation of such algorithms is necessary for future epidemiological studies. We evaluated the positive predictive value (PPV) of ICD-9-CM codes for identifying chronic HBV or HCV among cirrhotic patients within the University of Pennsylvania Health System, a large network that includes a tertiary care referral center, a community-based hospital, and multiple outpatient practices across southeastern Pennsylvania and southern New Jersey. We reviewed a random sample of 200 cirrhotic patients with ICD-9-CM codes for HCV and 150 cirrhotic patients with ICD-9-CM codes for HBV. The PPV of 1 inpatient or 2 outpatient HCV codes was 88.0% (168/191, 95% CI: 82.5-92.2%), while the PPV of 1 inpatient or 2 outpatient HBV codes was 81.3% (113/139, 95% CI: 73.8-87.4%). Several variations of the primary coding algorithm were evaluated to determine if different combinations of inpatient and/or outpatient ICD-9-CM codes could increase the PPV of the coding algorithm. ICD-9-CM codes can identify chronic HBV or HCV in cirrhotic patients with a high PPV and can be used in future epidemiologic studies to examine disease burden and the proper allocation of resources. Copyright © 2014 John Wiley & Sons, Ltd.
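
    A "1 inpatient or 2 outpatient codes" rule like the one validated here can be sketched as a small claims filter. The code set and the claims below are illustrative only, not the study's full ICD-9-CM definition.

```python
from collections import Counter

# Illustrative HCV ICD-9-CM codes; the study's full code list is longer.
HCV_CODES = {"070.54", "070.70"}

def meets_algorithm(claims, codes, inpatient_min=1, outpatient_min=2):
    """Apply a '1 inpatient or 2 outpatient codes' rule to one patient.

    claims: iterable of (setting, icd9_code) pairs, setting in {"IP", "OP"}."""
    counts = Counter(setting for setting, code in claims if code in codes)
    return counts["IP"] >= inpatient_min or counts["OP"] >= outpatient_min

claims = [("OP", "070.54"), ("OP", "070.54"), ("IP", "250.00")]
print(meets_algorithm(claims, HCV_CODES))  # True (two outpatient HCV codes)
```

    Validation work of this kind then measures how often patients flagged by such a rule are confirmed on chart review.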

  16. ROCIT : a visual object recognition algorithm based on a rank-order coding scheme.

    Energy Technology Data Exchange (ETDEWEB)

    Gonzales, Antonio Ignacio; Reeves, Paul C.; Jones, John J.; Farkas, Benjamin D.

    2004-06-01

    This document describes ROCIT, a neural-inspired object recognition algorithm based on a rank-order coding scheme that uses a light-weight neuron model. ROCIT coarsely simulates a subset of the human ventral visual stream from the retina through the inferior temporal cortex. It was designed to provide an extensible baseline from which to improve the fidelity of the ventral stream model and explore the engineering potential of rank order coding with respect to object recognition. This report describes the baseline algorithm, the model's neural network architecture, the theoretical basis for the approach, and reviews the history of similar implementations. Illustrative results are used to clarify algorithm details. A formal benchmark to the 1998 FERET fafc test shows above average performance, which is encouraging. The report concludes with a brief review of potential algorithmic extensions for obtaining scale and rotational invariance.
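
    The essence of rank-order coding, representing a stimulus by the firing order of its inputs and decoding with a geometrically decaying modulation factor, can be sketched in a few lines. This is a toy version of the general idea, not ROCIT's ventral-stream model.

```python
def rank_order_encode(intensities):
    """Encode a stimulus as the firing order of its afferents:
    the strongest input fires first (rank 0), and so on."""
    return sorted(range(len(intensities)), key=lambda i: -intensities[i])

def rank_order_similarity(order_a, order_b, mod=0.9):
    """Thorpe-style decoding: each unit contributes mod**rank_a * mod**rank_b,
    normalised so that two identical firing orders score exactly 1.0."""
    rank_b = {unit: r for r, unit in enumerate(order_b)}
    score = sum(mod ** r * mod ** rank_b[u] for r, u in enumerate(order_a))
    best = sum(mod ** r * mod ** r for r in range(len(order_a)))
    return score / best

code = rank_order_encode([0.9, 0.1, 0.5])
print(code)                               # [0, 2, 1]
print(rank_order_similarity(code, code))  # 1.0
```

    Because only the order matters, the representation is invariant to global contrast changes, one of the properties that makes the scheme attractive for recognition.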

  17. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    Full Text Available In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC) codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and the Improved PEG method for the parity-check matrix construction, which can be used to avoid short girths, small trapping sets and a high level of error floor. A comparative analysis of several classes of LDPC codes, decoded using different decoding algorithms in various propagation conditions, is also presented.
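
    One of the simplest iterative decoders compared in studies like this is hard-decision bit flipping. A minimal sketch on a toy parity-check matrix (a small cycle code, not a PEG construction):

```python
# Toy parity-check matrix: the cycle code of K4. Each bit (an edge) is in
# exactly two checks (its endpoints) and no two bits share both checks,
# so bit flipping corrects any single error.
H = [
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def bit_flip_decode(H, word, max_iters=20):
    """Gallager-style bit flipping: while the syndrome is nonzero,
    flip the bit involved in the most unsatisfied parity checks."""
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
        if not any(syndrome):
            break
        counts = [sum(row[j] * s for row, s in zip(H, syndrome))
                  for j in range(len(word))]
        word[counts.index(max(counts))] ^= 1
    return word

print(bit_flip_decode(H, [0, 1, 0, 0, 0, 0]))  # [0, 0, 0, 0, 0, 0]
```

    Soft-decision belief propagation replaces the unsatisfied-check counts with log-likelihood ratios, which is where the fading statistics of the channel enter.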

  18. Mean-Adaptive Real-Coding Genetic Algorithm and its Applications to Electromagnetic Optimization (Part Two

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2007-09-01

    Full Text Available In the paper, a novel instance of the real-coding genetic algorithm (RCGA), called the Mean-adaptive real-coding genetic algorithm (MAD-RCGA), is applied along with other RCGAs to selected problems in microwaves. The problems include the design of a microstrip dipole, the design of frequency-selective surfaces, and the design of a Yagi antenna. Apart from other things, the purpose of this paper is to compare these instances with MAD-RCGA on problems having some technical relevance.
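
    A bare-bones real-coded GA (tournament selection, blend crossover, Gaussian mutation) illustrates the class of algorithms being compared; the mean-adaptive mechanism of MAD-RCGA itself is not reproduced here, and all parameters are illustrative.

```python
import random

def rcga_minimize(f, dim, pop_size=30, gens=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal real-coded GA: tournament selection, arithmetic (blend)
    crossover, Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=f)]  # keep the best individual unchanged
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=f) for _ in range(2))
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(hi, max(lo, x + rng.gauss(0, 0.1))) for x in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# Minimize the sphere function as a stand-in for an EM cost function
# (in antenna design, f would call a field solver instead).
best = rcga_minimize(lambda v: sum(x * x for x in v), dim=3)
```

    In electromagnetic optimization the fitness evaluation dominates the run time, which is why reducing the number of evaluations (the point of adaptive variants) matters more than the GA bookkeeping itself.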

  19. An iterative algorithm for correcting sequencing errors in DNA coding regions

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ying; Mural, R.J.; Uberbacher, E.C.

    1995-12-31

    Insertion and deletion (indel) sequencing errors in DNA coding regions disrupt DNA-to-protein translation frames, and hence make most frame-sensitive coding recognition approaches fail. This paper extends the authors' previous work on indel detection and "correction" algorithms, and presents a more effective algorithm for localizing indels that appear in DNA coding regions and "correcting" the located indels by inserting or deleting DNA bases. The algorithm localizes indels by discovering changes of the preferred translation frames within presumed coding regions, and then "corrects" the indel errors to restore a consistent translation frame within each coding region. An iterative strategy is exploited to repeatedly localize and "correct" indel errors until no more indels can be found. Test results have shown that the algorithm can accurately locate the positions of indels. The technology presented here has proved to be very useful for single-pass EST/cDNA or genomic sequences, and is also often beneficial for higher-quality sequences from large genomic clones.

  20. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network.

    Science.gov (United States)

    Lin, Kai; Wang, Di; Hu, Long

    2016-07-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
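
    The Dempster-Shafer combination step that drives CMNC's fusion model can be sketched directly. The focal elements and mass values below are invented for illustration; in the algorithm they would come from sensor-data evidence.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting mass (empty intersections) is
    discarded and the rest renormalised."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Two sensors weighing hypotheses "A" and "B" about the data content.
A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.5, A | B: 0.5}
out = dempster_combine(m1, m2)
print(round(out[A], 3))  # 0.429
```

    The combined masses can then rank the classes a node belongs to, which is the input the channel-assignment stage needs.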

  1. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    Directory of Open Access Journals (Sweden)

    Kai Lin

    2016-07-01

    Full Text Available With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.

  2. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC) multimedia bit streams, such as H.264. It is based on codeword error diffusion and variable-size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is transparent to the end users and can be deployed at any node in the network. It provides different levels of security, with the encrypted data volume fluctuating between 5.5-17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plaintext attacks.
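
    A single logistic map can stand in for the paper's coupled chaotic maps to illustrate keystream-driven segment shuffling (illustrative only; the actual scheme also diffuses codeword errors and is keyed cryptographically).

```python
def logistic_keystream(seed, n, r=3.99):
    """Chaotic pseudo-random stream from the logistic map, a simple
    stand-in for the coupled-map generator used in the paper."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def chaotic_shuffle(segment, seed=0.37):
    """Permute a bitstream segment by sorting positions on chaotic values;
    the returned permutation lets the receiver invert the shuffle."""
    ks = logistic_keystream(seed, len(segment))
    perm = sorted(range(len(segment)), key=ks.__getitem__)
    return [segment[p] for p in perm], perm

def unshuffle(shuffled, perm):
    out = [None] * len(shuffled)
    for i, p in enumerate(perm):
        out[p] = shuffled[i]
    return out

data = list(b"VLC bitstream")
scrambled, perm = chaotic_shuffle(data)
print(unshuffle(scrambled, perm) == data)  # True
```

    Because the permutation is derived from the seed alone, only the key needs to be shared; the shuffled segment remains a valid-length piece of the compressed stream, which is why no decoding is required.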

  3. Engineering application of in-core fuel management optimization code with CSA algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhihong; Hu, Yongming [INET, Tsinghua university, Beijing 100084 (China)

    2009-06-15

    PWR in-core loading (reloading) pattern optimization is a complex combinatorial problem. An excellent fuel management optimization code can greatly improve the efficiency of core reloading design and bring economic and safety benefits. Today many optimization codes based on experience or searching algorithms (such as SA, GA, ANN, ACO) have been developed, while how to improve their searching efficiency and engineering usability still needs further research. CSA (Characteristic Statistic Algorithm) is a global optimization algorithm with high efficiency developed by our team. The performance of CSA has been proved on many problems (such as Traveling Salesman Problems). The idea of CSA is to induce the searching direction from the statistical distribution of characteristic values. This algorithm is quite suitable for fuel management optimization. An optimization code with CSA has been developed and used on many core models. The research in this paper improves the engineering usability of the CSA code according to actual engineering requirements. Many new improvements have been completed in this code: 1. Considering the asymmetry of burn-up within one assembly, the rotation of each assembly is treated as a new optimization variable. 2. The worth of control rods must satisfy the given constraint, so the corresponding modifications are added to the optimization code. 3. To deal with the combination of alternate cycles, multi-cycle optimization is considered. 4. To confirm the accuracy of optimization results, the physics calculation module in this code has been extensively verified, and the parameters of the optimization schemes are checked by the SCIENCE code. The improved optimization code with CSA has been used on the Qinshan nuclear plant of China. The reloading of cycles 7, 8, and 9 (12 months, no burnable poisons) and the 18-month equilibrium cycle (with burnable poisons) reloading were optimized. Finally, many optimized schemes were found by the CSA code.

  4. Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

    Directory of Open Access Journals (Sweden)

    A. Markovic

    2013-04-01

    Full Text Available This paper provides an analysis of the multiplexed video signal transmission through the Dense Wavelength Division Multiplexing (DWDM) network based on a quality check algorithm, which determines where the degradation of the transmission quality starts. On the basis of this algorithm, simulations of transmission for specific values of fiber parameters are executed. The analysis of the results shows how the BER and Q-factor change depending on the length of the fiber, i.e. on the number of amplifiers, and what kind of effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of DWDM systems is performed in the software package OptiSystem 7.0, which is designed for systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel.
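
    The Q-factor and BER reported by such simulations are linked by the standard Gaussian-noise approximation BER = ½ erfc(Q/√2), which is easy to check numerically:

```python
import math

def ber_from_q(q):
    """Bit error rate of a binary optical link from its Q-factor,
    under the Gaussian noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

print(ber_from_q(6.0))  # ≈ 1e-9, the usual "error-free" threshold
```

    This is why a Q-factor of about 6 is commonly quoted as the quality-check limit: beyond it the pre-FEC error rate drops below roughly one error per billion bits.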

  5. A multiscale numerical algorithm for heat transfer simulation between multidimensional CFD and monodimensional system codes

    Science.gov (United States)

    Chierici, A.; Chirco, L.; Da Vià, R.; Manservisi, S.; Scardovelli, R.

    2017-11-01

    Nowadays the rapidly-increasing computational power allows scientists and engineers to perform numerical simulations of complex systems that can involve many scales and several different physical phenomena. In order to perform such simulations, two main strategies can be adopted: one may develop a new numerical code where all the physical phenomena of interest are modelled or one may couple existing validated codes. With the latter option, the creation of a huge and complex numerical code is avoided but efficient methods for data exchange are required since the performance of the simulation is highly influenced by its coupling techniques. In this work we propose a new algorithm that can be used for volume and/or boundary coupling purposes for both multiscale and multiphysics numerical simulations. The proposed algorithm is used for a multiscale simulation involving several CFD domains and monodimensional loops. We adopt the overlapping domain strategy, so the entire flow domain is simulated with the system code. We correct the system code solution by matching averaged inlet and outlet fields located at the boundaries of the CFD domains that overlap parts of the monodimensional loop. In particular we correct pressure losses and enthalpy values with source-sink terms that are imposed in the system code equations. The 1D-CFD coupling is a defective one since the CFD code requires point-wise values on the coupling interfaces and the system code provides only averaged quantities. In particular we impose, as inlet boundary conditions for the CFD domains, the mass flux and the mean enthalpy that are calculated by the system code. With this method the mass balance is preserved at every time step of the simulation. The coupling between consecutive CFD domains is not a defective one since with the proposed algorithm we can interpolate the field solutions on the boundary interfaces. We use the MED data structure as the base structure where all the field operations are

  6. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    Science.gov (United States)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from Macro-Blocks losses due to either packet dropping or fading-motivated bit errors. Thus, the robust performance of 3D-MVV transmission schemes over wireless channels becomes a recent considerable hot research issue due to the restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot by several cameras around a single object, simultaneously. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It is used to reduce the channel effects on the transmitted bit streams and it also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework; several simulation experiments on different SVD watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
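
    The error-dispersing role of interleaving can be illustrated with a plain row-in, column-out block interleaver; this is only a stand-in for the chaotic Baker map used in the paper, which additionally acts as a light layer of encryption.

```python
def interleave(frame_bits, width):
    """Write the bits row-wise into a matrix of the given width and read
    them out column-wise, so consecutive bits end up far apart."""
    rows = [frame_bits[i:i + width] for i in range(0, len(frame_bits), width)]
    return [row[c] for c in range(width) for row in rows]

def deinterleave(bits, width):
    """Invert interleave() for a stream whose length divides evenly."""
    height = len(bits) // width
    rows = [[None] * width for _ in range(height)]
    i = 0
    for c in range(width):
        for r in range(height):
            rows[r][c] = bits[i]
            i += 1
    return [b for row in rows for b in row]

bits = list(range(12))
print(interleave(bits, 4))  # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(interleave(bits, 4), 4) == bits)  # True
```

    After deinterleaving at the receiver, a burst of consecutive channel errors is spread across distant positions of the frame, which the convolutional code can then correct as isolated errors.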

  7. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features.

    Science.gov (United States)

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

    The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to acquire better compression efficiency compared to its predecessor H.264. The encoding time complexity has also increased multiple times, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a human visual attention modelling-based saliency feature and phase correlation-based motion features. The features are innovatively combined through a fusion process by developing a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block-partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably down-scales the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.

  8. A QoE Aware Fairness Bi-level Resource Allocation Algorithm for Multiple Video Streaming in WLAN

    Directory of Open Access Journals (Sweden)

    Hu Zhou

    2015-11-01

    Full Text Available With the increasing number of smart devices such as mobile phones and tablets, the scenario of multiple video users watching video streaming simultaneously in one wireless local area network (WLAN) becomes more and more popular. However, the quality of experience (QoE) and the fairness among multiple users are seriously impacted by the limited bandwidth and shared resources of the WLAN. In this paper, we propose a novel bi-level resource allocation algorithm. To maximize the total throughput of the network, the WLAN is first tuned to the optimal operation point. Then the wireless resource is carefully allocated at the first level, i.e., between the AP and uplink background traffic users, and at the second level, i.e., among downlink video users. The simulation results show that the proposed algorithm can guarantee the QoE and the fairness for all the video users, with little impact on the average throughput of the background traffic users.

  9. FFT Algorithm for Binary Extension Finite Fields and Its Application to Reed–Solomon Codes

    KAUST Repository

    Lin, Sian Jheng

    2016-08-15

    Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed with complexity of order O(n lg(n)), where n is the number of points evaluated in the FFT. In this paper, we reformulate this FFT algorithm such that it can be more easily understood and extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic Reed-Solomon (RS) codes over F_{2^m}, m ∈ Z^+, with n-k a power of two. First, the basis of syndrome polynomials is reformulated in the decoding procedure so that the new transforms can be applied to the decoding procedure. A fast extended Euclidean algorithm is developed to determine the error locator polynomial. The computational complexity of the proposed decoding algorithm is O(n lg(n-k) + (n-k) lg^2(n-k)), improving upon the best currently available decoding complexity O(n lg^2(n) lg lg(n)), and reaching the best known complexity bound that was established by Justesen in 1976. However, Justesen's approach is only for codes over some specific fields, which can apply Cooley-Tukey FFTs. As revealed by the computer simulations, the proposed decoding algorithm is 50 times faster than the conventional one for the (2^16, 2^15) RS code over F_{2^16}.

  10. Spatial resolution enhancement residual coding using hybrid ...

    Indian Academy of Sciences (India)

    the increasing demands of video communication that motivates researchers to develop cutting-edge algorithms. All the video coding standards, to date, make use of various ... quantization and entropy coding to minimize spatio-temporal, intra-frame, visual, and statistical redundancies, respectively. Intra and inter prediction.

  11. Algorithms For Wireless Channel Equalization With Joint Coding And Soft Decision Feedback

    Directory of Open Access Journals (Sweden)

    Radu DOBRESCU

    2001-12-01

    Full Text Available The paper proposes a new approach based on Joint Entropy Maximisation (JEM) using a soft decision feedback equalizer (S-DFE) to suppress error propagation. In its first section, the paper presents the principle of the solution and the theoretical framework based on entropy maximisation, which allows introducing the soft decision device without assuming that the channel distortion is Gaussian. Because JE is a non-linear function, a gradient descent algorithm is used for the maximisation. Then an equivalence of JEM and ISIC (Inter-Symbol Interference Cancellation) is proved in order to establish that an equalised single-carrier system using coded modulation (8-phase shift keying associated with a convolutional code) offers similar performance when compared with multicarrier modulation. In the second section the paper develops an adequate receiver model for joint convolutional coding and S-DFE. The error correction decoder uses a standard Viterbi algorithm. The DFE consists of a feedforward finite impulse response (FIR) filter (FFF) and a feedback filter (FBF) implemented as a transversal FIR filter. The FFF eliminates the precursor ISI, while the FBF minimises the effect of residual ISI using soft decisions from the joint coding and equalisation process. The third main section of the paper describes the proposed method for estimating the optimum soft feedback using a maximum a posteriori probability (MAP) algorithm. Then, the performance of the soft decision device in a simulated environment is analysed on a structure with 8 taps for the FFF and 5 taps for the FBF. Since the purpose of the evaluation was to compare the proposed S-DFE with a former hard-decision DFE, the coded packet error rate was estimated in a two-path and in a six-path channel. We have shown that in some cases the proposed algorithm offers a better convergence rate and robustness when compared with the corresponding existing algorithm. Some conclusions on the extension of the S-DFE techniques to various applications are finally presented.
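
    The FFF/FBF structure described above can be sketched for a toy BPSK link. The taps are assumed already trained, and hard decisions are used in the feedback path instead of the paper's soft decisions.

```python
def dfe_equalize(received, ff_taps, fb_taps):
    """Decision feedback equalizer sketch: a feedforward FIR works on the
    received samples, while a feedback FIR cancels postcursor ISI using
    past symbol decisions (hard BPSK slicing here)."""
    decisions = []
    for n in range(len(received)):
        ff = sum(ff_taps[k] * received[n - k]
                 for k in range(len(ff_taps)) if n - k >= 0)
        fb = sum(fb_taps[k] * decisions[n - 1 - k]
                 for k in range(len(fb_taps)) if n - 1 - k >= 0)
        decisions.append(1.0 if ff - fb >= 0 else -1.0)
    return decisions

# BPSK symbols through a 1 + 0.5 z^-1 channel; one feedback tap cancels
# the single postcursor echo exactly, so no feedforward shaping is needed.
symbols = [1.0, -1.0, -1.0, 1.0, 1.0, -1.0, 1.0]
received = [s + 0.5 * (symbols[i - 1] if i else 0.0)
            for i, s in enumerate(symbols)]
print(dfe_equalize(received, ff_taps=[1.0], fb_taps=[0.5]) == symbols)  # True
```

    The error-propagation problem the paper attacks is visible in this structure: a wrong entry in `decisions` feeds back into later symbols, which is exactly what replacing the hard slicer with a soft decision device mitigates.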

  12. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed the 4 variations of ICA and the other sparse coding algorithms tested. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system

    Science.gov (United States)

    Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.

    2017-11-01

    Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because of the high efficiency that the OCDMA technique can achieve; hence the fibre bandwidth is fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), the Phase Induced Intensity Noise (PIIN), and the orthogonality between users in the system. From the results, SeQ shows good BER performance and is capable of accommodating 190 simultaneous users, in contrast with the existing codes. Thus, the SeQ code has enhanced the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ shows a good BER performance of 10^-25 at 155 Mbps, in comparison with the 622 Mbps, 1 Gbps and 2 Gbps bit rates. From the plotted graph, the 155 Mbps bit rate is fast enough for FTTH and LAN networks. A conclusion can be drawn based on the superior performance of the SeQ code. Thus, these codes will provide an opportunity in OCDMA systems for a better quality of service in optical access networks for future generations' usage.

  14. Stereo side information generation in low-delay distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a technique that allows shifting the computational complexity from the encoder to the decoder. One of the core elements of the decoder is the creation of the Side Information (SI), which is a hypothesis of what the to-be-decoded frame looks like. Much work on DVC...... has been carried out: often the decoder can use future and past frames in order to obtain the SI exploiting the time redundancy. Other work has addressed a Multiview scenario; exploiting the frames coming from cameras close to the one we are decoding (usually a left and right camera) it is possible...... to create SI exploiting the inter-view spatial redundancy. A careful fusion of the two SI should be done in order to use the best part of each SI. In this work we study a Stereo Low-Delay scenario using only two views. Due to the delay constraint we use only past frames of the sequence we are decoding...

  15. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits

    Directory of Open Access Journals (Sweden)

    Lieberman Rebecca M

    2008-04-01

    Full Text Available Abstract Background Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. Methods This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases by chart review of visits identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose < 3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. Results We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86–92 for
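The screening logic of such a coding algorithm is straightforward to sketch: flag a visit if any candidate code appears in any diagnosis field (not just the first), and drop 250.8-only visits that carry a predetermined excluding co-diagnosis. The candidate code list below is the one from the abstract, but the exclusion list and example visits are hypothetical, since the abstract does not enumerate the co-diagnosis codes.

```python
# Candidate ICD-9-CM codes from the abstract.
CANDIDATE = {"250.3", "250.8", "251.0", "251.1", "251.2",
             "270.3", "775.0", "775.6", "962.3"}
# Hypothetical excluding co-diagnoses (the paper's list is not in the abstract).
EXCLUDE_WITH_2508 = {"250.1", "250.2"}

def flag_hypoglycemia(dx_codes):
    """Return True if a visit should be flagged as hypoglycemia."""
    hits = CANDIDATE & set(dx_codes)
    if not hits:
        return False
    # 250.8 only counts when no excluding co-diagnosis is present.
    if hits == {"250.8"} and EXCLUDE_WITH_2508 & set(dx_codes):
        return False
    return True

# (codes on the visit, chart-review truth) -- toy examples
visits = [
    (["250.8"], True),            # flagged and confirmed
    (["250.8", "250.1"], False),  # excluded by co-diagnosis
    (["401.9", "251.2"], True),   # candidate code outside the primary field
]
flagged = [v for v, _ in visits if flag_hypoglycemia(v)]
true_pos = sum(1 for v, truth in visits if flag_hypoglycemia(v) and truth)
ppv = true_pos / len(flagged)
print(f"flagged={len(flagged)} PPV={ppv:.2f}")
```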

  16. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    Directory of Open Access Journals (Sweden)

    Bezan Scott

    2006-01-01

    Full Text Available To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  17. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.

  18. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to the Gauss-Jordan elimination method employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that ensures there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  19. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to the Gauss-Jordan elimination method employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that ensures there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  20. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously the experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
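The GA calibration loop described above can be sketched generically: evolve candidate input-parameter vectors to minimize a weighted, normalized discrepancy over several SRQs. The "simulator" below is a toy two-parameter surrogate, and the target values, weights, and GA settings are all invented for illustration; nothing here reproduces RELAP5.

```python
import random
random.seed(1)

EXP = {"flow_rate": 2.0, "period": 15.0}          # "experimental" SRQs (toy)
WEIGHTS = {"flow_rate": 1.0, "period": 0.5}       # fitness weighting (assumed)

def simulate(params):                              # toy surrogate, not RELAP5
    a, b = params
    return {"flow_rate": a * b, "period": 10.0 * a + b}

def fitness(params):                               # weighted normalized error
    sim = simulate(params)
    return sum(w * ((sim[q] - EXP[q]) / EXP[q]) ** 2
               for q, w in WEIGHTS.items())

def evolve(pop_size=40, gens=60, bounds=(0.1, 5.0)):
    pop = [[random.uniform(*bounds) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                      # elitist selection
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(p1, p2)]   # uniform crossover
            if random.random() < 0.3:                          # Gaussian mutation
                i = random.randrange(2)
                child[i] = min(bounds[1], max(bounds[0],
                               child[i] + random.gauss(0, 0.2)))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("best params:", best, "fitness:", fitness(best))
```

The point of the fitness function is the one the abstract stresses: with multiple SRQs, the normalization and weighting factors decide which discrepancies the search actually minimizes.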

  1. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on the power-distortion trade-off. We proposed an approach......Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify...... for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing a decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including...

  2. Are you ready for an office code blue? : online video to prepare for office emergencies.

    Science.gov (United States)

    Moore, Simon

    2015-01-01

    Medical emergencies occur commonly in offices of family physicians, yet many offices are poorly prepared for emergencies. An Internet-based educational video discussing office emergencies might improve the responses of physicians and their staff to emergencies, yet such a tool has not been previously described. To use evidence-based practices to develop an educational video detailing preparation for emergencies in medical offices, disseminate the video online, and evaluate the attitudes of physicians and their staff toward the video. A 6-minute video was created using a review of recent literature and Canadian regulatory body policies. The video describes recommended emergency equipment, emergency response improvement, and office staff training. Physicians and their staff were invited to view the video online at www.OfficeEmergencies.ca. Viewers' opinions of the video format and content were assessed by survey (n = 275). Survey findings indicated the video was well presented and relevant, and the Web-based format was considered convenient and satisfactory. Participants would take other courses using this technology, and agreed this program would enhance patient care. Copyright © the College of Family Physicians of Canada.

  3. GOP-based channel rate allocation using genetic algorithm for scalable video streaming over error-prone networks.

    Science.gov (United States)

    Fang, Tao; Chau, Lap-Pui

    2006-06-01

    In this paper, we address the problem of unequal error protection (UEP) for scalable video transmission over a wireless packet-erasure channel. Unequal amounts of protection are allocated to the different frames (I- or P-frame) of a group-of-pictures (GOP), and in each frame, unequal amounts of protection are allocated to the progressive bit-stream of scalable video to provide a graceful degradation of video quality as the packet loss rate varies. We use a genetic algorithm (GA) to quickly obtain the allocation pattern, which is hard to find with other conventional methods, such as the hill-climbing method. Theoretical analysis and experimental results both demonstrate the advantage of the proposed algorithm.

  4. Optimisation des codes LDPC irréguliers et algorithmes de décodage des codes LDPC q-aires

    OpenAIRE

    cances, Jean Pierre

    2013-01-01

    This technical note reviews the optimization principles for obtaining high-performance irregular LDPC code profiles, and recalls the principles of the decoding algorithms used for q-ary LDPC codes with high spectral efficiency.

  5. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy.

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md Shamin; Wahid, Khan A

    2015-04-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4-7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial.
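The core idea above, sending one colour frame followed by grey-scale frames and re-colourising at the receiver, can be sketched with a luma/chroma split: keep the chroma planes of the colour key frame and combine them with each received grey-scale (luminance) frame. This uses standard JPEG-style YCbCr constants on tiny synthetic images; it does not reproduce the paper's dictionary-based scheme, which is more elaborate.

```python
import numpy as np

def rgb_to_ycbcr(img):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

rng = np.random.default_rng(0)
# Transmitted once: a full colour key frame; its chroma is cached.
key_frame = rng.integers(0, 256, size=(8, 8, 3)).astype(float)
_, cb_key, cr_key = rgb_to_ycbcr(key_frame)

# Transmitted repeatedly: grey-scale (luminance-only) frames.
grey_frame = rng.integers(0, 256, size=(8, 8)).astype(float)
colourised = ycbcr_to_rgb(grey_frame, cb_key, cr_key)
print(colourised.shape)   # (8, 8, 3)
```

Dropping two of three planes for most frames is what yields the transmission-power saving; the key frame must be refreshed often enough that the cached chroma stays valid.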

  6. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems

    Science.gov (United States)

    Xu, Huihui; Jiang, Mingyan

    2015-07-01

    Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor of the camera, is also suitable for a single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.

  7. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours, with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method, based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with successively three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  8. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405

  9. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    Science.gov (United States)

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used actual laryngeal video stroboscope videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image with the largest glottal area from each video, in order to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed, which obtains the physiological parameters for normal vocal folds, vocal fold paralysis and vocal fold nodules from image processing according to the pathological features. The decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, and with an additional image recognition improvement procedure after classification it could be raised to 98.7%. Hence, the proposed system has value in clinical practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
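The classification stage is a standard supervised decision tree over per-video physiological parameters. A toy illustration follows, with invented features (normalized maximal glottal area and a left/right motion-asymmetry score) and synthetic well-separated classes; the paper's actual feature set and data are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical features: [max glottal area, left-right motion asymmetry]
normal    = rng.normal([1.0, 0.1], 0.05, size=(30, 2))
paralysis = rng.normal([0.9, 0.6], 0.05, size=(30, 2))   # asymmetric motion
nodules   = rng.normal([0.5, 0.1], 0.05, size=(30, 2))   # reduced glottal area

X = np.vstack([normal, paralysis, nodules])
y = np.array([0] * 30 + [1] * 30 + [2] * 30)             # 0/1/2 = class labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
acc = tree.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

A shallow tree is attractive here because its splits (e.g. "glottal area below a threshold") map directly onto the pathological features clinicians already use.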

  10. Chaos gray-coded genetic algorithm and its application for pollution source identifications in convection diffusion equation

    Science.gov (United States)

    Yang, Xiaohua; Yang, Zhifeng; Yin, Xinan; Li, Jianqiang

    2008-10-01

    In order to reduce the computational cost and improve the precision of nonlinear optimization and of pollution source identification in the convection-diffusion equation, a new algorithm, the chaos gray-coded genetic algorithm (CGGA), is proposed, in which the initial population is generated by chaos mapping, and new chaos mutation and Hooke-Jeeves evolution operations are used. As the search range shrinks, CGGA is gradually directed to an optimal result by the excellent individuals obtained by the gray-coded genetic algorithm. Its convergence is analyzed. It is very efficient in maintaining population diversity during the evolution process of the gray-coded genetic algorithm, and it overcomes the Hamming-cliff phenomenon present in genetic algorithms with other encodings. Its efficiency is verified by application to 20 nonlinear test functions of 1-20 variables, compared with a standard binary-coded genetic algorithm and an improved genetic algorithm. The position and intensity of the pollution source are found well by CGGA. Compared with a gray-coded hybrid-accelerated genetic algorithm and a pure random search algorithm, CGGA has faster convergence and higher calculation precision.
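Two ingredients of CGGA can be sketched in isolation: logistic-map chaos generation of initial genes, and Gray coding, under which adjacent integers differ in exactly one bit, which is what removes the Hamming cliffs of plain binary coding. The map parameters below are standard choices, not values from the paper.

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse of to_gray."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def chaos_population(size, x0=0.345, r=4.0):
    """Generate initial genes in [0, 1] with the logistic map x <- r*x*(1-x)."""
    pop, x = [], x0
    for _ in range(size):
        x = r * x * (1.0 - x)
        pop.append(x)
    return pop

# The binary pair 7 (0111) / 8 (1000) differs in 4 bits -- a Hamming cliff.
# In Gray code the same pair differs in exactly one bit:
print(bin(to_gray(7) ^ to_gray(8)).count("1"))   # 1
print(chaos_population(5))
```

The chaos sequence spreads initial individuals over the search space without clustering, while the Gray encoding keeps single-bit mutations equivalent to small moves in parameter space.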

  11. Analysis of image content recognition algorithm based on sparse coding and machine learning

    Science.gov (United States)

    Xiao, Yu

    2017-03-01

    This paper presents an image classification algorithm based on a spatial sparse coding model and random forests. First, SIFT features are extracted from the image; sparse coding theory is then used to generate a visual vocabulary based on the SIFT features, and the SIFT features are converted into sparse vectors using this vocabulary. Through a combination of regional aggregation and spatial pooling of the sparse vectors, a fixed-dimension sparse vector is obtained to represent the image. Finally, a random forest classifier is trained and tested on the image sparse vectors, using the standard benchmark data sets Caltech-101 and Scene-15. The experimental results show that the proposed algorithm can effectively represent the features of the image and improve the classification accuracy. We further propose an image recognition algorithm based on image segmentation, sparse coding and multiple-instance learning. This algorithm introduces the concept of multiple-instance learning, treating the image as a multiple-instance bag with SIFT-transformed image regions as instances; the sparse coding model generates the visual vocabulary, the bag is mapped to the feature space through statistics over the instances it contains, and a 1-norm SVM is then used to classify the images and generate sample weights that select important image features.

  12. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    Full Text Available We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length.

  13. Code Tracking Algorithms for Mitigating Multipath Effects in Fading Channels for Satellite-Based Positioning

    Directory of Open Access Journals (Sweden)

    Markku Renfors

    2007-12-01

    Full Text Available The ever-increasing public interest in location and positioning services has originated a demand for higher performance global navigation satellite systems (GNSSs). In order to achieve this incremental performance, the estimation of line-of-sight (LOS) delay with high accuracy is a prerequisite for all GNSSs. The delay lock loops (DLLs) and their enhanced variants (i.e., feedback code tracking loops) are the structures of choice for the commercial GNSS receivers, but their performance in severe multipath scenarios is still rather limited. In addition, the new satellite positioning system proposals specify the use of a new modulation, the binary offset carrier (BOC) modulation, which triggers a new challenge in the code tracking stage. Therefore, in order to meet this emerging challenge and to improve the accuracy of the delay estimation in severe multipath scenarios, this paper analyzes feedback as well as feedforward code tracking algorithms and proposes the peak tracking (PT) methods, which are combinations of both feedback and feedforward structures and utilize the inherent advantages of both structures. We propose and analyze here two variants of the PT algorithm: PT with second-order differentiation (Diff2), and PT with Teager Kaiser (TK) operator, which will be denoted herein as PT(Diff2) and PT(TK), respectively. In addition to the proposal of the PT methods, the authors propose also an improved early-late-slope (IELS) multipath elimination technique which is shown to provide very good mean-time-to-lose-lock (MTLL) performance. An implementation of a noncoherent multipath estimating delay locked loop (MEDLL) structure is also presented. We also incorporate here an extensive review of the existing feedback and feedforward delay estimation algorithms for direct sequence code division multiple access (DS-CDMA) signals in satellite fading channels, by taking into account the impact of binary phase shift keying (BPSK) as well as the newly proposed BOC modulation

  14. Code Tracking Algorithms for Mitigating Multipath Effects in Fading Channels for Satellite-Based Positioning

    Science.gov (United States)

    Bhuiyan, Mohammad Zahidul H.; Lohan, Elena Simona; Renfors, Markku

    2007-12-01

    The ever-increasing public interest in location and positioning services has originated a demand for higher performance global navigation satellite systems (GNSSs). In order to achieve this incremental performance, the estimation of line-of-sight (LOS) delay with high accuracy is a prerequisite for all GNSSs. The delay lock loops (DLLs) and their enhanced variants (i.e., feedback code tracking loops) are the structures of choice for the commercial GNSS receivers, but their performance in severe multipath scenarios is still rather limited. In addition, the new satellite positioning system proposals specify the use of a new modulation, the binary offset carrier (BOC) modulation, which triggers a new challenge in the code tracking stage. Therefore, in order to meet this emerging challenge and to improve the accuracy of the delay estimation in severe multipath scenarios, this paper analyzes feedback as well as feedforward code tracking algorithms and proposes the peak tracking (PT) methods, which are combinations of both feedback and feedforward structures and utilize the inherent advantages of both structures. We propose and analyze here two variants of PT algorithm: PT with second-order differentiation (Diff2), and PT with Teager Kaiser (TK) operator, which will be denoted herein as PT(Diff2) and PT(TK), respectively. In addition to the proposal of the PT methods, the authors propose also an improved early-late-slope (IELS) multipath elimination technique which is shown to provide very good mean-time-to-lose-lock (MTLL) performance. An implementation of a noncoherent multipath estimating delay locked loop (MEDLL) structure is also presented. We also incorporate here an extensive review of the existing feedback and feedforward delay estimation algorithms for direct sequence code division multiple access (DS-CDMA) signals in satellite fading channels, by taking into account the impact of binary phase shift keying (BPSK) as well as the newly proposed BOC modulation
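The Teager-Kaiser operator used by the PT(TK) variant is simple enough to sketch: psi[x](n) = x(n)^2 - x(n-1)*x(n+1). Applied to a code correlation function it stays near zero on linear slopes and spikes at peaks, which helps isolate the line-of-sight peak from close-in multipath replicas. The triangular correlation shape and delay values below are a toy stand-in for a real receiver's correlator output.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser operator: x(n)^2 - x(n-1)*x(n+1)."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

# Toy correlation: LOS triangular peak at delay 20 samples plus a weaker
# multipath replica at delay 26.
delays = np.arange(60)
tri = lambda d0: np.clip(1 - np.abs(delays - d0) / 10.0, 0, None)
corr = tri(20) + 0.5 * tri(26)

psi = teager_kaiser(corr)
print("raw argmax:", int(np.argmax(corr)), "TK argmax:", int(np.argmax(psi)))
```

On the linear flanks of the triangles psi is a small constant (the squared slope), while at the LOS peak it spikes, so a peak-tracking discriminator built on psi is less biased by the overlapping multipath ramp.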

  15. EMdeCODE: a novel algorithm capable of reading words of epigenetic code to predict enhancers and retroviral integration sites and to identify H3R2me1 as a distinctive mark of coding versus non-coding genes.

    Science.gov (United States)

    Santoni, Federico Andrea

    2013-02-01

    Existence of some extra-genetic (epigenetic) codes has been postulated since the discovery of the primary genetic code. Evident effects of histone post-translational modifications or DNA methylation on the efficiency and regulation of DNA processes support this postulation. EMdeCODE is an original algorithm that approximates the genomic distribution of given DNA features (e.g. promoters, enhancers, viral integrations) by identifying relevant ChIPSeq profiles of post-translational histone marks or DNA binding proteins and combining them in a supermark. The EMdeCODE kernel is essentially a two-step procedure: (i) an expectation-maximization process calculates the mixture of epigenetic factors that maximizes the sensitivity (recall) of the association with the feature under study; (ii) the approximated density is then recursively trimmed with respect to a control dataset to increase precision by reducing the number of false positives. EMdeCODE densities significantly improve the prediction of enhancer loci and retroviral integration sites with respect to previous methods. Importantly, it can also be used to extract distinctive factors between two arbitrary conditions. Indeed, EMdeCODE identifies unexpected epigenetic profiles specific to coding versus non-coding RNA, pointing towards a new role for H3R2me1 in coding regions.

  16. One-time collision arbitration algorithm in radio-frequency identification based on the Manchester code

    Science.gov (United States)

    Liu, Chen-Chung; Chan, Yin-Tsung

    2011-02-01

    In radio-requency identification (RFID) systems, when multiple tags transmit data to a reader simultaneously, these data may collide and create unsuccessful identifications; hence, anticollision algorithms are needed to reduce collisions (collision cycles) to improve the tag identification speed. We propose a one-time collision arbitration algorithm to reduce both the number of collisions and the time consumption for tags' identification in RFID. The proposed algorithm uses Manchester coding to detect the locations of collided bits, uses the divide-and-conquer strategy to find the structure of colliding bits to generate 96-bit query strings as the 96-bit candidate query strings (96BCQSs), and uses query-tree anticollision schemes with 96BCQSs to identify tags. The performance analysis and experimental results show that the proposed algorithm has three advantages: (i) reducing the number of collisions to only one, so that the time complexity of tag identification is the simplest O(1), (ii) storing identified identification numbers (IDs) and the 96BCQSs in a register to save the used memory, and (iii) resulting in the number of bits transmitted by both the reader and tags being evidently less than the other algorithms in one-tag identification or in all tags identification.
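The key trick the record relies on is that Manchester coding makes collided bit positions observable: each tag's bit is a half-bit transition, so when two tags differ at a bit, the superposed waveform has no clean transition there. A minimal sketch, assuming an OR-type on-off-keying channel model (an assumption, not the paper's exact physical layer):

```python
def manchester(bits):
    """Manchester-encode a bit string: '0' -> '01', '1' -> '10'."""
    return ''.join('10' if b == '1' else '01' for b in bits)

def superpose(*waveforms):
    """OR-superposition of simultaneous transmissions (assumed channel model)."""
    return ''.join('1' if any(w[i] == '1' for w in waveforms) else '0'
                   for i in range(len(waveforms[0])))

def collided_bit_positions(waveform):
    """A half-bit pair with no transition marks a collided bit position."""
    return [i // 2 for i in range(0, len(waveform), 2)
            if waveform[i] == waveform[i + 1]]

tag_a, tag_b = '1011', '1001'
mixed = superpose(manchester(tag_a), manchester(tag_b))
print(collided_bit_positions(mixed))  # [2]: the tags differ only at bit 2
```

Knowing exactly which bit positions collided is what lets the algorithm build its candidate query strings instead of blindly re-querying.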

  17. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper the degraded video, affected by blur and noise, is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and the blur function using the Newton optimization method, and then the estimation is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and the blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimate by denoising) is iterated until the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so is not suitable for online applications. However, MATLAB can run functions written in C; the files which hold the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, to speed up our algorithm, the MATLAB code was sectioned, the elapsed time for each section was measured, and the slow sections (which use 60% of the total running time) were selected. These slow sections were then translated to C++ and linked to MATLAB. In fact, the high volume of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops, which use 60% of the total execution time of the entire program, and so the runtime should be reduced.

  18. On models of the genetic code generated by binary dichotomic algorithms.

    Science.gov (United States)

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
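The 32/32 partitioning idea above is easy to make concrete. The dichotomic questions below (purine vs. pyrimidine at given positions) are illustrative stand-ins, not the paper's exact BDAs, but any balanced question splits the 64 codons into two classes of 32, and applying a second one sequentially refines the partition:

```python
from collections import Counter
from itertools import product

CODONS = [''.join(c) for c in product('UCAG', repeat=3)]   # all 64 codons

def dichotomy(codon, pos, bases):
    """One binary dichotomic question (illustrative, not the paper's exact BDAs):
    is the base at position `pos` a member of `bases`?"""
    return codon[pos] in bases

# A single balanced dichotomy splits the 64 codons into two classes of 32,
# analogous to the Rumer dichotomy mentioned in the abstract.
class_a = [c for c in CODONS if dichotomy(c, 0, 'AG')]     # purine at position 1
print(len(class_a), 64 - len(class_a))                     # 32 32

# Applying a second dichotomy sequentially refines the partition into 4 classes.
classes = Counter((dichotomy(c, 0, 'AG'), dichotomy(c, 1, 'UC')) for c in CODONS)
print(sorted(classes.values()))                            # [16, 16, 16, 16]
```

Chaining up to six independent binary questions in this way is what lets BDA sequences reach anywhere from 2 to 64 classes, as the abstract reports.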

  19. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    Science.gov (United States)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space, using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. The algorithm has many applications in robotics and computer vision, including enabling an intelligent robot to "see" for path planning and obstacle avoidance.
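The "histogram reduction" can be sketched directly: for each image column, histogram the disparities and mark (disparity, column) cells with enough supporting pixels as occupied. This is a simplified reading of the XH-map idea in plain NumPy, not the authors' IPP/OpenCV implementation, and the threshold and disparity range are assumptions:

```python
import numpy as np

def xh_style_map(disparity, n_disp=16, min_count=20):
    """Histogram-reduce a disparity image into a (disparity, column) occupancy map.
    A simplified reading of the XH-map idea, not the authors' exact code."""
    h, w = disparity.shape
    omap = np.zeros((n_disp, w), dtype=bool)
    for col in range(w):
        counts = np.bincount(disparity[:, col], minlength=n_disp)
        omap[:, col] = counts[:n_disp] >= min_count   # enough evidence -> obstacle
    return omap

# Synthetic 480x640 disparity image: background at disparity 0, one object at 9.
disp = np.zeros((480, 640), dtype=np.int64)
disp[100:300, 200:260] = 9                 # a hypothetical obstacle
omap = xh_style_map(disp)
rows, cols = np.nonzero(omap[1:])          # ignore disparity 0 (far background)
print(rows[0] + 1, cols.min(), cols.max()) # obstacle at disparity 9, columns 200..259
```

Since disparity is inversely proportional to depth, each occupied row of the map corresponds to a range band in front of the robot, which is exactly the "floor plan" view used for path planning.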

  20. A Smart Video Coding Method for Time Lag Reduction in Telesurgery

    National Research Council Canada - National Science Library

    Sun, Mingui; Liu, Qiang; Xu, Jian; Kassam, Amin; Enos, Sharon E; Marchessault, Ronald; Gilbert, Gary; Sclabassi, Robert J

    2006-01-01

    .... These advances have made remotely operable telemedicine possible. However, a key technology that rapidly encodes, transmits, and decodes surgical video with the minimum round-trip delay and the least influence by network jitter...

  1. The Construction of an Universal Linearized Control Flow Graph for Static Code Analysis of Algorithms

    Directory of Open Access Journals (Sweden)

    V. A. Bitner

    2013-01-01

    Full Text Available This paper describes a possible way to build a universal linearized control flow graph, which is intended to be architecture-independent and applicable to the description of programs in any high-level language. The practical value of the graph lies in the fast and optimal search for unique execution paths, which is valuable in methods of static code analysis of algorithms for race condition search. The optimizing compiler CLANG&LLVM is used as the technical tool for building the linearized control flow graph. An analysis of the LLVM compiler's procedural optimizations is carried out in the article. The intermediate-representation transformations of those optimizations reduce the number of instructions responsible for conditional or unconditional branches in the code, as well as eliminating or simplifying whole loops and conditional constructions. The analysis revealed the most effective optimization pipeline of the LLVM compiler, which leads to a significant linearization of the control flow graph. This was demonstrated on example code of the Peterson mutual exclusion algorithm for two threads.

  2. A Morse-code recognition system with LMS and matching algorithms for persons with disabilities.

    Science.gov (United States)

    Shih, C H; Luo, C H

    1997-05-01

    Single-switch communication is an effective auxiliary method for persons with disabilities. However, it is not easy to recognize the Morse code typed by them. Our earlier Morse code auto-recognition method, using the Least-Mean-Square (LMS) adaptive algorithm, demonstrated that the system could successfully recognize Morse-coded messages at unstable typing speeds. However, the speed variation had to be limited to a range between 0.67 and two times the present speed. For beginners or those with severe disabilities, this constraint cannot always be met, producing a recognition rate as low as 20%. To address this limitation, this paper offers an advanced recognition method which combines the Least-Mean-Square algorithm with a character-by-character matching technique. The recognition rate for this method on simulated and real data from various sources is as high as 75% or more on average. This practical application of the single-switch method is a step forward for alternative communication for disabled persons.
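The LMS idea in this record is to track the typist's current dot duration as an adaptive weight, so dot/dash classification keeps up as the typing speed drifts. The sketch below is a minimal illustration under assumed conventions (dash = 3 dots, decision threshold at 2 dots, step size mu); it is not the authors' full recognizer, which adds character-by-character matching on top:

```python
def recognize(durations, w0=1.0, mu=0.2):
    """Classify Morse elements (dot/dash) while tracking typing speed with an
    LMS-style update of the estimated dot duration `w` (illustrative sketch)."""
    w, symbols = w0, []
    for d in durations:
        if d > 2.0 * w:            # longer than two dots -> dash (nominally 3 dots)
            symbols.append('-')
            x = d / 3.0            # implied dot duration for this element
        else:
            symbols.append('.')
            x = d
        w += mu * (x - w)          # LMS update toward the observed dot duration
    return ''.join(symbols)

# Typing gradually slows down: dot lengths drift from 1.0 to 2.0 time units.
durs = [1.0, 3.0, 1.2, 3.6, 1.5, 4.5, 2.0, 6.0]   # intended: . - . - . - . -
print(recognize(durs))  # '.-.-.-.-'
```

With a fixed threshold (w frozen at 1.0) the final 2.0-unit dot would have been misread as a dash; the adaptive estimate is what absorbs the speed drift.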

  3. Analysis of Packet-Loss-Induced Distortion in View Synthesis Prediction-Based 3D Video Coding.

    Science.gov (United States)

    Gao, Pan; Peng, Qiang; Xiang, Wei

    2017-06-01

    View synthesis prediction (VSP) is a crucial coding tool for improving compression efficiency in the next generation 3D video systems. However, VSP is susceptible to catastrophic error propagation when multi-view video plus depth (MVD) data are transmitted over lossy networks. This paper aims at accurately modeling the transmission errors propagated in the inter-view direction caused by VSP. Toward this end, we first study how channel errors gradually propagate along the VSP-based inter-view prediction path. Then, a new recursive model is formulated to estimate the expected end-to-end distortion caused by those channel losses. For the proposed model, the compound impact of the transmission distortions of both the texture video and depth map on the quality of the synthetic reference view is mathematically analyzed. Especially, the expected view synthesis distortion due to depth errors is characterized in the frequency domain using a new approach, which combines the energy densities of the reconstructed texture image and the channel errors. The proposed model also explicitly considers the disparity rounding operation invoked for the sub-pixel precision rendering of the synthesized reference view. Experimental results are presented to demonstrate that the proposed analytic model is capable of effectively modeling the channel-induced distortion for MVD-based 3D video transmission.

  4. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Full Text Available Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances, as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6, are presented.
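For a linear code, the minimum distance equals the minimum weight of a nonzero codeword, so for tiny parameters it can be verified by brute force. A sketch over F_13 (the generator matrices below are toy examples for illustration, not codes from the paper's database):

```python
from itertools import product

q = 13

def min_distance(G):
    """Brute-force minimum Hamming distance of the linear code generated by G
    over F_q: the minimum weight over all q^k - 1 nonzero codewords."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if not any(msg):
            continue                       # skip the zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        best = min(best, sum(c != 0 for c in cw))
    return best

# A [5,1] repetition code over F_13 has minimum distance 5.
print(min_distance([[1, 1, 1, 1, 1]]))                 # 5
# A toy [4,2] code; d = 3 meets the Singleton bound (n - k + 1), so it is MDS.
print(min_distance([[1, 0, 1, 1], [0, 1, 1, 2]]))      # 3
```

Exhaustive checking scales as q^k, which is exactly why the paper resorts to a heuristic search over the structured family of quasi-twisted codes rather than the full space.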

  5. Video-based eyetracking methods and algorithms in head-mounted displays

    Science.gov (United States)

    Hua, Hong; Krishnaswamy, Prasanna; Rolland, Jannick P.

    2006-05-01

    Head pose is utilized to approximate a user’s line-of-sight for real-time image rendering and interaction in most of the 3D visualization applications using head-mounted displays (HMD). The eye often reaches an object of interest before the completion of most head movements. It is highly desirable to integrate eye-tracking capability into HMDs in various applications. While the added complexity of an eyetracked-HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect the tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.

  6. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    Science.gov (United States)

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm radius field of view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation (because peak normalization amplifies statistical fluctuations), has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  7. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm allows NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow, and flow in a driven cavity.

  8. Study of Optimal EG Placement in Radial Distribution System Using Real Coded Genetic Algorithm

    Science.gov (United States)

    Sulaiman, Mohd Herwan; Aliman, Omar

    2011-06-01

    This paper proposes a study of embedded generation (EG) placement in a radial distribution system utilizing the real coded genetic algorithm (RCGA) technique. Several EG placement cases are studied in order to minimize the total power losses and to improve the voltage profile of the system. RCGA is a method that uses continuous floating-point numbers as the representation, in contrast to the conventional GA, which uses binary encoding. The RCGA is used as a tool that can determine the optimal location and size of EG in the radial system concurrently. The method is developed in MATLAB. The IEEE 69-bus system is used as the test case in this study.

  9. A Platform for Antenna Optimization with Numerical Electromagnetics Code Incorporated with Genetic Algorithms

    Science.gov (United States)

    2006-03-01

    [The abstract for this record is garbled extraction residue from the thesis front matter and appendices; only fragments survive, including a NEC input listing for a biconical (disc-cone) antenna with a 30-degree cone angle and the thesis identification "A Platform for Antenna Optimization with Numerical Electromagnetics Code Incorporated with Genetic Algorithms", AFIT/GE/ENG/06-46, Timothy L. Pitzer.]

  10. An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2017-04-01

    Full Text Available The adaptive multi-rate wideband (AMR-WB) speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ) of immittance spectral frequency (ISF) coefficients imposes a considerable computational load in AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ) algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. The search works through a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches need to be executed. In terms of overall search performance, the proposed algorithm is experimentally validated as superior to a multiple triangular inequality elimination (MTIE) approach, a TIE with dynamic and intersection mechanisms (DI-TIE), and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS) approach. With a full search algorithm as the benchmark for overall search load, this work provides an 87% search load reduction at a quantization accuracy threshold of 0.96, a figure far beyond the 55% of MTIE, 76% of the EEENNS approach, and 83% of DI-TIE.
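The elimination-style baselines mentioned above all exploit cheap lower bounds to skip full distance computations. The sketch below shows the norm-bound flavor (as in EEENNS-like methods): since | ||x|| - ||c|| | <= ||x - c||, codewords whose norm differs from the query's by more than the current best distance cannot win. The codebook here is a random stand-in, not AMR-WB's actual ISF tables:

```python
import numpy as np

rng = np.random.default_rng(7)
codebook = rng.normal(size=(256, 16))   # toy 16-dim codebook (ISF vectors are 16-dim)
norms = np.linalg.norm(codebook, axis=1)
order = np.argsort(norms)               # codewords sorted by norm

def search(x):
    """Nearest-codeword search with norm-based elimination, expanding outward
    from the codeword whose norm is closest to ||x||."""
    xn = np.linalg.norm(x)
    start = int(np.searchsorted(norms[order], xn))
    best_i, best_d, evals = -1, np.inf, 0
    lo, hi = start - 1, start
    while lo >= 0 or hi < len(order):
        for idx in (lo, hi):
            if 0 <= idx < len(order):
                i = order[idx]
                if abs(norms[i] - xn) >= best_d:
                    continue            # eliminated by the norm lower bound
                d = np.linalg.norm(x - codebook[i]); evals += 1
                if d < best_d:
                    best_i, best_d = i, d
        if lo >= 0 and hi < len(order) and \
           norms[order[lo]] <= xn - best_d and norms[order[hi]] >= xn + best_d:
            break                       # all remaining codewords are eliminated
        lo, hi = lo - 1, hi + 1
    return best_i, evals

x = rng.normal(size=16)
i, evals = search(x)
full = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
print(i == full)   # same winner as the full search, usually at a fraction of the cost
```

The elimination is exact (only provably worse codewords are skipped), which is why such methods report large search-load reductions with no loss of quantization accuracy.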

  11. Validity of code based algorithms to identify primary open angle glaucoma (POAG) in Veterans Affairs (VA) administrative databases.

    Science.gov (United States)

    Biggerstaff, K S; Frankfort, B J; Orengo-Nania, S; Garcia, J; Chiao, E; Kramer, J R; White, D

    2017-09-25

    The validity of the International Classification of Diseases, 9th revision, Clinical Modification (ICD-9) code for primary open angle glaucoma (POAG) in the Department of Veterans Affairs (VA) electronic medical record has not been examined. We determined the accuracy of the ICD-9 code for POAG and developed diagnostic algorithms for the detection of POAG. We conducted a retrospective study of abstracted data from the Michael E. DeBakey VA Medical Center's medical records of 334 unique patients with at least one visit to the Eye Clinic between 1999 and 2013. Algorithms were developed to validly identify POAG using ICD-9 codes and pharmacy data. The positive predictive value (PPV), negative predictive value (NPV), sensitivity, specificity and percent agreement of the various algorithms were calculated. For the ICD-9 code 365.1x, the PPV was 65.9%, NPV was 95.2%, sensitivity was 100%, specificity was 82.6%, and percent agreement was 87.8%. The algorithm with the highest PPV was 76.3%, using pharmacy data in conjunction with two or more ICD-9 codes for POAG, but this algorithm also had the lowest NPV at 88.2%. Various algorithms for identifying POAG in the VA administrative databases have variable validity. Depending on the type of research being done, the ICD-9 code 365.1x can be used for epidemiologic or health services database research.
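The validation metrics used in this record are all simple ratios over the 2x2 table of algorithm result versus chart-review reference standard. A small helper, applied here to a hypothetical table (the counts are illustrative, not the paper's actual data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Validation metrics for a code-based case-finding algorithm
    against a reference standard (e.g., chart review)."""
    return {
        'PPV':         tp / (tp + fp),   # P(disease | algorithm positive)
        'NPV':         tn / (tn + fn),   # P(no disease | algorithm negative)
        'sensitivity': tp / (tp + fn),   # P(algorithm positive | disease)
        'specificity': tn / (tn + fp),   # P(algorithm negative | no disease)
        'agreement':   (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical 2x2 table for illustration (not the study's counts):
m = diagnostic_metrics(tp=50, fp=20, fn=5, tn=100)
print({k: round(v, 3) for k, v in m.items()})
```

The trade-off the abstract describes falls out of these formulas directly: requiring two or more codes plus pharmacy data removes false positives (raising PPV) but misses some true cases (lowering NPV and sensitivity).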

  12. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Directory of Open Access Journals (Sweden)

    Shuai Zeng

    2013-01-01

    Full Text Available With the development of wireless technologies, mobile communication is used ever more widely in many walks of life. The social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access network technologies are in use, so different users in this networked agent system may need multimedia data at different coding rates to meet their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system in which various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data of different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, the algorithm provides user nodes with data at differentiated coding rates and corresponding flow rates. On the other hand, it networks the different coding rate data and user nodes together, which realizes the sharing of upload bandwidth among user nodes that require data at different coding rates. The study conducts mathematical modeling of the proposed algorithm and compares the system that adopts it with the existing system through simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload bandwidth utilization of user nodes and lower upload bandwidth consumption of the source node.

  13. High Quality Real-Time Video with Scanning Electron Microscope Using Total Variation Algorithm on a Graphics Processing Unit

    Science.gov (United States)

    Ouarti, Nizar; Sauvet, Bruno; Régnier, Stéphane

    2012-04-01

    The scanning electron microscope (SEM) is usually dedicated to taking pictures of micro- and nanoscopic objects. In the present study, we asked whether a SEM can be converted into a real-time video device. To this end, we designed a new methodology. We use the slow mode of the SEM to acquire a high quality reference image that can then be used to estimate the optimal parameters that regularize the signal for a given method. Here, we employ Total Variation, a method which minimizes the noise and regularizes the image. An optimal Lagrangian multiplier can be computed that regularizes the image efficiently. We showed that a limited number of iterations of the Total Variation algorithm can lead to an acceptable quality of regularization. The algorithm is parallel and deployed on a Graphics Processing Unit to obtain real-time high quality video with a SEM. It opens the possibility of real-time interaction at micro- and nanoscales.
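Total Variation regularization can be sketched as gradient descent on a standard smoothed-TV energy, E(u) = 0.5||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps). This is a generic formulation for illustration; the paper's exact discretization, GPU parallelization, and Lagrangian-multiplier selection from the slow-mode reference image differ:

```python
import numpy as np

def tv_denoise(noisy, lam=0.1, step=0.1, iters=100, eps=1e-3):
    """Smoothed-TV denoising by explicit gradient descent (illustrative sketch)."""
    u = noisy.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # divergence of the normalized gradient field: the TV term's gradient
        div = (np.diff(gx / mag, axis=1, prepend=0) +
               np.diff(gy / mag, axis=0, prepend=0))
        u -= step * ((u - noisy) - lam * div)        # descend on data + TV terms
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # piecewise-constant image
noisy = clean + 0.2 * rng.normal(size=clean.shape)
den = tv_denoise(noisy)
print(((den - clean)**2).mean() < ((noisy - clean)**2).mean())  # noise is reduced
```

Every pixel update depends only on a small neighborhood, which is why the iteration maps so naturally onto a GPU and can keep up with a video-rate SEM stream.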

  14. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    numbers of video streams on a single server. The focus of the work is on using the information in coded video streams to reduce the computational complexity and memory requirements, which translates into reduced hardware requirements and costs. The devised algorithm detects and segments activity based...

  15. Quality optimized medical image information hiding algorithm that employs edge detection and data coding.

    Science.gov (United States)

    Al-Dmour, Hayat; Al-Ani, Ahmed

    2016-04-01

    The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. The proposed information security scheme conceals coded Electronic Patient Records (EPRs) in medical images in order to protect the EPRs' confidentiality without affecting the image quality, and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data is converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions than in uniform regions, a simple edge detection method has been introduced to identify edge pixels and embed in them, which leads to improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of the edges. Moreover, to increase efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first, based on the Hamming code, is simple and fast, while the other, known as the Syndrome Trellis Code (STC), is more sophisticated, as it attempts to find a stego image that is close to the cover image by minimizing the embedding impact. The proposed steganography algorithm embeds the secret data bits into the Region of Non Interest (RONI); the ROI, due to its importance for diagnosis, is preserved from modification. The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving a noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques. The proposed medical imaging information system proved to be capable of concealing EPR data and producing imperceptible stego images with minimal distortion.
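The core edge-adaptive embedding step can be sketched as: detect strong-gradient pixels, then hide payload bits in their LSBs. This is a deliberately simplified stand-in for the paper's scheme (which uses ±1 embedding with Hamming/STC coding, multi-bit capacity, encryption, and edge sets recomputable at extraction); here the embedding positions are passed to the extractor explicitly:

```python
import numpy as np

def embed(cover, bits, threshold=8):
    """Embed secret bits into the LSBs of edge (high-gradient) pixels only."""
    img = cover.copy()
    # crude edge strength: horizontal gradient magnitude
    edges = np.abs(np.diff(img.astype(int), axis=1,
                           prepend=img[:, :1].astype(int)))
    ys, xs = np.nonzero(edges >= threshold)       # embeddable (sharp) pixels
    assert len(bits) <= len(ys), "payload exceeds edge capacity"
    for b, y, x in zip(bits, ys, xs):
        img[y, x] = (img[y, x] & 0xFE) | b        # overwrite the LSB
    return img, list(zip(ys[:len(bits)], xs[:len(bits)]))

def extract(stego, positions):
    return [int(stego[y, x] & 1) for y, x in positions]

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego, pos = embed(cover, payload)
print(extract(stego, pos) == payload)  # True: payload recovered losslessly
```

Restricting changes to sharp regions exploits exactly the HVS property the abstract cites: a ±1 change on an edge is far less visible than the same change in a smooth area such as the diagnostic ROI.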

  16. Accelerating Wavelet-Based Video Coding on Graphics Hardware using CUDA

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Roerdink, Jos B.T.M.; Jalba, Andrei C.; Zinterhof, P; Loncaric, S; Uhl, A; Carini, A

    2009-01-01

    The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. This transform, by means of the lifting scheme, can be performed in a memory- and computation-efficient way on modern, programmable GPUs, which can be regarded as massively parallel processors.
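The lifting scheme factors the DWT into small in-place predict/update steps, which is what makes it memory-efficient. A minimal one-level Haar lifting example (Haar chosen for brevity; the record's codec would use longer lifting filters such as CDF 5/3 or 9/7):

```python
def haar_lift(x):
    """One level of the Haar DWT via the lifting scheme: split, predict, update."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict odd from even
    approx = [e + d / 2 for e, d in zip(even, detail)]    # update: preserve the mean
    return approx, detail

def haar_unlift(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]    # undo update
    odd = [d + e for d, e in zip(detail, even)]           # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [5.0, 7.0, 3.0, 1.0, 2.0, 6.0, 4.0, 4.0]
a, d = haar_lift(x)
print(haar_unlift(a, d) == x)  # True: lifting is perfectly invertible
```

Each predict/update step touches only neighboring samples and can overwrite its inputs, so rows and columns of an image can be transformed by many independent GPU threads without auxiliary buffers.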

  17. A nuclear reload optimization approach using a real coded genetic algorithm with random keys

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C., E-mail: alan@lmp.ufrj.b, E-mail: schirru@lmp.ufrj.b, E-mail: canedo@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2009-07-01

    The fuel reload of a pressurized water reactor is made whenever the burnup of the fuel assemblies in the reactor core reaches a value such that it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the core in an optimized way that minimizes the cost-benefit relationship of fuel assembly cost per maximum burnup, while satisfying symmetry and safety restrictions. The difficulty of the problem grows exponentially with the number of fuel assemblies in the core. For decades the problem was solved manually by experts who used their knowledge and experience to build core configurations and then tested them to verify that the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys that transforms the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload. Four recombination methods were tested: discrete recombination, intermediate recombination, linear recombination, and extended linear recombination. For each of the four recombination methods, ten tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of applying the genetic algorithm with this real-number formulation are shown for the reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
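The random-keys trick described above can be shown in a few lines: each chromosome is a vector of real numbers, and sorting positions by their keys yields a permutation of discrete assemblies. Crucially, any real-valued recombination (intermediate recombination is shown here) still decodes to a feasible permutation. The problem size and seed below are illustrative assumptions:

```python
import random

def decode(keys):
    """Random-keys decoding: sort positions by their real-valued keys; the
    resulting permutation assigns discrete fuel assemblies to core positions."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def intermediate_recombination(p1, p2, alpha=0.5):
    """Intermediate recombination on the real-valued chromosome; the child
    still decodes to a valid permutation, which is the point of random keys."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

random.seed(4)
n = 8                                    # hypothetical number of fuel assemblies
p1 = [random.random() for _ in range(n)]
p2 = [random.random() for _ in range(n)]
child = intermediate_recombination(p1, p2)
perm = decode(child)
print(sorted(perm) == list(range(n)))    # True: always a feasible loading pattern
```

This is why the real-coded GA needs no repair operators: the combinatorial constraint (each assembly used exactly once) is enforced by the decoding step, not by the genetic operators.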

  18. QEFSM model and Markov Algorithm for translating Quran reciting rules into Braille code

    Directory of Open Access Journals (Sweden)

    Abdallah M. Abualkishik

    2015-07-01

    Full Text Available The Holy Quran is the central religious text of Islam. Muslims are expected to read, understand, and apply the teachings of the Holy Quran. The Holy Quran was translated to Braille code as normal Arabic text, without its reciting rules included; it is obvious that users of this transliteration will not be able to recite the Quran the right way. Through this work, the Quran Braille Translator (QBT) presents a specific translator that renders Quran verses and their reciting rules into Braille code. The Quran Extended Finite State Machine (QEFSM) model is proposed through this study, as it is able to detect the Quran reciting rules (QRR) from the Quran text. Basis path testing was used to evaluate the inner working of the model by checking all of its test cases. A Markov Algorithm (MA) was used to translate the detected QRR and Quran text into the matching Braille code. The data entries for QBT are Arabic letters and diacritics. The output of this study is seen in double lines of Braille symbols: the first line carries the proposed Quran reciting rules and the second the Quran script.
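A Markov Algorithm, as used in this record for the transliteration step, is a classical string-rewriting formalism: an ordered list of rules is scanned, the first applicable rule is applied at its leftmost occurrence, and the scan restarts until a terminal rule fires or no rule applies. A generic interpreter with entirely hypothetical toy rules (the paper's actual Arabic-to-Braille rule set is not reproduced here):

```python
def markov_run(rules, text, max_steps=1000):
    """A classical Markov algorithm: repeatedly apply the first applicable rule
    at its leftmost occurrence; terminal rules (rhs ending in '.') halt."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            terminal = rhs.endswith('.')
            repl = rhs[:-1] if terminal else rhs
            if lhs in text:
                text = text.replace(lhs, repl, 1)   # leftmost occurrence only
                if terminal:
                    return text
                break                               # restart from the first rule
        else:
            return text                             # no rule applies: halt
    raise RuntimeError("step limit exceeded")

# Toy rules (hypothetical symbols): bubble 'a's rightward, then emit a marker.
rules = [('ab', 'ba'), ('b', '|.')]
print(markov_run(rules, 'aab'))  # 'aab' -> 'aba' -> 'baa' -> '|aa'
```

Because rule order is significant and rules may be marked terminal, the formalism can express context-dependent mappings such as "this diacritic pattern triggers this reciting-rule Braille cell" in a fully deterministic way.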

  19. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Science.gov (United States)

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be markedly more robust than random codes, but how it evolved towards its current form is not clearly determined. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level reached by the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes: the lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm makes it easy to determine, even in the high-dimensional spaces considered, the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape assumed by the error minimization theory does not explain why the canonical code ended its evolution at a location that is not a localized deep minimum of the huge fitness landscape.
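    The fitness sharing technique referred to above has a standard form: each individual's raw fitness is divided by a niche count, so crowded regions of the landscape are penalized and the population spreads over multiple optima. A minimal sketch, with the usual triangular sharing kernel and illustrative `sigma_share`/`alpha` defaults (the paper's actual parameter choices are not reproduced here):

```python
import numpy as np

def shared_fitness(pop, raw_fitness, sigma_share=1.0, alpha=1.0):
    """Fitness sharing: divide each individual's raw fitness by its
    niche count, i.e. the summed sharing function over all
    individuals within sigma_share of it."""
    # Pairwise Euclidean distances between individuals.
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)  # self-distance 0 contributes 1
    return raw_fitness / niche_counts

# Two crowded individuals near 0 and one isolated individual at 5,
# all with equal raw fitness: the isolated one keeps more of it.
pop = np.array([[0.0], [0.1], [5.0]])
raw = np.array([1.0, 1.0, 1.0])
fs = shared_fitness(pop, raw)
```

Because selection then favors under-populated niches, a run reveals whether several separated deep peaks exist (distinct niches persist) or not, which is exactly the diagnostic use made of it in the paper.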

  20. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance also results from the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the nonlinear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
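    The core of a GMM background model is a small per-pixel update: match the incoming sample to one of a few Gaussians, then nudge weights, mean and variance. The sketch below is a deliberately simplified scalar version of that update loop (a constant ownership term, a fixed variance for new components) and is not the exact OpenCV formulation the paper optimizes in hardware.

```python
import numpy as np

def gmm_update(x, w, mu, var, lr=0.01, match_thresh=2.5):
    """One per-pixel update step of a simplified Gaussian mixture
    background model: match the sample to the closest component
    within match_thresh standard deviations, then update weights,
    mean and variance with learning rate lr."""
    d = np.abs(x - mu) / np.sqrt(var)
    k = int(np.argmin(d))
    matched = d[k] < match_thresh
    w = (1 - lr) * w  # decay all weights
    if matched:
        w[k] += lr
        rho = lr  # simplified ownership term
        mu[k] += rho * (x - mu[k])
        var[k] += rho * ((x - mu[k]) ** 2 - var[k])
    else:  # replace the weakest component with one centred on x
        j = int(np.argmin(w))
        w[j], mu[j], var[j] = lr, x, 30.0
    return w / w.sum(), mu, var

# Feeding a constant value drives one component onto it: its weight
# grows and its mean converges, which is how the background is learned.
w, mu, var = np.array([0.5, 0.5]), np.array([0.0, 100.0]), np.array([10.0, 10.0])
for _ in range(200):
    w, mu, var = gmm_update(1.0, w, mu, var)
```

Components with high weight and low variance are then labelled background; the per-pixel independence of this loop is what makes the algorithm so amenable to a parallel FPGA/ASIC datapath.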

  1. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    Full Text Available This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.

  2. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed with this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10⁻⁶. The irregular QC-LDPC(4 288, 4 020) code also has lower encoding/decoding complexity than the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code, making it better suited to the growing requirements of high-speed optical transmission systems.
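    The "quasi-cyclic" structure means the parity-check matrix is assembled from shifted identity (circulant permutation) blocks, so it is fully described by a small exponent matrix. The sketch below shows only that block assembly with a tiny illustrative exponent matrix; the RU-specific approximate-lower-triangular arrangement that gives the low encoding complexity is not reproduced here.

```python
import numpy as np

def circulant(size, shift):
    """size x size identity matrix cyclically shifted by 'shift'."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_ldpc_H(exponents, z):
    """Assemble a QC-LDPC parity-check matrix from an exponent
    matrix: entry s >= 0 becomes a z x z circulant with shift s,
    and -1 becomes the z x z all-zero block."""
    rows = []
    for row in exponents:
        blocks = [circulant(z, s) if s >= 0 else np.zeros((z, z), dtype=int)
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Small illustrative exponent matrix (not the paper's code design):
H = qc_ldpc_H([[0, 1, -1], [2, -1, 0]], z=4)
```

Storing shifts instead of the full matrix is what keeps both the hardware description and the encoder memory footprint small, which is the practical appeal of QC-LDPC codes in optical transceivers.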

  3. Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.

    Science.gov (United States)

    Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector

    2016-03-01

    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.
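    The coding theorem method behind ACSS rests on one relation: a string's algorithmic complexity is approximated by −log₂ of its output probability among halting programs. The ACSS package ships precomputed probability tables obtained by enumerating small Turing machines; the sketch below substitutes a made-up toy frequency table just to show the estimation step itself.

```python
import math
from collections import Counter

def ctm_estimate(frequencies):
    """Coding-theorem-method estimate: complexity of string s is
    approximated as -log2 of its empirical output probability.
    Frequently produced strings get low estimated complexity."""
    total = sum(frequencies.values())
    return {s: -math.log2(n / total) for s, n in frequencies.items()}

# Hypothetical frequencies: '0101' produced far more often than
# '0111' by the enumerated machines.
est = ctm_estimate(Counter({"0101": 900, "0111": 100}))
```

In the real package the tables cover all short strings over alphabets of up to 9 symbols, which is what makes the estimate available where compression-based approximations of Kolmogorov complexity break down.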

  4. Optimization of energy saving device combined with a propeller using real-coded genetic algorithm

    Directory of Open Access Journals (Sweden)

    Ryu Tomohiro

    2014-06-01

    Full Text Available This paper presents a numerical optimization method to improve the performance of a propeller with Turbo-Ring using a real-coded genetic algorithm. In the presented method, Unimodal Normal Distribution Crossover (UNDX) and the Minimal Generation Gap (MGG) model are used as the crossover operator and generation-alternation model, respectively. Propeller characteristics are evaluated by a simple surface panel method, “SQCM”, in the optimization process. The blade sections of the original Turbo-Ring and propeller are replaced by the NACA66 a = 0.8 section, while the original chord, skew, rake and maximum blade thickness distributions in the radial direction are unchanged. The pitch and maximum camber distributions in the radial direction are selected as the design variables. Optimization is conducted to maximize the efficiency of the propeller with Turbo-Ring. The experimental result shows that the efficiency of the optimized propeller with Turbo-Ring is higher than that of the original one.

  5. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    Directory of Open Access Journals (Sweden)

    James W. Modestino

    2007-01-01

    Full Text Available We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy ζ². The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  6. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    Directory of Open Access Journals (Sweden)

    Modestino James W

    2007-01-01

    Full Text Available We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy ζ². The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  7. Study on the Detection of Moving Target in the Mining Method Based on Hybrid Algorithm for Sports Video Analysis

    Directory of Open Access Journals (Sweden)

    Huang Tian

    2014-10-01

    Full Text Available Moving object detection and tracking is an active research direction in computer vision and image processing. Based on an analysis of the moving-target detection and tracking algorithms in common use, this work focuses on tracking non-rigid targets in sports video. In sports video, non-rigid athletes often undergo physical deformation during movement, and the moving target may become occluded. The surge of media data makes fast search and query increasingly difficult: most users want to quickly extract the content of interest and the implicit knowledge (concepts, rules, patterns, models and correlations) from multimedia data, exploit it for retrieval and query, and obtain decision support for problem solving. Taking the moving objects in sports video as the object of study, this paper conducts systematic research at the theoretical level and in the technical framework, mining layer by layer from low-level motion features up to high-level motion semantics. This not only helps users find information quickly, but can also provide decision support for problem solving.

  8. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    Science.gov (United States)

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.
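    The inter-coder reliability figures reported above are Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the statistic for two raters; the eight-item label vectors are hypothetical, not the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items: observed
    agreement (po) corrected for chance agreement (pe)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
             for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical coding of 8 games on one binary instrument item:
a = [1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 1, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(a, b)  # 7/8 observed agreement -> kappa = 5/7
```

Values near 1, as in the 0.66-1.00 range reported for the instrument, indicate agreement well beyond what the marginal label frequencies alone would produce.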

  9. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    Science.gov (United States)

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

    Abstract Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  10. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    Science.gov (United States)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA), named Networked Genetic Algorithm (NGA), that aims to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques for finding multiple optima, such as niching, address big-valley landscapes or non-deceptive globally multimodal landscapes, but not deceptive ones, called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes: it utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem: in many cases it does not find all the optima simultaneously. NGA overcomes this problem with an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of optima detected in a single run on Fletcher and Powell functions, benchmark problems known to have multiple optima, ill-scaledness, and UV-landscapes.

  11. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Full Text Available Circuits and systems able to process high-quality video in real time are fundamental in today's imaging systems. The circuit proposed in the paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed targeting commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The circuit, when implemented on a Virtex 6 or Stratix IV, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.

  12. Search for Active-State Conformation of Drug Target GPCR Using Real-Coded Genetic Algorithm

    Science.gov (United States)

    Ishino, Yoko; Harada, Takanori; Aida, Misako

    G-protein coupled receptors (GPCRs) comprise a large superfamily of proteins and are the target of nearly 50% of drugs in clinical use today. GPCRs have a unique structural motif, seven transmembrane helices, and it is known that agonists and antagonists dock with a GPCR in its ``active'' and ``inactive'' condition, respectively. Knowledge of the conformations of both states is eagerly anticipated for elucidating drug action mechanisms. Since GPCRs are difficult to crystallize, the 3D structures of these receptors have not yet been determined by X-ray crystallography, except for the inactive-state conformations of two proteins, which enabled the inactive form of other GPCRs to be modeled by computer-aided homology modeling. To date, however, the active form of GPCRs has not been solved. This paper describes a novel method to predict the 3D structure of an active-state GPCR, aimed at molecular docking-based virtual screening, using a real-coded genetic algorithm (real-coded GA), receptor-ligand docking simulations, and molecular dynamics (MD) simulations. The basic idea of the method is that MD is first used to calculate the average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the pico- or nanosecond time scale, and the real-coded GA, incorporating receptor-ligand docking simulations, then determines the rotation angle of each helix as a movement on a wider time scale. The method was validated using the human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the established evolutionary search for the active state of the leukotriene receptor provided an appropriate 3D structure of the receptor to dock with its agonists.

  13. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    Full Text Available One of the key requirements for mobile devices is to provide high-performance computing at low power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up memory access and data transfer among memories. Moreover, an efficient data-reuse design is proposed for the deblocking filter video processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder thanks to the proposed techniques.

  14. Multiple description coding for SNR scalable video transmission over unreliable networks

    NARCIS (Netherlands)

    Choupani, R.; Wong, S.; Tolun, M.

    2012-01-01

    Streaming multimedia data on best-effort networks such as the Internet requires measures against bandwidth fluctuations and frame loss. Multiple Description Coding (MDC) methods are used to overcome the jitter and delay problems arising from frame losses by making the transmitted data more error resilient.

  15. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual-words model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.

  16. Optimal modulation and coding scheme allocation of scalable video multicast over IEEE 802.16e networks

    Directory of Open Access Journals (Sweden)

    Tsai Chia-Tai

    2011-01-01

    Full Text Available With the rapid development of wireless communication technology and the rapid increase in demand for network bandwidth, IEEE 802.16e is an emerging network technique that has been deployed in many metropolises. In addition to high data rates and large coverage, it also enables scalable video multicasting, a potentially promising application, over an IEEE 802.16e network. How to optimally assign the modulation and coding scheme (MCS) of the scalable video stream for the mobile subscriber stations, so as to improve spectral efficiency and maximize utility, is a crucial task. We formulate this MCS assignment problem as an optimization problem, called the total utility maximization problem (TUMP). This article transforms the TUMP into a precedence constraint knapsack problem, which is NP-complete. Then, a branch and bound method, based on two dominance rules and a lower bound, is presented to solve the TUMP. The simulation results show that the proposed branch and bound method can find the optimal solution efficiently.
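    A branch and bound search of the kind used for the TUMP explores include/exclude decisions and prunes any subtree whose optimistic bound cannot beat the best solution found so far. The sketch below shows the idea on a plain 0/1 knapsack with a fractional-relaxation upper bound; the paper's precedence constraints and its two dominance rules are not reproduced here.

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for the 0/1 knapsack problem, pruning with a
    greedy fractional (LP-relaxation) upper bound."""
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, value, room):
        # Optimistic bound: fill remaining room greedily, allowing a
        # fractional piece of the first item that does not fit.
        for i in items[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def branch(k, value, room):
        nonlocal best
        if value > best:
            best = value
        if k == len(items) or bound(k, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        i = items[k]
        if weights[i] <= room:
            branch(k + 1, value + values[i], room - weights[i])
        branch(k + 1, value, room)

    branch(0, 0, capacity)
    return best

result = knapsack_bb([60, 100, 120], [10, 20, 30], 50)  # → 220
```

The tighter the bound (and the stronger the dominance rules), the more of the exponential search tree is cut off, which is why the paper's method finds the optimal MCS assignment quickly in practice.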

  17. Using High-Fidelity Simulation and Video-Assisted Debriefing to Enhance Obstetrical Hemorrhage Mock Code Training.

    Science.gov (United States)

    Jacobs, Peggy J

    The purpose of this descriptive, one-group posttest study was to explore the nursing staff's perception of the benefits of using high-fidelity simulation during mandated obstetrical hemorrhage mock code training. In addition, the use of video-assisted debriefing was used to enhance the nursing staff's evaluation of their communication and teamwork processes during a simulated obstetrical crisis. The convenience sample of 84 members of the nursing staff consented to completing data collection forms and being videotaped during the simulation. Quantitative results for the postsimulation survey showed that 93% of participants agreed or totally agreed that the use of SimMan made the simulation more realistic and enhanced learning and that debriefing and the use of videotaped playback improved their evaluation of team communication. Participants derived greatest benefit from reviewing their performance on videotape and discussing it during postsimulation debriefing. Simulation with video-assisted debriefing offers hospital educators the ability to evaluate team processes and offer support to improve teamwork with the ultimate goal of improving patient outcomes during obstetrical hemorrhage.

  18. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model.

    Science.gov (United States)

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron-ion interaction potential method. Then, by combining a modified anti-notch filter and a linear predictive coding model, we propose an efficient algorithm that improves the Goertzel algorithm's estimation of genetic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods using nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly estimated to be in the noncoding region. In addition, the area under the receiver operating characteristic curve improves by factors of 1.35 and 1.12 on the HMR195 and BG570 datasets, respectively, in comparison with the conventional Goertzel algorithm.
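    The Goertzel algorithm itself computes the spectral power at one DFT bin with a single second-order recursion instead of a full FFT; in DNA analysis the bin of interest is k = N/3, because protein-coding regions exhibit a characteristic period-3 component. A minimal sketch with a synthetic period-3 signal standing in for an EIIP-mapped exon:

```python
import math

def goertzel_power(x, k):
    """Goertzel algorithm: spectral power |X[k]|^2 of sequence x at
    DFT bin k, via the standard second-order recursion."""
    n = len(x)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A strongly period-3 signal concentrates its power at bin N/3,
# mimicking the exon signature exploited by the paper.
n = 30
periodic = [1.0 if i % 3 == 0 else 0.0 for i in range(n)]
p3 = goertzel_power(periodic, n // 3)   # large
p_off = goertzel_power(periodic, 7)     # negligible
```

Sliding this single-bin computation along the sequence yields the period-3 power profile that the thresholding step then converts into exon/intron labels.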

  19. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-D DWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-D DWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing, and it offers low power consumption, low latency and high throughput. The computing technique is based on the observation that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is synthesised using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
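    The lifting scheme the architecture pipelines factors a wavelet filter into short predict and update steps that operate in place, which is exactly why it minimises storage. The sketch below uses simple Haar lifting rather than the (9, 7) bank of the paper, but shows the same split/predict/update structure and its exact invertibility.

```python
def lifting_forward(x):
    """One level of a lifting-scheme wavelet transform (Haar
    lifting): split into even/odd samples, predict odd from even,
    then update even to carry the running average."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the steps in reverse order: lifting is exactly
    invertible by construction, with no extra buffering."""
    even = [a - d / 2.0 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = lifting_forward(x)
assert lifting_inverse(a, d) == x  # perfect reconstruction
```

Each lifting step reads and writes a small neighbourhood, so the hardware pipeline needs only a few line buffers per dimension instead of whole-frame transposition memories.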

  20. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    Science.gov (United States)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

  1. Code Coupling via Jacobian-Free Newton-Krylov Algorithms with Application to Magnetized Fluid Plasma and Kinetic Neutral Models

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Ilon [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-05-27

Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As communication capability between individual submodules varies, different choices of coupling algorithms are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the more dense the Jacobian matrices become and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
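The key JFNK ingredient, a finite-difference Jacobian-vector product that needs only residual evaluations from each submodule, can be sketched on a toy two-equation "coupled" system. This is an illustration, not the record's plasma-neutrals coupling: a production JFNK solver would hand the `jv` product to a preconditioned Krylov method such as GMRES rather than probing unit directions and solving a 2x2 system directly.

```python
# Matrix-free Jacobian-vector product: J(u)v ~ (F(u + eps*v) - F(u)) / eps.

def F(u):
    x, y = u
    # Two made-up residuals standing in for two coupled physics submodules.
    return [x * x + y * y - 4.0, x - y]

def jv(F, u, v, eps=1e-7):
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

def newton_jfnk(F, u, iters=20):
    """Newton iteration for a 2-unknown system using only Jv probes."""
    for _ in range(iters):
        r = F(u)
        # Probe the two unit directions to recover the 2x2 Jacobian.
        c0, c1 = jv(F, u, [1.0, 0.0]), jv(F, u, [0.0, 1.0])
        a, b, c, d = c0[0], c1[0], c0[1], c1[1]
        det = a * d - b * c
        dx = (-r[0] * d + r[1] * b) / det   # solve J*delta = -r (Cramer)
        dy = (-r[1] * a + r[0] * c) / det
        u = [u[0] + dx, u[1] + dy]
    return u
```

For large coupled systems the unit-direction probing above becomes the dense-Jacobian regime the abstract warns about; Krylov methods avoid it by using `jv` directly.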

  2. Monte Carlo simulation using the PENELOPE code with an ant colony algorithm to study MOSFET detectors

    Energy Technology Data Exchange (ETDEWEB)

Carvajal, M A; Palma, A J [Departamento de Electronica y Tecnologia de Computadores, Universidad de Granada, E-18071 Granada (Spain); Garcia-Pareja, S [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda Carlos Haya, s/n, E-29010 Malaga (Spain); Guirado, D [Servicio de Radiofisica, Hospital Universitario 'San Cecilio', Avda Dr Oloriz, 16, E-18012 Granada (Spain); Vilches, M [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda Fuerzas Armadas, 2, E-18014 Granada (Spain); Anguiano, M; Lallena, A M [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)], E-mail: carvajal@ugr.es, E-mail: garciapareja@gmail.com, E-mail: dguirado@ugr.es, E-mail: mvilches@ugr.es, E-mail: mangui@ugr.es, E-mail: ajpalma@ugr.es, E-mail: lallena@ugr.es

    2009-10-21

In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of approximately 5, while the efficiency is increased by a factor of more than 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a (60)Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0, 15, 30, 45, 60 and 75 degrees. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered.

  3. The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding

    Directory of Open Access Journals (Sweden)

    Yuwei Cui

    2017-11-01

Full Text Available Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
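A drastically simplified, hypothetical sketch of the spatial-pooler mechanics described above: columns compute overlap with a binary input through "connected" synapses, the k most overlapping columns win, and winning columns strengthen synapses to active input bits and weaken the rest. All parameter values are illustrative; the HTM reference implementation adds boosting, topology and duty-cycle homeostasis omitted here.

```python
import random

# k-winners-take-all sparse coding with Hebbian permanence updates (sketch).
random.seed(0)
N_IN, N_COL, K = 32, 64, 4            # input bits, columns, active columns
P_CONNECT, P_INC, P_DEC = 0.5, 0.05, 0.02

# Permanence value per (column, input) synapse; >= P_CONNECT means connected.
perm = [[random.random() for _ in range(N_IN)] for _ in range(N_COL)]

def encode(x):
    """Return the indices of the K columns with the highest overlap."""
    overlap = [sum(xi for xi, p in zip(x, col) if p >= P_CONNECT) for col in perm]
    return sorted(range(N_COL), key=lambda c: overlap[c], reverse=True)[:K]

def learn(x, winners):
    """Reinforce synapses to active inputs, weaken synapses to inactive ones."""
    for c in winners:
        for i in range(N_IN):
            if x[i]:
                perm[c][i] = min(1.0, perm[c][i] + P_INC)
            else:
                perm[c][i] = max(0.0, perm[c][i] - P_DEC)
```

The output sparsity is fixed at K/N_COL regardless of the input density, which is the SDR property the paper's metrics quantify.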

  4. Parameter optimization and sensitivity analysis for large kinetic models using a real-coded genetic algorithm.

    Science.gov (United States)

    Tohsato, Yukako; Ikuta, Kunihiko; Shionoya, Akitaka; Mazaki, Yusaku; Ito, Masahiro

    2013-04-10

Dynamic modeling is a powerful tool for predicting changes in metabolic regulation. However, a large number of input parameters, including kinetic constants and initial metabolite concentrations, are required to construct a kinetic model. Therefore, it is important not only to optimize the kinetic parameters, but also to investigate the effects of their perturbations on the overall system. We investigated the efficiency of the use of a real-coded genetic algorithm (RCGA) for parameter optimization and sensitivity analysis in the case of a large kinetic model involving glycolysis and the pentose phosphate pathway in Escherichia coli K-12. Sensitivity analysis of the kinetic model using an RCGA demonstrated that the input parameter values had different effects on model outputs. The results showed highly influential parameters in the model and their allowable ranges for maintaining metabolite-level stability. Furthermore, it was revealed that changes in these influential parameters may complement one another. This study presents an efficient approach based on the use of an RCGA for optimizing and analyzing parameters in large kinetic models.
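The optimization half of such an approach can be sketched with a generic real-coded GA. The sketch below uses BLX-alpha crossover, Gaussian mutation and truncation selection on a 3-parameter sphere function; the record's actual RCGA operators, kinetic model and sensitivity analysis are not reproduced.

```python
import random

# Generic real-coded GA sketch: chromosomes are lists of floats.
random.seed(1)
DIM, POP, GENS, ALPHA = 3, 30, 60, 0.3

def fitness(ind):                       # lower is better
    return sum(v * v for v in ind)

def blx(p, q):
    """BLX-alpha crossover: sample each gene from the parents' interval,
    extended by ALPHA on each side."""
    child = []
    for a, b in zip(p, q):
        lo, hi = min(a, b), max(a, b)
        span = (hi - lo) * ALPHA
        child.append(random.uniform(lo - span, hi + span))
    return child

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]             # truncation selection (elitist)
    children = []
    while len(elite) + len(children) < POP:
        child = blx(random.choice(elite), random.choice(elite))
        if random.random() < 0.1:       # Gaussian mutation
            i = random.randrange(DIM)
            child[i] += random.gauss(0.0, 0.1)
        children.append(child)
    pop = elite + children
best = min(pop, key=fitness)
```

Because the genes are floats, perturbing them directly (as in the mutation step) is also the natural mechanism for the kind of sensitivity probing the abstract describes.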

  5. Monte Carlo simulation using the PENELOPE code with an ant colony algorithm to study MOSFET detectors.

    Science.gov (United States)

    Carvajal, M A; García-Pareja, S; Guirado, D; Vilches, M; Anguiano, M; Palma, A J; Lallena, A M

    2009-10-21

In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of approximately 5, while the efficiency is increased by a factor of more than 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a (60)Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0, 15, 30, 45, 60 and 75 degrees. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered.

  6. An Android Malicious Code Detection Method Based on Improved DCA Algorithm

    OpenAIRE

    Chundong Wang; Zhiyuan Li; Liangyi Gong; Xiuliang Mo; Hong Yang; Yi Zhao

    2017-01-01

Recently, Android malicious code has increased dramatically and reinforcement technology has become increasingly powerful. Due to the development of code obfuscation and polymorphic deformation technology, static detection methods whose selected features are the semantics of the application source code cannot completely extract a malware's code features. The Android malware static detection methods whose features are obtained only from the AndroidManifest.xml fil...

  7. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload, sufficient to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game play.

  8. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.

    Science.gov (United States)

    Whittington, James C R; Bogacz, Rafal

    2017-05-01

    To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.
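A toy linear version of such a network can be written down directly. In the sketch below (illustrative dimensions, rates and task, not the paper's experimental setup), the output layer is clamped to the label, the hidden activity relaxes to reduce the prediction-error energy, and every weight update is purely local: presynaptic activity times postsynaptic prediction error.

```python
import random

# Toy linear predictive coding network with one hidden layer and only
# local Hebbian weight updates (sketch).
random.seed(2)
N0, N1, N2 = 2, 3, 2                      # input, hidden, output sizes
W1 = [[random.uniform(-0.5, 0.5) for _ in range(N0)] for _ in range(N1)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(N1)] for _ in range(N2)]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def train_step(x0, target, lr=0.05, settle=50, dt=0.1):
    x1 = matvec(W1, x0)                   # start from the feedforward pass
    x2 = target                           # output layer clamped to the label
    for _ in range(settle):               # inference: relax hidden activity
        e1 = [a - b for a, b in zip(x1, matvec(W1, x0))]
        e2 = [a - b for a, b in zip(x2, matvec(W2, x1))]
        for j in range(N1):               # dx1 = -e1 + W2^T e2
            x1[j] += dt * (-e1[j] + sum(W2[i][j] * e2[i] for i in range(N2)))
    e1 = [a - b for a, b in zip(x1, matvec(W1, x0))]
    e2 = [a - b for a, b in zip(x2, matvec(W2, x1))]
    for j in range(N1):                   # local updates: error x activity
        for k in range(N0):
            W1[j][k] += lr * e1[j] * x0[k]
    for i in range(N2):
        for j in range(N1):
            W2[i][j] += lr * e2[i] * x1[j]

def predict(x0):
    return matvec(W2, matvec(W1, x0))

# Toy supervised task: learn the linear map y = (a + b, a - b).
data = [([1.0, 0.0], [1.0, 1.0]), ([0.0, 1.0], [1.0, -1.0]), ([1.0, 1.0], [2.0, 0.0])]
def total_error():
    return sum(sum((p - t) ** 2 for p, t in zip(predict(x), y)) for x, y in data)
err_before = total_error()
for _ in range(300):
    for x, y in data:
        train_step(x, y)
err_after = total_error()
```

Note that no weight update references activity from a non-adjacent layer, which is the locality property the abstract contrasts with backpropagation.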

  9. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2017-05-01

Full Text Available Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation based on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that can eliminate ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with other, conventional RGB-D (Red-Green-Blue and Depth) algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.
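The fusion idea, requiring agreement between colour and depth cues before declaring foreground, can be sketched with running-average background models. This is an illustrative simplification: ViBe itself maintains a random sample set per pixel, and the record's updating strategy for ghost and shadow suppression is not reproduced.

```python
# Colour+depth foreground segmentation sketch; thresholds are illustrative.

class ColourDepthBackground:
    def __init__(self, color, depth, alpha=0.05, tc=20.0, td=0.1):
        self.bg_c = [row[:] for row in color]   # per-pixel colour background
        self.bg_d = [row[:] for row in depth]   # per-pixel depth background
        self.alpha, self.tc, self.td = alpha, tc, td

    def segment(self, color, depth):
        h, w = len(color), len(color[0])
        mask = [[False] * w for _ in range(h)]
        for r in range(h):
            for c in range(w):
                fg_c = abs(color[r][c] - self.bg_c[r][c]) > self.tc
                fg_d = abs(depth[r][c] - self.bg_d[r][c]) > self.td
                # Foreground only when BOTH cues deviate; a colour change
                # with unchanged depth (e.g., a shadow) stays background.
                mask[r][c] = fg_c and fg_d
                if not mask[r][c]:   # update the model on background pixels
                    a = self.alpha
                    self.bg_c[r][c] += a * (color[r][c] - self.bg_c[r][c])
                    self.bg_d[r][c] += a * (depth[r][c] - self.bg_d[r][c])
        return mask
```

Requiring both cues is what removes the black-shadow false positives that purely colour-based models suffer from.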

  10. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    Energy Technology Data Exchange (ETDEWEB)

    Binh, Do Quang [University of Technical Education Ho Chi Minh City (Viet Nam); Huy, Ngo Quang [University of Industry Ho Chi Minh City (Viet Nam); Hai, Nguyen Hoang [Centre for Research and Development of Radiation Technology, Ho Chi Minh City (Viet Nam)

    2014-12-15

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.
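The weighted-sum scalarisation used to combine the two objectives can be illustrated in one function. The weights, penalty form and PPF limit below are hypothetical stand-ins, not the record's values (which the proposed algorithm in fact searches for automatically).

```python
# Weighted-sum scalarisation sketch for two reactor objectives:
# maximise the effective multiplication factor k_eff, minimise the power
# peaking factor (PPF), subject to a safety limit on the PPF.

def fitness(k_eff, ppf, w=0.7, ppf_limit=2.0):
    penalty = 1000.0 * max(0.0, ppf - ppf_limit)   # constraint violation
    return w * k_eff - (1.0 - w) * ppf - penalty   # higher is better
```

A loading pattern that raises k_eff without worsening the PPF always scores higher, while any pattern violating the safety limit is heavily penalised.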

  11. A New Method Of Gene Coding For A Genetic Algorithm Designed For Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Radu BELEA

    2003-12-01

Full Text Available In a parametric optimization problem, the genes encode the real parameters of the fitness function. Two coding techniques are in common use, known as binary-coded genes and real-coded genes. The comparison between the two has been a controversial subject since the first papers on parametric optimization appeared, and an objective analysis of the advantages and disadvantages of the two coding techniques is difficult when the information being compared is represented in different formats. The present paper proposes a gene coding technique that uses the same format for both binary-coded and real-coded genes. After unifying the representation of the real parameters, the following criterion is applied: the differences between the two techniques are measured statistically by the effect of the genetic operators on a set of randomly generated individuals.
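For reference, the two classic codings compared in such studies can be sketched as follows; the fixed-point decoding formula is the standard one, while the unified format proposed by this record is not reproduced here.

```python
import random

def decode_binary(bits, lo, hi):
    """Binary-coded gene: map an n-bit string onto [lo, hi] with
    resolution (hi - lo) / (2**n - 1)."""
    return lo + (hi - lo) * int(bits, 2) / (2 ** len(bits) - 1)

def mutate_real(gene, sigma=0.1):
    """Real-coded gene: the parameter is stored directly as a float and
    mutated, for example, by a Gaussian perturbation."""
    return gene + random.gauss(0.0, sigma)
```

The difficulty the abstract points to is visible here: a one-bit flip in the binary gene can move the decoded value by half the range, whereas the real-coded mutation has a tunable, format-independent effect.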

  12. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    Science.gov (United States)

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

This paper proposes a shot boundary detection approach using a genetic algorithm (GA) and fuzzy logic. The membership functions of the fuzzy system are calculated using the GA from pre-observed actual values of shot boundaries. The fuzzy system then classifies the types of shot transitions. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations, or generations, of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1-score.

  13. Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications

    Science.gov (United States)

    Roser, Miguel; Villegas, Paulo

    1994-05-01

In this paper we present work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, 'Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s', widely known as MPEG-1. The main intention in the definition of the MPEG-1 standard was to provide a large degree of flexibility so that it could be used in many different applications. The interest of this paper is to adapt the MPEG-1 scheme for low-bit-rate operation and optimize it for special situations, for example a talking head with low movement, which is a usual situation in videotelephony applications. An adapted and compatible MPEG-1 scheme, previously developed, able to operate at p×8 kbit/s is used in this work. Looking for a low-complexity scheme, and taking into account that the most expensive step in terms of computer time is the motion estimation process (almost 80% of the total computer time is spent on the ME), an improvement of the motion estimation module based on the use of a new search pattern is presented in this paper.
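A standard low-complexity search pattern of the kind alluded to is the classic three-step search, sketched below in pure Python (an illustration; the record's specific new pattern is not reproduced). It probes eight neighbours at step sizes 4, 2, 1 around the current best vector instead of exhaustively scanning the search window.

```python
# Three-step search block matching (sketch); frames are 2-D intensity lists.

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between a block and its displaced match."""
    s = 0
    for r in range(bs):
        for c in range(bs):
            s += abs(cur[by + r][bx + c] - ref[by + r + dy][bx + c + dx])
    return s

def three_step_search(ref, cur, bx, by, bs=8):
    best_dx = best_dy = 0
    best = sad(ref, cur, bx, by, 0, 0, bs)
    h, w = len(ref), len(ref[0])
    step = 4
    while step >= 1:
        cx, cy = best_dx, best_dy          # fixed centre for this step
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                nx, ny = cx + dx, cy + dy
                # Keep the displaced reference block inside the frame.
                if 0 <= bx + nx and bx + nx + bs <= w and 0 <= by + ny and by + ny + bs <= h:
                    cost = sad(ref, cur, bx, by, nx, ny, bs)
                    if cost < best:
                        best, best_dx, best_dy = cost, nx, ny
        step //= 2
    return best_dx, best_dy
```

Full search over a ±7 window would cost 225 SAD evaluations per block; the three-step pattern needs at most 25, which is the kind of saving that matters when ME dominates encoder time.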

  14. Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes

    Science.gov (United States)

    Dubreu, Christine; Manzanera, Antoine; Bohain, Eric

    2008-04-01

As target tracking is attracting more and more interest, the necessity to reliably assess tracking algorithms in any conditions is becoming essential. The evaluation of such algorithms requires a database of sequences representative of the whole range of conditions in which the tracking system is likely to operate, together with its associated ground truth. However, building such a database with real sequences, and collecting the associated ground truth, appears to be hardly possible and very time-consuming. Therefore, more and more often, synthetic sequences are generated by complex and heavy simulation platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed using simple synthetic sequences generated without such complex simulation platforms. These sequences are generated from a finite number of discriminating parameters, and are statistically representative, as regards these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used for low-level tracking algorithm evaluation in any operating conditions. The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for evaluation of tracking systems on complex-textured objects, and to show how the number of parameters can be increased to synthesize more elaborate scenes and deal with more complex dynamics, including occlusions and three-dimensional deformations.

  15. A Time-Consistent Video Segmentation Algorithm Designed for Real-Time Implementation

    Directory of Open Access Journals (Sweden)

    M. El Hassani

    2008-01-01

    Temporal consistency of the segmentation is ensured by incorporating motion information through the use of an improved change-detection mask. This mask is designed using both illumination differences between frames and region segmentation of the previous frame. By considering both pixel and region levels, we obtain a particularly efficient algorithm at a low computational cost, allowing its implementation in real-time on the TriMedia processor for CIF image sequences.
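The pixel-plus-region combination can be sketched as follows; the threshold and the majority-vote rule are hypothetical illustrations of the idea, not the record's actual mask design.

```python
# Change-detection mask sketch combining frame differencing with the
# previous frame's region labels.

def change_mask(prev, cur, labels, thresh=15, vote=0.5):
    h, w = len(cur), len(cur[0])
    # Pixel level: illumination difference between consecutive frames.
    pixel = [[abs(cur[r][c] - prev[r][c]) > thresh for c in range(w)]
             for r in range(h)]
    # Region level: vote within each region of the previous segmentation.
    changed, total = {}, {}
    for r in range(h):
        for c in range(w):
            lab = labels[r][c]
            total[lab] = total.get(lab, 0) + 1
            changed[lab] = changed.get(lab, 0) + pixel[r][c]
    moving = {lab for lab in total if changed[lab] / total[lab] > vote}
    # A pixel is marked as changed only if its whole region votes "moving",
    # which suppresses isolated noisy detections.
    return [[labels[r][c] in moving for c in range(w)] for r in range(h)]
```

Regularising the per-pixel decision by region is what keeps the mask temporally consistent at low computational cost.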

  16. Continuous BP Decoding Algorithm for a Low-Density Parity-Check Coded Hybrid ARQ System

    Science.gov (United States)

    Park, Sangjoon; Choi, Sooyong; Hwang, Seung-Hoon

    A continuous belief propagation (BP) decoding algorithm for a hybrid automatic repeat request (ARQ) system is proposed in this paper. The proposed continuous BP decoding algorithm utilizes the extrinsic information generated in the last iteration of the previous transmission for a continuous progression of the decoding through retransmissions. This allows the continuous BP decoding algorithm to accelerate the decoding convergence for codeword determination, especially when the number of retransmissions is large or a currently combined packet has punctured nodes. Simulation results verify the effectiveness of the proposed continuous BP decoding algorithm.

  17. The ALF (Algorithms for Lattice Fermions) project release 1.0. Documentation for the auxiliary field quantum Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Martin Bercx, Florian Goth, Johannes S. Hofmann, Fakher F. Assaad

    2017-08-01

Full Text Available The Algorithms for Lattice Fermions package provides a general code for the finite temperature auxiliary field quantum Monte Carlo algorithm. The code is engineered to be able to simulate any model that can be written in terms of sums of single-body operators, of squares of single-body operators and single-body operators coupled to an Ising field with given dynamics. We provide predefined types that allow the user to specify the model, the Bravais lattice as well as equal time and time displaced observables. The code supports an MPI implementation. Examples such as the Hubbard model on the honeycomb lattice and the Hubbard model on the square lattice coupled to a transverse Ising field are provided and discussed in the documentation. We furthermore discuss how to use the package to implement the Kondo lattice model and the $SU(N)$ Hubbard-Heisenberg model. One can download the code from our Git instance at https://alf.physik.uni-wuerzburg.de and sign in to file issues.

  18. Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    Directory of Open Access Journals (Sweden)

    Pingyan Shi

    2016-01-01

Full Text Available A hybrid multiuser detection (MUD) scheme using code mapping and wrong-code recognition based on a relevance vector machine (RVM) is developed for a direct sequence ultra wide band (DS-UWB) system to cope with multiple access interference (MAI) and to improve computational efficiency. A new MAI suppression mechanism is studied in the following steps: first, code mapping, an optimal decision function, is constructed, and the output candidate code of the matched filter is mapped to a feature space by this function. In the feature space, simulation results show that the error codes caused by MAI and the single-user mapped codes can be classified by a threshold which is related to the SNR of the receiver. Then, on the basis of code mapping, the RVM is used to distinguish the wrong codes from the right ones and finally correct them. Compared with traditional MUD approaches, the proposed method can considerably improve the bit error ratio (BER) performance due to its special MAI suppression mechanism. Simulation results also show that the proposed method can approximately achieve the BER performance of optimal multiuser detection (OMD), while its computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.

  19. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

Full Text Available One of the essential factors that affect the performance of artificial neural networks is the learning algorithm. The performance of a multilayer feed-forward artificial neural network in image compression using different learning algorithms is examined in this paper. Based on gradient descent, conjugate gradient and quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the quasi-Newton based algorithm has better performance as compared to the other two algorithms.

  20. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    OpenAIRE

Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin; Wahid, Khan A.

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At ...

  1. High precision three-dimensional iterative indoor localization algorithm using code division multiple access modulation based on visible light communication

    Science.gov (United States)

    Guan, Weipeng; Wu, Yvxiang; Wen, Shangsheng; Yang, Chen; Chen, Hao; Zhang, Zhaoze; Chen, Yingcong

    2016-10-01

    To solve the problem of positioning accuracy affected by mutual interference among multiple reference points in the traditional visible light communication positioning system, an iterative algorithm of received signal strength (RSS) based on code division multiple access (CDMA) coding is proposed. Every light-emitting diode (LED) source broadcasts a unique CDMA modulation identification (ID) code, which is associated with geographic position. The mobile terminal receives a mixed light signal from each LED reference point. By using the orthogonality of the spreading codes, the corresponding ID position information and the intensity attenuation factor of each LED reference point source can be available. According to the ID information and signal attenuation intensity, the location area of each LED and the distance between the receiver end and each LED can be determined. The three-dimensional (3-D) position of the receiver can be obtained by using the iterative algorithm of RSS triangulation. The experimental results show that the proposed algorithm can achieve a positioning accuracy of 5.25 cm in a two-dimensional (2-D) positioning system. And in the 3-D positioning system, the maximum positioning error is 10.27 cm, the minimum positioning error is 0.45 cm, the average positioning error is 3.97 cm, and the proportion of the positioning error exceeding 5 cm is <25%. With a very good positioning accuracy, this system is simple and does not require synchronization processing. What is more, it can be applied to both the 2-D and 3-D localization systems, which has a broad application prospect.
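The core RSS triangulation step can be sketched for the 2-D case: given each LED's known position and a distance estimate recovered from the received signal strength, linearise the circle equations and solve the resulting 2x2 system. This is an illustration of the geometry only; the record's CDMA decoding, attenuation model and 3-D iterative refinement are not reproduced.

```python
# 2-D trilateration sketch from three anchor (LED) positions and distances.

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting the first circle equation from the others linearises them:
    # 2(x2-x1)x + 2(y2-y1)y = d1^2 - d2^2 + x2^2 - x1^2 + y2^2 - y1^2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21        # anchors must not be collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With more than three reference points the same construction becomes an overdetermined least-squares problem, which is where iterating on the RSS estimates pays off.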

  2. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video...

  3. Video transmission on ATM networks. Ph.D. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1993-01-01

The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on this mapping, a complete set of principles for the design of video codecs for network transmission is proposed.
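A dual leaky bucket policer of the kind proposed can be sketched as two leaky buckets that must both admit a cell: one tracking a sustained rate with a burst allowance, one tracking the peak rate. The rates and depths below are illustrative, not the thesis's parameters.

```python
# Dual leaky bucket policing sketch.

class LeakyBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth   # leak rate (cells/s), bucket size
        self.level, self.t = 0.0, 0.0

    def drain(self, t):
        """Leak out the contents accumulated since the last arrival."""
        self.level = max(0.0, self.level - (t - self.t) * self.rate)
        self.t = t

    def admits(self, size):
        return self.level + size <= self.depth

class DualLeakyBucket:
    """A cell conforms only if the sustained-rate AND peak-rate buckets admit it."""
    def __init__(self, scr, scr_burst, pcr, pcr_burst):
        self.buckets = [LeakyBucket(scr, scr_burst), LeakyBucket(pcr, pcr_burst)]

    def police(self, t, size=1.0):
        for b in self.buckets:
            b.drain(t)
        if all(b.admits(size) for b in self.buckets):
            for b in self.buckets:        # charge only conforming cells
                b.level += size
            return True
        return False
```

The peak-rate bucket rejects cells arriving too close together, while the sustained-rate bucket lets short bursts through but rejects a source whose long-run rate is too high.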

  4. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both... an intractable complexity. Our solution is to use a link performance prediction method and a trellis diagram representation such that the resulting algorithm outputs the link configuration that conveys as many information bits as possible while also fulfilling the block error rate constraints. The proposed link...

  5. Electron-beam lithographic computer-generated holograms designed by direct search coding algorithm

    Science.gov (United States)

    Tamura, Hitoshi; Torii, Yasuhiro

    2009-08-01

An optimized encoding algorithm is required to produce high-quality computer-generated holograms (CGHs). For this purpose, we have proposed that use of the direct search algorithm (DSA) is effective for encoding Lohmann-type binary amplitude and phase CGHs. However, it takes a long computation time to obtain an optimal solution with a DSA. To solve this problem, we have found that a simultaneously selective direct search algorithm (SDSA) is greatly effective in shortening the computation time for encoding a Lohmann-type CGH.

  6. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.

  7. Algorithms and computer codes for atomic and molecular quantum scattering theory

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, L. (ed.)

    1979-01-01

    This workshop has succeeded in bringing up 11 different coupled equation codes on the NRCC computer, testing them against a set of 24 different test problems and making them available to the user community. These codes span a wide variety of methodologies, and factors of up to 300 were observed in the spread of computer times on specific problems. A very effective method was devised for examining the performance of the individual codes in the different regions of the integration range. Many of the strengths and weaknesses of the codes have been identified. Based on these observations, a hybrid code has been developed which is significantly superior to any single code tested. Thus, not only have the original goals been fully met, the workshop has resulted directly in an advancement of the field. All of the computer programs except VIVS are available upon request from the NRCC. Since an improved version of VIVS is contained in the hybrid program, VIVAS, it was not made available for distribution. The individual program LOGD is, however, available. In addition, programs which compute the potential energy matrices of the test problems are also available. The software library names for Tests 1, 2 and 4 are HEH2, LICO, and EN2, respectively.

  8. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  9. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, L. (ed.)

    1979-01-01

    The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.

  10. An enhancement of selection and crossover operations in real-coded genetic algorithm for large-dimensionality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Noh Sung; Lee, Jongsoo [Yonsei University, Seoul (Korea, Republic of)

    2016-01-15

    The present study aims to implement a new selection method and a novel crossover operation in a real-coded genetic algorithm. The proposed selection method facilitates the establishment of a successively evolved population by combining several subpopulations: an elitist subpopulation, an offspring subpopulation and a mutated subpopulation. A probabilistic crossover is performed based on the measure of probabilistic distance between the individuals. The concept of ‘allowance’ is suggested to describe the level of variance in the crossover operation. A number of nonlinear/non-convex functions and engineering optimization problems are explored to verify the capacities of the proposed strategies. The results are compared with those obtained from other genetic and nature-inspired algorithms.
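    The two ingredients above, a population assembled from elitist, offspring, and mutated subpopulations, and a crossover whose spread is controlled by an "allowance" parameter, can be sketched generically. The blend-style crossover and the subpopulation sizes below are illustrative stand-ins, not the paper's exact operators:

```python
import random

def blend_crossover(p1, p2, allowance=0.5, rng=random):
    """BLX-style real-coded crossover: each child gene is drawn from the
    parents' interval widened by 'allowance' times its span."""
    child = []
    for x, y in zip(p1, p2):
        lo, hi = (x, y) if x <= y else (y, x)
        span = hi - lo
        child.append(rng.uniform(lo - allowance * span, hi + allowance * span))
    return child

def next_population(population, fitness, n_elite, n_mutant, rng=random, sigma=0.1):
    """Assemble the next generation from three subpopulations:
    elites, crossover offspring, and Gaussian-mutated individuals."""
    ranked = sorted(population, key=fitness)          # minimisation
    elites = [list(ind) for ind in ranked[:n_elite]]
    n_offspring = len(population) - n_elite - n_mutant
    offspring = [blend_crossover(rng.choice(ranked), rng.choice(ranked), rng=rng)
                 for _ in range(n_offspring)]
    mutants = [[g + rng.gauss(0.0, sigma) for g in rng.choice(ranked)]
               for _ in range(n_mutant)]
    return elites + offspring + mutants
```

    A larger `allowance` lets children fall further outside the parents' interval, which trades exploitation for exploration in large-dimensional searches.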

  11. Optimal design of FIR high pass filter based on L1 error approximation using real coded genetic algorithm

    Directory of Open Access Journals (Sweden)

    Apoorva Aggarwal

    2015-12-01

    Full Text Available In this paper, an optimal design of a linear phase digital finite impulse response (FIR) highpass (HP) filter using the L1-norm based real-coded genetic algorithm (RCGA) is investigated. A novel fitness function based on the L1 norm is adopted to enhance the design accuracy. Optimized filter coefficients are obtained by defining the filter objective function in the L1 sense using the RCGA. Simulation analysis reveals that the performance of the RCGA adopting this fitness function is better in terms of the signal attenuation ability of the filter, a flatter passband and the convergence rate. The algorithm improves substantially on the gradient-based L1 optimization approach across several of these factors. It is concluded that the RCGA leads to the best solution for the FIR filter design under the specified parameters, at the cost of a slightly higher, barely noticeable transition width.

  12. Validation of a Case-Finding Algorithm for Hidradenitis Suppurativa Using Administrative Coding from a Clinical Database.

    Science.gov (United States)

    Strunk, Andrew; Midura, Margaretta; Papagermanos, Vassiliki; Alloo, Allireza; Garg, Amit

    2017-01-01

    Requisite to the application of clinical databases for observational research in hidradenitis suppurativa (HS) is the identification of an accurate case cohort. To assess the validity of utilizing administrative codes to establish the HS cohort from a large clinical database. In this retrospective study using chart review as the reference standard, we calculated several estimates of the diagnostic accuracy of at least 1 ICD-9 code for HS. Estimates of the diagnostic accuracy of at least 1 ICD-9 code for HS include sensitivity 100% (95% CI 98-100), specificity 83% (95% CI 77-88), positive predictive value 79% (95% CI 72-85), negative predictive value 100% (95% CI 98-100), accuracy 90% (95% CI 86-93), and kappa statistic 79% (95% CI 73-86). The case-finding algorithm employing at least 1 ICD-9 code for HS provides balance in achieving accuracy and adequate power, both necessary in the evaluation of a less common disease and its potential association with uncommon or even rare events. © 2017 S. Karger AG, Basel.

  13. Direct search coding algorithm with reduction in computing time by simultaneous selection rule

    Science.gov (United States)

    Tamura, Hitoshi

    2014-05-01

    An optimized encoding algorithm is required to produce high-quality computer-generated holograms (CGHs). For such a purpose, I have proposed that the use of the direct search algorithm (DSA) is effective for encoding the amplitude and phase in the Lohmann-type CGH. However, the DSA takes much computation time to obtain an optimum solution. To solve this problem, I have newly found that the simultaneous direct search algorithm (SDSA) greatly shortens the computation time for encoding the Lohmann-type CGH. The evaluation value of the reconstructed image for the SDSA matches the DSA's value of 0.992, while the computation time is drastically shortened from the DSA's 3575 s to 55 s.
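    The direct-search principle behind the DSA, flipping one aperture cell at a time and keeping the flip only when the evaluation improves, can be illustrated on a toy binary objective. The Hamming-distance cost below is a hypothetical stand-in for the reconstructed-image evaluation used with a real CGH:

```python
def direct_search(cells, cost, sweeps=10):
    """Greedy direct search over binary cells: try flipping each cell in turn
    and keep the flip only when it lowers the cost."""
    best = cost(cells)
    for _ in range(sweeps):
        improved = False
        for i in range(len(cells)):
            cells[i] ^= 1                 # trial flip
            trial = cost(cells)
            if trial < best:
                best = trial              # keep the improvement
                improved = True
            else:
                cells[i] ^= 1             # revert the flip
        if not improved:
            break                         # converged: no flip helps
    return cells, best

# Toy objective: Hamming distance to a fixed target pattern.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def hamming_cost(cells):
    return sum(c != t for c, t in zip(cells, TARGET))
```

    The "simultaneous" refinement in the SDSA amounts to evaluating several candidate changes per pass instead of one, which is what shortens the overall search.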

  14. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  15. The Guruswami--Sudan Decoding Algorithm for Reed--Solomon Codes

    Science.gov (United States)

    McEliece, R. J.

    2003-01-01

    This article is a tutorial discussion of the Guruswami-Sudan (GS) Reed-Solomon decoding algorithm, including self-contained treatments of the Kotter and Roth-Ruckenstein (RR) improvements. It also contains a number of new results, including a rigorous discussion of the average size of the decoder's list, an improvement in the RR algorithm's stopping rule, a simplified treatment of the combinatorics of weighted monomial orders, and a proof of the monotonicity of the GS decoding radius as a function of the interpolation multiplicity.

  16. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  17. A Multi-Frame Post-Processing Approach to Improved Decoding of H.264/AVC Video

    DEFF Research Database (Denmark)

    Huang, Xin; Li, Huiying; Forchhammer, Søren

    2007-01-01

    Video compression techniques may yield visually annoying artifacts for limited bitrate coding. In order to improve video quality, a multi-frame motion-compensated filtering algorithm is reported, based on combining multiple pictures to form a single super-resolution picture and decimation...

  18. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    that in the typical case, where the error points are "independent," one can prove that the algorithm always fails, that is, gives a wrong or no answer, except for high rates where it does much better than expected. This explains the simulation results presented by O'Sullivan at the 1997 ISIT. We also show...

  19. Sparse Coding Algorithm with Negentropy and Weighted ℓ1-Norm for Signal Reconstruction

    Directory of Open Access Journals (Sweden)

    Yingxin Zhao

    2017-11-01

    Full Text Available Compressive sensing theory has attracted widespread attention in recent years and sparse signal reconstruction has been widely used in signal processing and communication. This paper addresses the problem of sparse signal recovery, especially with non-Gaussian noise. The main contribution of this paper is the proposal of an algorithm where the negentropy and reweighted schemes represent the core of the approach to the solution of the problem. The signal reconstruction problem is formalized as a constrained minimization problem, where the objective function is the sum of a term measuring the statistical characteristics of the error, the negentropy, and a sparse regularization term, the ℓp-norm for 0 < p < 1. The ℓp-norm, however, leads to a non-convex optimization problem which is difficult to solve efficiently. Herein we treat the ℓp-norm as a series of weighted ℓ1-norms so that the sub-problems become convex. We propose an optimized algorithm based on forward-backward splitting. The algorithm is fast and succeeds in exactly recovering sparse signals with Gaussian and non-Gaussian noise. Several numerical experiments and comparisons demonstrate the superiority of the proposed algorithm.
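    The reweighting idea, treating the ℓp penalty as a sequence of weighted ℓ1 problems, can be sketched with a scalar soft-threshold (the ℓ1 proximal operator) applied with per-coefficient weights. The denoising setup below (identity measurement matrix, no negentropy term) is a deliberate simplification of the paper's algorithm:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def reweight(x, p=0.5, eps=1e-3):
    """Weights that make a weighted l1 penalty approximate the lp penalty:
    w_i proportional to 1 / (|x_i| + eps)^(1 - p)."""
    return [1.0 / (abs(xi) + eps) ** (1.0 - p) for xi in x]

def reweighted_l1_denoise(b, lam=0.4, rounds=3, p=0.5):
    """Solve min ||x - b||^2 / 2 + lam * sum(w_i * |x_i|) for fixed weights,
    re-estimating the weights between rounds."""
    x = list(b)
    for _ in range(rounds):
        w = reweight(x, p=p)
        x = [soft_threshold(bi, lam * wi) for bi, wi in zip(b, w)]
    return x
```

    Small coefficients receive large weights and are driven to zero, while large coefficients are barely shrunk, which is how the weighted ℓ1 surrogate mimics the sparser ℓp penalty.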

  20. The implementation of the Gaussian filter for dimensional metrology basics, algorithms and C code

    CERN Document Server

    Krystek, Michael, Dr

    2012-01-01

    The Gaussian filter is set to remain of enduring importance in metrology. This publication deals with the digital implementation of the Gaussian filter and the estimation of the errors that occur. Guidance is given on how to keep these errors as small as possible. In addition, algorithms for the implementation of the Gaussian filter are given.
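    The discrete Gaussian filter at the heart of such an implementation can be sketched as a normalized kernel convolved with the measured profile. Replicating the border samples is one simple end-handling choice among several discussed in the metrology literature, not the book's prescribed method:

```python
import math

def gaussian_kernel(sigma, radius):
    """Sampled, unit-sum Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def filter_profile(profile, sigma, radius):
    """Convolve a 1-D profile with the Gaussian kernel,
    replicating the end samples to handle the borders."""
    k = gaussian_kernel(sigma, radius)
    n = len(profile)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp to the ends
            acc += w * profile[idx]
        out.append(acc)
    return out
```

    Because the kernel sums to one, a constant profile passes through unchanged, a quick sanity check on any filter implementation.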

  1. Edge-based intramode selection for depth-map coding in 3D-HEVC.

    Science.gov (United States)

    Park, Chun-Su

    2015-01-01

    The 3D video extension of High Efficiency Video Coding (3D-HEVC) is the state-of-the-art video coding standard for the compression of the multiview video plus depth format. In the 3D-HEVC design, new depth-modeling modes (DMMs) are utilized together with the existing intraprediction modes for depth intracoding. The DMMs can provide more accurate prediction signals and thereby achieve better compression efficiency. However, testing the DMMs in the intramode decision process causes a drastic increase in the computational complexity. In this paper, we propose a fast mode decision algorithm for depth intracoding. The proposed algorithm first performs a simple edge classification in the Hadamard transform domain. Then, based on the edge classification results, the proposed algorithm selectively omits unnecessary DMMs in the mode decision process. Experimental results demonstrate that the proposed algorithm speeds up the mode decision process by up to 37.65% with negligible loss of coding efficiency.
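    The Hadamard-domain edge test underlying such a mode decision can be illustrated with a small sketch: transform a block with an order-4 Hadamard matrix and call it an edge block when the AC (non-DC) energy dominates. The 4×4 block size and the 0.1 threshold are illustrative assumptions, not the values of the proposed algorithm:

```python
# Order-4 Hadamard matrix (entries +/-1, symmetric).
H4 = [[1,  1,  1,  1],
      [1, -1,  1, -1],
      [1,  1, -1, -1],
      [1, -1, -1,  1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def hadamard4(block):
    """2-D Hadamard transform T = H * B * H^T (H4 is its own transpose)."""
    return matmul(matmul(H4, block), H4)

def is_edge_block(block, threshold=0.1):
    """Classify a 4x4 block as 'edge' when the AC energy is a significant
    fraction of the total transform energy."""
    t = hadamard4(block)
    total = sum(abs(c) for row in t for c in row)
    ac = total - abs(t[0][0])
    return total > 0 and ac / total > threshold
```

    A flat block puts all its energy in the DC coefficient and is classified smooth, so the expensive depth-modeling modes could be skipped for it.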

  2. ICRPfinder: a fast pattern design algorithm for coding sequences and its application in finding potential restriction enzyme recognition sites

    Directory of Open Access Journals (Sweden)

    Stafford Phillip

    2009-09-01

    Full Text Available Background: Restriction enzymes can produce easily definable segments from DNA sequences by using a variety of cut patterns. There are, however, no software tools that can aid in gene building -- that is, modifying wild-type DNA sequences to express the same wild-type amino acid sequences but with enhanced codons, specific cut sites, unique post-translational modifications, and other engineered-in components for recombinant applications. A fast DNA pattern design algorithm, ICRPfinder, is provided in this paper and applied to find or create potential recognition sites in target coding sequences. Results: ICRPfinder is applied to find or create restriction enzyme recognition sites by introducing silent mutations. The algorithm is shown capable of mapping existing cut-sites but importantly it also can generate specified new unique cut-sites within a specified region that are guaranteed not to be present elsewhere in the DNA sequence. Conclusion: ICRPfinder is a powerful tool for finding or creating specific DNA patterns in a given target coding sequence. ICRPfinder finds or creates patterns, which can include restriction enzyme recognition sites, without changing the translated protein sequence. ICRPfinder is a browser-based JavaScript application and it can run on any platform, in on-line or off-line mode.
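    The core operation, introducing a synonymous (silent) codon substitution so that a desired recognition pattern appears without changing the encoded protein, can be sketched as follows. The codon table is a small subset of the standard genetic code, and `find_silent_site` is a hypothetical helper that tries only single-codon substitutions for brevity:

```python
# Subset of the standard genetic code: codon -> amino acid.
CODON_TO_AA = {
    "GAA": "E", "GAG": "E",
    "CCA": "P", "CCC": "P", "CCG": "P", "CCT": "P",
    "AAA": "K", "AAG": "K",
}

# Invert the table: amino acid -> list of synonymous codons.
AA_TO_CODONS = {}
for codon, aa in CODON_TO_AA.items():
    AA_TO_CODONS.setdefault(aa, []).append(codon)

def find_silent_site(seq, site):
    """Return a variant of 'seq' containing 'site', obtained by a single
    synonymous codon substitution, or None if no such variant exists."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    for i, codon in enumerate(codons):
        for alt in AA_TO_CODONS[CODON_TO_AA[codon]]:
            if alt == codon:
                continue
            variant = "".join(codons[:i] + [alt] + codons[i + 1:])
            if site in variant:
                return variant
    return None
```

    A full tool would additionally verify that the new site occurs nowhere else in the sequence, as ICRPfinder guarantees.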

  3. A Combination of Modal Synthesis and Subspace Iteration for an Efficient Algorithm for Modal Analysis within a FE-Code

    Directory of Open Access Journals (Sweden)

    M.W. Zehn

    2003-01-01

    Full Text Available Various well-known modal synthesis methods exist in the literature, which are all based upon certain assumptions about the relation of generalised modal co-ordinates to internal modal co-ordinates. If employed in a dynamical FE substructure/superelement technique, the generalised modal co-ordinates are represented by the master degrees of freedom (DOF) of the master nodes of the substructure. To conduct FE modal analysis, the modal synthesis method can be integrated to reduce the number of necessary master nodes or to ease the process of defining additional master points within the structure. The paper presents such a combined method, which can be integrated very efficiently and seamlessly into a special subspace eigenvalue problem solver with no need to alter the FE system matrices within the FE code. Accordingly, the merits of the new algorithm are its easy implementation into an FE code, the reduced effort needed to carry out modal synthesis, and its versatility in dealing with superelements. The paper presents examples to illustrate the proper working of the proposed algorithm.

  4. Singer product apertures—A coded aperture system with a fast decoding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Byard, Kevin, E-mail: kevin.byard@aut.ac.nz [School of Economics, Faculty of Business, Economics and Law, Auckland University of Technology, Auckland 1142 (New Zealand); Shutler, Paul M.E. [National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616 (Singapore)

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions, the speed advantage of direct vector decoding over induction decoding is greater for lower-throughput apertures.

  5. Algorithm for Wave-Particle Resonances in Fluid Codes - Final Report

    CERN Document Server

    Mattor, N

    2000-01-01

    We review the work performed under LDRD ER grant 98-ERD-099. The goal of this work is to write a subroutine for a fluid turbulence code that allows it to incorporate wave-particle resonances (WPR). WPR historically have required a kinetic code, with extra dimensions needed to evolve the phase space distribution function, f(x, v, t). The main results accomplished under this grant have been: (1) Derivation of a nonlinear closure term for a 1D electrostatic collisionless fluid; (2) Writing of a 1D electrostatic fluid code, "es1f," with a subroutine to calculate the aforementioned closure term; (3) derivation of several methods to calculate the closure term, including Eulerian, Euler-local, fully local, linearized, and linearized zero-phase-velocity, and implementation of these in es1f; (4) Successful modeling of the Landau damping of an arbitrary Langmuir wave; (5) Successful description of a kinetic two-stream instability up to the point of the first bounce; and (6) a spin-off project which uses a mathematical ...

  6. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual...... rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can...... be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...

  7. Algorithms and data structures for massively parallel generic adaptive finite element codes

    KAUST Repository

    Bangerth, Wolfgang

    2011-12-01

    Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.

  8. Hamevol1.0: a C++ code for differential equations based on Runge-Kutta algorithm. An application to matter enhanced neutrino oscillation

    CERN Document Server

    Aliani, P; Picariello, M; Torrente-Lujan, E; Torrente-Lujan, Emilio

    2003-01-01

    We present a C++ implementation of a fifth order semi-implicit Runge-Kutta algorithm for solving Ordinary Differential Equations. This algorithm can be used for studying many different problems and in particular it can be applied for computing the evolution of any system whose Hamiltonian is known. We consider in particular the problem of calculating the neutrino oscillation probabilities in presence of matter interactions. The time performance and the accuracy of this implementation is competitive with respect to the other analytical and numerical techniques used in literature. The algorithm design and the salient features of the code are presented and discussed and some explicit examples of code application are given.
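    As an illustration of the Runge-Kutta step structure such a code implements, here is the classic explicit fourth-order method with a fixed-step driver. This is a generic sketch, not the fifth-order semi-implicit scheme of Hamevol1.0:

```python
def rk4_step(f, t, y, h):
    """One step of the classic explicit fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    """Integrate y' = f(t, y) from t0 to t1 with a fixed step size."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

    Integrating y' = y over [0, 1] recovers e to roughly the method's O(h^4) global accuracy; a semi-implicit variant replaces the explicit stages with stages requiring a linear solve, which is what makes stiff Hamiltonian evolution tractable.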

  9. Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm [v1; ref status: indexed, http://f1000r.es/2qo

    Directory of Open Access Journals (Sweden)

    Christopher R Madan

    2014-01-01

    Full Text Available When studying animal behaviour within an open environment, movement-related data are often important for behavioural analyses. Therefore, simple and efficient techniques are needed to present and analyze the data of such movements. However, it is challenging to present both spatial and temporal information of movements within a two-dimensional image representation. To address this challenge, we developed the spectral time-lapse (STL) algorithm that re-codes an animal’s position at every time point with a time-specific color, and overlays it with a reference frame of the video, to produce a summary image. We additionally incorporated automated motion tracking, such that the animal’s position can be extracted and summary statistics such as path length and duration can be calculated, as well as instantaneous velocity and acceleration. Here we describe the STL algorithm and offer a freely available MATLAB toolbox that implements the algorithm and allows for a large degree of end-user control and flexibility.

  10. Design of a Code-Maker Translator Assistive Input Device with a Contest Fuzzy Recognition Algorithm for the Severely Disabled

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2015-01-01

    Full Text Available This study developed an assistive system for people with severe physical disabilities, named the “code-maker translator assistive input device”, which utilizes a contest fuzzy recognition algorithm and Morse code encoding to provide keyboard and mouse functions for users to access a standard personal computer, smartphone, and tablet PC. This assistive input device has seven features: small size, easy installation, modular design, simple maintenance, functionality, very flexible input interface selection, and scalability of system functions when the device is combined with computer application software or APP programs. Users with severe physical disabilities can use this device to operate the various functions of computers, smartphones, and tablet PCs, such as sending e-mail, Internet browsing, playing games, and controlling home appliances. A patient with a brain artery malformation participated in this study. The analysis result showed that the subject could become familiar with operating the long/short tones of Morse code within one month. In the future, we hope this system can help more people in need.
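    The translation step from long/short tone sequences to characters can be sketched with a lookup table. The table below covers only part of the international Morse alphabet, and a real assistive device additionally needs timing thresholds (the fuzzy recognition step) to separate dots, dashes, letters, and words:

```python
# Partial international Morse table: tone pattern -> character.
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
}

def decode_morse(message):
    """Decode a message whose letters are separated by spaces
    and whose words are separated by ' / '."""
    words = []
    for word in message.split(" / "):
        words.append("".join(MORSE_TO_CHAR[letter] for letter in word.split()))
    return " ".join(words)
```

    Once decoded, the characters can be forwarded as keyboard events, which is how such a device emulates standard input hardware.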

  11. Research on High-Frequency Combination Coding-Based SSVEP-BCIs and Its Signal Processing Algorithms

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    2015-01-01

    Full Text Available This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain computer interface (BCI) systems. The new paradigm is High-Frequency Combination Coding-Based SSVEP (HFCC-SSVEP). The goal of this study is to increase the number of targets using fewer stimulation frequencies, while diminishing subject fatigue and reducing the risk of photosensitive epileptic seizures. This paper investigated the HFCC-SSVEP high-frequency response (beyond 25 Hz) for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz). HFCC-SSVEP produces n^n targets with n high stimulation frequencies through the Time Series Combination Code. Furthermore, the Improved Hilbert-Huang Transform (IHHT) is adopted to extract time-frequency features of the proposed SSVEP response. Lastly, the differentiation combination (DC) method is proposed to select the combination coding sequence in order to increase the recognition rate; as a result, the IHHT algorithm and the DC method for the proposed SSVEP paradigm increase recognition efficiency so as to improve the ITR and increase the stability of the BCI system. Furthermore, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent safety hazards linked to photo-induced epileptic seizures. This study tests five subjects in order to verify the feasibility of the proposed method.

  12. Feasibility study for improved steady-state initialization algorithms for the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Paulsen, M.P.; Peterson, C.E.; Katsma, K.R. (Computer Simulation and Analysis, Inc., Idaho Falls, ID (United States))

    1993-04-01

    A design for a new steady-state initialization method is presented that represents an improvement over the current method used in RELAP5. Current initialization methods for RELAP5 solve the transient fluid-flow balance equations, simulating a transient to achieve steady-state conditions. Because the transient solution is used, the initial conditions may change from the desired values, requiring the use of controllers and long transient running times to obtain steady-state conditions for system problems. The new initialization method allows the user to fix thermal-hydraulic values in volumes and junctions where the conditions are best known and have the code compute the initial conditions in other areas of the system. The steady-state balance equations and solution methods are presented. The constitutive, component, and special-purpose models are reviewed with respect to modifications required for the new steady-state initialization method. The requirements for user input are defined and the feasibility of the method is demonstrated with a testbed code by initializing some simple channel problems. The initializations of the sample problems using the old and the new methods are compared.

  13. On the Impact of Zero-padding in Network Coding Efficiency with Internet Traffic and Video Traces

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2016-01-01

    compiled by Arizona State University. Our numerical results show the dependence of the zero-padding overhead on the number of packets combined in a generation using RLNC. Surprisingly, medium and large TCP generations are strongly affected, with more than 100% padding overhead. Although all video...

  14. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with the low complexity envelope detection solution is investigated. We present both experimental studies and simulation of high quality high-definition compressed...

  15. Algorithms

    Indian Academy of Sciences (India)

    , i is referred to as the loop-index, 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  16. Edge-preserving Intra mode for efficient depth map coding based on H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    2014-01-01

    Depth-image-based-rendering (DIBR) algorithms for 3D video communication systems based on the “multi-view video plus depth” format are very sensitive to the accuracy of depth information. Specifically, edge regions in the depth data should be preserved in the coding/decoding process to ensure good...

  17. PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers

    Energy Technology Data Exchange (ETDEWEB)

    Salvat, F.; Fernandez-Varea, J.M.; Baro, J.; Sempau, J.

    1996-07-01

    The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from 1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A simple geometry package permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the simulation package, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm. (Author) 108 refs.

  18. PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers

    Energy Technology Data Exchange (ETDEWEB)

    Salvat, F.; Fernandez-Varea, J.M.; Baro, J.; Sempau, J.

    1996-10-01

    The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from about 1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A simple geometry package permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the simulation package, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm.

  19. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    Science.gov (United States)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration to predict future stages. A standard method for calibration is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm, which combines Dynamically Dimensioned Search (DDS) and Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for deeper located soil sensors. However, VGM parameters were similar to those from previous studies. Both methods are equally computationally efficient. We claim that a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge for complex vadose zone modelling with multiple soil layers and can be a potential tool for calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
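The two goodness-of-fit measures quoted above (RMSE and NSE) can be sketched directly from their standard definitions. The soil-moisture series below are made-up illustrative numbers, not the Abbotsford data.

```python
import math

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 is 'no better than the mean'."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [0.21, 0.25, 0.30, 0.28, 0.24]   # illustrative soil moisture observations
sim = [0.22, 0.24, 0.29, 0.29, 0.23]   # illustrative model output
print(round(rmse(obs, sim), 3), round(nse(obs, sim), 3))
```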

  20. Electromyography-based seizure detector: Preliminary results comparing a generalized tonic-clonic seizure detection algorithm to video-EEG recordings.

    Science.gov (United States)

    Szabó, Charles Ákos; Morgan, Lola C; Karkar, Kameel M; Leary, Linda D; Lie, Octavian V; Girouard, Michael; Cavazos, José E

    2015-09-01

    Automatic detection of generalized tonic-clonic seizures (GTCS) will facilitate patient monitoring and early intervention to prevent comorbidities, recurrent seizures, or death. Brain Sentinel (San Antonio, Texas, USA) developed a seizure-detection algorithm evaluating surface electromyography (sEMG) signals during GTCS. This study aims to validate the seizure-detection algorithm using inpatient video-electroencephalography (EEG) monitoring. sEMG was recorded unilaterally from the biceps/triceps muscles in 33 patients (17 white/16 male) with a mean age of 40 (range 14-64) years who were admitted for video-EEG monitoring. Maximum voluntary biceps contraction was measured in each patient to set up the baseline physiologic muscle threshold. The raw EMG signal was recorded using conventional amplifiers, sampling at 1,024 Hz and filtered with a 60 Hz noise detection algorithm before it was processed with three band-pass filters at pass frequencies of 3-40, 130-240, and 300-400 Hz. A seizure-detection algorithm utilizing Hotelling's T-squared power analysis of compound muscle action potentials was used to identify GTCS and correlated with video-EEG recordings. In 1,399 h of continuous recording, there were 196 epileptic seizures (21 GTCS, 96 myoclonic, 28 tonic, 12 absence, and 42 focal seizures with or without loss of awareness) and 4 nonepileptic spells. During retrospective, offline evaluation of sEMG from the biceps alone, the algorithm detected 20 GTCS (95%) in 11 patients, on average within 20 s of the electroclinical onset of generalized tonic activity, as identified by video-EEG monitoring. Only one false-positive detection occurred during the postictal period following a GTCS, but false alarms were not triggered by other seizure types or spells. Brain Sentinel's seizure detection algorithm demonstrated excellent sensitivity and specificity for identifying GTCS recorded in an epilepsy monitoring unit. Further studies are needed in larger patient groups, including

  1. Application for pinch design of heat exchanger networks by use of a computer code employing an improved problem algorithm table

    Energy Technology Data Exchange (ETDEWEB)

    Ozkan, Semra; Dincer, Salih [Department of Chemical Engineering, Chemical-Metallurgical Faculty, Yildiz Technical University, Davutpasa Kampusu, No 127, 34210 Esenler, Istanbul (Turkey)

    2001-12-01

    In this work, the methods used in pinch design were applied to a heat exchanger network with the aid of an improved problem algorithm table. This table enables one to compose composite and grand composite curves in a simplified way. A user-friendly computer code entitled DarboTEK, compiled using Visual Basic 3.0, was developed for the design of integrated heat exchanger networks and estimation of related capital costs. Based on the data obtained from the TUPRAS petroleum refinery at Izmit, a retrofit design of heat exchanger networks was accomplished using DarboTEK. An investment of 3,576,627 dollars is needed, which will be paid back in 1.69 years simply by energy conservation due to heat integration. (Author)

  2. Stochastic optimization of GeantV code by use of genetic algorithms

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
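As a rough illustration of the tuning idea described above, here is a minimal genetic algorithm that searches a parameter vector using only point-wise fitness evaluations. The toy fitness function, population size, and mutation scale are all illustrative assumptions, not the GeantV setup.

```python
import random

def fitness(params):
    # Toy stand-in for a simulation-throughput measurement: best at (3.0, -1.0).
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

def genetic_search(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Truncation selection + averaging crossover + Gaussian mutation."""
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = tuple((ai + bi) / 2 + rng.gauss(0, 0.1)  # crossover + mutation
                          for ai, bi in zip(a, b))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search(fitness, bounds=[(-10, 10), (-10, 10)])
print(best)
```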

  3. A Model for Video Quality Assessment Considering Packet Loss for Broadcast Digital Television Coded in H.264

    Directory of Open Access Journals (Sweden)

    Jose Joskowicz

    2014-01-01

    Full Text Available This paper presents a model to predict video quality perceived by the broadcast digital television (DTV) viewer. We present how noise on DTV can introduce individual transport stream (TS) packet losses at the receiver. The type of these errors is different from those produced on IP networks. Different scenarios of TS packet loss are analyzed, including uniform and burst distributions. The results show that there is a high variability in the perceived quality for a given percentage of packet loss and type of error. This implies that there is practically no correlation between the type of error or the percentage of packet loss and the perceived degradation. A new metric is introduced, the weighted percentage of slice loss, which takes into account the affected slice type in each lost TS packet. We show that this metric is correlated with the video quality degradation. A novel parametric model for video quality estimation is proposed, designed, and verified based on the results of subjective tests in SD and HD. The results were compared to a standard model used in IP transmission scenarios. The proposed model improves the Pearson correlation and root mean square error between the subjective and the predicted MOS.
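A metric in the spirit of the "weighted percentage of slice loss" can be sketched as follows. The slice-type weights below (I > P > B) are illustrative assumptions; the record does not give the paper's actual weights.

```python
# Assumed weights: losing an I-slice propagates further than a P- or B-slice.
SLICE_WEIGHTS = {"I": 1.0, "P": 0.6, "B": 0.3}

def weighted_slice_loss(lost_slices, total_slices):
    """lost_slices: slice types ('I', 'P', 'B') affected by lost TS packets.

    Returns a percentage of total_slices, weighted by slice type.
    """
    weighted = sum(SLICE_WEIGHTS[s] for s in lost_slices)
    return 100.0 * weighted / total_slices

# Same raw loss count (4 packets of 100 slices), very different weighted impact:
print(round(weighted_slice_loss(["I", "I", "P", "B"], 100), 2))  # 2.9
print(round(weighted_slice_loss(["B", "B", "B", "B"], 100), 2))  # 1.2
```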

  4. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    CERN Document Server

    Vincenti, H; Sasanka, R; Vay, J-L

    2016-01-01

    In current computer architectures, data movement (from die to network) is by far the most energy consuming part of an algorithm (10pJ/word on-die to 10,000pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering...

  5. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    Science.gov (United States)

    Vo, Q. D.

    1984-01-01

    A program which was written to simulate Real Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.

  6. The Continual Intercomparison of Radiation Codes (CIRC) Assessing Anew the Quality of GCM Radiation Algorithms

    Science.gov (United States)

    Oreopoulos, Lazaros; Mlawer, Eli

    2010-01-01

    The simulation of changes in the Earth's climate due to solar and thermal radiative processes with global climate models (GCMs) is highly complex, depending on the parameterization of a multitude of nonlinearly coupled physical processes. In contrast, the germ of global climate change, the radiative forcing from enhanced abundances of greenhouse gases, is relatively well understood. The impressive agreement between detailed radiation calculations and highly resolved spectral radiation measurements in the thermal infrared under cloudless conditions (see, for example, Fig. 1) instills confidence in our knowledge of the sources of gaseous absorption. That the agreement spans a broad range of temperature and humidity regimes using instruments mounted on surface, aircraft, and satellite platforms not only attests to our capability to accurately calculate radiative fluxes under present conditions, but also provides confidence in the spectroscopic basis for computation of fluxes under conditions that might characterize future global climate (e.g., radiative forcing). Alas, the computational costs of highly resolved spectral radiation calculations cannot be afforded presently in GCMs. Such calculations have instead been used as the foundation for approximations implemented in fast but generally less accurate algorithms performing the needed radiative transfer (RT) calculations in GCMs. Credible climate simulations by GCMs cannot be ensured without accurate solar and thermal radiative flux calculations under all types of sky conditions: pristine cloudless, aerosol-laden, and cloudy. The need for accuracy in RT calculations is not only important for greenhouse gas forcing scenarios, but is also profoundly needed for the robust simulation of many other atmospheric phenomena, such as convective processes.

  7. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual...... information of all frequency bands, iteratively classifies blocks into different categories and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise...... modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve coding performance of TDWZ video codec up to 1.24 dB (by Bjøntegaard metric) compared to the DISCOVER TDWZ video codec....

  8. Transmission of object based fine-granular-scalability video over networks

    Science.gov (United States)

    Shi, Xu-li; Jin, Zhi-cheng; Teng, Guo-wei; Zhang, Zhao-yang; An, Ping; Xiao, Guang

    2006-05-01

    How to transmit video streams over the Internet and wireless networks is a hot focus of current research in video standards. One of the key methods is FGS (Fine-Granular-Scalability), supported by MPEG-4, which can always adapt to varying network bandwidth, but at some sacrifice of coding efficiency. An object-based video coding algorithm was first included in the MPEG-4 standard and can be applied to interactive video. However, real-time segmentation of VOPs (video object planes) is difficult, which limits the application of the MPEG-4 standard to interactive video. H.264/AVC is the up-to-date video coding standard, which enhances compression performance and provides a network-friendly video representation. In this paper, we propose a new Object-Based FGS (OBFGS) coding algorithm embedded in H.264/AVC that is different from the one in MPEG-4. After optimization of the H.264 encoder, the FGS first finishes the base-layer coding. Then it extracts the moving VOP using the base-layer information of motion vectors and DCT coefficients. The sparse motion vector field of a P-frame, composed of 4*4, 4*8, and 8*4 blocks in the base layer, is interpolated. The DCT coefficients of an I-frame are calculated using information from spatial intra-prediction. After forward-projecting each P-frame vector to the immediately adjacent I-frame, the method extracts moving VOPs (video object planes) using a recursive 4*4 block classification process. Only the blocks that belong to the moving VOP, at 4*4 block-level accuracy, are coded to produce the enhancement-layer stream. Experimental results show that our proposed system can obtain high quality for the VOPs of interest at the cost of some coding efficiency.

  9. Algorithms

    Indian Academy of Sciences (India)

    exhausted. If one of the arrays gets exhausted before the other, then the merging essentially corresponds to extending the merged array with the elements of the array that is not yet exhausted. The program for merging two arrays is shown in Table 2 . For the sake of clarity, the actual program code shown is simplified.
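The merge step described above can be sketched as follows (illustrative Python, not the article's simplified program text): two sorted arrays are scanned in parallel, and when one is exhausted, the merged array is extended with the remainder of the other.

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(a) and j < len(b):   # both arrays still have elements
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged.extend(a[i:])   # at most one of these is non-empty: extend with
    merged.extend(b[j:])   # the array that is not yet exhausted
    return merged

print(merge([1, 4, 9], [2, 3, 10, 12]))  # [1, 2, 3, 4, 9, 10, 12]
```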

  10. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  11. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m x n and n x p respectively. Then, multiplication of A by B, denoted A x B, is defined by matrix C of dimension m x p where.
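The definition above translates directly into code. A minimal sketch using plain two-dimensional lists, with A of size m x n, B of size n x p, and C = A x B of size m x p:

```python
def mat_mul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x p), giving C (m x p)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for k in range(p):
            for j in range(n):           # C[i][k] = sum_j A[i][j] * B[j][k]
                C[i][k] += A[i][j] * B[j][k]
    return C

A = [[1, 2], [3, 4]]         # 2 x 2
B = [[5, 6, 7], [8, 9, 10]]  # 2 x 3
print(mat_mul(A, B))         # [[21, 24, 27], [47, 54, 61]]
```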

  12. Convolutional-Code-Specific CRC Code Design

    OpenAIRE

    Lou, Chung-Yu; Daneshrad, Babak; Wesel, Richard D.

    2015-01-01

    Cyclic redundancy check (CRC) codes check if a codeword is correctly received. This paper presents an algorithm to design CRC codes that are optimized for the code-specific error behavior of a specified feedforward convolutional code. The algorithm utilizes two distinct approaches to computing undetected error probability of a CRC code used with a specific convolutional code. The first approach enumerates the error patterns of the convolutional code and tests if each of them is detectable. Th...
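The detectability test at the core of the enumeration approach can be sketched as follows: over GF(2), an error pattern goes undetected by a CRC exactly when the generator polynomial divides it. The 8-bit generator used here (x^8 + x^2 + x + 1) is an illustrative choice, not the paper's design output.

```python
def crc_remainder(bits, poly_bits):
    """Polynomial remainder over GF(2); bits and poly_bits are 0/1 lists."""
    bits = bits[:]  # work on a copy
    for i in range(len(bits) - len(poly_bits) + 1):
        if bits[i]:                          # leading term present: subtract
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p             # XOR is GF(2) subtraction
    return bits[-(len(poly_bits) - 1):]      # degree < deg(poly) remainder

CRC8 = [1, 0, 0, 0, 0, 0, 1, 1, 1]  # x^8 + x^2 + x + 1 (illustrative generator)

def undetected(error_pattern):
    """True if the CRC cannot detect this error pattern."""
    return not any(crc_remainder(error_pattern, CRC8))

# A shifted copy of the generator itself is an undetectable error pattern...
print(undetected([0] * 7 + CRC8))             # True
# ...while a single-bit error is always caught by a generator with >= 2 terms.
print(undetected([0] * 10 + [1] + [0] * 5))   # False
```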

  13. “First-person view” of pathogen transmission and hand hygiene – use of a new head-mounted video capture and coding tool

    Directory of Open Access Journals (Sweden)

    Lauren Clack

    2017-10-01

    Full Text Available Abstract Background Healthcare workers’ hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. Methods A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO ‘Five Moments for Hand Hygiene’. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Results Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW’s own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. “colonization events”, and 217 from any surface to critical sites, i.e. “infection events”. Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. Conclusions The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of

  14. "First-person view" of pathogen transmission and hand hygiene - use of a new head-mounted video capture and coding tool.

    Science.gov (United States)

    Clack, Lauren; Scotoni, Manuela; Wolfensberger, Aline; Sax, Hugo

    2017-01-01

    Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW's own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care that may help to design

  15. Watermarking in H.264/AVC compressed domain using Exp-Golomb code words mapping

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2011-09-01

    In this paper, a fast watermarking algorithm for H.264/AVC compressed video using Exponential-Golomb (Exp-Golomb) code word mapping is proposed. During the embedding process, the eligible Exp-Golomb code words of reference frames are first identified, and then the mapping rules between these code words and the watermark bits are established. Watermark embedding is performed by modulating the corresponding Exp-Golomb code words, based on the established mapping rules. The watermark information can be extracted directly from the encoded stream without resorting to the original video, and merely requires parsing the Exp-Golomb codes from the bit stream rather than decoding the video. Experimental results show that the proposed watermarking scheme can effectively embed information with no bit rate increase and almost no quality degradation. The algorithm, however, is fragile, and re-encoding at alternate bit rates or transcoding removes the watermark.
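Unsigned Exp-Golomb coding itself, as used for many H.264/AVC syntax elements, is simple to sketch: value v is coded as a run of leading zeros followed by the binary form of v + 1. The mapping rules between code words and watermark bits are specific to the paper and not reproduced here.

```python
def ue_encode(v):
    """Unsigned Exp-Golomb: 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100', ..."""
    bits = bin(v + 1)[2:]                  # binary representation of v + 1
    return "0" * (len(bits) - 1) + bits    # prefix: one leading zero per extra bit

def ue_decode(stream):
    """Decode one code word from the front of a bit string; return (value, rest)."""
    zeros = 0
    while stream[zeros] == "0":            # count the leading zeros
        zeros += 1
    v = int(stream[zeros : 2 * zeros + 1], 2) - 1
    return v, stream[2 * zeros + 1 :]

coded = "".join(ue_encode(v) for v in [0, 1, 2, 7])
print(coded)  # 10100110001000
```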

  16. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as gunpowder blasting analysis and observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames will be obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
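The TCI forward model (one coded measurement per T = 8 high-speed frames) can be sketched as follows. The tiny patch size and random binary masks are illustrative assumptions, and the TwIST/GMM reconstruction step is not shown.

```python
import random

T, H, W = 8, 4, 4   # 8 temporal frames of a tiny 4x4 patch (illustrative sizes)
rng = random.Random(0)

# Synthetic high-speed frames and per-frame binary coded masks.
frames = [[[rng.random() for _ in range(W)] for _ in range(H)] for _ in range(T)]
masks  = [[[rng.randint(0, 1) for _ in range(W)] for _ in range(H)] for _ in range(T)]

# Compressive measurement: y = sum_t mask_t * x_t (element-wise),
# i.e. one coded frame stands in for 8 video frames.
measurement = [[sum(masks[t][i][j] * frames[t][i][j] for t in range(T))
                for j in range(W)] for i in range(H)]
print(len(measurement), len(measurement[0]))  # 4 4
```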

  17. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBCs), which keep the bit rate constant but vary the video quality; and Variable Bit rate Codecs (VBCs), which keep the video quality constant by varying the bit rate. VBCs can in general reach a higher video quality than CBCs using less bandwidth, but they need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  18. Instructional DVD video lesson with code switching: its effect on the performance level in dancing social dance among grade 10 special program in the art students of the Philippines

    Directory of Open Access Journals (Sweden)

    Capilitan Fernando T.

    2017-01-01

    Full Text Available This paper shows that the experimental group, exposed to a DVD video lesson that uses code switching, had an average mean score of 1.56 in the pretest, which increased to an average mean of 3.50 in the posttest. The control group, which used a DVD video lesson in purely English, had an average mean of 1.06 in the pretest, which increased to 1.53 in the posttest. Based on the results of the performance posttest taken by the two groups, the experimental group showed a dramatic increase in scores from pretest to posttest. Although both groups increased their performance scores from pretest to posttest, the experimental group (code switching) performed better in the posttest than the control group. As revealed in these findings, there is a significant difference in posttest scores between the experimental group, exposed to a DVD lesson that uses code switching as the medium of instruction, and the control group, exposed to a DVD lesson that uses English. The students exposed to the video lesson that uses code switching performed better than those exposed to the DVD video lesson in purely English. A DVD video lesson that uses code switching as the medium of instruction in teaching social dance is a useful approach for teaching Grade 10 Special Program in the Arts students. The language used (code switching) is a powerful medium of instruction that enhances the learning outcomes of the students. This paper could be an eye opener for the Department of Education to promote the use of the first/local language or MTB-MLE, not only in Grades I to III but at all levels of the K to 12 program, since education is a key factor in building a better nation.

  19. Differences in the causes of death of HIV-positive patients in a cohort study by data sources and coding algorithms.

    Science.gov (United States)

    Hernando, Victoria; Sobrino-Vegas, Paz; Burriel, M Carmen; Berenguer, Juan; Navarro, Gemma; Santos, Ignacio; Reparaz, Jesús; Martínez, M Angeles; Antela, Antonio; Gutiérrez, Félix; del Amo, Julia

    2012-09-10

    To compare causes of death (CoDs) from two independent sources: the National Basic Death File (NBDF) and deaths reported to the Spanish HIV Research cohort [Cohort de adultos con infección por VIH de la Red de Investigación en SIDA (CoRIS)], and to compare the two coding algorithms: International Classification of Diseases, 10th revision (ICD-10) and the revised version of Coding Causes of Death in HIV (revised CoDe). Between 2004 and 2008, CoDs were obtained from the cohort records (free text, multiple causes) and also from the NBDF (ICD-10). CoDs from CoRIS were coded according to ICD-10 and revised CoDe by a panel. Deaths were compared by 13 disease groups: HIV/AIDS, liver diseases, malignancies, infections, cardiovascular, blood disorders, pulmonary, central nervous system, drug use, external, suicide, other causes and ill-defined. There were 160 deaths. Concordance for the 13 groups was observed in 111 (69%) cases for the two sources and in 115 (72%) cases for the two coding algorithms. According to revised CoDe, the commonest CoDs were HIV/AIDS (53%), non-AIDS malignancies (11%) and liver-related (9%); these percentages were similar, 57, 10 and 8%, respectively, for NBDF (coded as ICD-10). When using ICD-10 to code deaths in CoRIS, wherein HIV infection was known in everyone, the proportion of non-AIDS malignancies was 13% and liver-related causes accounted for 3%, while HIV/AIDS reached 70% due to liver-related, infectious and ill-defined causes being coded as HIV/AIDS. There is substantial variation in CoDs in HIV-infected persons according to sources and algorithms. ICD-10 in patients known to be HIV-positive overestimates HIV/AIDS-related deaths at the expense of underestimating liver-related diseases, infections and ill-defined causes. CoDe seems to be the best option for cohort studies.

  20. Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm.

    Science.gov (United States)

    Precioso, Frederic; Barlaud, Michel; Blu, Thierry; Unser, Michael

    2005-07-01

    This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active contour method based on B-spline interpolation has been proposed to greatly reduce this computational cost, but it is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved.
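    The interpolation-versus-smoothing trade-off described above can be illustrated with a discrete analogue of smoothing splines: a Whittaker-style penalized least-squares smoother (a sketch in Python/NumPy, not the authors' B-spline snake implementation). The penalty weight `lam` plays the role of the tunable interpolation error.

    ```python
    import numpy as np

    # Noisy samples of a smooth curve (a stand-in for noisy contour coordinates).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0 * np.pi, 50)
    clean = np.sin(t)
    noisy = clean + rng.normal(scale=0.15, size=t.size)

    def penalized_smoother(y, lam):
        """Minimize ||y - z||^2 + lam * ||D2 z||^2, where D2 is the second-
        difference operator; lam=0 reproduces the data exactly (rigid
        interpolation), larger lam trades fit error for smoothness."""
        n = y.size
        d2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
        return np.linalg.solve(np.eye(n) + lam * d2.T @ d2, y)

    rigid = penalized_smoother(noisy, 0.0)     # passes through every noisy sample
    smooth = penalized_smoother(noisy, 10.0)   # smoother curve, small fit error

    err_rigid = float(np.mean((rigid - clean) ** 2))
    err_smooth = float(np.mean((smooth - clean) ** 2))
    # The smoothed curve lies closer to the underlying clean contour.
    ```

    With `lam=0` the smoother reproduces the noisy data (the rigid interpolation constraint); a moderate `lam` suppresses the noise while barely distorting the slowly varying signal, which is the effect the abstract exploits for robustness.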

  1. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access, as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been chosen as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information over conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful for adapting pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also the spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that, compared to pure INTRA-based conditional replenishment solutions, the addition of the parity-bit option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.

  2. MATLAB code to estimate landslide volume from single remote sensed image using genetic algorithm and imagery similarity measurement

    Science.gov (United States)

    Wang, Ting-Shiuan; Yu, Teng-To; Lee, Shing-Tsz; Peng, Wen-Fei; Lin, Wei-Ling; Li, Pei-Ling

    2014-09-01

    Information regarding the scale of a hazard is crucial for the evaluation of its associated impact. Quantitative analysis of landslide volume immediately following the event can offer better understanding and control of contributory factors and their relative importance. Such information cannot be gathered for each landslide event, owing to limitations in obtaining useable raw data and the necessary procedures of each applied technology. Empirical rules are often used to predict volume change, but the resulting accuracy is very low. Traditional methods use photogrammetry or light detection and ranging (LiDAR) to produce a post-event digital terrain model (DTM). These methods are both costly and time-intensive. This study presents a technique to estimate terrain change volumes quickly and easily, not only reducing waiting time but also offering results with less than 25% error. A genetic algorithm (GA) programmed in MATLAB is used to predict the elevation change for each pixel of an image. This deviation from the pre-event DTM becomes a candidate for the post-event DTM. Each candidate DTM is then converted into a shaded-relief (hillshade) image and compared with a single post-event remotely sensed image for similarity ranking. The candidates ranked in the top two thirds are retained as parent chromosomes to produce offspring in the next generation according to the rules of GAs. When the highest similarity index reaches 0.75, the DTM corresponding to that hillshade image is taken as the calculated post-event DTM. As an example, a pit with known volume is removed from a flat, inclined plane to demonstrate the theoretical capability of the code. The method is able to rapidly estimate the volume of terrain change within an error of 25%, without the delays involved in obtaining stereo image pairs, or the need for ground control points (GCPs) or professional photogrammetry software.
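    The GA loop described above can be sketched in miniature (a toy in Python rather than MATLAB; the target vector, similarity index, and genetic operators are simplified stand-ins for the paper's hillshade comparison): each chromosome is a vector of per-pixel elevation changes, the top two thirds survive as parents, and the search stops when the similarity index reaches 0.75.

    ```python
    import random

    random.seed(1)

    TARGET = [0, -2, -3, -2, 0, 0]   # hypothetical true elevation change (a small pit)

    def similarity(candidate):
        """Toy similarity index in (0, 1]; the paper compares hillshade images."""
        sse = sum((c - t) ** 2 for c, t in zip(candidate, TARGET))
        return 1.0 / (1.0 + sse)

    def make_individual():
        return [random.randint(-4, 0) for _ in TARGET]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(ind, rate=0.2):
        return [max(-4, min(0, g + random.choice((-1, 1)))) if random.random() < rate else g
                for g in ind]

    population = [make_individual() for _ in range(40)]
    best = max(population, key=similarity)
    for generation in range(300):
        population.sort(key=similarity, reverse=True)
        parents = population[: 2 * len(population) // 3]      # retain top two thirds
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children
        best = max(population, key=similarity)
        if similarity(best) >= 0.75:                          # stopping rule from the abstract
            break

    volume = -sum(best)   # estimated removed volume, in pixel units
    ```

    Because the fittest parents always survive, the best similarity is non-decreasing across generations, mirroring how the paper's candidate DTMs converge toward the observed post-event image.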

  3. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial, and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs with only a modest loss in coding efficiency. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  4. Spacelabs Innovative Project Award winner--2008. Megacode simulation workshop and education video--a megatonne of care and Code blue: live and interactive.

    Science.gov (United States)

    Loucks, Lynda; Leskowski, Jessica; Fallis, Wendy

    2010-01-01

    Skill acquisition and knowledge translation of best practices can be successfully facilitated using simulation methods. The 2008 Spacelabs Innovative Project Award was awarded for a unique training workshop that used simulation in the area of cardiac life support and resuscitation to train multiple health care personnel in basic and advanced skills. The megacode simulation workshop and education video was an educational event held in 2007 in Winnipeg, MB, for close to 60 participants and trainers from multiple disciplines across the provinces of Manitoba and Northwestern Ontario. The event included lectures, live simulation of a megacode, and hands-on training in the latest techniques in resuscitation. The goals of this project were to promote efficiency and better outcomes related to resuscitation measures, to foster teamwork, to emphasize the importance of each team member's role, and to improve knowledge and skills in resuscitation. The workshop was filmed to produce a training DVD that could be used for future knowledge enhancement and introductory training of health care personnel. Substantial positive feedback was received and evaluations indicated that participants reported improvement and expansion of their knowledge of advanced cardiac life support. Given their regular participation in cardiac arrest codes and the importance of staying up-to-date on best practice, the workshop was particularly useful to health care staff and nurses working in critical care areas. In addition, those who participate less frequently in cardiac resuscitation will benefit from the educational video for ongoing competency. Through accelerating knowledge translation from the literature to the bedside, it is hoped that this event contributed to improved patient care and outcomes with respect to advanced cardiac life support.

  5. An Evolutionary Video Assignment Optimization Technique for VOD System in Heterogeneous Environment

    Directory of Open Access Journals (Sweden)

    King-Man Ho

    2010-01-01

    We investigate the video assignment problem of a hierarchical Video-on-Demand (VOD) system in heterogeneous environments, where different quality levels of videos can be encoded using either replication or layering. In such systems, videos are delivered to clients either through a proxy server or through video broadcast/unicast channels. The objective of our work is to determine the appropriate coding strategy, as well as the suitable delivery mechanism, for a specific quality level of a video such that the overall system blocking probability is minimized. In order to find a near-optimal solution to such a complex video assignment problem, an evolutionary approach based on a genetic algorithm (GA) is proposed. The results show that system performance can be significantly enhanced by efficiently coupling the various techniques.

  7. Transcoding-Based Error-Resilient Video Adaptation for 3G Wireless Networks

    Science.gov (United States)

    Eminsoy, Sertac; Dogan, Safak; Kondoz, Ahmet M.

    2007-12-01

    Transcoding is an effective method to provide video adaptation for heterogeneous internetwork video access and communication environments, which require the tailoring (i.e., repurposing) of coded video properties to channel conditions, terminal capabilities, and user preferences. This paper presents a video transcoding system that is capable of applying a suite of error resilience tools to the input compressed video streams while controlling the output rates to provide robust communications over error-prone and bandwidth-limited 3G wireless networks. The transcoder is also designed to employ a new adaptive intra-refresh algorithm, which is responsive to the detected scene activity inherently embedded in the video content and to the reported time-varying channel error conditions of the wireless network. Comprehensive computer simulations demonstrate significant improvements in the received video quality using the new transcoding architecture without extra computational cost.

  8. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide twice the spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  9. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the turn of the millennium. Video analytics is intended to solve the problem of the inability to exploit video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  10. Instance-based Policy Learning by Real-coded Genetic Algorithms and Its Application to Control of Nonholonomic Systems

    Science.gov (United States)

    Miyamae, Atsushi; Sakuma, Jun; Ono, Isao; Kobayashi, Shigenobu

    The stabilization control of nonholonomic systems has been extensively studied because it is essential for nonholonomic robot control problems. The difficulty in this problem is that a theoretical derivation of the control policy is not guaranteed to be achievable. In this paper, we present a reinforcement learning (RL) method with an instance-based policy (IBP) representation, in which control policies for this class are optimized with respect to user-defined cost functions. Direct policy search (DPS) is an approach to RL in which the policy is represented by parametric models and the model parameters are directly searched by optimization techniques, including genetic algorithms (GAs). In IBP representation an instance consists of a state-action pair, and a policy consists of a set of instances. Several DPSs with IBP have been proposed previously. These methods sometimes fail to obtain optimal control policies when state-action variables are continuous. In this paper, we present a real-coded GA for DPSs with IBP. Our method is specifically designed for continuous domains. Optimization of IBP has three difficulties: high dimensionality, epistasis, and multi-modality. Our solution is designed to overcome these difficulties. Policy search with IBP representation appears to be a high-dimensional optimization problem; however, the instances that can improve the fitness are often limited to active instances (instances used for the evaluation), and in fact the number of active instances is small. Therefore, we treat the search problem as a low-dimensional problem by restricting the search variables to active instances. It is commonly known that functions with epistasis can be efficiently optimized with crossovers that satisfy the inheritance of statistics. For an efficient search of IBP, we propose an extended crossover-like mutation (extended XLM), which generates a new instance around an existing instance while satisfying the inheritance of statistics. 
For overcoming multi-modality, we

  11. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  12. Downlink Video Streaming for Users Non-Equidistant from Base Station

    Directory of Open Access Journals (Sweden)

    S. Pejoski

    2012-06-01

    We consider multiuser video transmission for users that are non-equidistantly positioned from the base station. We propose a greedy algorithm for video streaming in a wireless system with capacity-achieving channel coding, which implements the cross-layer principle by partially separating the physical and application layers. In such a system, the parameters at the physical layer depend on the packet length and the conditions in the wireless channel, while the parameters at the application layer depend on the reduction of the expected distortion assuming no packet errors in the system. We also address fairness in a multiuser video system with non-equidistantly positioned users. Our fairness algorithm is based on modified opportunistic round-robin scheduling. We evaluate the performance of the proposed algorithms by simulating the transmission of H.264/AVC video signals in a TDMA wireless system.
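    The opportunistic round-robin idea behind the fairness algorithm can be sketched as follows (a hypothetical Python sketch, not the authors' scheduler): within each round every user is served exactly once, and the pending user with the best instantaneous channel is picked first, combining fairness with opportunism.

    ```python
    import random

    random.seed(7)

    def opportunistic_round_robin(num_users, num_slots, channel_gain):
        """Each round, every user is served exactly once; within a round the
        pending user with the best instantaneous channel goes first."""
        schedule = []
        pending = set(range(num_users))
        for slot in range(num_slots):
            if not pending:                       # start a new round: fairness
                pending = set(range(num_users))
            gains = {u: channel_gain(u, slot) for u in pending}
            chosen = max(gains, key=gains.get)    # opportunism within the round
            schedule.append(chosen)
            pending.discard(chosen)
        return schedule

    # Users nearer the base station tend to have higher gains, yet every user
    # is still served once per round of three slots.
    gain = lambda user, slot: random.random() * (3 - user)
    sched = opportunistic_round_robin(3, 9, gain)
    counts = [sched.count(u) for u in range(3)]
    print(counts)  # [3, 3, 3] — one service per user per round
    ```

    Plain opportunistic scheduling would starve the distant users; the per-round constraint guarantees each user equal service opportunities while still exploiting channel variations inside the round.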

  13. Dynamic iteration stopping algorithm for non-binary LDPC-coded high-order PRCPM in the Rayleigh fading channel

    National Research Council Canada - National Science Library

    Xue, Rui; Sun, Yanbo; Wei, Qiang

    2016-01-01

    This paper mainly studies the association between non-binary low-density parity-check codes and high-order partial response continuous phase modulation, which prevents information loss in the mutual...

  14. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows us to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
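    The unique decipherability property that coding partitions generalize can be tested, for a finite code, with the classical Sardinas-Patterson algorithm. The sketch below implements that standard UD check (not the paper's canonical-partition procedure).

    ```python
    def is_uniquely_decipherable(code):
        """Sardinas-Patterson test: a finite code is UD iff no dangling suffix
        is itself a codeword."""
        code = set(code)

        def dangling(a_set, b_set):
            """Suffixes left when a word of a_set is a proper prefix of one of b_set."""
            return {b[len(a):] for a in a_set for b in b_set
                    if a != b and b.startswith(a)}

        current = dangling(code, code)   # S_1: suffixes from codeword/codeword overlaps
        seen = set()
        while current:
            if current & code:           # a dangling suffix equals a codeword: ambiguity
                return False
            seen |= current
            current = (dangling(current, code) | dangling(code, current)) - seen
        return True

    print(is_uniquely_decipherable({"0", "10", "110"}))  # prefix code -> True
    # "011" reappears as a dangling suffix, so this set is not UD -> False
    print(is_uniquely_decipherable({"1", "011", "01110", "1110", "10011"}))
    ```

    The suffix sets saturate because every element is a suffix of some codeword, so the loop always terminates; this is the finite-code decision procedure on which partition-level decipherability questions build.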

  15. Advanced algorithms for information science

    Energy Technology Data Exchange (ETDEWEB)

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  16. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Science.gov (United States)

    Tizon, Nicolas; Pesquet-Popescu, Béatrice

    2008-12-01

    This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel under consideration is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides a substantial quality gain compared to classical packet transmission methods, and they demonstrate how ROI coding combined with SNR scalability further improves the visual quality.

  17. SPECIAL REPORT: Creating Conference Video

    Directory of Open Access Journals (Sweden)

    Noel F. Peden

    2008-12-01

    Capturing video at a conference is easy. Doing it so the product is useful is another matter. Many subtle problems come into play before the video and audio obtained can be used to create a final product. This article discusses what the author learned in two years of shooting and editing video for the Code4Lib conference.

  18. An implement of fast hiding data into H.264 bitstream based on intra-prediction coding

    Science.gov (United States)

    Hua, Cao; Jingli, Zhou; Shengsheng, Yu

    2005-10-01

    Digital watermarking is a technique that embeds an invisible signal, including owner identification and copy-control information, into multimedia data such as images, audio and video for copyright protection. This paper proposes a blind, robust algorithm for rapidly hiding data in an H.264 video stream; copyright protection can be achieved by embedding the robust watermark during intra-prediction encoding, which is characteristic of the H.264 standard. The scheme is fully compatible with the H.264 video coding standard and can extract the embedded data directly from the watermarked H.264 compressed video stream without using the original video. Experimental results demonstrate that the scheme is computationally efficient during watermark embedding and extraction, and that embedding the data does not significantly increase the bit-rate of the H.264 bit-stream. The algorithm is feasible for real-time system implementation.

  19. EFFICIENT BLOCK MATCHING ALGORITHMS FOR MOTION ESTIMATION IN H.264/AVC

    Directory of Open Access Journals (Sweden)

    P. Muralidhar

    2015-02-01

    In Scalable Video Coding (SVC), motion estimation and inter-layer prediction play an important role in the elimination of temporal and spatial redundancies between consecutive layers. This paper evaluates the performance of widely accepted block matching algorithms used in various video compression standards, with emphasis on their performance in a didactic scalable video codec. Many implementations of fast motion estimation algorithms have been proposed to reduce motion estimation complexity. The block matching algorithms have been analyzed with emphasis on Peak Signal-to-Noise Ratio (PSNR) and computation count using MATLAB. In addition to these comparisons, a survey has been done on spiral search motion estimation algorithms for video coding. A New Modified Spiral Search (NMSS) motion estimation algorithm is proposed with lower computational complexity. The proposed algorithm achieves a 72% reduction in computation with a minimal (<1 dB) reduction in PSNR. A brief introduction to the entire flow of video compression in H.264/SVC is also presented in this paper.
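    As a concrete baseline for the block matching algorithms compared above, a minimal exhaustive (full) search with the sum of absolute differences (SAD) criterion can be sketched; fast methods such as spiral or three-step searches visit far fewer of these candidate displacements. Frame sizes and pixel values below are made up for illustration.

    ```python
    def sad(cur, ref, bx, by, dx, dy, n):
        """Sum of absolute differences between the n-by-n block of the current
        frame at (bx, by) and the reference block displaced by (dx, dy)."""
        total = 0
        for y in range(n):
            for x in range(n):
                total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
        return total

    def full_search(cur, ref, bx, by, n=4, radius=4):
        """Exhaustive block matching over a (2*radius+1)^2 search window."""
        h, w = len(ref), len(ref[0])
        best, best_mv = None, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if 0 <= by + dy and by + dy + n <= h and 0 <= bx + dx and bx + dx + n <= w:
                    cost = sad(cur, ref, bx, by, dx, dy, n)
                    if best is None or cost < best:
                        best, best_mv = cost, (dx, dy)
        return best_mv, best

    # A textured 4x4 patch at (6, 6) in the reference moves by (+2, +1) pixels.
    ref = [[0] * 16 for _ in range(16)]
    for y in range(4):
        for x in range(4):
            ref[6 + y][6 + x] = 50 + 10 * y + x
    cur = [[0] * 16 for _ in range(16)]
    for y in range(4):
        for x in range(4):
            cur[7 + y][8 + x] = ref[6 + y][6 + x]

    mv, cost = full_search(cur, ref, 8, 7)
    print(mv, cost)  # (-2, -1) 0 : the motion vector pointing back to the patch
    ```

    Full search guarantees the minimum-SAD vector at a cost of 81 SAD evaluations per block here; the fast searches surveyed in the paper trade that guarantee for a large reduction in evaluations.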

  20. Fine-Grained Rate Shaping for Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Chen Tsuhan

    2004-01-01

    Video streaming over wireless networks faces the challenges of a time-varying packet loss rate and fluctuating bandwidth. In this paper, we focus on streaming precoded video that is both source and channel coded. Dynamic rate shaping has been proposed to “shape” the precompressed video to adapt to the fluctuating bandwidth. In our earlier work, rate shaping was extended to shape channel-coded precompressed video, and to take into account the time-varying packet loss rate as well as the fluctuating bandwidth of wireless networks. However, prior work on rate shaping can only adjust the rate coarsely. In this paper, we propose “fine-grained rate shaping” (FGRS) to allow for bandwidth adaptation over a wide range of bandwidths and packet loss rates in fine granularities. The video is precoded with fine granularity scalability (FGS) followed by channel coding. Utilizing the fine granularity property of FGS and channel coding, FGRS selectively drops part of the precoded video and still yields a decodable bit-stream at the decoder. Moreover, FGRS optimizes video streaming rather than pursuing heuristic objectives as conventional methods do. A two-stage rate-distortion (RD) optimization algorithm is proposed for FGRS. Promising results for FGRS are shown.

  1. Error-erasure decoding of product codes.

    Science.gov (United States)

    Wainberg, S.

    1972-01-01

    Two error-erasure decoding algorithms for product codes are given that correct all error-erasure patterns guaranteed correctable by the minimum Hamming distance of the product code. The first algorithm works when at least one of the component codes is majority-logic decodable. The second algorithm works for any product code. Both algorithms use the decoders of the component codes.
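    The flavor of component-wise decoding can be illustrated on the simplest product code, a single-parity-check row/column product code, correcting erasures only (a toy sketch; the paper's algorithms handle general component codes and errors as well).

    ```python
    def encode_product(data):
        """Append even parity to each row and each column of a k-by-k data block."""
        k = len(data)
        rows = [row + [sum(row) % 2] for row in data]
        cols = [sum(rows[r][c] for r in range(k)) % 2 for c in range(k + 1)]
        return rows + [cols]

    def decode_erasures(word):
        """Iteratively fill erased symbols (None) using the row/column decoders:
        any row or column with a single erasure determines it by parity."""
        n = len(word)
        word = [row[:] for row in word]
        changed = True
        while changed:
            changed = False
            for r in range(n):
                missing = [c for c in range(n) if word[r][c] is None]
                if len(missing) == 1:
                    word[r][missing[0]] = sum(v for v in word[r] if v is not None) % 2
                    changed = True
            for c in range(n):
                missing = [r for r in range(n) if word[r][c] is None]
                if len(missing) == 1:
                    word[missing[0]][c] = sum(word[i][c] for i in range(n)
                                              if word[i][c] is not None) % 2
                    changed = True
        return word

    code = encode_product([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
    received = [row[:] for row in code]
    received[0][0] = received[0][2] = received[2][0] = None  # three erasures
    decoded = decode_erasures(received)
    ```

    Filling the single erasure in row 2 leaves column 0 with one erasure, which in turn leaves row 0 solvable: alternating between row and column decoders recovers patterns that neither component decoder could fix alone, which is exactly the appeal of product-code decoding.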

  2. Adaptive deblocking and deringing of H.264/AVC video sequences

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Forchhammer, Søren

    2013-01-01

    We present a method to reduce blocking and ringing artifacts in H.264/AVC video sequences. For deblocking, the proposed method uses a quality measure of a block-based coded image to find filtering modes. Based on the filtering modes, the images are segmented into three classes and a specific deblocking filter is applied to each class. Deringing is obtained by an adaptive bilateral filter; spatial and intensity spread parameters are selected adaptively using texture and edge mapping. The analysis of objective and subjective experimental results shows that the proposed algorithm is effective in deblocking and deringing low bit-rate H.264 video sequences.
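    The deringing step can be sketched with a plain bilateral filter in Python/NumPy; in the paper the spread parameters are chosen adaptively from texture and edge maps, whereas here they are fixed, assumed values.

    ```python
    import numpy as np

    def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=20.0):
        """Brute-force bilateral filter: a spatial Gaussian times an intensity
        (range) Gaussian, so smoothing stops at strong edges — ripples near an
        edge (ringing) are attenuated while the edge itself is kept sharp."""
        img = img.astype(np.float64)
        h, w = img.shape
        pad = np.pad(img, radius, mode="edge")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        out = np.empty_like(img)
        for y in range(h):
            for x in range(w):
                window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                rng_w = np.exp(-((window - img[y, x])**2) / (2 * sigma_r**2))
                weights = spatial * rng_w
                out[y, x] = np.sum(weights * window) / np.sum(weights)
        return out

    # A step edge with small ringing-like ripples on both flat sides.
    row = np.array([10., 14., 10., 14., 10., 200., 204., 200., 204., 200.])
    img = np.tile(row, (10, 1))
    res = bilateral_filter(img)
    # The ripples are flattened while the 10 -> 200 edge survives.
    ```

    Pixels across the edge differ by ~190 intensity levels, so their range weight is essentially zero: each side is averaged only with itself, which is why the filter suppresses ringing without blurring the edge.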

  3. Multiview video codec based on KTA techniques

    Science.gov (United States)

    Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon

    2011-03-01

    Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because the resolutions of multi-view video are increasing for more realistic 3D effects, a higher-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes temporal redundancy, and inter-view prediction reduces inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency, because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on the Key Technology Area (KTA) is proposed. KTA is a high-efficiency video codec by the Video Coding Experts Group (VCEG), developed to push coding efficiency beyond H.264/AVC. The KTA software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and inter-view prediction are implemented in the proposed codec, which showed high coding gain compared with the view-independent coding result by KTA. The results show that inter-view prediction can achieve higher efficiency in a multi-view video codec based on a high-performance video codec such as HEVC.

  4. Error Concealment for 3-D DWT Based Video Codec Using Iterative Thresholding

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Forchhammer, Søren; Codreanu, Marian

    2017-01-01

    Error concealment for video coding based on a 3-D discrete wavelet transform (DWT) is considered. We assume that the video sequence has a sparse representation in a known basis different from the DWT, e.g., in a 2-D discrete cosine transform basis. Then, we formulate the concealment problem as l1-norm minimization and solve it utilizing an iterative thresholding algorithm. Comparing different thresholding operators, we show that video block-matching and 3-D filtering provide the best reconstruction by utilizing spatial similarity within a frame and temporal similarity between neighboring frames.
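    The l1-norm minimization can be solved with the basic iterative soft-thresholding algorithm (ISTA); the sketch below uses a random sensing matrix and a synthetic sparse signal as stand-ins for the paper's DWT/DCT video setting.

    ```python
    import numpy as np

    def ista(A, y, lam=0.1, step=None, iters=500):
        """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)                 # gradient of the quadratic term
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

    # A 3-sparse signal observed through 40 random linear measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 80)) / np.sqrt(40)
    x_true = np.zeros(80)
    x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
    y = A @ x_true

    x_hat = ista(A, y, lam=0.01, iters=3000)
    ```

    Each iteration is one gradient step on the data-fit term followed by a soft threshold, which shrinks small coefficients to exactly zero; with a sparse underlying signal the iterates recover it to within the small bias introduced by `lam`.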

  5. Pathway Detection from Protein Interaction Networks and Gene Expression Data Using Color-Coding Methods and A* Search Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2012-01-01

    With the wide availability of protein interaction networks and supporting microarray data, identifying linear paths of biological significance in the search for a potential pathway is a challenging issue. We propose a color-coding method based on the characteristics of biological network topology and apply heuristic search to speed up the color-coding method. In the experiments, we tested our methods by applying them to two datasets: yeast and human prostate cancer networks with gene expression data. Comparisons of our method with other existing methods on known yeast MAPK pathways, in terms of precision and recall, show that we can find the maximum number of proteins and perform comparably well. On the other hand, our method is more efficient than previous ones and detects paths of length 10 within 40 seconds using an Intel 1.73 GHz CPU and 1 GB of main memory under the Windows operating system.
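    The color-coding idea underlying the search can be shown in its generic form (a minimal Python sketch of the classical randomized algorithm, not the authors' weighted, heuristically accelerated variant): vertices are randomly colored with k colors, and a "colorful" path — one using each color once, hence necessarily simple — is found by dynamic programming over (color subset, end vertex) states.

    ```python
    import random

    def find_colorful_path(adj, k, trials=200, seed=0):
        """Color-coding: return a simple path on k vertices if one is found."""
        rng = random.Random(seed)
        for _ in range(trials):
            color = {v: rng.randrange(k) for v in adj}
            # table[(colors, v)] = a path ending at v whose vertices use
            # exactly the color set `colors`, one vertex per color.
            table = {(frozenset([color[v]]), v): [v] for v in adj}
            for size in range(1, k):
                for (colors, v), path in list(table.items()):
                    if len(colors) != size:
                        continue
                    for u in adj[v]:
                        if color[u] not in colors:
                            key = (colors | {color[u]}, u)
                            if key not in table:
                                table[key] = path + [u]
            for (colors, _), path in table.items():
                if len(colors) == k:
                    return path          # distinct colors imply a simple path
        return None

    graph = {
        "a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"], "f": [],
    }
    path = find_colorful_path(graph, 4)
    ```

    A fixed path of k vertices becomes colorful with probability k!/k^k, so a modest number of random colorings finds it with high probability; the DP runs in O(2^k · |E|) per trial instead of enumerating all simple paths.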

  6. A Global Clustering Algorithm to Identify Long Intergenic Non-Coding RNA - with Applications in Mouse Macrophages

    OpenAIRE

    Garmire, Lana X.; Garmire, David G.; Huang, Wendy; Yao, Joyee; Glass, Christopher K.; Subramaniam, Shankar

    2011-01-01

    Identification of diffuse signals from chromatin immunoprecipitation followed by high-throughput massively parallel sequencing (ChIP-Seq) poses significant computational challenges, and few methods are currently available. We present a novel global clustering approach to enrich diffuse ChIP-Seq signals of RNA polymerase II and histone 3 lysine 4 trimethylation (H3K4Me3) and apply it to identify putative long intergenic non-coding RNAs (lincRNAs) in macrophage cells. Our global cl

  7. Content-based image and video compression

    Science.gov (United States)

    Du, Xun; Li, Honglin; Ahalt, Stanley C.

    2002-08-01

    The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content, and while compression is not the primary focus of MPEG-7, the descriptors defined by MPEG-7 can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that could be accomplished with MPEG-7 using descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or a video in order to ensure that the recipient is able to reconstruct the image or video to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer which minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression is able to achieve more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.

  8. Divergence coding for convolutional codes

    Directory of Open Access Journals (Sweden)

    Valery Zolotarev

    2017-01-01

    Full Text Available In this paper we propose a new coding/decoding scheme based on the divergence principle. The new divergent multithreshold decoder (MTD) for convolutional self-orthogonal codes contains two threshold elements; the second threshold element decodes a code whose code distance is one greater than that of the first. The error-correcting capability of the new MTD modification is higher than that of the traditional MTD. Simulation results show that the divergent schemes bring the region of effective operation approximately 0.5 dB closer to channel capacity. Moreover, if a sufficiently effective Viterbi decoder is used in place of the first threshold element, the divergence principle can achieve even more. Index Terms — error-correcting coding, convolutional code, decoder, multithreshold decoder, Viterbi algorithm.

  9. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2007-12-01

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  10. Indexing, Browsing, and Searching of Digital Video.

    Science.gov (United States)

    Smeaton, Alan F.

    2004-01-01

    Presents a literature review that covers the following topics related to indexing, browsing, and searching of digital video: video coding and standards; conventional approaches to accessing digital video; automatically structuring and indexing digital video; searching, browsing, and summarization; measurement and evaluation of the effectiveness of…

  11. Error Protection of Wavelet Scalable Video Streaming Using Wyner-Ziv Technique over a Lossy Network

    Science.gov (United States)

    Liu, Benjian; Xu, Ke

    This paper presents a novel error resilience scheme for wavelet scalable video coding. We use a Wyner-Ziv codec to produce extra bits that protect the important parts of the embedded video stream. These bits, which also serve as a second description of the important parts, are transmitted over an auxiliary channel to the receiver for error resilience. Errors in the embedded video stream can be corrected by the Wyner-Ziv description, which treats the decoded frame as side information. Moreover, the Wyner-Ziv decoder exploits frame correlation in the wavelet video decoder to turn a coarse estimate of the corrupted parts into a refined version. Simulation results show that the proposed method achieves much better performance than a Forward Error Correction code, and the error-resilient algorithm achieves 2-3 dB PSNR gains over motion-compensated error concealment.

  12. Efficient coding unit partition strategy for HEVC intracoding

    Science.gov (United States)

    Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Wang, Yi; Yu, Daoyin

    2017-07-01

    As the newest international video compression standard, high efficiency video coding (HEVC) achieves a higher compression ratio and better video quality compared with the previous standard, H.264/advanced video coding. However, the higher compression efficiency is obtained at the cost of an extraordinary computational load, which obstructs the implementation of the HEVC encoder for real-time applications and mobile devices. Intracoding is one of the computationally demanding stages due to the flexible coding unit (CU) sizes and the high density of angular prediction modes. This paper presents an intraencoding technique to speed up the process, composed of an early coding tree unit (CTU) depth interval prediction and an efficient CU partition method. The CU depth information in the already encoded surrounding CTUs is utilized to predict the CU search depth interval of the current CTU. By analyzing the textural features of a CU, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs or not. The experimental results indicate that the proposed algorithm outperforms the reference software HM16.7 by decreasing the coding time by up to 53.67%, with a negligible bit rate increase of 0.52% and a peak signal-to-noise ratio loss of less than 0.06 dB.
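
The early-termination idea, stop splitting a CU once its texture looks homogeneous, can be sketched as a quadtree recursion. This is a hedged illustration, not the HM-based algorithm: the variance measure, the threshold, and the minimum CU size are arbitrary stand-ins for the paper's textural features.

```python
# Quadtree CU partitioning with texture-based early split termination:
# a region whose pixel variance falls below a threshold is kept as one
# leaf CU instead of being split into four sub-CUs.

def variance(pixels):
    n = len(pixels)
    m = sum(pixels) / n
    return sum((p - m) ** 2 for p in pixels) / n

def partition(frame, x, y, size, min_size=8, threshold=25.0):
    """Return the list of (x, y, size) leaf CUs covering a size x size
    region of `frame` (a dict mapping (x, y) -> pixel value)."""
    pixels = [frame[(x + i, y + j)] for i in range(size) for j in range(size)]
    if size <= min_size or variance(pixels) < threshold:
        return [(x, y, size)]          # early termination: do not split
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += partition(frame, x + dx, y + dy, half, min_size, threshold)
    return leaves

# A 16x16 region: flat left half, textured right half.
frame = {(i, j): (0 if i < 8 else (i * 7 + j * 13) % 64)
         for i in range(16) for j in range(16)}
leaves = partition(frame, 0, 0, 16, min_size=4)
```

The flat half is covered by two large 8x8 leaves (splitting terminated early), while the textured half is decomposed down to 4x4, which is exactly the complexity saving the abstract describes.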

  13. Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques.

    Science.gov (United States)

    Kim, Seung-Cheol; Yoon, Jung-Hoon; Kim, Eun-Soo

    2008-11-10

    Even though there are many types of methods to generate CGH (computer-generated hologram) patterns of three-dimensional (3D) objects, most of them have been applied to still images but not to video images due to their computational complexity in applications of 3D video holograms. A new method for fast computation of CGH patterns for 3D video images is proposed by combined use of data compression and lookup table techniques. Temporally redundant data of the 3D video images are removed with the differential pulse code modulation (DPCM) algorithm, and then the CGH patterns for these compressed videos are generated with the novel lookup table (N-LUT) technique. To confirm the feasibility of the proposed method, some experiments with test 3D videos are carried out, and the results are comparatively discussed with the conventional methods in terms of the number of object points and computation time.
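
The DPCM step that removes temporal redundancy can be illustrated in miniature: store the first frame, then only per-pixel differences between consecutive frames, which the decoder accumulates to reconstruct the sequence losslessly. This sketch deliberately ignores the hologram-specific N-LUT stage.

```python
# Temporal DPCM over a video: frame 0 is kept as-is, later frames are
# encoded as per-pixel differences from the previous frame.

def dpcm_encode(frames):
    diffs = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        diffs.append([c - p for c, p in zip(cur, prev)])
    return diffs

def dpcm_decode(diffs):
    frames = [diffs[0]]
    for d in diffs[1:]:
        frames.append([p + v for p, v in zip(frames[-1], d)])
    return frames

video = [[10, 20, 30], [10, 21, 30], [12, 21, 29]]
assert dpcm_decode(dpcm_encode(video)) == video
```

Because consecutive 3D video frames change little, most differences are zero or small, which is what makes the subsequent CGH computation cheaper.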

  14. Development of a Two-Phase Flow Analysis Code based on a Unstructured-Mesh SIMPLE Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun

    2008-09-15

    For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm-based hydrodynamic solver, CUPID-S, has been developed. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields represent a continuous liquid field, a dispersed droplet field, and a vapour field. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to improve numerical stability. The pressure correction equation, derived by summing the phasic volume fraction equations, is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and preliminary results of the calculations.

  15. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  16. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  17. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination across video frames containing text, text appearing on complex backgrounds, and the different font sizes of the text. Using various image processing algorithms such as morphological operations, blob detection, and histogram of oriented gradients, the character recognition of video subtitles is implemented. Segmentation, feature extraction, and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.

  18. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  19. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total...... the motion estimation becomes more robust to noise and large displacements, and the computational workload is more than halved compared to usual bidirectional methods. Finally we consider two applications of frame interpolation for distributed video coding. The first of these considers the use of depth data...... to improve interpolation, and the second considers using the information from partially decoded video frames to improve interpolation accuracy in high-motion video sequences....

  20. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications. This vol

  1. Encoder power consumption comparison of Distributed Video Codec and H.264/AVC in low-complexity mode

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Forchhammer, Søren

    2010-01-01

    This paper presents a power consumption comparison of a novel approach to video compression based on distributed video coding (DVC) and the widely used video compression based on the H.264/AVC standard. We have used a low-complexity configuration for the H.264/AVC codec. It is well known that motion estimation...... (ME) and the CABAC entropy coder consume much power, so we eliminate ME from the codec and use CAVLC instead of CABAC. Some investigations show that low-complexity DVC outperforms other algorithms in terms of encoder-side energy consumption. However, estimations of power consumption for H.264/AVC and DVC...

  2. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    Directory of Open Access Journals (Sweden)

    Rached Tourki

    2010-01-01

    Full Text Available In this paper, we propose a secure video codec based on the discrete wavelet transform (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with the DWT and encryption with AES are each well known; the novelty lies in linking the two designs to achieve secure video coding. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding from JPEG and the DWT from JPEG2000. Furthermore, an improved motion estimation algorithm is proposed. Second, encryption and decryption are performed by the AES processor, which encrypts groups of LL bands. The prominent feature of this method is the encryption of the LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of the LL bands. Our approach provides considerable levels of security (key size, partial encryption, encryption mode) and has very limited adverse impact on compression efficiency. The proposed codec can provide up to 9 cipher schemes at a reasonable software cost. Latency, correlation, PSNR, and compression rate results are analyzed and shown.

  3. Layered Video Transmission on Adaptive OFDM Wireless Systems

    Directory of Open Access Journals (Sweden)

    D. Dardari

    2004-09-01

    Full Text Available Future wireless video transmission systems will consider orthogonal frequency division multiplexing (OFDM as the basic modulation technique due to its robustness and low complexity implementation in the presence of frequency-selective channels. Recently, adaptive bit loading techniques have been applied to OFDM showing good performance gains in cable transmission systems. In this paper a multilayer bit loading technique, based on the so called “ordered subcarrier selection algorithm,” is proposed and applied to a Hiperlan2-like wireless system at 5 GHz for efficient layered multimedia transmission. Different schemes realizing unequal error protection both at coding and modulation levels are compared. The strong impact of this technique in terms of video quality is evaluated for MPEG-4 video transmission.
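
A greatly simplified bit-loading sketch in the spirit of the ordered subcarrier selection algorithm: subcarriers are ranked by channel gain and bits are assigned to the strongest first, so an important (base) video layer can ride on the most reliable subcarriers. The 2-bit granularity, the 6-bit cap, and the gain values are illustrative assumptions, not Hiperlan2 parameters.

```python
# Greedy adaptive bit loading: sort subcarriers by gain, then hand out
# bits 2 at a time (QPSK steps), capped at 6 bits (64-QAM) per carrier.

def load_bits(gains, total_bits):
    order = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    alloc = [0] * len(gains)
    remaining = total_bits
    while remaining >= 2:
        progressed = False
        for i in order:                 # strongest subcarriers first
            if remaining >= 2 and alloc[i] < 6:
                alloc[i] += 2
                remaining -= 2
                progressed = True
        if not progressed:              # all carriers at the cap
            break
    return alloc

gains = [0.9, 0.1, 0.5, 0.7]
alloc = load_bits(gains, 12)            # 12 bits per OFDM symbol
```

The two strongest subcarriers end up carrying twice the bits of the weaker ones, which is the unequal-protection effect the abstract exploits for layered video.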

  4. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    Science.gov (United States)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
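
Closed-loop DPCM, the heart of the codec described above, can be sketched in one dimension: predict each pixel from the previous *reconstructed* pixel and transmit a quantized residual, so encoder and decoder stay in lockstep. The step size and predictor seed below are invented for the example; the actual codec's quantizer design differs.

```python
# Closed-loop DPCM along a scan line with a uniform residual quantizer.

def dpcm_line_encode(pixels, step=8):
    codes, recon = [], 128              # assumed mid-gray predictor seed
    for p in pixels:
        q = round((p - recon) / step)   # quantized prediction residual
        codes.append(q)
        recon = recon + q * step        # decoder-matched reconstruction
    return codes

def dpcm_line_decode(codes, step=8):
    out, recon = [], 128
    for q in codes:
        recon = recon + q * step
        out.append(recon)
    return out

line = [130, 140, 138, 120, 119]
decoded = dpcm_line_decode(dpcm_line_encode(line))
errors = [abs(a - b) for a, b in zip(line, decoded)]
```

Because the encoder predicts from its own reconstruction rather than the original, quantization error never accumulates along the line, which is what makes low-rate DPCM viable for broadcast-quality video.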

  5. Model-based video segmentation for vision-augmented interactive games

    Science.gov (United States)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility that a region belongs to a video object, for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games, in which a player can immerse himself/herself inside a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding, such as MPEG-4 video coding.
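
The pixel-level MAP formulation can be illustrated with per-pixel Gaussian likelihoods weighted by class priors. All model parameters below are invented for the example; the paper's system additionally builds its models on the fly and refines labels at the object level.

```python
# MAP pixel labeling: pick the class (background/foreground) that
# maximizes posterior = likelihood * prior under per-class Gaussians.
import math

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def map_label(pixel, bg=(50.0, 100.0), fg=(200.0, 400.0),
              prior_bg=0.7, prior_fg=0.3):
    """bg/fg are (mean, variance) of invented intensity models."""
    post_bg = gaussian(pixel, *bg) * prior_bg
    post_fg = gaussian(pixel, *fg) * prior_fg
    return 'bg' if post_bg >= post_fg else 'fg'

labels = [map_label(p) for p in [48, 55, 190, 210]]
```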

  6. [Algorithms for the identification of hospital stays due to osteoporotic femoral neck fractures in European medical administrative databases using ICD-10 codes: A non-systematic review of the literature].

    Science.gov (United States)

    Caillet, P; Oberlin, P; Monnet, E; Guillon-Grammatico, L; Métral, P; Belhassen, M; Denier, P; Banaei-Bouchareb, L; Viprey, M; Biau, D; Schott, A-M

    2017-10-01

    Osteoporotic hip fractures (OHF) are associated with significant morbidity and mortality. The French medico-administrative database (SNIIRAM) offers an interesting opportunity to improve the management of OHF. However, the validity of studies conducted with this database relies heavily on the quality of the algorithm used to detect OHF. The aim of the REDSIAM network is to facilitate the use of the SNIIRAM database. The main objective of this study was to present and discuss several OHF-detection algorithms that could be used with this database. A non-systematic literature search was performed. The Medline database was explored during the period January 2005-August 2016. Furthermore, a snowball search was then carried out from the articles included and field experts were contacted. The extraction was conducted using the chart developed by the REDSIAM network's "Methodology" task force. The ICD-10 codes used to detect OHF are mainly S72.0, S72.1, and S72.2. The performance of these algorithms is at best partially validated. Complementary use of medical and surgical procedure codes would affect their performance. Finally, few studies described how they dealt with fractures of non-osteoporotic origin, re-hospitalization, and potential contralateral fracture cases. Authors in the literature encourage the use of ICD-10 codes S72.0 to S72.2 to develop algorithms for OHF detection. These are the codes most frequently used for OHF in France. Depending on the study objectives, other ICD-10 codes and medical and surgical procedures could usefully be discussed for inclusion in the algorithm. Detection and management of duplicates and non-osteoporotic fractures should be considered in the process. Finally, when a study is based on such an algorithm, all these points should be precisely described in the publication. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  7. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  8. Code Generation = A* + BURS

    NARCIS (Netherlands)

    Nymeyer, Albert; Katoen, Joost P.; Westra, Ymte; Alblas, H.; Gyimóthy, Tibor

    1996-01-01

    A system called BURS that is based on term rewrite systems and a search algorithm A* are combined to produce a code generator that generates optimal code. The theory underlying BURS is re-developed, formalised and explained in this work. The search algorithm uses a cost heuristic that is derived
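
The A* component can be sketched generically: nodes are expanded in order of g + h, and with an admissible heuristic (one that never overestimates remaining cost) the first expansion of the goal is optimal, which is the property an optimal code generator relies on. The toy graph below is illustrative; with h = 0 the search degrades to Dijkstra's algorithm.

```python
# Generic A* search over an explicit cost graph.
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: node -> estimate."""
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float('inf')):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None

graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 1), ('d', 5)], 'c': [('d', 1)]}
cost, path = a_star(graph, lambda n: 0, 'a', 'd')
```

In the BURS setting the nodes would be partial rewrite states and edge costs instruction costs; the toy string graph merely exhibits the optimality mechanics.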

  9. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  10. Low complexity hevc intra coding

    OpenAIRE

    Ruiz Coll, José Damián

    2016-01-01

    Over the last few decades, much research has focused on the development and optimization of video codecs for media distribution to end-users via the Internet, broadcasts or mobile networks, but also for videoconferencing and for the recording on optical disks for media distribution. Most of the video coding standards for delivery are characterized by using a high-efficiency hybrid scheme, based on inter-prediction coding for temporal picture decorrelation and intra-prediction coding for spat...

  11. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. Representing videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g., CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  12. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    D. Ebenezer

    2008-10-01

    Full Text Available A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error [MSE], peak-signal-to-noise ratio [PSNR], and image enhancement factor [IEF] and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.
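
A one-dimensional sketch of the decision-based filter: only pixels detected as impulse extremes are replaced, the window grows until it contains clean samples, and the filter falls back to the mean when it finds none, echoing the median-to-mean switch described above. The detection rule and window bound are simplified assumptions (the paper works in 2-D with a 5x5 cap).

```python
# Decision-based adaptive-window filter for impulse-like artifacts.

def is_corrupted(p):
    return p == 0 or p == 255           # salt-and-pepper detection rule

def restore(signal, max_half=2):        # max window 5 (cf. 5x5 in 2-D)
    out = list(signal)
    for i, p in enumerate(signal):
        if not is_corrupted(p):
            continue                    # clean pixels pass through untouched
        for half in range(1, max_half + 1):
            window = signal[max(0, i - half): i + half + 1]
            clean = sorted(v for v in window if not is_corrupted(v))
            if clean:
                out[i] = clean[len(clean) // 2]   # median of clean neighbors
                break
        else:
            out[i] = sum(window) // len(window)   # fall back to mean filtering
    return out

noisy = [12, 13, 255, 14, 0, 15, 16]
restored = restore(noisy)
```

Because uncorrupted pixels are never touched, edges and details survive, which is the advantage the abstract claims over filters applied uniformly.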

  13. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    Manikandan S

    2008-01-01

    Full Text Available Abstract A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error [MSE], peak-signal-to-noise ratio [PSNR], and image enhancement factor [IEF] and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.

  14. Moving traffic object retrieval in H.264/MPEG compressed video

    Science.gov (United States)

    Shi, Xu-li; Xiao, Guang; Wang, Shuo-zhong; Zhang, Zhao-yang; An, Ping

    2006-05-01

    Moving object retrieval in the compressed domain plays an important role in many real-time applications, e.g., vehicle detection and classification. A number of retrieval techniques that operate in the compressed domain have been reported in the literature. H.264/AVC is the up-to-date video-coding standard that is likely to lead to the proliferation of retrieval techniques in the compressed domain. Up to now, few studies on H.264/AVC compressed video have been reported. Compared with the MPEG standards, H.264/AVC employs several new coding block types and a different entropy coding method, which make moving object retrieval in H.264/AVC compressed video a new and challenging task. In this paper, an approach to extract and retrieve moving traffic objects in H.264/AVC compressed video is proposed. Our algorithm first interpolates the sparse motion vectors of P-frames, which are composed of 4×4, 4×8, and 8×4 blocks, and so on. After forward-projecting each P-frame vector to the immediately adjacent I-frame and calculating the DCT coefficients of the I-frame using spatial intra-prediction information, the method extracts moving VOPs (video object planes) using an interactive 4×4 block classification process. In the vehicle detection application, a VOP segmented at 4×4-block accuracy is insufficient. Once we locate the target VOP, its actual edges can be extracted by applying Canny edge detection only to the moving VOP at 4×4-block accuracy. The VOP at pixel accuracy is then obtained by decompressing the DCT blocks of the VOPs, and an edge-tracking algorithm is applied to find the missing edge pixels. After the segmentation process, a retrieval algorithm based on CSS (Curvature Scale Space) is used to search for the vehicle shape of interest in the H.264/AVC compressed video sequence. Experiments show that our algorithm can extract and retrieve moving vehicles efficiently and robustly.
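
The first step, expanding sparse variable-block-size motion vectors onto a dense 4×4 grid, can be sketched as follows. The block geometry and vectors are invented for illustration, and the real interpolation may weight or filter vectors rather than simply copy them into each covered cell.

```python
# Expand motion vectors attached to variable-size blocks (4x4, 4x8,
# 8x4, ...) onto a dense grid of 4x4 cells.

def dense_mv_field(blocks, width, height):
    """blocks: list of (x, y, w, h, (mvx, mvy)) in pixels, with sizes
    that are multiples of 4. Returns dict (gx, gy) -> mv on the grid."""
    field = {}
    for x, y, w, h, mv in blocks:
        for gx in range(x // 4, (x + w) // 4):
            for gy in range(y // 4, (y + h) // 4):
                field[(gx, gy)] = mv
    # grid cells not covered by any coded block get a zero vector
    for gx in range(width // 4):
        for gy in range(height // 4):
            field.setdefault((gx, gy), (0, 0))
    return field

blocks = [(0, 0, 8, 4, (2, 0)),    # an 8x4 block moving right
          (8, 0, 4, 8, (-1, 1))]   # a 4x8 block moving left/down
field = dense_mv_field(blocks, 16, 8)
```

A uniform 4×4 field like this is what allows the subsequent block-classification stage to treat every cell identically regardless of the original partitioning.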

  15. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  16. Improved Transient Performance of a Fuzzy Modified Model Reference Adaptive Controller for an Interacting Coupled Tank System Using Real-Coded Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Asan Mohideen Khansadurai

    2014-01-01

    Full Text Available The main objective of the paper is to design a model reference adaptive controller (MRAC with improved transient performance. A modification to the standard direct MRAC called fuzzy modified MRAC (FMRAC is used in the paper. The FMRAC uses a proportional control based Mamdani-type fuzzy logic controller (MFLC to improve the transient performance of a direct MRAC. The paper proposes the application of real-coded genetic algorithm (RGA to tune the membership function parameters of the proposed FMRAC offline so that the transient performance of the FMRAC is improved further. In this study, a GA based modified MRAC (GAMMRAC, an FMRAC, and a GA based FMRAC (GAFMRAC are designed for a coupled tank setup in a hybrid tank process and their transient performances are compared. The results show that the proposed GAFMRAC gives a better transient performance than the GAMMRAC or the FMRAC. It is concluded that the proposed controller can be used to obtain very good transient performance for the control of nonlinear processes.
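
A minimal real-coded GA of the kind used for the offline tuning can be sketched with blend (arithmetic) crossover and Gaussian mutation. The toy fitness function stands in for the controller's transient-performance cost, and all GA settings (population size, selection scheme, mutation scale) are illustrative assumptions.

```python
# Real-coded GA: chromosomes are real-valued parameter vectors (here
# standing in for fuzzy membership-function parameters).
import random

def fitness(params):                 # toy cost: distance to a known optimum
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, dim=3, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                     # blend (arithmetic) crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            i = rng.randrange(dim)               # Gaussian mutation of one gene
            child[i] += rng.gauss(0, 0.05)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Real coding avoids the discretization of binary GAs, which matters when the tuned quantities (membership-function centers and widths) are continuous.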

  17. Research on key technologies in multiview video and interactive multiview video streaming

    OpenAIRE

    Xiu, Xiaoyu

    2011-01-01

    Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which make it necessary to develop advanced technologies for multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a rate-distortion (RD) optimal manner by exploiting both temporal ...

  18. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    This paper introduces a mutually beneficial interplay between network coding and scalable video source coding in order to propose an energy-efficient video streaming approach accommodating multiple heterogeneous receivers, for which current solutions are either inefficient or insufficient. State...... support of multi-resolution video streaming....

  19. Smart calibration for video game play by people with a movement impairment.

    Science.gov (United States)

    Perez, Sergi; Benitez, Raul; Reinkensmeyer, David J

    2011-01-01

    People with movement impairment often cannot move with the range, speed, or acceleration required to play an off-the-shelf video game. This paper describes a smart calibration algorithm designed to facilitate video game play by people with movement impairment. The algorithm continuously adapts the calibration of the gaming input device by comparing the maximum range of motion measured in previous time periods, then adjusting the current required range of motion based on their difference. In several experiments with simple acceleration-based video games using a Nintendo Wiimote, we show that the algorithm adapts the calibration to allow healthy users to play the game with their full available range of acceleration without need for a special calibration protocol. Importantly, the algorithm described here can be used without altering the game software by inserting a hardware or software module between the gaming input device and the game console. Thus, the algorithm can be used with off-the-shelf video games without altering their source code.
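
    The adaptation loop described above can be sketched as follows; the function name, the 0.5 gain and the floor value are illustrative assumptions rather than the paper's exact parameters:

    ```python
    def update_calibration(required_range, recent_peaks, gain=0.5, floor=0.1):
        """Move the required input range toward the user's demonstrated maximum.

        `recent_peaks` holds the maximum motion magnitude measured in each of
        the last few time windows; the requirement is adjusted by a fraction
        (`gain`) of the mismatch, never dropping below `floor`.
        """
        demonstrated = max(recent_peaks)
        required_range += gain * (demonstrated - required_range)
        return max(required_range, floor)

    # A user who can only reach ~0.6 of the game's default range: the
    # calibration relaxes toward that level over successive windows.
    r = 1.0
    for peaks in [[0.4, 0.5], [0.5, 0.6], [0.55, 0.6]]:
        r = update_calibration(r, peaks)
    ```

    Because this logic only rescales the input signal, it can sit between the input device and the game, which is why the game's own source code need not change.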

  20. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm

    NARCIS (Netherlands)

    Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K.

    2011-01-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm
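
    The center of mass method (CMM) itself is simple: for a mono-exponential decay observed over a wide window, the mean photon arrival time equals the lifetime. A software sketch of that estimator (the true lifetime and photon count are made-up test values):

    ```python
    import random

    def cmm_lifetime(arrival_times):
        """Center-of-mass lifetime estimate for a mono-exponential decay:
        with an (effectively) ungated window, the mean arrival time is tau."""
        return sum(arrival_times) / len(arrival_times)

    # Simulate photon arrivals from a decay with tau = 2.5 (arbitrary units).
    rng = random.Random(0)
    tau_true = 2.5
    photons = [rng.expovariate(1.0 / tau_true) for _ in range(200_000)]
    tau_est = cmm_lifetime(photons)
    ```

    The appeal for hardware is that the estimate needs only an accumulator and a counter per pixel, which is what makes video-rate operation on an FPGA feasible.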

  1. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the

  2. Feedback-aided error resilience technique based on Wyner-Ziv coding

    Science.gov (United States)

    Liang, Liang; Salama, Paul; Delp, Edward J.

    2008-01-01

    Compressed video is very sensitive to channel errors. A few bit losses can stop the entire decoding process. Therefore, protecting compressed video is always necessary for reliable visual communications. In recent years, Wyner-Ziv lossy coding has been used for error resilience and has achieved improvement over conventional techniques. In our previous work, we proposed an unequal error protection algorithm for protecting data elements in a video stream using a Wyner-Ziv codec. We also presented an improved method by adapting the parity data rates of protected video information to the video content. In this paper, we describe a feedback-aided error resilience technique based on Wyner-Ziv coding. By utilizing feedback regarding current channel packet-loss rates, a turbo coder can adaptively adjust the amount of parity bits needed for correcting corrupted slices at the decoder. This results in an efficient usage of the data rate budget for Wyner-Ziv coding while maintaining good quality decoded video when the data has been corrupted by transmission errors.
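
    A minimal sketch of the feedback-driven parity adaptation, assuming a simple linear model in which the parity budget grows with the reported loss rate; the function, the safety margin and the cap are invented for illustration and are not the paper's rate-allocation rule:

    ```python
    def parity_budget(loss_rate, slice_bits, overhead=1.2, max_fraction=0.5):
        """Pick a per-slice parity-bit budget from the fed-back packet-loss rate.

        Roughly `loss_rate * slice_bits` parity is needed to recover the
        expected erasures; `overhead` adds a safety margin, and the budget is
        capped at a fraction of the slice size to bound the rate cost.
        """
        budget = int(overhead * loss_rate * slice_bits)
        return min(budget, int(max_fraction * slice_bits))
    ```

    For example, with 10,000-bit slices a clean channel gets no parity, a 10% loss rate gets 1,200 parity bits, and a catastrophic channel is capped at 5,000 bits.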

  3. Binary video codec for data reduction in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSN) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent replacement of batteries is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and bi-level image coding methods will then efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective for applications with only slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduce the information amount in the change frame. But, if the number of objects in the change frames is higher than a certain level then the compression efficiency
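
    Change coding as described, XOR between adjacent bi-level frames followed by a simple run-length pass over the change frame, can be sketched as (the tiny frames are made-up test data):

    ```python
    def change_frame(prev, curr):
        """Change coding: XOR two bi-level frames; 1s mark changed pixels."""
        return [[a ^ b for a, b in zip(pr, cr)] for pr, cr in zip(prev, curr)]

    def run_lengths(frame):
        """Run-length encode the flattened change frame as (value, count) pairs."""
        flat = [p for row in frame for p in row]
        runs, count = [], 1
        for a, b in zip(flat, flat[1:]):
            if a == b:
                count += 1
            else:
                runs.append((a, count))
                count = 1
        runs.append((flat[-1], count))
        return runs

    prev = [[0, 0, 0, 0], [0, 1, 1, 0]]
    curr = [[0, 0, 1, 0], [0, 1, 1, 0]]
    delta = change_frame(prev, curr)   # sparse when frames barely change
    ```

    The sparser the change frame, the longer its zero runs and the better the compression, which is exactly why the method degrades as the number of changed objects grows.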

  4. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to reliably identify the source of the document. Several techniques and concepts based on data hiding or steganography have been designed as a means for image authentication. This paper presents a color image authentication algorithm based on convolution coding. The high bits of the color digital image are coded by convolution codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolution encoded, and after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and restored without accessing the original image.
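
    A minimal rate-1/2 convolutional encoder of the kind the scheme relies on (the standard generators 7 and 5 in octal are a common textbook choice, not necessarily the ones used in the paper):

    ```python
    def conv_encode(bits, g1=0b111, g2=0b101):
        """Rate-1/2 convolutional encoder, constraint length 3.

        Each input bit is shifted into a 3-bit register; the two output bits
        are the parities of the register masked by generators g1 and g2.
        """
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111
            out.append(bin(state & g1).count("1") % 2)
            out.append(bin(state & g2).count("1") % 2)
        return out
    ```

    The redundancy (two output bits per input bit) is what later lets a decoder both detect and localize bit errors, i.e. tampering, in the protected high bits.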

  5. Study on multi-description coding for ROI medical image based on EBCOT

    Science.gov (United States)

    Hou, Alin; Zhang, Lihong; Shi, Dongcheng; Cui, Guangming; Xu, Kun; Zhou, Wen; Liu, Jie

    2008-03-01

    Embedded block coding with optimized truncation (EBCOT), with its tree-structured wavelet encoding, is flexible because it decomposes each subband into code blocks and encodes each code block independently, producing embedded code streams that support quality layering, hierarchical resolution and random access. However, the algorithm's resilience to losses over a network is poor. Multiple description coding (MDC) of image and video divides the source signal into several code streams so that it can be transferred over unreliable transmission channels. Region of Interest (ROI) coding gives priority to the focus of the doctor's interest, which generally occupies a small part of the entire medical image. In this paper, ROI coding combining MDC and EBCOT has been applied to medical images according to the JPEG2000 ROI coding standard. The algorithm not only retains EBCOT's hierarchical spatial resolution and random access to the ROI, but also improves resilience to losses over the network and forms a robust code stream. The experimental results demonstrated that the coding method improved the system compression ratio without influencing medical diagnosis.

  6. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more affordable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  7. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    Science.gov (United States)

    Lu, Guo; Zhang, Xiaoyun; Chen, Li; Gao, Zhiyong

    2018-02-01

    Frame rate up conversion (FRUC) can improve the visual quality by interpolating new intermediate frames. However, high frame rate videos by FRUC are confronted with more bitrate consumption or annoying artifacts of interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, and the interpolated frames can be reconstructed at encoder side with low bitrate cost and high visual quality. First, joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. What's more, JME is embedded into the coding loop and employs the original motion search strategy in HEVC coding. Then, the frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to the motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve 21% ~ 42% reduction in BDBR, when compared with the traditional methods of FRUC cascaded with coding.
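
    The per-frame decision behind such a framework can be illustrated with a minimal Lagrangian rate-distortion choice; the distortion/rate numbers and the option names below are invented for illustration and do not come from the paper's model:

    ```python
    def rd_choice(options, lam):
        """Rate-distortion decision: pick the option with minimal J = D + lam * R."""
        return min(options, key=lambda o: o["D"] + lam * o["R"])

    # Coding the frame normally costs many bits but little distortion;
    # signalling FRUC interpolation is nearly free but distorts more.
    options = [{"name": "code", "D": 2.0, "R": 100.0},
               {"name": "interpolate", "D": 6.0, "R": 1.0}]
    ```

    With a small lambda (rate is cheap) the encoder codes the frame; with a large lambda it falls back to interpolation, which is the trade-off the proposed framework optimizes jointly with the distortion model for interpolated frames.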

  8. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding the behavior of a single target and of multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms to analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  9. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  10. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  11. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
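
    The pulse-matching step can be sketched as follows: correlate a rectangular reference pulse with a window of the user-activity signal around each local maximum. The toy activity signal and the pulse width are illustrative assumptions, not the paper's parameters:

    ```python
    def pearson(x, y):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def pulse_scores(activity, width=3):
        """Score each local maximum of the activity signal by correlating a
        window around it with a rectangular reference pulse of fixed width."""
        ref = [0.0] * width + [1.0] * width + [0.0] * width
        scores = []
        for i in range(1, len(activity) - 1):
            if activity[i] > activity[i - 1] and activity[i] >= activity[i + 1]:
                lo = i - len(ref) // 2
                win = [activity[min(max(j, 0), len(activity) - 1)]
                       for j in range(lo, lo + len(ref))]
                scores.append((i, pearson(win, ref)))
        return scores

    # A toy Replay-activity signal with one burst around sample 4.
    activity = [0, 0, 0, 1, 3, 1, 0, 0, 0, 0]
    scores = pulse_scores(activity)
    ```

    Local maxima whose windows correlate strongly with the pulse are flagged as candidate regions of interest for the skim or summary.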

  12. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity in TV images has increased with the growing use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on runlength and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.
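
    The block predictive idea can be illustrated with one-dimensional DPCM: predict each pixel from its left neighbour and code the residual. Transmitting residuals losslessly reproduces the row exactly, while quantizing them would give the lossy mode (this sketch is illustrative, not the paper's exact predictor):

    ```python
    def dpcm_encode(row):
        """Predict each pixel from its left neighbour and emit residuals.
        Flat graphics regions yield long zero runs; natural video yields
        small residuals that could be quantized for lossy coding."""
        prev, res = 0, []
        for p in row:
            res.append(p - prev)
            prev = p
        return res

    def dpcm_decode(res):
        """Invert the prediction by accumulating residuals."""
        prev, out = 0, []
        for r in res:
            prev += r
            out.append(prev)
        return out

    res = dpcm_encode([10, 12, 12, 50])
    ```

    The zero residual at the repeated pixel shows why this combines well with run-length and arithmetic coding for graphics content.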

  13. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  14. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capture by non-professionals can lead to unanticipated effects such as image distortion, image blurring, etc. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos: a stable output video is attained without the jitter caused by shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and processed, followed by optimization to stabilize the video. The optimization governs the quality of the video stabilization. This method has shown good results in terms of stabilization and removed distortion from output videos recorded in different circumstances.
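
    The stabilization principle, smooth the estimated camera path and warp each frame by the difference, can be sketched in one dimension; the moving-average smoother and the jitter sequence are illustrative assumptions, not the paper's exact method:

    ```python
    def smooth_trajectory(shifts, radius=2):
        """Integrate per-frame shifts into a camera path, smooth it with a
        moving average, and return the smoothed path plus the per-frame
        corrections (smoothed minus raw) by which to warp each frame."""
        path, acc = [], 0.0
        for s in shifts:
            acc += s
            path.append(acc)
        smoothed = []
        for i in range(len(path)):
            lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
            smoothed.append(sum(path[lo:hi]) / (hi - lo))
        corrections = [sm - p for sm, p in zip(smoothed, path)]
        return smoothed, corrections

    # Pure alternating jitter: the raw path oscillates between 1 and 0.
    shifts = [1, -1, 1, -1, 1, -1]
    smoothed, corrections = smooth_trajectory(shifts)
    ```

    In a full pipeline the per-frame shifts would come from matching salient feature points between consecutive frames; the smoothing step is what removes the hand-shake component while preserving intentional camera motion.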

  15. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by the High Efficiency Video Coding (HEVC) scheme. The assessment is performed without access to the bitstream. The proposed analysis is based on the transform coefficients...

  16. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has poor quality. A key-frame selection algorithm is flexible to changes in a video, but with such methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received video, without significant loss of content relative to the original, corrected sequentially. The reliability of the video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also investigated using SEDIM (Sequential Distortion Minimization Method and without SEDIM. The experimental results showed that the average PSNR (Peak Signal to Noise Ratio of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity increased by 10.49%. The experimental results and comparison of the proposed method show good performance. A USRP board was used as the RF front-end at 2.2 GHz.
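
    The PSNR metric used in the evaluation is standard; a minimal implementation for flattened 8-bit frames:

    ```python
    import math

    def psnr(ref, dist, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two equal-size pixel lists."""
        mse = sum((a - b) ** 2 for a, b in zip(ref, dist)) / len(ref)
        if mse == 0:
            return float("inf")   # identical frames
        return 10.0 * math.log10(peak ** 2 / mse)
    ```

    For video, the per-frame PSNR values are averaged over the sequence, which is how figures like the 19.855 dB → 48.386 dB improvement above are reported.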

  17. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses the question of video codec enhancement for wireless video transmission of high definition video data taking into account constraints on memory and complexity. Starting from parameter adjustment for JPEG2000 compression algorithm used for wireless transmission and achieving...

  18. Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and Analysis

    OpenAIRE

    Dar, Yehuda; Bruckstein, Alfred M.

    2014-01-01

    Block-based motion estimation (ME) and compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in numerous compression specifications. Specifically, there is a diversity of frame-rates and bit-rates. In this paper, we study the effect of frame-rate and compression bit-rate on block-based ME and MC as commonly utilized in inter-frame coding and frame-rate up conversion (FRUC). This j...
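
    Block-based ME as discussed can be sketched with an exhaustive SAD search; the tiny frames and search range are made-up test data:

    ```python
    def sad(a, b):
        """Sum of absolute differences between two equal-size blocks."""
        return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

    def block(frame, y, x, n):
        return [row[x:x + n] for row in frame[y:y + n]]

    def full_search(ref, cur, y, x, n=2, rng=2):
        """Exhaustive block matching: return the motion vector (dy, dx) into
        the reference frame that minimises SAD for the n-by-n block at (y, x)
        in the current frame, together with the matching cost."""
        target = block(cur, y, x, n)
        best, best_cost = (0, 0), float("inf")
        for dy in range(-rng, rng + 1):
            for dx in range(-rng, rng + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= len(ref) - n and 0 <= xx <= len(ref[0]) - n:
                    cost = sad(block(ref, yy, xx, n), target)
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
        return best, best_cost

    ref = [[0, 0, 0, 0],
           [0, 9, 8, 0],
           [0, 7, 6, 0],
           [0, 0, 0, 0]]
    cur = [[0, 0, 0, 0],        # the same texture shifted one pixel right
           [0, 0, 9, 8],
           [0, 0, 7, 6],
           [0, 0, 0, 0]]
    mv, cost = full_search(ref, cur, 1, 2)
    ```

    Both inter-frame coding and FRUC build on exactly this primitive: the encoder transmits residuals along the estimated vectors, while FRUC uses the vectors to synthesize intermediate frames.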

  19. Search in Real-Time Video Games

    OpenAIRE

    Cowling, Peter I.; Buro, Michael; Bida, Michal; Botea, Adi; Bouzy, Bruno; Butz, Martin V.; Hingston, Philip; Muñoz-Avila, Hector; Nau, Dana; Sipper, Moshe

    2013-01-01

    This chapter arises from the discussions of an experienced international group of researchers interested in the potential for creative application of algorithms for searching finite discrete graphs, which have been highly successful in a wide range of application areas, to address a broad range of problems arising in video games. The chapter first summarises the state of the art in search algorithms for games. It then considers the challenges in implementing these algorithms in video games (p...

  20. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web-publishing, many popular contents are being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious, and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
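
    The signature idea can be sketched with frames reduced to small feature vectors: keep, per seed image, the frame nearest to that seed, then compare signatures slot by slot. The L1 distance, the seeds and the toy videos are illustrative assumptions, not the paper's exact features:

    ```python
    def l1(a, b):
        """L1 distance between two frame feature vectors."""
        return sum(abs(x - y) for x, y in zip(a, b))

    def signature(frames, seeds):
        """For each seed image, keep the frame closest to it; the list of
        selected frames is the video's signature."""
        return [min(frames, key=lambda f: l1(f, s)) for s in seeds]

    def similarity(sig_a, sig_b):
        """Fraction of seed slots whose selected frames coincide."""
        return sum(1 for fa, fb in zip(sig_a, sig_b) if l1(fa, fb) == 0) / len(sig_a)

    seeds = [[1, 1], [8, 8]]                   # shared random "seed images"
    video_a = [[0, 0], [5, 5], [9, 9]]         # frames as tiny feature vectors
    video_b = list(video_a)                    # an exact duplicate of video_a
    video_c = [[100, 100], [200, 200]]         # an unrelated video
    sig_a, sig_b, sig_c = (signature(v, seeds) for v in (video_a, video_b, video_c))
    ```

    Because every video is projected against the same seeds, two near-duplicates select nearly identical signature frames even if their lengths or frame orders differ, while unrelated videos do not.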