WorldWideScience

Sample records for video coding standard

  1. Video Coding Technique using MPEG Compression Standards

    Directory of Open Access Journals (Sweden)

    A. J. Falade

    2013-06-01

    Full Text Available Digital video compression technologies have become part of everyday life, shaping the way visual information is created, communicated and consumed. Some application areas of video compression focus on the problem of optimizing storage space and transmission bandwidth (BW). The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression and is used in the Moving Picture Experts Group (MPEG) encoding standards. Several video compression algorithms have therefore been developed to reduce the data quantity while providing an acceptable quality standard. In the proposed study, a Matlab Simulink Model (MSM) is used for video coding/compression. The approach is more modern and reduces image distortion while improving error resilience.
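
The 2-D DCT referred to above can be sketched in a few lines. The following is a direct (unoptimized) implementation of the orthonormal 8x8 DCT-II used by MPEG-style coders, shown purely for illustration:

```python
import math

def dct_2d(block):
    """Orthonormal 2-D DCT-II of a square block (8x8 in MPEG/JPEG coding)."""
    n = len(block)

    def a(u):
        # DC basis function gets a smaller scale factor than the AC ones.
        return math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = a(u) * a(v) * s
    return out

# A flat 8x8 block compacts all of its energy into the DC coefficient,
# which is what makes the DCT effective for compression.
flat = [[100.0] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
```

For the flat block above, `coeffs[0][0]` carries all the energy and every AC coefficient is (numerically) zero, illustrating the energy compaction that quantization then exploits.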

  2. Very low bit rate video coding standards

    Science.gov (United States)

    Zhang, Ya-Qin

    1995-04-01

    Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, the ITU-T/SG15 has formed an expert group on low bitrate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: near-term and long-term. The near-term standard H.32P/N, based on existing compression technologies, mainly addresses the issues related to visual telephony below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. The ISO/SC29/WG11, after its highly visible and successful MPEG 1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard MPEG 4. With the recent change of direction, MPEG 4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these on-going standards activities undertaken by ITU-T/LBC and ISO/MPEG 4 as of December 1994.

  3. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

    The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, which is used ... Park, 1989). MPEG-1 systems and MPEG-2 video have been developed collaboratively with the International Telecommunications Union (ITU-T). The DVB selected MPEG-2 added specifications ...

  4. Video Coding Technique using MPEG Compression Standards ...

    African Journals Online (AJOL)

    Digital video compression technologies have become part of life, in the way visual information is created, communicated and consumed. Some application areas of video compression focused on the problem of optimizing storage space and transmission bandwidth (BW). The two dimensional discrete cosine transform (2-D ...

  5. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV...

  6. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  7. Standards-based approaches to 3D and multiview video coding

    Science.gov (United States)

    Sullivan, Gary J.

    2009-08-01

    The extension of video applications to enable 3D perception, which typically is considered to include a stereo viewing experience, is emerging as a mass market phenomenon, as is evident from the recent prevalence of major 3D cinema title releases. For high quality 3D video to become a commonplace user experience beyond limited cinema distribution, adoption of an interoperable coded 3D digital video format will be needed. Stereo-view video can also be studied as a special case of the more general technologies of multiview and "free-viewpoint" video systems. The history of standardization work on this topic is actually richer than people may typically realize. The ISO/IEC Moving Picture Experts Group (MPEG), in particular, has been developing interoperability standards to specify various such coding schemes since the advent of digital video as we know it. More recently, the ITU-T Visual Coding Experts Group (VCEG) has been involved as well in the Joint Video Team (JVT) work on development of 3D features for H.264/14496-10 Advanced Video Coding, including Multiview Video Coding (MVC) extensions. This paper surveys the prior, ongoing, and anticipated future standardization efforts on this subject to provide an overview and historical perspective on feasible approaches to 3D and multiview video coding.

  8. Basic prediction techniques in modern video coding standards

    CERN Document Server

    Kim, Byung-Gyu

    2016-01-01

    This book discusses in detail the basic algorithms of video compression that are widely used in modern video codecs. The authors dissect complicated specifications and present the material in a way that gets readers quickly up to speed, describing video compression algorithms succinctly without going into the mathematical details and technical specifications. For accelerated learning, the hybrid codec structure and the inter- and intra-prediction techniques in MPEG-4, H.264/AVC, and HEVC are discussed together. In addition, the latest research in fast encoder design for HEVC and H.264/AVC is also included.

  9. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose a special software program for the subjective assessment of the quality of all the tested video sequences was developed. It was developed in accordance with recommendation ITU-T P.910, since that recommendation is suitable for the testing of multimedia applications. The obtained results show that in the proposed selective intra prediction and optimized inter prediction algorithm there is a small difference in picture quality (signal-to-noise ratio) between decoded original and modified video sequences.

  10. Video coding standards AVS China, H.264/MPEG-4 PART 10, HEVC, VP6, DIRAC and VC-1

    CERN Document Server

    Rao, K R; Hwang, Jae Jeong

    2014-01-01

    Review by Ashraf A. Kassim, Professor, Department of Electrical & Computer Engineering, and Associate Dean, School of Engineering, National University of Singapore.     The book consists of eight chapters, of which the first two provide an overview of various video & image coding standards and video formats. The next four chapters present in detail the Audio and Video Standard (AVS) of China, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, the High Efficiency Video Coding (HEVC) standard and the VP6 video coding standard (now VP10) respectively. The performance of the wavelet-based Dirac video codec is compared with H.264/MPEG-4 AVC in chapter 7. Finally in chapter 8, the VC-1 video coding standard is presented together with VC-2, which is based on the intra frame coding of Dirac, and an outline of an H.264/AVC to VC-1 transcoder.   The authors also present and discuss relevant research literature such as that which documents improved methods & techniques, and also point to other related reso...

  11. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features.

    Science.gov (United States)

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

    The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor H.264. The encoding time complexity has also increased multiple times, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a saliency feature based on human visual attention modelling and motion features based on phase correlation. The features are innovatively combined through a fusion process by developing a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block-partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably down-scales the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.
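
The abstract does not give the exact cost function, so the toy sketch below only illustrates the general idea: fuse per-region saliency and motion scores into a binary pattern, then match it against a codebook of partitioning templates to pre-select candidate modes. All weights, thresholds, templates and mode names here are illustrative assumptions, not the authors' actual design:

```python
def fused_pattern(saliency, motion, alpha=0.5, thresh=0.5):
    """Weighted fusion of per-quadrant saliency/motion scores into a binary pattern."""
    return tuple(1 if alpha * s + (1 - alpha) * m > thresh else 0
                 for s, m in zip(saliency, motion))

# Hypothetical codebook: 4-bit quadrant patterns -> candidate partitioning modes.
CODEBOOK = {
    (0, 0, 0, 0): ["2Nx2N"],                 # homogeneous block: large mode only
    (1, 1, 0, 0): ["2NxN"],                  # top half dominated: horizontal split
    (1, 0, 1, 0): ["Nx2N"],                  # left half dominated: vertical split
    (1, 1, 1, 1): ["NxN", "2NxN", "Nx2N"],   # busy block: finer candidates
}

def candidate_modes(saliency, motion):
    """Pre-select a mode subset via nearest codebook template (Hamming distance)."""
    pattern = fused_pattern(saliency, motion)
    best = min(CODEBOOK, key=lambda t: sum(a != b for a, b in zip(t, pattern)))
    return CODEBOOK[best]

# Block whose top two quadrants are salient and moving: only a horizontal
# split is tried, instead of exhaustively evaluating every HEVC mode.
modes = candidate_modes(saliency=(0.9, 0.8, 0.1, 0.2), motion=(0.7, 0.9, 0.0, 0.1))
```

The payoff is that motion estimation runs only over `modes` rather than over the full HEVC mode set, which is where the reported time saving would come from.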

  12. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Science.gov (United States)

    Saponara, Sergio; Denolf, Kristof; Lafruit, Gauthier; Blanch, Carolina; Bormans, Jan

    2004-12-01

    The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  13. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Saponara Sergio

    2004-01-01

    Full Text Available The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  14. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    Science.gov (United States)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    Intra prediction in the H.264 video coding standard is used to code the first frame, i.e. the intra frame, of a video and achieves good coding efficiency compared to the previous video coding standards in the series. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method. This method increases computational complexity and bit rate and reduces picture quality, so it is difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in the H.264 standard using fast-mode-decision intra prediction algorithms based on different techniques suffered from increased bit rate and degraded picture quality (PSNR) at different quantization parameters. Many earlier fast mode decision approaches to intra frame coding achieved only a reduction of computational complexity (saving encoding time), with the limitation of increased bit rate and loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, a better approach is developed in this paper: a Gaussian pulse for intra frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bitrate. In the proposed method a Gaussian pulse is multiplied with each 4x4 block of frequency domain coefficients of the 4x4 sub macro blocks of the macro blocks of the current frame before the quantization process. Multiplying each 4x4 block of integer transform coefficients by a Gaussian pulse at the macro block level scales the information of the coefficients in a reversible manner. The resulting signal becomes abstract: frequency samples are made abstract in a known and controllable manner without intermixing of coefficients, which avoids ...
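
The reversible coefficient scaling described above can be sketched minimally as follows. The abstract does not give the pulse parameters, so the `sigma` value and the DC-centred layout of the Gaussian are assumptions for illustration:

```python
import math

def gaussian_4x4(sigma=2.0):
    """4x4 Gaussian pulse, centred on the DC corner (an illustrative choice)."""
    return [[math.exp(-(u * u + v * v) / (2.0 * sigma * sigma))
             for v in range(4)] for u in range(4)]

def scale(coeffs, pulse):
    """Element-wise multiply 4x4 transform coefficients by the pulse."""
    return [[c * w for c, w in zip(row, prow)] for row, prow in zip(coeffs, pulse)]

def unscale(coeffs, pulse):
    """Invert the scaling; the operation is reversible since all weights are nonzero."""
    return [[c / w for c, w in zip(row, prow)] for row, prow in zip(coeffs, pulse)]

pulse = gaussian_4x4()
# Stand-in 4x4 integer-transform coefficients of one sub macro block.
block = [[float(4 * u + v) for v in range(4)] for u in range(4)]
roundtrip = unscale(scale(block, pulse), pulse)  # recovers the original block
```

Because every weight is strictly positive, each coefficient is scaled independently (no intermixing) and the original values are recovered exactly up to floating-point precision.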

  15. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it also serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  16. Layered Wyner-Ziv video coding.

    Science.gov (United States)

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  17. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side by shifting processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pinpointing how the decoder steps are modified to provide...

  18. Fast mode decision for the H.264/AVC video coding standard based on frequency domain motion estimation

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-07-01

    The H.264 video coding standard achieves high compression performance and image quality at the expense of increased encoding complexity. Consequently, several fast mode decision and motion estimation techniques have been developed to reduce the computational cost. These approaches successfully reduce the computational time, but at the cost of reduced image quality and/or increased bitrate. In this paper we propose a novel fast mode decision and motion estimation technique. The algorithm utilizes preprocessing frequency domain motion estimation in order to accurately predict the best mode and the search range. Experimental results show that the proposed algorithm significantly reduces the motion estimation time by up to 97%, while maintaining similar rate-distortion performance when compared to the Joint Model software.
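
Frequency-domain motion estimation of the kind used as preprocessing here is typically done by phase correlation, which can be illustrated with a small self-contained example. A naive DFT is used instead of an FFT for clarity, and the 8x8 block size and test pattern are arbitrary choices, not taken from the paper:

```python
import cmath

def dft2(x, sign):
    """Naive 2-D DFT (sign=-1) or inverse DFT (sign=+1) for small square blocks."""
    n = len(x)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for a in range(n):
                for b in range(n):
                    s += x[a][b] * cmath.exp(sign * 2j * cmath.pi * (u * a + v * b) / n)
            out[u][v] = s if sign < 0 else s / (n * n)
    return out

def phase_correlate(ref, cur):
    """The peak of the normalized cross-power spectrum gives the (dy, dx) shift."""
    F, G = dft2(ref, -1), dft2(cur, -1)
    n = len(ref)
    cross = [[(F[u][v].conjugate() * G[u][v]) /
              (abs(F[u][v].conjugate() * G[u][v]) or 1.0)  # guard zero magnitude
              for v in range(n)] for u in range(n)]
    surface = dft2(cross, +1)  # correlation surface: a delta at the true shift
    _, dy, dx = max((surface[y][x].real, y, x) for y in range(n) for x in range(n))
    return dy, dx

# An 8x8 test pattern and a copy circularly shifted down by 2 and right by 3.
n = 8
ref = [[(3 * y + 7 * x + (x * y) % 5) % 11 for x in range(n)] for y in range(n)]
cur = [[ref[(y - 2) % n][(x - 3) % n] for x in range(n)] for y in range(n)]
shift = phase_correlate(ref, cur)
```

A fast mode decision scheme can use such a shift estimate both to centre the motion search range and to judge, from the sharpness of the correlation peak, how homogeneous the block motion is.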

  19. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  20. Scalable Video Coding

    NARCIS (Netherlands)

    Choupani, R.

    2017-01-01

    With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when...

  1. Video Coding for ESL.

    Science.gov (United States)

    King, Kevin

    1992-01-01

    Coding tasks, a valuable technique for teaching English as a Second Language, are presented that enable students to look at patterns and structures of marital communication as well as objectively evaluate the degree of happiness or distress in the marriage. (seven references) (JL)

  2. Super-Resolution Still and Video Reconstruction from MPEG Coded Video

    National Research Council Canada - National Science Library

    Altunbasak, Yucel

    2004-01-01

    Transform coding is a popular and effective compression method for both still images and video sequences, as is evident from its widespread use in international media coding standards such as MPEG, H.263 and JPEG...

  3. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  4. Patent landscape for royalty-free video coding

    Science.gov (United States)

    Reader, Cliff

    2016-09-01

    Digital video coding is over 60 years old and the first major video coding standard - H.261 - is over 25 years old, yet today there are more patents than ever related to, or evaluated as essential to, video coding standards. This paper examines the historical development of video coding standards, from the perspective of when the significant contributions to video coding technology were made, what performance can be attributed to those contributions and when the original patents were filed for those contributions. These patents have now expired, so the main video coding tools, which provide the significant majority of coding performance, are now royalty-free. The deployment of video coding tools in a standard involves several related developments. The tools themselves have evolved over time to become more adaptive, taking advantage of the increased complexity afforded by advances in semiconductor technology. In most cases, the improvement in performance for any given tool has been incremental, although significant improvement has occurred in aggregate across all tools. The adaptivity must be mirrored by the encoder and decoder, and advances have been made in reducing the overhead of signaling adaptive modes and parameters. Efficient syntax has been developed to provide such signaling. Furthermore, efficient ways of implementing the tools with limited-precision, simple mathematical operators have been developed. Correspondingly, categories of patents related to video coding can be defined. Without discussing active patents, this paper provides the timeline of the developments of video coding and lays out the landscape of patents related to video coding. This provides a foundation on which royalty-free video codec design can take place.

  5. P2P Video Streaming Strategies based on Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    F.A. López-Fuentes

    2015-02-01

    Full Text Available Video streaming over the Internet has gained significant popularity during the last years, and academia and industry have invested great research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard to provide more functionality to video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated video quality to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality among different videos from different sources. The obtained results show that our proposed strategies enable a system to improve its performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.
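
In its simplest form, the first strategy (differentiated quality per peer capacity) reduces to truncating the SVC layer stack per peer: each peer subscribes to the highest layer whose cumulative rate fits its download capacity. The layer rates and peer capacities below are made-up numbers for illustration, not figures from the paper:

```python
# Cumulative bitrate (kbps) of an SVC stream up to and including each layer:
# base layer plus three enhancement layers.
LAYER_RATES_KBPS = [400, 800, 1600, 3200]

def layers_for_peer(capacity_kbps):
    """Highest SVC layer whose cumulative rate fits within the peer's capacity.

    Returns 0 when even the base layer does not fit.
    """
    best = 0
    for layer, rate in enumerate(LAYER_RATES_KBPS, start=1):
        if rate <= capacity_kbps:
            best = layer
    return best

# Heterogeneous peers each get a different truncation of the same stream.
assignments = {peer: layers_for_peer(cap)
               for peer, cap in {"mobile": 500, "dsl": 2000, "fiber": 10000}.items()}
```

Because enhancement layers only refine the base layer, dropping the upper layers for a slow peer degrades quality gracefully instead of breaking playback, which is what makes SVC attractive for heterogeneous P2P swarms.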

  6. Overview of MPEG internet video coding

    Science.gov (United States)

    Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.

    2015-09-01

    MPEG has produced standards that have provided the industry with the best video compression technologies. In order to address the diversified needs of the Internet, MPEG issued the Call for Proposals (CfP) for internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free-of-charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three different codecs responded to the CfP: WVC, VCB and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom, RIM and others; it is in fact AVC Baseline. VCB was proposed by Google; it is in fact VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University, Korea Aerospace University and others) and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB and AVC High Profile.

  7. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm that exploits the source statistics at the decoder based on the availability of the Side Information (SI). Stereo sequences are constituted by two views to give the user an illusion of depth. In this paper, we present a DVC decoder...

  8. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low complexity encoding by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame by frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG-LS and H.264 Intra frame lossless coding, and does so as a scalable-to-lossless coding.

  9. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way...... the quality of the predictions has a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation, targeting improvement of the side information of a single...

  10. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools' compression efficiency and computational complexity.  Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard.  The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  11. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  12. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  13. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

    Full Text Available High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra-high-definition videos. One of the important features in HEVC is the adoption of a quad-tree based video coding structure, in which each incoming frame is represented as a set of non-overlapped coding tree blocks (CTBs) through a variable-block-sized prediction and coding process. To do this, each CTB is recursively partitioned into coding units (CUs), prediction units (PUs) and transform units (TUs) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of the quad-tree partition for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the possible maximum depth of the quad-tree partition of the CTB. With the constrained partition depth, the proposed method can reduce encoding time considerably. Experimental results with HM10.1 show an average time saving of about 13.4% with an increase in BD-Rate of only 0.02%, a smaller performance degradation than that of other similar methods.
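The edge-strength idea in the abstract can be illustrated with a short sketch. Below, a hypothetical Python fragment computes the mean Sobel gradient magnitude of a CTB and maps it to a maximum allowed quad-tree depth; the Sobel kernels are standard, but the threshold values are illustrative placeholders, not those of the cited paper.

```python
import numpy as np

def sobel_edge_strength(ctb: np.ndarray) -> float:
    """Mean Sobel gradient magnitude over the interior of a CTB."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = ctb.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = ctb[i:i + 3, j:j + 3].astype(float)
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return float(np.mean(np.hypot(gx, gy)))

def max_partition_depth(strength: float, thresholds=(8.0, 32.0, 96.0)) -> int:
    """Map edge strength to the maximum allowed quad-tree depth (0..3).
    The thresholds are hypothetical, for illustration only."""
    depth = 0
    for t in thresholds:
        if strength > t:
            depth += 1
    return depth
```

A flat CTB yields depth 0 (no splitting needed), while a CTB with a strong edge is allowed to split to full depth.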

  14. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  15. Motion estimation techniques for digital video coding

    CERN Document Server

    Metkar, Shilpa

    2013-01-01

    The book deals with the development of a methodology to estimate the motion field between two frames for video coding applications. This book proposes an exhaustive study of the motion estimation process in the framework of a general video coder. The conceptual explanations are discussed in a simple language and with the use of suitable figures. The book will serve as a guide for new researchers working in the field of motion estimation techniques.

  16. Error Transmission in Video Coding with Gaussian Noise

    Directory of Open Access Journals (Sweden)

    A Purwadi

    2015-06-01

    Full Text Available In video transmission, there is a possibility of packet loss and large load variation in the bandwidth. These are sources of network congestion, which can interfere with the communication data rate. The coding system used is a video coding standard, either MPEG-2 or H.263 with SNR scalability. The algorithms used for motion compensation, temporal redundancy and spatial redundancy are the Discrete Cosine Transform (DCT) and quantization. The transmission error is simulated by adding Gaussian noise (error) to the motion vectors. The simulation results show that the SNR and Peak Signal-to-Noise Ratio (PSNR) of the noisy video frames decline by an average of 3 dB, and the Mean Square Error (MSE) of the received video frames increases.
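The quality measures used in the abstract can be sketched as follows. This is a generic illustration, assuming 8-bit frames; the `perturb_motion_vectors` helper is a hypothetical stand-in for the paper's Gaussian-noise error model on motion vectors.

```python
import numpy as np

def mse(ref: np.ndarray, rec: np.ndarray) -> float:
    """Mean Square Error between reference and reconstructed frames."""
    return float(np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB (infinite for identical frames)."""
    m = mse(ref, rec)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)

def perturb_motion_vectors(mvs: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Model transmission error by adding zero-mean Gaussian noise to motion vectors."""
    rng = np.random.default_rng(seed)
    return mvs + rng.normal(0.0, sigma, size=mvs.shape)
```

For example, a uniform error of 10 gray levels gives an MSE of 100 and a PSNR of roughly 28.1 dB.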

  17. Coding Transparency in Object-Based Video

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation...... are exploited. Coding results show the superior efficiency of the proposed scheme compared with MPEG-4...

  18. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

    Full Text Available Mobile video streaming is one of the multimedia services that has developed very rapidly. Recently, bandwidth utilization for wireless transmission has been the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as an attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The ITU (International Telecommunication Union) standard Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in using bit rate capacity at a certain layer.

  19. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    at the decoder side offering such benefits for these applications. Although there have been some advanced improvement techniques, improving the DVC coding efficiency is still challenging. The thesis addresses this challenge by proposing several iterative algorithms at different working levels, e.g. bitplane......, band, and frame levels. In order to show the information theoretic basis, theoretical foundations of DVC are introduced. The first proposed algorithm applies parallel iterative decoding using multiple LDPC decoders to utilize cross bitplane correlation. To improve Side Information (SI) generation...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...

  20. Adaptive subband coding of full motion video

    Science.gov (United States)

    Sharifi, Kamran; Xiao, Leping; Leon-Garcia, Alberto

    1993-10-01

    In this paper a new algorithm for digital video coding is presented that is suitable for digital storage and video transmission applications in the range of 5 to 10 Mbps. The scheme is based on frame differencing and, unlike recent proposals, does not employ motion estimation and compensation. A novel adaptive grouping structure is used to segment the video sequence into groups of frames of variable sizes. Within each group, the frame difference is taken in a closed-loop Differential Pulse Code Modulation (DPCM) structure and then decomposed into different frequency subbands. The important subbands are transformed using the Discrete Cosine Transform (DCT) and the resulting coefficients are adaptively quantized and run-length coded. The adaptation is based on the variance of sample values in each subband. To reduce the computational load, a very simple and efficient method is used to estimate the variance of the subbands. It is shown that for many types of sequences, the performance of the proposed coder is comparable to that of coding methods which use motion parameters.

  1. Performance evaluation of nonscalable MPEG-2 video coding

    Science.gov (United States)

    Schmidt, Robert L.; Puri, Atul; Haskell, Barry G.

    1994-09-01

    The second phase of the ISO Moving Picture Experts Group audio-visual coding standard (MPEG-2) is nearly complete, and this standard is expected to be used in a wide range of applications at a variety of bitrates. While the standard specifies the syntax of the compressed bitstream and the semantics of the decoding process, it allows considerable flexibility in the choice of encoding parameters and options, enabling appropriate tradeoffs in performance versus complexity as might be suitable for an application. First, we present a review of the profile and level structure in MPEG-2, which is the key to enabling use of coding tools in MPEG-2. Next, we include a brief review of tools for nonscalable coding within the MPEG-2 standard. Finally, we investigate via simulations the tradeoffs in coding performance with the choice of various parameters and options so that, within the encoder complexity that can be afforded, an encoder design with good performance tradeoffs can be accomplished. Simulations are performed on standard TV and HDTV resolution video of various formats and at many bitrates using the nonscalable (single layer) video coding tools of the MPEG-2 standard.

  2. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmission. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
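The superposition-coded modulation mentioned above can be illustrated with a minimal two-layer BPSK sketch; the power split `alpha` and the successive-interference-cancellation receiver are generic textbook choices, not details of the L-SPC prototype.

```python
import numpy as np

def spc_modulate(base_bits: np.ndarray, enh_bits: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Superpose a base-layer BPSK signal (power alpha) on an
    enhancement-layer BPSK signal (power 1 - alpha)."""
    b = np.where(base_bits, 1.0, -1.0) * np.sqrt(alpha)
    e = np.where(enh_bits, 1.0, -1.0) * np.sqrt(1.0 - alpha)
    return b + e

def spc_demodulate(y: np.ndarray, alpha: float = 0.8):
    """Decode the stronger base layer first, then cancel it and decode
    the enhancement layer (successive interference cancellation)."""
    base = y > 0
    residual = y - np.where(base, 1.0, -1.0) * np.sqrt(alpha)
    enh = residual > 0
    return base, enh
```

With a noiseless channel and alpha > 0.5, both layers round-trip exactly; a weaker receiver could decode only the base layer, which is the service-granularity property SCM exploits.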

  3. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frames independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from the encoder to the decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the DVC principles to camera networks. Thanks to its reversed coding paradigm, M-DVC enables the exploitation of inter-camera redundancy without inter-camera communication, because the frames are encoded independently. One of the key elements in DVC is the Side Information (SI), which is an estimation...

  4. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

    The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding makes it possible to adapt an encoded video sequence to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams has only recently emerged. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how we can model applications that use scalable bitstreams by means of definitions that are built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. This way, we demonstrate that it is possible to define an abstract model for scalable bitstreams that can be used as a tool for reasoning about such bitstreams and related applications.

  5. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a video coding technique to decrease the power consumption at a standard video decoder. Coupled with a small dedicated video internal memory cache on a decoder, the technique can substantially decrease the amount of data traffic to the external memory at the decoder. A decrease in data traffic to the external memory at the decoder will result in multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders, offering power savings with the best quality-power cost trade-off. Our simulation results show that with a relatively small amount of dedicated video internal memory cache, the technique may decrease the traffic between the CPU and external memory by over 50%.

  6. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms fall into two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  7. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high-resolution sequences, for which video coding efficiency is crucial in real applications.
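The confidence-interval early-termination idea can be sketched generically. The fragment below skips the full RD search for a partition mode when the lower bound of its predicted-cost confidence interval already exceeds the best cost found so far; the z-value and the way predicted costs are obtained are illustrative assumptions, not the paper's RD model.

```python
import math
import statistics

def ciet_skip(predicted_costs, best_cost_so_far, z=1.96):
    """Early-terminate a partition mode: return True when the lower bound of
    the confidence interval around the predicted RD costs already exceeds
    the best cost found so far, so the full encoding pass can be skipped."""
    if len(predicted_costs) < 2:
        return False  # not enough samples to form an interval
    mean = statistics.fmean(predicted_costs)
    half_width = z * statistics.stdev(predicted_costs) / math.sqrt(len(predicted_costs))
    return (mean - half_width) > best_cost_so_far
```

A mode whose predicted costs cluster around 100 is safely skipped when a cost of 50 has already been achieved, but not when the best so far is 200.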

  8. Wyner-Ziv Bayer-pattern video coding

    OpenAIRE

    Chen, Hu

    2013-01-01

    This thesis addresses the problem of Bayer-pattern video communications using Wyner-Ziv video coding. There are three major contributions. Firstly, a state-of-the-art Wyner-Ziv video codec using turbo codes is optimized and its functionality is extended. Secondly, it is studied how to realize joint source-channel coding using Wyner-Ziv video coding. The motivation is to achieve high error resiliency for wireless video transmission. Thirdly, a new color space transform method is proposed speci...

  9. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four blocks of equal size, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed, without significant loss of image quality.
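The recursive split-or-not decision described above can be illustrated with a toy sketch. Here `cost(depth)` is a hypothetical stand-in for the encoder's RD check of one CU at a given depth; the real HEVC decision also weighs PU/TU choices and merge/SKIP detection, which the sketch omits.

```python
def best_partition(cost, depth=0, max_depth=3):
    """Toy recursive CU split decision: code the CU whole at this depth, or
    split into four sub-CUs, keeping whichever has the lower total RD cost.
    Returns (total_cost, tree), where tree is a depth or a list of subtrees."""
    whole = cost(depth)
    if depth == max_depth:
        return whole, depth
    split_total, children = 0.0, []
    for _ in range(4):
        c, t = best_partition(cost, depth + 1, max_depth)
        split_total += c
        children.append(t)
    return (split_total, children) if split_total < whole else (whole, depth)
```

With a cost profile that drops sharply at depth 1 and then flattens, the decision splits once and stops; with a flat profile, no split is made.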

  10. Coding B-Frames of Color Videos with Fuzzy Transforms

    Directory of Open Access Journals (Sweden)

    Ferdinando Di Martino

    2013-01-01

    Full Text Available We use a new method based on discrete fuzzy transforms for coding/decoding frames of color videos in which we determine the GOP sequences dynamically. Frames are differentiated into intraframes, predictive frames, and bidirectional frames, and we consider particular frames, called Δ-frames (resp., R-frames), for coding P-frames (resp., B-frames) by using two similarity measures based on the Lukasiewicz t-norm; moreover, a preprocessing phase is proposed to determine similarity thresholds for classifying the above types of frames. The proposed method provides acceptable results in terms of the quality of the reconstructed videos, to a certain extent, when compared with the classical F-transforms-based method and the standard MPEG-4.

  11. Multiresolution coding for video-based service applications

    Science.gov (United States)

    Gharavi, Hami

    1995-12-01

    The video coding and distribution approach presented in this paper has two key characteristics that make it ideal for integration of video communication services over common broadband digital networks. The modular multi-resolution nature of the coding scheme provides the necessary flexibility to accommodate future advances in video technology as well as robust distribution over various network environments. This paper will present an efficient and scalable coding scheme for video communications. The scheme is capable of encoding and decoding video signals in a hierarchical, multilayer fashion to provide video at differing quality grades. Subsequently, the utilization of this approach to enable efficient bandwidth sharing and robust distribution of video signals in multipoint communications is presented. Coding and distribution architectures are discussed which include multi-party communications in a multi-window fashion within ATM environments. Furthermore, under the limited capabilities typical of wideband/broadband access networks, this architecture accommodates important video-based service applications such as Interactive Distance Learning.

  12. Low Bit Rate Video Coding | Mishra | Nigerian Journal of Technology

    African Journals Online (AJOL)

    ... length bit rate (VLBR) broadly encompasses video coding which mandates a temporal frequency of 10 frames per second (fps) or less. Object-based video coding represents a very promising option for VLBR coding, though the problems of object identification and segmentation need to be addressed by further research.

  13. 3D video coding for embedded devices energy efficient algorithms and architectures

    CERN Document Server

    Zatt, Bruno; Bampi, Sergio; Henkel, Jörg

    2013-01-01

    This book shows readers how to develop energy-efficient algorithms and hardware architectures to enable high-definition 3D video coding on resource-constrained embedded devices.  Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D video-specific coding tools for increasing compression efficiency at the cost of increasing computational complexity and, consequently, the energy consumption.  This book enables readers to reduce the multiview video coding energy consumption through jointly considering the algorithmic and architectural levels.  Coverage includes an introduction to 3D videos and an extensive discussion of the current state-of-the-art of 3D video coding, as well as energy-efficient algorithms for 3D video coding and energy-efficient hardware architecture for 3D video coding.     ·         Discusses challenges related to performance and power in 3D video coding for embedded devices; ·         Describes energy-efficient algorithms for reduci...

  14. Interlayer Simplified Depth Coding for Quality Scalability on 3D High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available A quality scalable extension design is proposed for the upcoming 3D video coding on the emerging High Efficiency Video Coding (HEVC) standard. A novel interlayer simplified depth coding (SDC) prediction tool is added to reduce the number of bits for depth map representation by exploiting the correlation between coding layers. To further improve the coding performance, the coded prediction quadtree and texture data from corresponding SDC-coded blocks in the base layer can be used in interlayer simplified depth coding. In the proposed design, the multiloop decoder solution is also extended into the proposed scalable scenario for texture views and depth maps, achieved by the interlayer texture prediction method. The experimental results indicate that an average Bjøntegaard Delta bitrate decrease of 54.4% can be gained with the interlayer simplified depth coding prediction tool on the multiloop decoder solution compared with simulcast. Consequently, significant rate savings confirm that the proposed method achieves better performance.

  15. Error resilience technique for video coding using concealment

    Science.gov (United States)

    Li, Rong; Yu, Sheng-sheng; Zhu, Li

    2009-10-01

    Traditional error resilience techniques have been widely used in video coding. Many studies have shown that, with the help of these techniques, the video coding bit stream can be protected and the reconstructed image quality substantially improved. In this paper, we review error resilience for video coding and present experiments with this technology. These techniques are based on coding simultaneously for synchronization and error protection or detection. We apply the techniques to improve the performance of the multiplexing protocol and also to improve the robustness of the coded video. The techniques proposed for video also have the advantage of simple transcoding of bit streams complying with H.263.

  16. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves a multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successively higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement-layer signal. The second structure modifies the prediction so that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains of up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
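A minimal one-level, 1-D Laplacian pyramid shows the base-plus-residual decomposition the abstract builds on. The averaging downsampler and sample-repetition upsampler are deliberately simple (and not the JSVM filters); with them, base plus residual reconstructs the signal exactly.

```python
import numpy as np

def lp_analysis(x: np.ndarray):
    """One-level 1-D Laplacian pyramid: average-downsample to the base layer,
    upsample by sample repetition, and keep the residual as the enhancement layer."""
    base = x.reshape(-1, 2).mean(axis=1)   # base layer at half resolution
    detail = x - np.repeat(base, 2)        # enhancement-layer residual
    return base, detail

def lp_synthesis(base: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """Reconstruct the full-resolution signal from base + residual."""
    return np.repeat(base, 2) + detail
```

The interlayer decorrelation structures in the paper then modify either the base layer or the prediction to reduce the correlation between `base` and `detail`, which this sketch does not reproduce.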

  17. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    Science.gov (United States)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach to using depth information for advanced coding of associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that have been developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of coded video data by 15% on average, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the depth-based video coding concept presented in this paper has been further developed by MPEG 3DV and JCT-3V, and this work has resulted in even higher compression efficiency, bringing about a 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering the significant gains, the coding approach proposed in this paper can be beneficial for the development of new 3D video coding standards.

  18. Expressing Youth Voice through Video Games and Coding

    Science.gov (United States)

    Martin, Crystle

    2017-01-01

    A growing body of research focuses on the impact of video games and coding on learning. The research often elevates learning the technical skills associated with video games and coding or the importance of problem solving and computational thinking, which are, of course, necessary and relevant. However, the literature less often explores how young…

  19. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time consum...

  20. A comparative study of scalable video coding schemes utilizing wavelet technology

    Science.gov (United States)

    Schelkens, Peter; Andreopoulos, Yiannis; Barbarien, Joeri; Clerckx, Tom; Verdicchio, Fabio; Munteanu, Adrian; van der Schaar, Mihaela

    2004-02-01

    Video transmission over variable-bandwidth networks requires instantaneous bit-rate adaptation at the server site to provide an acceptable decoding quality. For this purpose, recent developments in video coding aim at providing a fully embedded bit-stream with seamless adaptation capabilities in bit-rate, frame-rate and resolution. A new promising technology in this context is wavelet-based video coding. Wavelets have already demonstrated their potential for quality and resolution scalability in still-image coding. This led to the investigation of various schemes for the compression of video, exploiting similar principles to generate embedded bit-streams. In this paper we present scalable wavelet-based video-coding technology with competitive rate-distortion behavior compared to standardized non-scalable technology.

  1. Joint distributed source-channel coding for 3D videos

    Science.gov (United States)

    Palma, Veronica; Cancellaro, Michela; Neri, Alessandro

    2011-03-01

This paper presents a distributed joint source-channel 3D video coding system. Our aim is the design of an efficient coding scheme for stereoscopic video communication over noisy channels that preserves perceived visual quality while guaranteeing low computational complexity. The drawback of using stereo sequences is the increased amount of data to be transmitted. Several methods are used in the literature for encoding stereoscopic video. A significantly different approach with respect to traditional video coding is Distributed Video Coding (DVC), which introduces a flexible architecture built around low-complexity video encoders. In this paper we propose a novel method for joint source-channel coding in a distributed approach. We choose turbo codes for our application and study this new setting of distributed joint source-channel coding of video. Turbo codes allow the minimum amount of data to be sent while guaranteeing near-capacity error-correcting performance. In this contribution, the mathematical framework is fully detailed, and the tradeoff between redundancy, perceived quality, and quality of experience is analyzed with the aid of numerical experiments.
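The Slepian-Wolf principle behind distributed coding can be sketched with a toy syndrome coder. The example below uses a (7,4) Hamming code and brute-force decoding purely for illustration, not the paper's turbo codes: the encoder sends only the 3 syndrome bits, and the decoder recovers the 7-bit source from its correlated side information.

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(x):
    return H.dot(x) % 2

def sw_decode(s, side_info):
    # Among all 7-bit words with syndrome s, pick the one closest in
    # Hamming distance to the decoder's side information.
    best, best_d = None, 8
    for bits in product([0, 1], repeat=7):
        x = np.array(bits)
        if np.array_equal(syndrome(x), s):
            d = int(np.sum(x != side_info))
            if d < best_d:
                best, best_d = x, d
    return best

source = np.array([1, 0, 1, 1, 0, 0, 1])
side = source.copy()
side[2] ^= 1                  # side information differs in one bit
s = syndrome(source)          # encoder transmits 3 syndrome bits, not 7
decoded = sw_decode(s, side)  # decoder recovers the source exactly
```

Because the code's minimum distance is 3, any side information within one bit of the source decodes correctly; practical DVC systems replace the exhaustive search with iterative turbo or LDPC decoding.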

  2. Video over DSL with LDGM Codes for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2016-05-01

    Full Text Available Digital Subscriber Line (DSL network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC, calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
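The computational simplicity claimed for LDGM codes comes from the sparsity of the generator: each parity bit is the XOR of only a few message bits. A minimal sketch of systematic LDGM encoding follows, with an ad hoc random construction chosen for illustration rather than the paper's actual code design:

```python
import numpy as np

rng = np.random.default_rng(7)

def make_ldgm(k, m, row_weight=3):
    # Sparse m x k generator part: each parity bit XORs only `row_weight`
    # message bits (ad hoc random construction, for illustration only).
    G = np.zeros((m, k), dtype=np.uint8)
    for i in range(m):
        G[i, rng.choice(k, size=row_weight, replace=False)] = 1
    return G

def ldgm_encode(G, msg):
    parity = G.dot(msg) % 2               # a handful of XORs per parity bit
    return np.concatenate([msg, parity])  # systematic codeword [msg | parity]

G = make_ldgm(k=16, m=8)
msg = rng.integers(0, 2, size=16, dtype=np.uint8)
codeword = ldgm_encode(G, msg)
```

Encoding cost grows only with the number of ones in G, which is why LDGM AL-FEC keeps the added latency low for interactive streaming.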

  3. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10% of full complexity) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BDPSNR is observed for 18 sequences when the target complexity is around 40%.
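The idea of mapping a time budget onto prediction modes can be sketched as a greedy allocation. The mode names, time costs, and benefit figures below are hypothetical placeholders invented for the example, not the paper's offline statistics:

```python
# Hypothetical per-CU mode combinations: (name, time cost in arbitrary
# units, relative RD benefit). All numbers are placeholders.
MODES = [("SKIP", 5, 0.0), ("MERGE", 15, 0.3),
         ("2Nx2N", 40, 0.6), ("FULL_RDO", 100, 1.0)]

def select_modes(n_cus, time_budget):
    """Greedy sketch: start every coding unit at the cheapest mode, then
    spend the remaining time budget upgrading CUs to richer mode sets."""
    choice = [0] * n_cus
    spent = n_cus * MODES[0][1]
    upgraded = True
    while upgraded:
        upgraded = False
        for i in range(n_cus):
            if choice[i] + 1 < len(MODES):
                extra = MODES[choice[i] + 1][1] - MODES[choice[i]][1]
                if spent + extra <= time_budget:
                    choice[i] += 1
                    spent += extra
                    upgraded = True
    return [MODES[c][0] for c in choice], spent

modes, used = select_modes(n_cus=4, time_budget=160)
```

A real controller would additionally re-estimate the time costs online and, as in the paper, sort candidate modes adaptively instead of using a fixed order.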

  4. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard, developed jointly by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras in a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers, the first being the full and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
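The homogeneity gate can be sketched as a spread measure over the partition motion vectors of one macroblock. This is a loose illustration of the decision, not the paper's exact criterion, and the threshold is a made-up placeholder:

```python
import numpy as np

def motion_homogeneous(partition_mvs, thresh=1.0):
    """Decide whether a macroblock's enclosed partitions move coherently;
    if so, the costly inter-view prediction search can be skipped.
    The threshold is a made-up placeholder, not a tuned value."""
    mvs = np.asarray(partition_mvs, dtype=float)   # (n_partitions, 2)
    spread = float(mvs.std(axis=0).sum())          # x spread + y spread
    return spread < thresh

coherent = motion_homogeneous([(1, 0), (1, 0), (1, 1), (1, 0)])
scattered = motion_homogeneous([(8, 0), (-7, 3), (0, 9), (2, -6)])
```

A macroblock whose partitions all agree (coherent motion) keeps only temporal prediction; a scattered field triggers the full inter-view search.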

  5. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. The performance of DVC highly depends on the quality of side information. With a better side information generation method, fewer bits will be requested from the encoder and more reliable decoded frames will be obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. This algorithm consists of a variable block size based Y, U and V component motion estimation and an adaptive weighted overlapped block motion compensation (OBMC). The proposal is tested and compared with the results of an executable DVC codec released by the DISCOVER group (DIStributed COding for Video sERvices). RD...

  6. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  7. Scalable video transmission over Rayleigh fading channels using LDPC codes

    Science.gov (United States)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
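The Lagrangian selection of source and channel coding rates can be sketched over a toy table of operating points. The rates and distortions below are invented for illustration; the actual system derives them from the 3-D SPIHT bitstream and channel statistics:

```python
# Toy operating-point table: (source_kbps, channel_code_rate, total_kbps,
# expected_distortion). All numbers are invented for illustration.
OPTIONS = [
    (100, 1 / 2, 200, 40.0),
    (150, 2 / 3, 225, 32.0),
    (200, 2 / 3, 300, 28.0),
    (200, 1 / 2, 400, 26.5),
]

def best_operating_point(lmbda):
    # Lagrangian R-D selection: minimize D + lambda * R over the table.
    return min(OPTIONS, key=lambda o: o[3] + lmbda * o[2])

low_lambda = best_operating_point(0.001)   # favors low distortion
high_lambda = best_operating_point(1.0)    # favors low total rate
```

Sweeping the multiplier traces out the rate-distortion curve; the paper performs this optimization jointly over source and channel rates for each protection class.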

  8. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

Full Text Available A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics for visual perception analysis. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in the visual perception analysis compared with other existing algorithms; it also has better saliency detection performance for videos and can realize fast saliency detection. It can be used as part of a video standard codec at medium-to-low bit-rates or combined with other algorithms in fast video coding.

  9. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing, that allow the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  10. Codes and Standards Technical Team Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-06-01

    The Hydrogen Codes and Standards Tech Team (CSTT) mission is to enable and facilitate the appropriate research, development, & demonstration (RD&D) for the development of safe, performance-based defensible technical codes and standards that support the technology readiness and are appropriate for widespread consumer use of fuel cells and hydrogen-based technologies with commercialization by 2020. Therefore, it is important that the necessary codes and standards be in place no later than 2015.

  11. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

Full Text Available Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need to transmit any overhead.
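The core of least-square prediction, fitting the prediction weights to the causal past instead of fixing them in advance, can be sketched in one dimension. The actual method operates on spatio-temporal pixel neighborhoods; this toy version predicts the next sample of a sequence:

```python
import numpy as np

def lsp_predict(past, order=3):
    """Backward-adaptive prediction: fit weights a by least squares so
    that x[t] ~= sum_i a[i] * x[t-i] over the causal training window,
    then apply them to predict the next sample. No weights need to be
    transmitted, because the decoder can repeat the same fit."""
    X = np.array([past[t - order:t][::-1] for t in range(order, len(past))])
    y = np.array(past[order:])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.dot(a, past[-order:][::-1]))

ramp = list(range(12))    # a linear ramp: perfectly predictable
pred = lsp_predict(ramp)  # ~= 12.0
```

Because both encoder and decoder see the same causal past, this adaptation is backward (no overhead), which is the property the abstract emphasizes.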

  12. Context based Coding of Quantized Alpha Planes for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2002-01-01

In object based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties. Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4.

  13. EZBC video streaming with channel coding and error concealment

    Science.gov (United States)

    Bajic, Ivan V.; Woods, John W.

    2003-06-01

In this text we present a system for streaming video content encoded using the motion-compensated Embedded Zero Block Coder (EZBC). The system incorporates unequal loss protection in the form of multiple description FEC (MD-FEC) coding, which provides adequate protection for the embedded video bitstream when the loss process is not very bursty. The adverse effects of burst losses are reduced using a novel motion-compensated error concealment method.

  14. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... shape layer is processed by a novel video shape coder. In intra mode, the DSLSC binary image coder presented in is used. This is extended here with an intermode utilizing temporal redundancies in shape image sequences. Then the opaque layer is compressed by a newly designed scheme which models...

  15. Video coding for 3D-HEVC based on saliency information

    Science.gov (United States)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new-generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame image, the texture and its corresponding depth map are divided into three regions: salient area, middle area, and non-salient area. Afterwards, different quantization parameters are assigned to the different regions to conduct low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to 38% encoding time reduction without subjective quality loss in compression or rendering.

  16. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code with only 128 bits.
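A stripped-down sketch of the covariance-plus-binarization pipeline follows. Random projections stand in for the learned max-margin hash functions of the actual CVC, and the 5-D features are placeholders for real face descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)

def video_code(frames, projections):
    """Sketch: model a face track by the covariance of its frame features,
    then binarize linear projections of the vectorized covariance."""
    F = np.asarray(frames, dtype=float)      # (n_frames, feature_dim)
    C = np.cov(F, rowvar=False)              # track-level covariance
    v = C[np.triu_indices_from(C)]           # vectorized upper triangle
    return (projections.dot(v) > 0).astype(np.uint8)  # one bit per row

dim, n_bits = 5, 16
# Random projections as stand-ins for the learned hash functions.
proj = rng.normal(size=(n_bits, dim * (dim + 1) // 2))
track = rng.normal(size=(20, dim))           # 20 frames of 5-D features
code = video_code(track, proj)               # compact binary signature
# Retrieval would rank tracks by Hamming distance between such codes.
```

The covariance step makes the signature independent of track length and frame order, and the binary code makes matching a cheap Hamming-distance comparison.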

  17. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically, the impact of different power allocation ratios on the received signal. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.
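Superposition coding itself can be sketched in a few lines. With the roughly 4:1 power split suggested by the experiments, the base layer decodes first and the enhancement layer is recovered by successive cancellation; this is a noise-free toy example, not the SDR setup of the paper:

```python
import numpy as np

def superpose(base_syms, enh_syms, base_frac=0.8):
    # Scale each layer by the square root of its power share and add:
    # base_frac = 0.8 gives the base layer ~4x the enhancement power.
    return (np.sqrt(base_frac) * base_syms
            + np.sqrt(1.0 - base_frac) * enh_syms)

base = np.array([1, -1, 1, -1], dtype=float)  # BPSK base-layer symbols
enh = np.array([1, 1, -1, -1], dtype=float)   # BPSK enhancement symbols
rx = superpose(base, enh)

# Successive cancellation at the receiver (noise-free toy case):
base_hat = np.sign(rx)                           # base decodes by sign
enh_hat = np.sign(rx - np.sqrt(0.8) * base_hat)  # subtract, then decode
```

The asymmetric split is what protects the SVC base layer: under channel noise the heavily powered base layer survives at SNRs where the enhancement layer is lost, degrading quality gracefully.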

  18. SBASIC video coding and its 3D-DCT extension for MPEG-4 multimedia

    Science.gov (United States)

    Puri, Atul; Schmidt, Robert L.; Haskell, Barry G.

    1996-02-01

Due to the need to interchange video data in a seamless and cost effective manner, interoperability between applications, terminals and services has become increasingly important. The ISO Moving Picture Experts Group (MPEG) has developed the MPEG-1 and MPEG-2 audio-visual coding standards to meet these challenges; these standards allow a range of applications at bitrates from 1 Mbit/s to 100 Mbit/s. However, in the meantime, a new breed of applications has arisen which demands higher compression, more interactivity and increased error resilience. These applications are expected to be addressed by the next phase standard, called MPEG-4, which is currently in progress. We discuss the various functionalities expected to be offered by the MPEG-4 standard, along with the development plan and the framework used for evaluation of video coding proposals in the recent first evaluation tests. Having clarified the requirements, functionalities and the development process of MPEG-4, we propose a generalized approach for video coding referred to as adaptive scalable interframe coding (ASIC) for MPEG-4. Using this generalized approach we develop a video coding scheme suitable for MPEG-4 based multimedia applications in the bitrate range of 320 kbit/s to 1024 kbit/s. The proposed scheme is referred to as source and bandwidth adaptive scalable interframe coding (SBASIC) and builds not only on the proven framework of motion compensated DCT coding and scalability but also introduces several new concepts. The SNR and MPEG-4 subjective evaluation results are presented to show the good performance achieved by SBASIC. Next, the extension of SBASIC by motion compensated 3D-DCT coding is discussed. It is envisaged that this extension, when complete, will further improve the coding efficiency of SBASIC.

  19. Distributed source coding of video with non-stationary side-information

    NARCIS (Netherlands)

    Meyer, P.F.A.; Westerlaken, R.P.; Klein Gunnewiek, R.; Lagendijk, R.L.

    2005-01-01

    In distributed video coding, the complexity of the video encoder is reduced at the cost of a more complex video decoder. Using the principles of Slepian andWolf, video compression is then carried out using channel coding principles, under the assumption that the video decoder can temporally predict

  20. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  1. Two-description distributed video coding for robust transmission

    Directory of Open Access Journals (Sweden)

    Zhao Yao

    2011-01-01

Full Text Available In this article, a two-description distributed video coding (2D-DVC) scheme is proposed to address robust video transmission for low-power capturers. Odd/even frame-splitting partitions a video into two sub-sequences to produce two descriptions. Each description consists of two parts: part 1 is a zero-motion based H.264-coded bitstream of one sub-sequence and part 2 is a Wyner-Ziv (WZ)-coded bitstream of the other sub-sequence. As the redundant part, the WZ-coded bitstream guarantees that the lost sub-sequence is recovered when one description is lost. On the other hand, this redundancy degrades the rate-distortion performance when no loss occurs. A residual 2D-DVC is employed to further improve the rate-distortion performance, where the difference of the two sub-sequences is WZ encoded to generate part 2 in each description. Furthermore, an optimization method is applied to control the appropriate amount of redundancy and therefore facilitate the tuning of the central/side distortion tradeoff. The experimental results show that the proposed schemes achieve better performance than the reference scheme, especially for low-motion videos. Moreover, our schemes still maintain the low-complexity encoding property.

  2. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

Full Text Available In distributed video coding, the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (the DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both of the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in both cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  3. Scene-aware joint global and local homographic video coding

    Science.gov (United States)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
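The eight-parameter homography model at the heart of this approach can be sketched directly: a 3x3 matrix with the bottom-right entry fixed to 1 warps pixel coordinates in homogeneous form. The example below uses a pure translation as a degenerate case; the paper's global parameters come from camera estimation rather than being given:

```python
import numpy as np

def warp_points(H, coords):
    """Apply an eight-parameter homography (3x3 matrix with H[2, 2] = 1)
    to pixel coordinates, as used for global motion prediction."""
    pts = np.hstack([coords, np.ones((len(coords), 1))])  # homogeneous
    w = pts.dot(H.T)
    return w[:, :2] / w[:, 2:3]                           # dehomogenize

# A pure translation by (2, 3) is a degenerate homography:
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [4.0, 1.0]])
warped = warp_points(H, pts)
```

Encoding the camera parameters once per frame and only the three plane parameters per block is what keeps the bit overhead low compared with signaling a full homography per block.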

  4. Standardized Definitions for Code Verification Test Problems

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-14

This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack, www.github.com/lanl/exactpack.

  5. Codes & standards research, development & demonstration Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-07-22

    This Roadmap is a guide to the Research, Development & Demonstration activities that will provide data required for SDOs to develop performance-based codes and standards for a commercial hydrogen fueled transportation sector in the U.S.

  6. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    Science.gov (United States)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

Weighted prediction (WP) is an efficient video coding tool introduced with the establishment of the H.264/AVC video coding standard, for compensating the temporal illumination change in motion estimation and compensation. WP parameters, including a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. Therefore, WP parameter prediction is crucial to research works and applications related to WP. Prior art has suggested further improving WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
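The WP model itself, one multiplicative weight and one additive offset per reference frame, can be fitted by ordinary least squares. This is a minimal sketch of parameter estimation; real encoders must additionally quantize w and o to the precision the standard's slice-header syntax allows:

```python
import numpy as np

def estimate_wp(cur, ref):
    """Least-squares fit of the weighted-prediction model
    cur ~= w * ref + o, yielding one weight and one offset for the
    reference frame (e.g. to compensate a fade or illumination change)."""
    A = np.column_stack([ref.ravel(), np.ones(ref.size)])
    (w, o), *_ = np.linalg.lstsq(A, cur.ravel(), rcond=None)
    return w, o

ref = np.arange(16, dtype=float).reshape(4, 4)
cur = 0.5 * ref + 10.0        # a global fade: half the gain plus offset 10
w, o = estimate_wp(cur, ref)  # recovers w ~= 0.5, o ~= 10
```

Predicting (w, o) from temporally or interlayer co-located slices, as the abstract proposes, means only the prediction residual of these two numbers needs to be signaled.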

  7. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... National Institute of Standards and Technology International Code Council: The Update Process for the International Codes and Standards AGENCY: National Institute of Standards and Technology, Commerce. ACTION: Notice. SUMMARY: The International Code Council (ICC), promulgator of the International Codes and...

  8. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during decoding. A residual refinement step is also introduced to take advantage of correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC, and for GOP=2 bit-rate savings of up to 35% on WZ frames are achieved compared with DISCOVER.

  9. Robust video transmission with distributed source coded auxiliary channel.

    Science.gov (United States)

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  10. Region-of-interest based rate control for UAV video coding

    Science.gov (United States)

    Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying

    2016-05-01

    To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAV) over low-bandwidth links, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to the encoder, built on the latest High Efficiency Video Coding (HEVC) standard, to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level to avoid the inaccurate bit allocation produced by camera movement. Finally, the quantization parameter (QP) for each LCU is calculated using a more robust R-λ model. The experimental results show that the proposed RC method achieves a lower bitrate error and a higher quality for reconstructed video by choosing appropriate pixel weights on the HEVC platform.
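
    The two stages the abstract outlines (weighted bit allocation, then QP from an R-λ model) can be sketched as follows. The α/β values are typical initial parameters of HM-style R-λ rate control, not the paper's fitted ones, and the per-LCU ROI weights are an assumption of this sketch.

```python
import math

def allocate_lcu_bits(frame_bits, roi_weights):
    """Split a frame's bit budget across LCUs in proportion to
    per-LCU ROI weights (ROI LCUs get larger weights)."""
    total = sum(roi_weights)
    return [frame_bits * w / total for w in roi_weights]

def qp_from_bpp(bpp, alpha=3.2, beta=-1.367):
    """R-lambda model: lambda = alpha * bpp^beta, mapped to QP with
    the HM formula QP = 4.2005 * ln(lambda) + 13.7122."""
    lam = alpha * (bpp ** beta)
    return round(4.2005 * math.log(lam) + 13.7122)

print(allocate_lcu_bits(1000, [3, 1]))  # [750.0, 250.0]
print(qp_from_bpp(0.1))                 # 32
```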

  11. Distributed Coding/Decoding Complexity in Video Sensor Networks

    Science.gov (United States)

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  12. Depth-based Multi-View 3D Video Coding

    DEFF Research Database (Denmark)

    Zamarin, Marco

    …on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge-preservation. Another solution proposes a new intra coding mode targeted to depth blocks featuring arbitrarily-shaped edges. Edge information is encoded exploiting previously coded edge blocks. Integrated in H.264/AVC, the proposed mode allows significant bit rate savings compared with a number of state-of-the-art depth codecs. View synthesis performance is also improved, both in terms of objective and visual evaluations. Depth coding based on standard H.264/AVC is explored for multi-view plus depth image coding. A single depth map is used to disparity-compensate multiple views and allow more efficient coding than H.264 MVC at low bit rates. Lossless coding of depth…

  13. Seminar on building codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    A seminar was conducted for state building code officials and state energy officials to discuss the following: status of the states regulatory activities for energy conservation standards for buildings; the development, administration, and enforcement processes for energy conservation standards affecting new construction; lighting and thermal standards for existing buildings; status of the development and implementation of the Title III Program, Building Energy Performance Standards (BEPS); and current status of the State Energy Conservation Program. The welcoming address was given by John Wenning and the keynote address was delivered by John Millhone. Four papers presented were: Building Energy Performance Standards Development, James Binkley; Lighting Standards in Existing Buildings, Dorothy Cronheim; Implementation of BEPS, Archie Twitchell; Sanctions for Building Energy Performance Standards, Sue Sicherman.

  14. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm, which mainly exploits the source statistics at the decoder based on the availability of decoder side information. One approach to DVC is feedback channel based Transform Domain Wyner–Ziv (TDWZ) video coding. The efficiency of current TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects. Results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  15. Single-layer HDR video coding with SDR backward compatibility

    Science.gov (United States)

    Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.

    2016-09-01

    The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase of picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion, generating a backward-compatible SDR video with side dynamic metadata. The resulting SDR video is then compressed, distributed and decoded using standard-compliant decoders (e.g. HEVC Main 10 compliant). The decoded SDR video can be directly rendered on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic. Compression performance and SDR quality are shown to be solidly improved compared to the non-backward-compatible and backward-compatible approaches, which respectively use the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) Opto-Electronic Transfer Functions (OETFs).
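
    The invertible pre-/post-processing pair described above can be illustrated with a toy per-sample tone mapping. The actual SL-HDR1 curves and metadata are standardized and considerably more elaborate, so everything below is an illustrative assumption, not the scheme itself.

```python
def hdr_to_sdr(hdr, gamma=2.4, peak=1000.0):
    """Toy pre-processing: normalize an HDR luminance sample (in nits)
    by the peak and compress it with a power curve. Returns the SDR
    sample plus the dynamic 'metadata' needed for exact inversion."""
    sdr = (hdr / peak) ** (1.0 / gamma)
    return sdr, (gamma, peak)

def sdr_to_hdr(sdr, metadata):
    """Toy post-processing: the functional inverse of hdr_to_sdr,
    reconstructing the HDR sample from the SDR sample and metadata."""
    gamma, peak = metadata
    return (sdr ** gamma) * peak

# Round-trip: the SDR-compatible signal plus metadata recovers HDR.
sdr, meta = hdr_to_sdr(500.0)
print(round(sdr_to_hdr(sdr, meta), 6))  # 500.0
```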

  16. Application-adapted mobile 3D video coding and streaming — A survey

    Science.gov (United States)

    Liu, Yanwei; Ci, Song; Tang, Hui; Ye, Yun

    2012-03-01

    3D video technologies have gradually matured and are moving onto mobile platforms. In mobile environments, the specific characteristics of wireless networks and mobile devices present great challenges for 3D video coding and streaming, so application-adapted mobile 3D video coding and streaming technologies are urgently needed. Based on the mobile 3D video application framework, this paper reviews the state-of-the-art technologies of mobile 3D video coding and streaming. Specifically, the mobile 3D video formats and the corresponding coding methods are first reviewed, and then the streaming adaptation technologies, including 3D video transcoding, 3D video rate control and cross-layer optimized 3D video streaming, are surveyed.

  17. Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video

    Science.gov (United States)

    Boyce, Jill; Xu, Qian

    2017-09-01

    Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360° ×180° scene, can be encoded using conventional video compression standards, once it has been projection mapped to a 2D rectangular format. Equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding using HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experiment results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15 and 11.9% gain using JEM6.0, and an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
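
    For the pure-yaw case, the rotation the authors signal reduces to a horizontal shift with wrap-around in the equirectangular picture, since longitude maps linearly to the x coordinate; pitch or roll rotations would require a full 3-D rotation matrix. A minimal sketch (the function name and interface are ours, not the paper's):

```python
def rotate_erp_yaw(width, x, y, yaw_deg):
    """Map an equirectangular pixel (x, y) to its source position
    after rotating the sphere by yaw_deg about the vertical axis.
    Longitude is proportional to x, so a yaw rotation is simply a
    horizontal shift with wrap-around; the row y is unchanged."""
    shift = yaw_deg / 360.0 * width
    return (x + shift) % width, y

# A 90-degree yaw on a 3840-wide ERP picture shifts by a quarter width.
print(rotate_erp_yaw(3840, 0, 960, 90))  # (960.0, 960)
```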

  18. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  19. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  20. NRL Radar Division C++ Coding Standard

    Science.gov (United States)

    2016-12-05

    The coding standard provides tools aimed at helping C++ programmers develop programs that are: • free of common types of errors • maintainable by different programmers • portable to other operating systems • easy to read and understand. One benefit of const-declared parameters is that the compiler will actually give you an error if you modify such a parameter by mistake, thus helping you to avoid errors.

  1. Data Representation, Coding, and Communication Standards.

    Science.gov (United States)

    Amin, Milon; Dhir, Rajiv

    2015-06-01

    The immense volume of cases signed out by surgical pathologists on a daily basis gives little time to think about exactly how data are stored. An understanding of the basics of data representation has implications that affect a pathologist's daily practice. This article covers the basics of data representation and its importance in the design of electronic medical record systems. Coding in surgical pathology is also discussed. Finally, a summary of communication standards in surgical pathology is presented, including suggested resources that establish standards for select aspects of pathology reporting.

  2. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  3. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion, and the three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2…

  4. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptable reduced memory requirements, no additional CPU complexity, and avoiding jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.

  5. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Science.gov (United States)

    Parisot, Christophe; Antonini, Marc; Barlaud, Michel

    2003-12-01

    Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptable reduced memory requirements, no additional CPU complexity, and avoiding jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.
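
    One level of the temporal transform underlying such coders can be sketched with a Haar pair (per-pixel average and difference); a scan-based implementation would slide this over a short window of frames rather than buffering a whole 3D block, which is precisely the memory advantage claimed above. The Haar filter here is a stand-in for the paper's actual temporal filter.

```python
def temporal_haar(frames):
    """One level of a temporal Haar transform over pairs of frames
    (each frame a flat list of pixels): the low band is the per-pixel
    average, the high band the per-pixel difference."""
    lows, highs = [], []
    for a, b in zip(frames[0::2], frames[1::2]):
        lows.append([(x + y) / 2 for x, y in zip(a, b)])
        highs.append([y - x for x, y in zip(a, b)])
    return lows, highs

# Two 2-pixel frames -> one low-band frame and one high-band frame.
print(temporal_haar([[10, 20], [14, 24]]))  # ([[12.0, 22.0]], [[4, 4]])
```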

  6. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video coding does not match that of conventional video coding yet, mainly due to the quality of side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes successively update the estimated noise residue for noise modeling and the side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate-Distortion (RD) performance of a state-of-the-art Wyner-Ziv video codec for the set of test sequences.

  7. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    OpenAIRE

    Parisot Christophe; Antonini Marc; Barlaud Michel

    2003-01-01

    Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during v...

  8. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Ramzan Naeem

    2007-01-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.
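
    The core trade-off described above (stronger channel codes lower the residual loss probability but leave fewer bits for the source) can be sketched as a minimization of expected distortion over candidate codes; the distortion models and candidate codes below are illustrative assumptions, not the paper's turbo-code machinery or Lagrangian formulation.

```python
def pick_channel_code(codes, budget, d_source, d_loss=255 ** 2):
    """Choose the (code_rate, residual_loss_prob) pair minimizing the
    expected distortion E[D] = (1 - p) * D_src(budget * rate) + p * D_loss
    under a total transmission bit budget."""
    def expected(rate, p):
        return (1 - p) * d_source(budget * rate) + p * d_loss
    return min(codes, key=lambda c: expected(*c))

# Toy convex source model: distortion falls as source bits grow.
codes = [(0.5, 0.001), (0.9, 0.2)]
print(pick_channel_code(codes, 10000, lambda bits: 1000 / (1 + bits / 1000)))
# -> (0.5, 0.001): the stronger code wins despite fewer source bits.
```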

  9. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Naeem Ramzan

    2007-03-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  10. Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.

    Science.gov (United States)

    Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David

    2017-04-12

    Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.

  11. Scene-library-based video coding scheme exploiting long-term temporal correlation

    Science.gov (United States)

    Zuo, Xuguang; Yu, Lu; Yu, Hualong; Mao, Jue; Zhao, Yin

    2017-07-01

    In movies and TV shows, it is common that several scenes repeat alternately. Such videos are characterized by long-term temporal correlation, which can be exploited to improve video coding efficiency. However, in applications supporting random access (RA), a video is typically divided into a number of RA segments (RASs) by RA points (RAPs), and different RASs are coded independently. In such a way, the long-term temporal correlation among RASs with similar scenes cannot be used. We present a scene-library-based video coding scheme for the coding of videos with repeated scenes. First, a compact scene library is built by clustering similar scenes and extracting representative frames from the video being encoded. Then, the video is coded using a layered scene-library-based coding structure, in which the library frames serve as long-term reference frames. The scene library is not cleared at RAPs, so the long-term temporal correlation between RASs from similar scenes can be exploited. Furthermore, the RAP frames are coded as interframes referencing only library frames, so as to improve coding efficiency while maintaining the RA property. Experimental results show that the coding scheme can achieve significant coding gain over state-of-the-art methods.
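
    The library-building step can be sketched as greedy clustering on per-frame features; the feature (a histogram) and the L1 distance below are assumptions of this sketch, since the abstract does not specify the clustering method.

```python
def build_scene_library(frames, threshold):
    """Greedy scene clustering: a frame becomes a new library
    representative unless it lies within `threshold` (L1 histogram
    distance) of an existing representative."""
    def dist(h1, h2):
        return sum(abs(a - b) for a, b in zip(h1, h2))
    library = []
    for f in frames:
        if not any(dist(f, rep) < threshold for rep in library):
            library.append(f)
    return library

# Two repeats of one scene plus one new scene -> two library entries.
print(build_scene_library([[1, 0], [1, 0], [0, 9]], 2))  # [[1, 0], [0, 9]]
```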

  12. A Scalable Multiple Description Scheme for 3D Video Coding Based on the Interlayer Prediction Structure

    Directory of Open Access Journals (Sweden)

    Lorenzo Favalli

    2010-01-01

    Full Text Available The most recent literature indicates multiple description coding (MDC as a promising coding approach to handle the problem of video transmission over unreliable networks with different quality and bandwidth constraints. Furthermore, following recent commercial availability of autostereoscopic 3D displays that allow 3D visual data to be viewed without the use of special headgear or glasses, it is anticipated that the applications of 3D video will increase rapidly in the near future. Moving from the concept of spatial MDC, in this paper we introduce some efficient algorithms to obtain 3D substreams that also exploit some form of scalability. These algorithms are then applied to both coded stereo sequences and to depth image-based rendering (DIBR. In these algorithms, we first generate four 3D subsequences by subsampling, and then two of these subsequences are jointly used to form each of the two descriptions. For each description, one of the original subsequences is predicted from the other one via some scalable algorithms, focusing on the inter layer prediction scheme. The proposed algorithms can be implemented as pre- and postprocessing of the standard H.264/SVC coder that remains fully compatible with any standard coder. The experimental results presented show that these algorithms provide excellent results.
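
    The four-subsequence split described above corresponds, in the spatial case, to 2x2 polyphase subsampling; pairing the diagonal phases into the two descriptions is one common spatial-MDC choice, and both the pairing and the function below are our illustrative assumptions.

```python
def polyphase_split(frame):
    """Split a frame (list of rows) into four 2x2-polyphase subframes,
    ordered by (row offset, column offset)."""
    return [[row[dx::2] for row in frame[dy::2]]
            for dy in (0, 1) for dx in (0, 1)]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
p = polyphase_split(frame)
# Description 1 could carry the diagonal pair p[0] and p[3],
# description 2 the other diagonal pair p[1] and p[2].
print(p[0], p[3])  # [[1, 3], [9, 11]] [[6, 8], [14, 16]]
```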

  13. Emerging technologies for 3D video creation, coding, transmission and rendering

    CERN Document Server

    Dufaux, Frederic; Cagnazzo, Marco

    2013-01-01

    With the expectation of greatly enhanced user experience, 3D video is widely perceived as the next major advancement in video technology. In order to fulfil the expectation of enhanced user experience, 3D video calls for new technologies addressing efficient content creation, representation/coding, transmission and display. Emerging Technologies for 3D Video will deal with all aspects involved in 3D video systems and services, including content acquisition and creation, data representation and coding, transmission, view synthesis, rendering, display technologies, human perception…

  14. Multiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter bitplane correlation to enhance the bitplane decoding performance. Experimental results…

  15. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...

  16. A Fast PDE Algorithm Using Adaptive Scan and Search for Video Coding

    Science.gov (United States)

    Kim, Jong-Nam

    In this paper, we propose an algorithm that reduces unnecessary computations while keeping the same prediction quality as the full search algorithm. The proposed algorithm efficiently skips unnecessary computations by estimating an initial matching error point from the first 1/N partial errors, which increases the probability of hitting the minimum error point early. It reduces the computational load by about 20% compared with the conventional PDE algorithm, without any degradation of prediction quality. Our algorithm would be useful in real-time video coding applications using the MPEG-2/4 AVC standards.
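The core of partial distortion elimination (PDE) is easy to state: accumulate the block-matching error incrementally and abandon a candidate as soon as its partial sum already exceeds the best match found so far. A minimal sketch, with our own function names and without the adaptive scan/search order the paper adds:

```python
# Illustrative PDE for block matching: accumulate the SAD row by row
# and abort a candidate as soon as its partial sum exceeds the best
# SAD found so far.

def sad_pde(block, cand, best_so_far):
    """Return the SAD of `cand` vs `block`, or None if the partial sum
    already exceeds `best_so_far` (candidate rejected early)."""
    partial = 0
    for row_b, row_c in zip(block, cand):
        partial += sum(abs(b - c) for b, c in zip(row_b, row_c))
        if partial >= best_so_far:
            return None  # remaining rows never computed
    return partial

def full_search_pde(block, candidates):
    """Full-search quality, but with early termination per candidate."""
    best, best_idx = float("inf"), -1
    for i, cand in enumerate(candidates):
        s = sad_pde(block, cand, best)
        if s is not None:
            best, best_idx = s, i
    return best_idx, best
```

Because a rejected candidate can never be the minimum, the result is identical to exhaustive full search; only the arithmetic that cannot change the outcome is skipped.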

  17. Review of codes, standards, and regulations for natural gas locomotives.

    Science.gov (United States)

    2014-06-01

    This report identified, collected, and summarized relevant international codes, standards, and regulations with potential : applicability to the use of natural gas as a locomotive fuel. Few international or country-specific codes, standards, and regu...

  18. Transform extension for block-based hybrid video codec with decoupling transform sizes from prediction sizes and coding sizes

    Science.gov (United States)

    Chen, Jing; Li, Ge; Fan, Kui; Guo, Xiaoqiang

    2017-09-01

    In the block-based hybrid video coding framework, the transform is applied to the residual signal resulting from intra/inter prediction. Thus, in most video codecs, the transform block (TB) size is equal to the prediction block (PB) size. To further improve coding efficiency, recent video coding techniques have supported decoupling transform and prediction sizes. By splitting one prediction block into small transform blocks, the Residual Quad-tree (RQT) structure attempts to search for the best transform size. However, in the current RQT, the transform size cannot be larger than the size of the prediction block. In this paper, we introduce a transform extension method by decoupling transform sizes from prediction sizes and coding sizes. In addition to obtaining the transform block within the current PB partition, we combine multiple adjacent PBs to form a larger TB and select the best block size accordingly. In our experiments on top of the newest reference software (ITM17.0) of the MPEG Internet Video Coding (IVC) standard, consistent coding performance gains are obtained.

  19. Two Perspectives on the Origin of the Standard Genetic Code

    Science.gov (United States)

    Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa

    2014-12-01

    The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.

  20. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...

  1. 41 CFR 101-30.701-2 - Item standardization code.

    Science.gov (United States)

    2010-07-01

    41 Public Contracts and Property Management 2 2010-07-01 Item standardization code. 101-30.701-2 Section 101-30.701-2 Public Contracts and Property Management Federal Property Management....7-Item Reduction Program § 101-30.701-2 Item standardization code. Item standardization code (ISC...

  2. 30 CFR 56.19093 - Standard signal code.

    Science.gov (United States)

    2010-07-01

    30 Mineral Resources 1 2010-07-01 Standard signal code. 56.19093 Section 56.19093 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Signaling § 56.19093 Standard signal code. A standard code of hoisting signals shall be adopted and used at...

  3. 30 CFR 57.19093 - Standard signal code.

    Science.gov (United States)

    2010-07-01

    30 Mineral Resources 1 2010-07-01 Standard signal code. 57.19093 Section 57.19093 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Signaling § 57.19093 Standard signal code. A standard code of hoisting signals shall be adopted and used at...

  4. Efficient Video Transcoding from H.263 to H.264/AVC Standard with Enhanced Rate Control

    Directory of Open Access Journals (Sweden)

    Nguyen Viet-Anh

    2006-01-01

    Full Text Available A new video coding standard H.264/AVC has been recently developed and standardized. The standard represents a number of advances in video coding technology in terms of both coding efficiency and flexibility and is expected to replace the existing standards such as H.263 and MPEG-1/2/4 in many possible applications. In this paper we investigate and present efficient syntax transcoding and downsizing transcoding methods from H.263 to H.264/AVC standard. Specifically, we propose an efficient motion vector reestimation scheme using vector median filtering and a fast intraprediction mode selection scheme based on coarse edge information obtained from integer-transform coefficients. Furthermore, an enhanced rate control method based on a quadratic model is proposed for selecting quantization parameters at the sequence and frame levels together with a new frame-layer bit allocation scheme based on the side information in the precoded video. Extensive experiments have been conducted and the results show the efficiency and effectiveness of the proposed methods.

  5. Application of Enhanced Hadamard Error Correcting Code in Video-Watermarking and its Comparison to Reed-Solomon Code

    Directory of Open Access Journals (Sweden)

    Dziech Andrzej

    2017-01-01

    Full Text Available Error correcting codes play a very important role in video watermarking technology. Because of the very high compression rate (about 1:200), watermarks can normally barely survive such massive attacks, despite very sophisticated embedding strategies; they remain viable only with a sufficiently strong error correcting code. In this paper, the authors compare the newly developed Enhanced Hadamard Error Correcting Code (EHC) with the well known Reed-Solomon Code regarding its ability to preserve watermarks in the embedded video. The main idea of this multidimensional Enhanced Hadamard Error Correcting Code is to map the 2D basis images into a collection of one-dimensional rows and to apply a 1D Hadamard decoding procedure on them. After this, the image is reassembled, and the 2D decoding procedure can be applied more efficiently. With this approach, it is possible to overcome the theoretical limit of error correcting capability of (d-1)/2 bits, where d is the Hamming distance. Even better results could be achieved by expanding the 2D EHC to 3D. To prove the efficiency and practicability of this new Enhanced Hadamard Code, the method was applied to a video watermarking coding scheme. The watermark embedding procedure decomposes the initial video through a multi-level interframe wavelet transform. The low-pass filtered part of the video stream is used for embedding the watermarks, which are protected respectively by the Enhanced Hadamard or Reed-Solomon correcting code. The experimental results show that EHC performs much better than the RS code and seems to be very robust against strong MPEG compression.
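The 1D Hadamard decoding procedure the abstract refers to is, at its core, decoding by correlation: each of n messages maps to one row of an n×n Hadamard matrix, and the decoder picks the row with the highest correlation to the received word. A minimal sketch of that building block (the paper's 2D/3D enhancement is not reproduced here):

```python
# 1D Hadamard (Sylvester) encoding/decoding by correlation: one of n
# messages maps to an n-symbol +/-1 codeword; decoding picks the row
# with maximum correlation. Illustrative building block only.

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = [r + r for r in h] + [r + [-x for x in r] for r in h]
    return h

def encode(msg_index, n):
    return list(hadamard(n)[msg_index])

def decode(received, n):
    """Correlate against every row; the true row survives a few sign flips."""
    h = hadamard(n)
    corr = [sum(a * b for a, b in zip(row, received)) for row in h]
    return max(range(n), key=lambda i: corr[i])
```

With minimum distance d = n/2, plain correlation decoding guarantees correction of up to (d-1)/2 symbol errors, which is exactly the classical limit the paper's enhanced 2D/3D scheme tries to surpass.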

  6. Rate Allocation in predictive video coding using a Convex Optimization Framework.

    Science.gov (United States)

    Fiengo, Aniello; Chierchia, Giovanni; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-10-26

    Optimal rate allocation is among the most challenging tasks to perform in the context of predictive video coding, because of the dependencies between frames induced by motion compensation. In this paper, using a recursive rate-distortion model that explicitly takes into account these dependencies, we approach the frame-level rate allocation as a convex optimization problem. This technique is integrated into the recent HEVC encoder and tested on several standard sequences. Experiments indicate that the proposed rate allocation ensures better performance (in the rate-distortion sense) than the standard HEVC rate control, with a small loss w.r.t. an optimal exhaustive search that is largely compensated by a much shorter execution time.
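To fix ideas on convex frame-level rate allocation, here is a deliberately simplified sketch that ignores the inter-frame dependencies the paper models: frames are treated as independent with the classical exponential distortion model D_i(r_i) = a_i 2^(-2 r_i), for which the Lagrangian (equal-slope) optimum has a closed form. The model, symbol names, and the no-dependency assumption are ours, not the paper's.

```python
import math

# Toy convex rate allocation: minimize sum_i a_i * 2**(-2*r_i) subject
# to sum_i r_i = R. The optimum equalizes the distortion slopes, i.e.
# a_i * 2**(-2*r_i) is the same for every frame (assumes all r_i >= 0).

def allocate_rates(a, R):
    n = len(a)
    log_gm = sum(math.log2(x) for x in a) / n  # log2 of geometric mean of a_i
    return [R / n + 0.5 * (math.log2(ai) - log_gm) for ai in a]
```

Frames with larger a_i (harder content) receive more than the average rate R/n; at the optimum all frames end up with equal distortion, which is the hallmark of the equal-slope condition.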

  7. The future of 3D and video coding in mobile and the internet

    Science.gov (United States)

    Bivolarski, Lazar

    2013-09-01

    The current Internet success has already changed our social and economic world and continues to revolutionize information exchange. The exponential increase in the amount and types of data exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status and trends in the design of solutions and research activities for the future Internet, from the point of view of managing the growth of bandwidth requirements and the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the expected near-future arrival of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling the new media architectures as well as the new applications and services. The common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.

  8. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is determined in the Tier-2 process. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
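The bit-plane substitution idea at the heart of such schemes can be shown in miniature. This sketch hides message bits in the least significant bit of integer coefficients; real JPEG2000 embedding operates on code-block bit planes inside the Tier-1/Tier-2 coding passes, which this deliberately omits.

```python
# Minimal bit-plane embedding sketch: hide message bits in the lowest
# significant bit plane of integer (wavelet) coefficients.

def embed(coeffs, bits):
    """Overwrite the LSB of the first len(bits) coefficients."""
    return [(c & ~1) | b for c, b in zip(coeffs, bits)] + coeffs[len(bits):]

def extract(coeffs, n):
    """Read back the first n hidden bits from the LSB plane."""
    return [c & 1 for c in coeffs[:n]]
```

Each embedded bit perturbs a coefficient by at most 1, which is why embedding in the lowest plane keeps the visual distortion small; refinements like OPAP then adjust higher bits to reduce the error further.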

  9. C++ Coding Standards 101 Rules, Guidelines, and Best Practices

    CERN Document Server

    Sutter, Herb

    2005-01-01

    Consistent, high-quality coding standards improve software quality, reduce time-to-market, promote teamwork, eliminate time wasted on inconsequential matters, and simplify maintenance. Now, two of the world's most respected C++ experts distill the rich collective experience of the global C++ community into a set of coding standards that every developer and development team can understand and use as a basis for their own coding standards.

  10. Resource allocation for error resilient video coding over AWGN using optimization approach.

    Science.gov (United States)

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and physical layers, with automatic repeat request and a rate compatible punctured convolutional code over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.

  11. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 sec/frame, respectively.

  12. Subjective Video Quality Assessment of H.265 Compression Standard for Full HD Resolution

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2015-01-01

    Full Text Available Recently, increasing interest in multimedia services has led to requirements for quality assessment, especially in the video domain. There are many factors that influence video quality; compression technology and transmission link imperfection can be considered the main ones. This paper deals with the assessment of the impact of the H.265/HEVC compression standard on video quality using subjective metrics. The evaluation is done for two types of sequences with Full HD resolution, depending on content. The paper is divided as follows. In the first part of the article, a short characterization of the H.265/HEVC compression standard is given. In the second part, the subjective video quality methods used in our experiments are described. The last part of this article deals with the measurements and experimental results. They showed that the quality of sequences coded at between 5 and 7 Mbps is sufficient for observers, so there is no need for providers to stream at bitrates above this threshold. These results are part of a new model that is still being created and will be used for predicting video quality in IP-based networks.

  13. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and compensation techniques. In a multiview scenario, the correlation between views can also be exploited to further enhance the overall Rate-Distortion (RD) performance. Thus, to generate SI in a multiview distributed coding scenario, a joint disparity and motion estimation technique is proposed, based on optical flow. The proposed SI generation algorithm allows for RD improvements up to 10% (Bjøntegaard) in bit-rate savings, when compared with block-based SI generation algorithms leveraging temporal and inter-view redundancies.

  14. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  15. Water cycle algorithm: A detailed standard code

    Science.gov (United States)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA) has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, of which the performance and efficiency has been demonstrated for solving optimization problems. The WCA has an interesting and simple concept and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.

  16. Real-time transmission of digital video using variable-length coding

    Science.gov (United States)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
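The codebook construction the abstract describes is the classical Huffman algorithm: repeatedly merge the two least frequent symbol groups, prepending a bit to each group's codewords. A small self-contained sketch (applied here to generic symbols; for differential video these would be sample differences clustered near zero):

```python
import heapq
from collections import Counter

# Huffman code construction: frequent symbols (e.g. small frame
# differences) receive short codewords, rare symbols long ones.

def huffman_code(symbols):
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreaker, {symbol: codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

Because no codeword is a prefix of another, the decoder can parse the variable-length bitstream unambiguously; the buffering and synchronization issues discussed in the record arise from the resulting variable output rate, not from the code itself.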

  17. Jointly optimized spatial prediction and block transform for video and image coding.

    Science.gov (United States)

    Han, Jingning; Saxena, Ankur; Melkote, Vinay; Rose, Kenneth

    2012-04-01

    This paper proposes a novel approach to jointly optimize spatial prediction and the choice of the subsequent transform in video and image compression. Under the assumption of a separable first-order Gauss-Markov model for the image signal, it is shown that the optimal Karhunen-Loeve Transform, given available partial boundary information, is well approximated by a close relative of the discrete sine transform (DST), with basis vectors that tend to vanish at the known boundary and maximize energy at the unknown boundary. The overall intraframe coding scheme thus switches between this variant of the DST named asymmetric DST (ADST), and traditional discrete cosine transform (DCT), depending on prediction direction and boundary information. The ADST is first compared with DCT in terms of coding gain under ideal model conditions and is demonstrated to provide significantly improved compression efficiency. The proposed adaptive prediction and transform scheme is then implemented within the H.264/AVC intra-mode framework and is experimentally shown to significantly outperform the standard intra coding mode. As an added benefit, it achieves substantial reduction in blocking artifacts due to the fact that the transform now adapts to the statistics of block edges. An integer version of this ADST is also proposed.
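One common formulation of the ADST described here is the DST-VII, whose basis vectors start small at the predicted (known) boundary and grow toward the unknown boundary, in contrast to the DCT-II. The sketch below constructs both matrices under that assumption; the exact integer variant used in codecs differs.

```python
import math

# Basis matrices for an ADST (DST-VII formulation, assumed here) and the
# standard DCT-II; rows are frequency vectors, columns spatial positions.

def adst_matrix(n):
    s = 2.0 / math.sqrt(2 * n + 1)  # makes the rows orthonormal
    return [[s * math.sin(math.pi * (j + 1) * (2 * k + 1) / (2 * n + 1))
             for j in range(n)] for k in range(n)]

def dct_matrix(n):
    m = []
    for k in range(n):
        s = math.sqrt((1 if k == 0 else 2) / n)
        m.append([s * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                  for j in range(n)])
    return m
```

Note how the first ADST basis vector rises monotonically: residual energy after good spatial prediction tends to be smallest next to the known boundary and largest far from it, which is exactly the shape this basis matches.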

  18. Standardization of Code on Dental Procedures

    Science.gov (United States)

    1992-02-13

    Steel, Aluminum, Tin, Composite, and/or Glass Ionomer. A crown used for short-term temporization or for pediatric dentistry treatment. Credit one per... provided as a management tool and should not be construed to represent the total practice of military dentistry. Category of Service Code Series 1... Class I. Occlusal surface restoration of molars and premolars, including buccal or occlusal pits, cast crown repairs, and small lingual surface...

  19. Codes and standards and other guidance cited in regulatory documents

    Energy Technology Data Exchange (ETDEWEB)

    Nickolaus, J.R.; Bohlander, K.L.

    1996-08-01

    As part of the U.S. Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program (SRP-UDP), Pacific Northwest National Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. The SRP-UDP has been completed and the SRP-Maintenance Program (SRP-MP) is now maintaining this listing. Besides updating previous information, Revision 3 adds approximately 80 citations. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Enforcement Manual, Generic Letters, Inspection Manual, Policy Statements, Regulatory Guides, Standard Technical Specifications and the Standard Review Plan (NUREG-0800).

  20. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify ... of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on power-distortion trade-off. We proposed an approach for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing a decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including...

  1. CowLog – Cross-Platform Application for Coding Behaviours from Video

    Directory of Open Access Journals (Sweden)

    Matti Pastell

    2016-04-01

    Full Text Available CowLog is a cross-platform application to code behaviours from video recordings for use in behavioural research. The software has been used in several studies, e.g. to study sleep in dairy calves, emotions in goats, and mind wandering related to computer use during lectures. CowLog 3 is implemented in JavaScript and HTML using the Electron framework. The framework allows the development of packaged cross-platform applications using features from a web browser (Chromium) as well as server-side JavaScript from Node.js. The program supports multiple simultaneous videos and both HTML5 and VLC video players. CowLog can be used for any project that requires coding the time of events from digital video. It is released under the GNU GPL v2, making it possible for users to modify the application for their own needs. The software is available through its website http://cowlog.org.

  2. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    We demonstrate that, by jointly optimizing video coding and radio-over-fibre transmission, we extend the reach of 60-GHz wireless distribution of high-quality high-definition video satisfying low complexity and low delay constraints, while preserving superb video quality.

  3. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Andrey Norkin

    2007-02-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented in the form of a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by a 3D-DCT or a hybrid LOT+DCT 3D-transform. The coding scheme is targeted at mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

  4. VORTEX: video retrieval and tracking from compressed multimedia databases--template matching from MPEG-2 video compression standard

    Science.gov (United States)

    Schonfeld, Dan; Lelescu, Dan

    1998-10-01

    In this paper, a novel visual search engine for video retrieval and tracking from compressed multimedia databases is proposed. Our approach exploits the structure of video compression standards in order to perform object matching directly on the compressed video data. This is achieved by utilizing motion compensation--a critical prediction filter embedded in video compression standards--to estimate and interpolate the desired template matching. Motion analysis is used to implement fast tracking of objects of interest on the compressed video data. Presented with a query in the form of template images of objects, the system operates on the compressed video in order to find the images or video sequences where those objects are present, together with their positions in the image. This in turn enables the retrieval and display of the query-relevant sequences.
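For contrast with the compressed-domain approach above, here is the baseline it avoids: exhaustive pixel-domain template matching by sum of absolute differences (SAD) over every window position. The paper instead reuses the motion vectors already present in the MPEG-2 stream to skip most of this work; this toy version only fixes ideas.

```python
# Toy exhaustive template matching by SAD over a 2D frame; the paper's
# compressed-domain method avoids this full search by exploiting the
# motion compensation data in the bitstream.

def match_template(frame, tmpl):
    """Return ((y, x), sad) of the best-matching window position."""
    th, tw = len(tmpl), len(tmpl[0])
    best, pos = float("inf"), (0, 0)
    for y in range(len(frame) - th + 1):
        for x in range(len(frame[0]) - tw + 1):
            sad = sum(abs(frame[y + i][x + j] - tmpl[i][j])
                      for i in range(th) for j in range(tw))
            if sad < best:
                best, pos = sad, (y, x)
    return pos, best
```

The cost of this baseline grows with frame area times template area per frame, which is exactly why tracking via the already-decoded motion vectors is attractive for large video databases.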

  5. Transform Domain Unidirectional Distributed Video Coding Using Dynamic Parity Allocation

    Science.gov (United States)

    Badem, Murat B.; Fernando, Anil; Weerakkody, Rajitha; Arachchi, Hemantha K.; Kondoz, Ahmet M.

    DVC based video codecs proposed in the literature generally include a reverse (feedback) channel between the encoder and the decoder. This channel is used to communicate the dynamic parity bit request messages from the decoder to the encoder, resulting in an optimum dynamic variable rate control implementation. However, this dynamic feedback mechanism is a hindrance in a number of practical consumer electronics applications. In this paper we propose a novel transform domain Unidirectional Distributed Video Codec (UDVC) without a feedback channel. First, all Wyner-Ziv frames are divided into rectangular macroblocks. A simple metric is used for each block to represent the correlation between the corresponding blocks in the adjacent key frame and the Wyner-Ziv frame. Based on the value of this metric, parity is allocated dynamically for each block. These parities are either stored for offline processing or transmitted to the DVC decoder for online processing. Simulation results show that the proposed codec outperforms the existing UDVC solutions by a significant margin.

  6. 1994 Building energy codes and standards workshops: Summary and documentation

    Energy Technology Data Exchange (ETDEWEB)

    Sandahl, L.J.; Shankle, D.L.

    1994-09-01

    During the spring of 1994, Pacific Northwest Laboratory (PNL), on behalf of the U.S. Department of Energy (DOE) Office of Codes and Standards, conducted five two-day Regional Building Energy Codes and Standards workshops across the United States. Workshops were held in Chicago, Philadelphia, Atlanta, Dallas, and Denver. The workshops were designed to benefit state-level officials including staff of building code commissions, energy offices, public utility commissions, and others involved with adopting/updating, implementing, and enforcing state building codes in their states. The workshops provided an opportunity for state and other officials to learn more about the Energy Policy Act of 1992 (EPAct) requirements for residential and commercial building energy codes, the Climate Change Action Plan, the role of the U.S. Department of Energy and the Building Energy Standards Program at Pacific Northwest Laboratory, the commercial and residential codes and standards, the Home Energy Rating Systems (HERS), Energy Efficient Mortgages (EEM), training issues, and other topics related to the development, adoption, implementation, and enforcement of building energy codes. In addition to receiving information on the above topics, workshop participants were also encouraged to inform DOE of their needs, particularly with regard to implementing building energy codes, enhancing current implementation efforts, and building on training efforts already in place. This paper documents the workshop findings and workshop planning and follow-up processes.

  7. EIFS in China - History, Codes and Standards, Features, and Problems

    Science.gov (United States)

    Shi, Xing; Li, Rui

    Exterior Insulated Finishing System (EIFS) was introduced into China in the early 1980s. Since then, it has gone through several stages of development and now enjoys the largest market share among all energy-efficient wall systems. The history of EIFS gradually becoming the most common energy-efficient exterior wall system is briefly reviewed. China has a series of EIFS-related codes and standards that can be divided into two categories: (1) codes and standards on building energy efficiency in general, and (2) codes and standards specifically addressing EIFS. This paper reviews these codes and standards with a focus on important regulations that differ from those in North America and Europe. Chinese EIFS has some features and problems that may or may not exist in its foreign counterparts. These features and problems, along with the underlying reasons, are discussed.

  8. Vehicle Codes and Standards: Overview and Gap Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Blake, C.; Buttner, W.; Rivkin, C.

    2010-02-01

    This report identifies gaps in vehicle codes and standards and recommends ways to fill the gaps, focusing on six alternative fuels: biodiesel, natural gas, electricity, ethanol, hydrogen, and propane.

  9. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    … to have the various views available simultaneously. However, in multiview DVC (M-DVC), the decoder can still exploit the redundancy between views, avoiding the need for inter-camera communication. The key element of every DVC decoder is the side information (SI), which can be generated by leveraging intra-view or inter-view redundancy for multiview video data. In this paper, a novel learning-based fusion technique is proposed, which is able to robustly fuse an inter-view SI and an intra-view (temporal) SI. An inter-view SI generation method capable of identifying occluded areas is proposed and is coupled … values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single-SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder.

  10. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low-complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion re-estimation (MORE) is integrated in the SING TDWZ codec, which uses side information and noise learning. For Wyner-Ziv frames using GOP size 2, the MORE codec significantly improves the TDWZ coding efficiency, with an average (Bjøntegaard) PSNR improvement of 2.5 dB and up to 6 dB improvement …

  11. Final Technical Report: Hydrogen Codes and Standards Outreach

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Karen I.

    2007-05-12

    This project contributed significantly to the development of new codes and standards, both domestically and internationally. The NHA collaborated with codes and standards development organizations to identify technical areas of expertise that would be required to produce the codes and standards that industry and DOE felt were required to facilitate commercialization of hydrogen and fuel cell technologies and infrastructure. NHA staff participated directly in technical committees and working groups where issues could be discussed with the appropriate industry groups. In other cases, the NHA recommended specific industry experts to serve on technical committees and working groups where the need for this specific industry expertise would be on-going, and where this approach was likely to contribute to timely completion of the effort. The project also facilitated dialog between codes and standards development organizations, hydrogen and fuel cell experts, the government and national labs, researchers, code officials, industry associations, as well as the public regarding the timeframes for needed codes and standards, industry consensus on technical issues, procedures for implementing changes, and general principles of hydrogen safety. The project facilitated hands-on learning, as participants in several NHA workshops and technical meetings were able to experience hydrogen vehicles, witness hydrogen refueling demonstrations, see metal hydride storage cartridges in operation, and view other hydrogen energy products.

  12. 78 FR 24725 - National Fire Codes: Request for Public Input for Revision of Codes and Standards

    Science.gov (United States)

    2013-04-26

    Excerpted entries from the notice's table of documents open for public input include NFPA 900--2013, Building Energy Code (closing 1/3/2014); NFPA 40--2011, Standard for the Storage and Handling of Cellulose … (closing 7/8/2013); and entries covering Wastewater Treatment and Collection Facilities and Extraction Plants.

  13. Codes and standards research, development and demonstration roadmap

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-07-22

    C&S RD&D Roadmap - 2008: This Roadmap is a guide to the Research, Development & Demonstration activities that will provide data required for Standards Development Organizations (SDOs) to develop performance-based codes and standards for a commercial hydrogen fueled transportation sector in the U.S.

  14. Building climate change into infrastructure codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Auld, H.; Klaasen, J.; Morris, R.; Fernandez, S.; MacIver, D.; Bernstein, D. [Environment Canada, Adaptation and Impact Research Division, Toronto, Ontario (Canada)

    2009-07-01

    Building codes and standards and the climatic design values embedded within these legal to semi-legal documents have profound safety, health and economic implications for Canada's infrastructure systems. The climatic design values that have been used for the design of almost all of today's more than $5.5 trillion in infrastructure are based on historical climate data and assume that the extremes of the past will represent future conditions. Since new infrastructure based on codes and standards will be built to survive for decades to come, it is critically important that existing climatic design information be as accurate and up to date as possible, that the changing climate be monitored to detect and highlight vulnerabilities of existing infrastructure, that forensic studies of climate-related failures be undertaken, and that codes and standards processes incorporate future climates and extremes as much as possible. Uncertainties in current climate change models and their scenarios challenge our ability to project future extremes regionally and locally. Improvements to the spatial and temporal resolution of these climate change scenarios, along with improved methodologies to treat model biases and localize results, will allow future codes and standards to better reflect the extremes and weathering conditions expected over the lifespan of structures. In the meantime, other information and code processes can be used to incorporate changing climate conditions into upcoming infrastructure codes and standards, to “bridge” the model uncertainty gap and to complement existing projections. This presentation will outline some of the varied information and processes that will be used to incorporate climate change adaptation into the next development cycle of the National Building Code of Canada and numerous other national CSA infrastructure standards. (author)

  15. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band.

    Science.gov (United States)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta; Yu, Xianbin; Ukhanova, Anna; Llorente, Roberto; Monroy, Idelfonso Tafur; Forchhammer, Søren

    2011-12-12

    The paper addresses the problem of distributing high-definition video over fiber-wireless networks. A physical-layer architecture with a low-complexity envelope detection solution is investigated. We present both experimental studies and simulation of high-quality, high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding, we satisfy low-complexity and low-delay constraints while preserving high video quality over a significantly extended wireless distance. © 2011 Optical Society of America

  16. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    Science.gov (United States)

    Chu, Tianli; Xiong, Zixiang

    2003-12-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
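
    The layer-independence property described above, where a lost packet only invalidates the packets that follow it in the same layer while other layers are unaffected, can be modeled in a few lines. The packet and layer representation here is hypothetical, purely to illustrate the property.

```python
def usable_packets(layers):
    """Model of the PWV bitstream property: embedded within each layer,
    independent across layers. Within one layer, every packet after the
    first loss is useless; other layers are unaffected. `layers` maps a
    layer id to its packet list, with None marking a lost packet."""
    decoded = {}
    for layer_id, packets in layers.items():
        kept = []
        for p in packets:
            if p is None:            # first loss truncates this layer only
                break
            kept.append(p)
        decoded[layer_id] = kept
    return decoded

# Base layer intact; one loss in each enhancement layer.
streams = {0: ["b0", "b1", "b2"], 1: ["e0", None, "e2"], 2: [None, "x1"]}
out = usable_packets(streams)
```

    This containment of loss damage is what lets the multilayered error-control strategy trade off source and channel coding per layer rather than for the whole stream.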

  17. Experiences with Codes and Standards for the Arnhem SOFC Project

    Energy Technology Data Exchange (ETDEWEB)

    Gerwen, Rob J.F. van; Veer, Jan H.C. van der [KEMA Power Generation and Sustainables PO-Box 9035. NL-6800 ET Arnhem (Netherlands); Kuipers, Joop [Antiloopstraat 79, NL-6531 TL Nijmegen (Netherlands)

    2000-07-01

    By the end of 1994, a group of Danish and Dutch utilities agreed with Westinghouse (now Siemens-Westinghouse) upon the delivery of a 100 kWe Solid Oxide Fuel Cell unit. Identifying applicable codes and standards for the SOFC system was considered part of the learning process and was done after the contract was signed. This has proven to be a viable procedure, despite the fact that it took a lot of discussion (and time) to come to an agreement. This demonstration project has taught which codes and standards for this new and promising technology are applicable and which should be established. (author)

  18. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Distributed video coding (DVC) is a new paradigm for video compression based on the information-theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion, but it is still inferior to the Inter No Motion and Inter Motion modes.

  19. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms, proving their low complexity and usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within the MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  20. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

    Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms, proving their low complexity and usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within the MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  1. Quantitative Analysis of Standardized Dress Code and Minority Academic Achievement

    Science.gov (United States)

    Proctor, J. R.

    2013-01-01

    This study was designed to investigate if a statistically significant variance exists in African American and Hispanic students' attendance and Texas Assessment of Knowledge and Skills test scores in mathematics before and after the implementation of a standardized dress code. For almost two decades supporters and opponents of public school…

  2. 10 CFR 50.55a - Codes and standards.

    Science.gov (United States)

    2010-01-01

    ... Retaining Welds in Class 1 Components Fabricated with Alloy 600/82/182 Materials, Section XI, Division 1..., tested, and inspected to quality standards commensurate with the importance of the safety function to be... Guide 1.84, Revision 34, “Design, Fabrication, and Materials Code Case Acceptability, ASME Section III...

  3. A Complete Video Coding Chain Based on Multi-Dimensional Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2010-09-01

    The paper deals with a video compression method based on the multi-dimensional discrete cosine transform. In the text, the encoder and decoder architectures are presented, including the definitions of all mathematical operations such as the forward and inverse 3-D DCT, quantization, and thresholding. According to the particular number of currently processed pictures, new quantization tables and entropy code dictionaries are proposed in the paper. The practical properties of the 3-D DCT coding chain compared with modern video compression methods (such as H.264 and WebM), as well as the computing complexity, are presented. It is shown that the best compression properties are achieved by the more complex H.264 codec; on the other hand, the computing complexity, especially on the encoding side, is lower for the 3-D DCT method.
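
    A minimal separable 3-D DCT, the transform at the core of the coding chain described above, can be sketched by applying an orthonormal DCT-II matrix along each axis of a group of pictures. Quantization, thresholding, and entropy coding are omitted; this is a generic sketch, not the paper's implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    m = 2 * np.arange(n)[None, :] + 1
    c = np.sqrt(2.0 / n) * np.cos(np.pi * k * m / (2 * n))
    c[0] /= np.sqrt(2.0)        # scale the DC row for orthonormality
    return c

def dct3(cube):
    """Separable forward 3-D DCT along the (time, row, column) axes."""
    t, h, w = cube.shape
    ct, ch, cw = dct_matrix(t), dct_matrix(h), dct_matrix(w)
    return np.einsum('at,bh,cw,thw->abc', ct, ch, cw, cube)

def idct3(coeffs):
    """Inverse 3-D DCT (the basis matrices are orthogonal)."""
    t, h, w = coeffs.shape
    ct, ch, cw = dct_matrix(t), dct_matrix(h), dct_matrix(w)
    return np.einsum('at,bh,cw,abc->thw', ct, ch, cw, coeffs)

# A constant 4x4x4 group of pictures compacts into a single DC coefficient,
# which is where the compression gain of the 3-D transform comes from.
cube = np.ones((4, 4, 4))
coeffs = dct3(cube)
```

    Temporally similar pictures behave like the constant cube: most energy collapses into a few low-frequency coefficients, which quantization and entropy coding can then exploit.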

  4. Distributed video streaming using multiple description coding and unequal error protection.

    Science.gov (United States)

    Kim, Joohee; Mersereau, Russell M; Altunbasak, Yucel

    2005-07-01

    This paper presents a distributed video streaming framework using unbalanced multiple description coding (MDC) and unequal error protection. In the proposed video streaming framework, two senders simultaneously stream complementary descriptions to a single receiver over different paths. To minimize the overall distortion and exploit the benefits of multipath transport when the characteristics of each path are different, an unbalanced MDC method for wavelet-based coders combined with a TCP-friendly rate allocation algorithm is proposed. The proposed rate allocation algorithm adjusts the transmission rates and the channel coding rates for all senders in a coordinated fashion to minimize the overall distortion. Simulation results show that the proposed unbalanced MDC combined with our rate allocation algorithm achieves about 1-6 dB higher peak signal-to-noise ratio compared to conventional balanced MDC when the available bandwidths along the two paths are different under time-varying network conditions.
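
    The coordinated two-sender rate allocation can be illustrated with a toy grid search. The exponential distortion model D(R) = sigma2 * 2**(-2R), the capacities, and the search granularity are illustrative assumptions, not the paper's TCP-friendly algorithm.

```python
import numpy as np

def allocate_rates(cap1, cap2, total_rate, sigma2=1.0, step=0.01):
    """Toy coordinated rate allocation for two senders: grid-search the
    split of `total_rate` that minimizes the summed per-description
    distortion under an illustrative D(R) = sigma2 * 2**(-2R) model,
    respecting each path's capacity."""
    best = None
    for r1 in np.arange(0.0, min(cap1, total_rate) + step, step):
        r2 = total_rate - r1
        if r2 < 0 or r2 > cap2:
            continue
        d = sigma2 * 2 ** (-2 * r1) + sigma2 * 2 ** (-2 * r2)
        if best is None or d < best[0]:
            best = (d, round(r1, 2), round(r2, 2))
    return best

# Equal path capacities lead to a balanced split; with unequal capacities
# the allocation becomes unbalanced, shifting rate to the wider path.
dist, r1, r2 = allocate_rates(cap1=2.0, cap2=2.0, total_rate=2.0)
```

    Running the same search with `cap1=0.5` pins the first description at the narrow path's capacity and gives the remaining rate to the other description, mirroring the unbalanced-MDC motivation.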

  5. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and the accuracy of noise modeling. This paper considers … side information frames. Clustering is introduced to capture cross-band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side information for LDPCA decoding. The proposed decoder-side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4 dB improvement …

  6. Bridging Inter-flow and Intra-flow Network Coding for Video Applications

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2013-01-01

    … enhance reliability, common to the former, while maintaining the efficient spectrum usage typical of the latter. This paper uses the intuition provided in [1] to propose a practical implementation of the protocol leveraging Random Linear Network Coding (RLNC) for intra-flow coding, a credit-based packet transmission approach to decide how much and when to send redundancy in the network, and a minimalistic feedback mechanism to guarantee delivery of generations of the different flows. Given the delay constraints of video applications, we propose a simple yet effective coding mechanism, Block Coding On The Fly (BCFly), that allows a block encoder to be fed on the fly, thus reducing the delay introduced by typical generation-based NC techniques, which must accumulate enough packets. Our measurements and comparison to forwarding and COPE show that CORE not only outperforms these schemes even for small packet …
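
    The intra-flow RLNC ingredient can be illustrated over GF(2), where coding coefficients are bits and combining packets is a XOR; real deployments typically work over larger fields such as GF(2^8). This is a generic sketch, not the CORE/BCFly implementation, and the generation size and packet contents are illustrative.

```python
import numpy as np

def rlnc_encode(generation, n_coded, rng):
    """Combine the k source packets of a generation into n_coded coded
    packets, each the XOR (GF(2) sum) of a random subset of the sources."""
    k = len(generation)
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    return coeffs, (coeffs @ np.asarray(generation, dtype=np.uint8)) % 2

def rlnc_decode(coeffs, coded):
    """Gauss-Jordan elimination over GF(2). Returns the k source packets
    if the received coefficient vectors have full rank, else None."""
    k = coeffs.shape[1]
    a = (np.concatenate([coeffs, coded], axis=1) % 2).astype(np.uint8)
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(a)) if a[r, col]), None)
        if piv is None:
            return None              # an innovative packet is still missing
        a[[row, piv]] = a[[piv, row]]
        for r in range(len(a)):
            if r != row and a[r, col]:
                a[r] ^= a[row]       # XOR-cancel this column elsewhere
        row += 1
    return a[:k, k:]

src = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
coeffs, coded = rlnc_encode(src, 6, np.random.default_rng(1))
decoded = rlnc_decode(coeffs, coded)  # None only if the random draw is rank-deficient
```

    Sending a few extra coded packets (here 6 for 3 sources) is the credit-style redundancy: any full-rank subset of 3 suffices, so individual losses rarely matter.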

  7. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video, and rapid evolution of coding algorithm and VLSI technology. Video transmission will be part of the broadband-integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independency, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  8. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Distributed video coding (DVC) is a video coding paradigm allowing low-complexity encoding for emerging applications such as wireless video surveillance. Side information (SI) generation is a key function in the DVC decoder and plays a key role in determining the performance of the codec. This paper proposes an improved SI generation for DVC, which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ) frames, based on initial SI by motion-compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate-distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealment alone.
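
    The baseline step, motion-compensated temporal interpolation of the side information between two key frames, can be sketched in a heavily simplified form. The full-search block matching, the halved-vector placement, and the averaging of the two predictions are toy assumptions standing in for the refined interpolation the paper proposes.

```python
import numpy as np

def mcti_side_information(prev, nxt, block=4, search=2):
    """Toy motion-compensated temporal interpolation: estimate each
    block's motion from `nxt` back to `prev` by full search, then build
    the in-between (Wyner-Ziv) side-information frame as the average of
    the two predictions, with the object assumed midway in time."""
    h, w = prev.shape
    si = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            ref = nxt[y:y + block, x:x + block].astype(int)
            best = (None, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(
                            prev[yy:yy + block, xx:xx + block].astype(int) - ref).sum()
                        if best[0] is None or sad < best[0]:
                            best = (sad, dy, dx)
            _, dy, dx = best
            hy, hx = y + dy // 2, x + dx // 2   # halve the motion vector
            si[y:y + block, x:x + block] = (
                prev[hy:hy + block, hx:hx + block].astype(float) + ref) / 2
    return si

# Static content: the interpolated frame reproduces the key frames exactly.
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
si = mcti_side_information(frame, frame)
```

    The paper's contributions (vector refinement and smoothing, mode selection, a new matching criterion) all target the failure cases of exactly this kind of baseline interpolation under high motion.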

  9. Impact of scan conversion methods on the performance of scalable video coding

    Science.gov (United States)

    Dubois, Eric; Baaziz, Nadia; Matta, Marwan

    1995-04-01

    The ability to flexibly access coded video data at different resolutions or bit rates is referred to as scalability. We are concerned here with the class of methods referred to as pyramidal embedded coding in which specific subsets of the binary data can be used to decode lower- resolution versions of the video sequence. Two key techniques in such a pyramidal coder are the scan-conversion operations of down-conversion and up-conversion. Down-conversion is required to produce the smaller, lower-resolution versions of the image sequence. Up- conversion is used to perform conditional coding, whereby the coded lower-resolution image is interpolated to the same resolution as the next higher image and used to assist in the encoding of that level. The coding efficiency depends on the accuracy of this up-conversion process. In this paper techniques for down-conversion and up-conversion are addressed in the context of a two-level pyramidal representation. We first present the pyramidal technique for spatial scalability and review the methods used in MPEG-2. We then discuss some enhanced methods for down- and up-conversion, and evaluate their performance in the context of the two-level scalable system.
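
    The two-level pyramid with down-conversion, up-conversion, and conditional coding of the residual can be sketched as follows. The 2x2 averaging and pixel-replication filters are illustrative stand-ins for the scan-conversion filters the paper evaluates, and the lossless residual here ignores the quantization a real coder would apply.

```python
import numpy as np

def down_convert(img):
    """Down-conversion: average each 2x2 block into one pixel."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up_convert(img):
    """Up-conversion: nearest-neighbor (pixel replication) interpolation."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

# Two-level embedded coding: the base layer is the down-converted image;
# the enhancement layer is the residual against the up-converted base.
original = np.arange(16, dtype=float).reshape(4, 4)
base = down_convert(original)              # decodable low-resolution layer
residual = original - up_convert(base)     # enhancement layer
reconstructed = up_convert(base) + residual
```

    The better the up-conversion predicts the full-resolution image, the smaller the residual's energy, which is precisely why the choice of scan-conversion filters drives the coding efficiency discussed above.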

  10. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  11. Future direction of ASME nuclear codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Ennis, Kevin; Sheehan, Mark E. [Codes and Standards ASME international, New York(United States)

    2003-04-01

    While the nuclear power industry in the US is in a period of stasis, there continues to be a great deal of activity in the ASME nuclear standards development arena. As plants age, the need for new approaches in standardization changes with the changing needs of the industry. New tools are becoming available in the form of risk analysis, and this is finding its way into more and more of ASME's standards activities. This paper examines the direction that ASME nuclear Codes and Standards are heading in this and other areas, as well as some advanced reactor concepts and plans for standards to address new technologies.

  12. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.

  13. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    Directory of Open Access Journals (Sweden)

    Tianli Chu

    2003-01-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
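    The per-layer source/channel trade-off can be illustrated with a hedged sketch (our own toy, not the paper's optimizer): split a parity-packet budget across the layers of a layered bitstream, assuming an ideal (k, n) erasure code per layer and independent packet loss, and maximize the expected number of decodable source packets given that layer l is useful only when layers 0..l all decode:

    ```python
    from itertools import product
    from math import comb

    def layer_decode_prob(k, parity, p):
        """Probability that a layer of k source packets is fully recovered when
        protected by `parity` packets of an ideal (k+parity, k) erasure code
        under independent packet-loss rate p."""
        n = k + parity
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(parity + 1))

    def best_parity_split(k_per_layer, budget, p):
        """Exhaustively split a parity-packet budget across layers to maximize
        the expected number of decodable source packets.  Layer l counts only
        if layers 0..l are all recovered (the layered-bitstream constraint)."""
        best, best_split = -1.0, None
        for split in product(range(budget + 1), repeat=len(k_per_layer)):
            if sum(split) > budget:
                continue
            expected, prob_prefix = 0.0, 1.0
            for k, parity in zip(k_per_layer, split):
                prob_prefix *= layer_decode_prob(k, parity, p)
                expected += prob_prefix * k
            if expected > best:
                best, best_split = expected, split
        return best_split, best

    split, gain = best_parity_split([4, 4, 4], budget=6, p=0.2)
    print(split, round(gain, 2))
    ```

    Because lower layers gate the usefulness of higher ones, the optimizer skews protection toward the base layer, which is the qualitative behaviour a multilayered error-control strategy exploits.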

  14. 40 CFR 35.936-16 - Code or standards of conduct.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Code or standards of conduct. 35.936-16... Code or standards of conduct. (a) The grantee must maintain a code or standards of conduct which shall...-related violations of law or of the code or standards of conduct by either the grantee officers, employees...

  15. Adaptive mode decision with residual motion compensation for distributed video coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren; Slowack, Jurgen

    2013-01-01

    Distributed video coding (DVC) is a coding paradigm that entails low complexity encoding by exploiting the source statistics at the decoder. To improve the DVC coding efficiency, this paper proposes a novel adaptive technique for mode decision to control and take advantage of skip mode and intra mode in DVC. The adaptive mode decision is based not only on the quality of key frames but also on the rate of Wyner-Ziv (WZ) frames. To improve noise distribution estimation for a more accurate mode decision, a residual motion compensation is proposed to estimate a current noise residue based on a previously decoded frame. The experimental results show that the proposed adaptive mode decision DVC significantly improves the rate distortion performance without increasing the encoding complexity. For a GOP size of 2 on the set of test sequences, the average bitrate saving of the proposed codec is 35.5% on WZ...

  17. A unified model of the standard genetic code.

    Science.gov (United States)

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines, and N for any of them). In this work, the RO model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results cannot be attained in either two or three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO model.

  18. Defining the cognitive enhancing properties of video games: Steps Towards Standardization and Translation.

    Science.gov (United States)

    Goodwin, Shikha Jain; Dziobek, Derek

    2016-09-01

    Ever since video games became available to the general public, they have intrigued brain researchers for many reasons. There is an enormous amount of diversity in video game research, ranging from the types of video games used, the amount of time spent playing, and the definition of video gamer versus non-gamer, to the results obtained after playing video games. In this paper, our goal is to provide a critical discussion of these issues, along with some steps towards generalization, using the discussion of an article published by Clemenson and Stark (2005) as the starting point. The authors used a distinction between 2D versus 3D video games to compare their effects on learning and memory in humans. The primary hypothesis of the authors is that the exploration of virtual environments while playing video games is a human correlate of environment enrichment. The authors found that video gamers performed better than non-gamers, and that if non-gamers are trained on playing video games, 3D games provide better environment enrichment compared to 2D video games, as indicated by better memory scores. The end goal of standardization in video games is to be able to translate the field so that the results can be used for the greater good.

  19. 3-D model-based frame interpolation for distributed video coding of static scenes.

    Science.gov (United States)

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.
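    The epipolar-constrained matching can be made concrete with a minimal sketch (our own toy, not the paper's method): candidate matches x2 for a point x1 must lie on the line l = F x1, so the SAD search is restricted to that line. The fundamental matrix below encodes a pure horizontal camera translation, for which epipolar lines are horizontal:

    ```python
    def epipolar_line(F, x):
        """Line l = F @ x in homogeneous coordinates: matches x2 satisfy l . x2 = 0."""
        return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

    def sad(img1, img2, p1, p2, half=1):
        """Sum of absolute differences between (2*half+1)^2 patches."""
        total = 0
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                total += abs(img1[p1[1] + dy][p1[0] + dx] -
                             img2[p2[1] + dy][p2[0] + dx])
        return total

    def match_along_epipolar(img1, img2, F, p1, half=1):
        """Block matching restricted to positions on the epipolar line of p1."""
        a, b, c = epipolar_line(F, [p1[0], p1[1], 1])
        best, best_p2 = None, None
        h, w = len(img2), len(img2[0])
        for y in range(half, h - half):
            for x in range(half, w - half):
                if abs(a * x + b * y + c) > 1e-9:   # not on the epipolar line
                    continue
                cost = sad(img1, img2, p1, (x, y), half)
                if best is None or cost < best:
                    best, best_p2 = cost, (x, y)
        return best_p2

    F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]   # pure horizontal camera translation
    img1 = [[(x * 7 + y * 13) % 50 for x in range(10)] for y in range(10)]
    img2 = [[img1[y][x - 3] if x >= 3 else 0 for x in range(10)] for y in range(10)]
    print(match_along_epipolar(img1, img2, F, (4, 5)))  # → (7, 5)
    ```

    The synthetic second frame is the first shifted right by 3 pixels, so the search correctly lands 3 pixels to the right on the same scanline; with a general F the same code scans an oblique line.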

  20. A high-efficient significant coefficient scanning algorithm for 3-D embedded wavelet video coding

    Science.gov (United States)

    Song, Haohao; Yu, Songyu; Song, Li; Xiong, Hongkai

    2005-07-01

    3-D embedded wavelet video coding (3-D EWVC) algorithms have become a vital scheme for state-of-the-art scalable video coding. A major objective in a progressive transmission scheme is to transmit first the most important information, which yields the largest distortion reduction, so traditional 3-D EWVC algorithms scan coefficients in bit-plane order. For significant-bit information within the same bit-plane, however, these algorithms neglect the differing contributions to distortion of coefficients in different subbands. In this paper, we analyze these differing contributions and propose a high-efficiency significant coefficient scanning algorithm. Experimental results for 3-D SPIHT and 3-D SPECK show that the proposed scanning algorithm improves the compression ability of traditional 3-D EWVC algorithms, yielding reconstructed videos with higher PSNR and better visual quality at the same bit rate compared to the original significant coefficient scanning algorithms.

  1. Standards, building codes, and certification programs for solar technology applicatons

    Energy Technology Data Exchange (ETDEWEB)

    Riley, J. D.; Odland, R.; Barker, H.

    1979-07-01

    This report is a primer on solar standards development. It explains the development of standards, building code provisions, and certification programs and their relationship to the emerging solar technologies. These areas are important in the commercialization of solar technology because they lead to the attainment of two goals: the development of an industry infrastructure and consumer confidence. Standards activities in the four phases of the commercialization process (applied research, development, introduction, and diffusion) are discussed in relation to institutional issues. Federal policies have been in operation for a number of years to accelerate the development process for solar technology. These policies are discussed in light of the Office of Management and Budget (OMB) Circular on federal interaction with the voluntary consensus system, and in light of current activities of DOE, HUD, and other interested federal agencies. The appendices cover areas of specific interest to different audiences: activities on the state and local level; and standards, building codes, and certification programs for specific technologies. In addition, a contract for the development of a model solar document let by DOE to a model consortium is excerpted in the Appendix.

  2. High Temperature Gas Reactors: Assessment of Applicable Codes and Standards

    Energy Technology Data Exchange (ETDEWEB)

    McDowell, Bruce K.; Nickolaus, James R.; Mitchell, Mark R.; Swearingen, Gary L.; Pugh, Ray

    2011-10-31

    Current interest expressed by industry in HTGR plants, particularly modular plants with power up to about 600 MW(e) per unit, has prompted NRC to task PNNL with assessing the currently available literature related to codes and standards applicable to HTGR plants, the operating history of past and present HTGR plants, and with evaluating the proposed designs of RPV and associated piping for future plants. Considering these topics in the order they are arranged in the text, first the operational histories of five shut-down and two currently operating HTGR plants are reviewed, leading the authors to conclude that while small, simple prototype HTGR plants operated reliably, some of the larger plants, particularly Fort St. Vrain, had poor availability. Safety and radiological performance of these plants has been considerably better than LWR plants. Petroleum processing plants provide some applicable experience with materials similar to those proposed for HTGR piping and vessels. At least one currently operating plant - HTR-10 - has performed and documented a leak before break analysis that appears to be applicable to proposed future US HTGR designs. Current codes and standards cover some HTGR materials, but not all materials are covered to the high temperatures envisioned for HTGR use. Codes and standards, particularly ASME Codes, are under development for proposed future US HTGR designs. A 'roadmap' document has been prepared for ASME Code development; a new subsection to section III of the ASME Code, ASME BPVC III-5, is scheduled to be published in October 2011. The question of terminology for the cross-duct structure between the RPV and power conversion vessel is discussed, considering the differences in regulatory requirements that apply depending on whether this structure is designated as a 'vessel' or as a 'pipe'. We conclude that designing this component as a 'pipe' is the more appropriate choice, but that the ASME BPVC

  3. Screen Codes: Efficient Data Transfer from Video Displays to Mobile Devices

    OpenAIRE

    Collomosse, J.; Kindberg, T.

    2007-01-01

    We present ‘Screen codes’ - a space- and time-efficient, aesthetically compelling method for transferring data from a display to a camera-equipped mobile device. Screen codes encode data as a grid of luminosity fluctuations within an arbitrary image, displayed on the video screen and decoded on a mobile device. These ‘twinkling’ images are a form of ‘visual hyperlink’, by which users can move dynamically generated content to and from their mobile devices. They help bridge the ‘content divide’...

  4. CowLog – Cross-Platform Application for Coding Behaviours from Video

    OpenAIRE

    Pastell, Matti

    2016-01-01

    CowLog is a cross-platform application for coding behaviours from video recordings for use in behavioural research. The software has been used in several studies, e.g. to study sleep in dairy calves, emotions in goats, and mind wandering related to computer use during lectures. CowLog 3 is implemented in JavaScript and HTML using the Electron framework. The framework allows the development of packaged cross-platform applications using features from a web browser (Chromium) as well as server s...

  5. Improving a Power Line Communications Standard with LDPC Codes

    Directory of Open Access Journals (Sweden)

    Praveen Jain

    2007-01-01

    We investigate a power line communications (PLC) scheme that could be used to enhance the HomePlug 1.0 standard, specifically its ROBO mode, which provides modest throughput for the worst-case PLC channel. The scheme is based on using a low-density parity-check (LDPC) code in lieu of the concatenated Reed-Solomon and convolutional codes in ROBO mode. The PLC channel is modeled with multipath fading and Middleton's class A noise. Clipping is introduced to mitigate the effect of impulsive noise. A simple and effective method is devised to estimate the variance of the clipped noise for LDPC decoding. Simulation results show that the proposed scheme outperforms the HomePlug 1.0 ROBO mode and has lower computational complexity. The proposed scheme also dispenses with the repetition of information bits in ROBO mode to gain time diversity, resulting in a 4-fold increase in physical layer throughput.
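    The clipped-noise variance estimate can be illustrated with a Monte-Carlo stand-in (our own sketch, not the paper's estimator; the two-component Gaussian mixture is only a crude proxy for Middleton class A noise):

    ```python
    import random

    def clip(x, t):
        """Symmetric hard clipping at threshold t."""
        return max(-t, min(t, x))

    def estimate_clipped_noise_variance(n=100000, sigma=0.1, sigma_imp=2.0,
                                        p_imp=0.05, t=0.5, seed=7):
        """Monte-Carlo estimate of the variance of clipped mixture noise:
        background Gaussian (sigma) plus occasional impulses (sigma_imp,
        probability p_imp), clipped at t.  Such an estimate would scale the
        Gaussian LLRs fed to an LDPC decoder, e.g. llr = 2*y / var_clipped."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n):
            s = sigma_imp if rng.random() < p_imp else sigma
            samples.append(clip(rng.gauss(0.0, s), t))
        mean = sum(samples) / n
        return sum((x - mean) ** 2 for x in samples) / n

    print(estimate_clipped_noise_variance())
    ```

    Clipping shrinks the effective noise variance by an order of magnitude here (roughly 0.02 versus about 0.21 unclipped under these made-up parameters), which is why clipping helps a Gaussian-metric decoder cope with impulsive channels.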

  6. Region of interest video coding for low bit-rate transmission of carotid ultrasound videos over 3G wireless networks.

    Science.gov (United States)

    Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos

    2007-01-01

    Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region-of-interest based ultrasound video compression study which shows that a significant reduction of the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.
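    One hedged way to realize such preprocessing (our own illustration; the paper does not specify this exact mechanism) is to low-pass filter everything outside the clinically important region, so an unmodified codec naturally spends fewer bits on the background:

    ```python
    def suppress_background(img, roi, radius=2):
        """Preprocess a frame so a standard codec spends fewer bits outside the
        region of interest: pixels outside the ROI rectangle (x0, y0, x1, y1)
        are box-blurred, removing high-frequency detail the encoder would
        otherwise have to code.  Illustrative sketch only."""
        x0, y0, x1, y1 = roi
        h, w = len(img), len(img[0])
        out = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if x0 <= x < x1 and y0 <= y < y1:
                    continue                      # keep ROI pixels untouched
                acc, cnt = 0, 0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            acc += img[yy][xx]
                            cnt += 1
                out[y][x] = acc // cnt            # box-filter average
        return out

    img = [[255 * ((x + y) % 2) for x in range(16)] for y in range(16)]
    blurred = suppress_background(img, roi=(4, 4, 12, 12))
    ```

    On the checkerboard test frame the ROI survives pixel-exact while the background collapses toward mid-gray, which is exactly the kind of detail removal that lets a codec reallocate bits to the diagnostically relevant area.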

  7. Electronic health record standards, coding systems, frameworks, and infrastructures

    CERN Document Server

    Sinha, Pradeep K; Bendale, Prashant; Mantri, Manisha; Dande, Atreya

    2013-01-01

    Discover How Electronic Health Records Are Built to Drive the Next Generation of Healthcare Delivery The increased role of IT in the healthcare sector has led to the coining of a new phrase "health informatics," which deals with the use of IT for better healthcare services. Health informatics applications often involve maintaining the health records of individuals, in digital form, which is referred to as an Electronic Health Record (EHR). Building and implementing an EHR infrastructure requires an understanding of healthcare standards, coding systems, and frameworks. This book provides an

  8. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach higher video quality than CBC's using less bandwidth, but they need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.
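    The CBC behaviour described above (constant rate, varying quality) can be sketched with a toy frame-level controller (illustrative only, not any standard's rate-control algorithm; the bits-versus-QP curve is an assumed model where bits halve per +6 QP):

    ```python
    def simulate_cbr(target_bits, frames=50, qp0=30.0, gain=2.0):
        """Frame-level constant-bit-rate sketch: a virtual buffer fills with
        each coded frame's bits and drains by one frame's worth of channel
        bits; its deviation from empty steers the quantization parameter.
        Fuller buffer -> higher QP -> coarser quality -> fewer bits."""
        def frame_bits(qp):
            # Assumed encoder model: bits halve for every +6 in QP.
            return 120000.0 / 2 ** (qp / 6.0)

        qp, deviation, produced = qp0, 0.0, []
        for _ in range(frames):
            bits = frame_bits(qp)
            produced.append(bits)
            deviation += bits / target_bits - 1.0   # fill minus constant drain
            qp = qp0 + gain * deviation             # proportional control
        return produced

    bits = simulate_cbr(target_bits=5000.0)
    print(round(bits[0]), round(bits[-1]))
    ```

    The first frame misses the 5000-bit target badly, but the buffer feedback settles QP so that later frames hit it almost exactly; the quality (QP) is what fluctuates, which is precisely the CBC trade-off.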

  9. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    Science.gov (United States)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. At the same time, however, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Through experimental results, the proposed method showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.
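    A hedged sketch of such a context-based early termination (a hypothetical heuristic with made-up thresholds, not the authors' decision scheme): skip the bidirectional search unless both reference lists already predict comparably well and their motion vectors agree:

    ```python
    def should_try_biprediction(cost_l0, cost_l1, mv_l0, mv_l1,
                                cost_ratio=1.5, mv_gap=4):
        """Heuristic sketch: biprediction mostly pays off when both reference
        lists (L0, L1) predict comparably well and their motion vectors point
        to nearby locations; otherwise the extra bidirectional search is
        unlikely to beat the better unidirectional candidate.
        Costs are unidirectional SAD/RD costs; mv_* are (x, y) vectors."""
        balanced = max(cost_l0, cost_l1) <= cost_ratio * min(cost_l0, cost_l1)
        nearby = (abs(mv_l0[0] - mv_l1[0]) <= mv_gap and
                  abs(mv_l0[1] - mv_l1[1]) <= mv_gap)
        return balanced and nearby

    print(should_try_biprediction(100, 110, (2, 0), (3, 1)))   # search worthwhile
    print(should_try_biprediction(100, 400, (2, 0), (30, 1)))  # skip the search
    ```

    A real scheme, like the paper's, would learn such decision boundaries from PU direction and motion-vector-accuracy statistics rather than fix them by hand.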

  10. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    Directory of Open Access Journals (Sweden)

    Modestino James W

    2004-01-01

    Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
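    The RCPC part of the concatenation can be illustrated in isolation (a generic textbook construction, not the paper's exact code): encode with a rate-1/2 convolutional mother code, then puncture coded bits by a periodic pattern to raise the rate, which is what makes the family "rate-compatible":

    ```python
    def conv_encode(bits, g=(0b111, 0b101)):
        """Rate-1/2 convolutional encoder, constraint length 3, generator
        polynomials 7 and 5 (octal) -- a common textbook mother code for
        RCPC families."""
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111         # shift in the new bit
            for gen in g:
                out.append(bin(state & gen).count("1") % 2)  # tap parity
        return out

    def puncture(coded, pattern=((1, 1), (1, 0))):
        """Drop coded bits per a period-2 puncturing matrix: keeping 3 of
        every 4 mother-code bits turns rate 1/2 into rate 2/3."""
        kept = []
        for i in range(0, len(coded), 2):
            col = pattern[(i // 2) % len(pattern)]
            for j in range(2):
                if col[j]:
                    kept.append(coded[i + j])
        return kept

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    mother = conv_encode(data)     # 16 coded bits at rate 1/2
    punct = puncture(mother)       # 12 bits -> effective rate 2/3
    print(len(mother), len(punct))
    ```

    In a JSCC setting, the same decoder serves every punctured rate; the allocator simply picks a puncturing pattern (more or less protection) per stream according to the channel.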

  11. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  12. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.

  13. Video-assisted thoracoscopic surgery (VATS) lobectomy using a standardized anterior approach

    DEFF Research Database (Denmark)

    Hansen, Henrik Jessen; Petersen, René Horsleben; Christensen, Merete

    2011-01-01

    Lobectomy using video-assisted thoracoscopic surgery (VATS) is still a controversial operation despite its many observed benefits. The controversy may be due to difficulties performing the procedure. This study addresses a standardized anterior approach facilitating the operation....

  14. Indexing, Browsing, and Searching of Digital Video.

    Science.gov (United States)

    Smeaton, Alan F.

    2004-01-01

    Presents a literature review that covers the following topics related to indexing, browsing, and searching of digital video: video coding and standards; conventional approaches to accessing digital video; automatically structuring and indexing digital video; searching, browsing, and summarization; measurement and evaluation of the effectiveness of…

  15. Performance Analysis of Video PHY Controller Using Unidirection and Bi-directional IO Standard via 7 Series FPGA

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M F L; Hussain, Dil muhammed Akbar

    2017-01-01

    The Video PHY controller offers an interface between transmitters/receivers and video ports. These video ports are categorized as HDMI or DisplayPort. HDMI Video PHY controllers are mostly used for their high-speed operation for high-resolution graphics. However, the execution of high-resolution graphics consumes more power, which creates a need for a low-power Video PHY controller design. In this paper, the performance of the Video PHY controller is analyzed by comparing the power consumption of unidirectional and bidirectional IO Standards on a 7 series FPGA. It is determined that total on-chip power is reduced for the unidirectional IO Standard based Video PHY controller compared to the bidirectional IO Standard based Video PHY controller. The most significant conclusion of this work is that the unidirectional IO Standard based Video PHY controller consumes the least...

  16. Fifty years of progress in speech coding standards

    Science.gov (United States)

    Cox, Richard

    2004-10-01

    Over the past 50 years, speech coding has taken root worldwide. Early applications were for the military and for transmission in telephone networks. The military gave equal priority to intelligibility and low bit rate. The telephone network gave priority to high quality and low delay. These illustrate three of the four areas in which requirements must be set for any speech coder application: bit rate, quality, delay, and complexity. While the military could afford relatively expensive terminal equipment for secure communications, the telephone network needed low cost for massive deployment in switches and transmission equipment worldwide. Today speech coders are at the heart of the wireless phones and telephone answering systems we use every day. In addition to the technology and technical invention that has occurred, standards make it possible for all these different systems to interoperate. The primary areas of standardization are the public switched telephone network, wireless telephony, and secure telephony for government and military applications. With the advent of IP telephony there are additional standardization efforts and challenges. In this talk, progress in all of these areas is reviewed, along with a reflection on Jim Flanagan's impact on this field during the past half century.

  18. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
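    A tiny brute-force stand-in for the ILP (illustrative only; the paper's formulation also models 802.16m time-resource shares in detail): choose one MCS per SVC layer, subject to an airtime budget and the constraint that higher layers may use less robust MCSs, to maximize the number of layers delivered across users:

    ```python
    from itertools import product

    def assign_mcs(layer_rates, user_mcs_max, mcs_rates):
        """Exhaustively pick an MCS index per SVC layer to maximize the total
        number of layers decoded across users.  A user decodes layer l only
        if its channel supports the MCS of every layer up to l; robustness
        may only decrease (MCS index only increase) with the layer index.
        layer_rates: bits per layer; mcs_rates: relative spectral efficiency
        per MCS; user_mcs_max: highest MCS index each user can receive."""
        n_layers = len(layer_rates)
        best, best_assign = -1, None
        for assign in product(range(len(mcs_rates)), repeat=n_layers):
            if any(assign[i] > assign[i + 1] for i in range(n_layers - 1)):
                continue   # base layer must be at least as robust
            # airtime = bits / efficiency per layer, within one resource unit
            airtime = sum(r / mcs_rates[m] for r, m in zip(layer_rates, assign))
            if airtime > 1.0:
                continue
            utility = 0
            for cap in user_mcs_max:
                l = 0
                while l < n_layers and assign[l] <= cap:
                    l += 1
                utility += l              # layers this user can decode
            if utility > best:
                best, best_assign = utility, assign
        return best_assign, best

    assign, utility = assign_mcs([0.5, 0.5, 1.0], [0, 1, 2, 2, 2], [1.0, 2.0, 4.0])
    print(assign, utility)
    ```

    The optimum sends the base layer with the most robust MCS so the weakest user still gets basic quality, while enhancement layers ride faster MCSs; a real instance replaces this enumeration with an ILP solver.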

  19. Analyses to support development of risk-informed separation distances for hydrogen codes and standards.

    Energy Technology Data Exchange (ETDEWEB)

    LaChance, Jeffrey L.; Houf, William G. (Sandia National Laboratories, Livermore, CA); Fluer, Inc., Paso Robels, CA; Fluer, Larry (Fluer, Inc., Paso Robels, CA); Middleton, Bobby

    2009-03-01

    The development of a set of safety codes and standards for hydrogen facilities is necessary to ensure they are designed and operated safely. To help ensure that a hydrogen facility meets an acceptable level of risk, code and standard development organizations are utilizing risk-informed concepts in developing hydrogen codes and standards.

  20. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... which cycle, go to: http://www.iccsafe.org/cs/codes/Web pages/cycle.aspx. The Code Development Process..., Country Club Hills, Illinois 60478; or download a copy from the ICC Web site noted previously. The... Code. International Property Maintenance Code. International Residential Code. International Swimming...

  1. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications.

  2. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for rate, distortion, and slope of the R-D curve for inter and intra frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  3. Real-time distributed video coding for 1K-pixel visual sensor networks

    Science.gov (United States)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  4. WIAMan Technology Demonstrator Sensor Codes Conforming to International Organization for Standardization/Technical Standard (ISO/TS) 13499

    Science.gov (United States)

    2016-03-01

    WIAMan Technology Demonstrator Sensor Codes Conforming to International Organization for Standardization/Technical Standard (ISO/TS) 13499, by Michael Tegtmeyer, WIAMan Engineering Office, ARL. Approved for public release; distribution is unlimited.

  5. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC) multimedia bit streams, such as H.264. It is based on codeword error diffusion and variable size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is transparent to the end users and can be deployed at any node in the network. It provides different levels of security, with the encrypted data volume fluctuating between 5.5% and 17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plain text attacks.
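The segment-shuffling idea can be sketched with a single logistic map standing in for the paper's coupled chaotic maps; the seed, the map parameter, and the segment size below are illustrative, not the scheme's actual key schedule:

```python
# Chaos-driven segment shuffling sketch: a logistic map produces a key
# stream that drives a Fisher-Yates permutation of equal-size bitstream
# segments. Seed, r, and segment size are invented for illustration.
def logistic_prng(seed, n, r=3.99):
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)        # chaotic logistic map iteration
        out.append(int(x * 2**32) & 0xFFFFFFFF)
    return out

def shuffle_segments(data, seg, seed=0.6180339887):
    segs = [data[i:i + seg] for i in range(0, len(data), seg)]
    keys = logistic_prng(seed, len(segs))
    for i in range(len(segs) - 1, 0, -1):   # Fisher-Yates shuffle
        j = keys[i] % (i + 1)
        segs[i], segs[j] = segs[j], segs[i]
    return b"".join(segs)

def unshuffle_segments(data, seg, seed=0.6180339887):
    segs = [data[i:i + seg] for i in range(0, len(data), seg)]
    keys = logistic_prng(seed, len(segs))
    swaps = [(i, keys[i] % (i + 1)) for i in range(len(segs) - 1, 0, -1)]
    for i, j in reversed(swaps):            # undo swaps in reverse order
        segs[i], segs[j] = segs[j], segs[i]
    return b"".join(segs)

payload = bytes(range(32))
enc = shuffle_segments(payload, 4)
assert unshuffle_segments(enc, 4) == payload
```

The real scheme shuffles variable-size segments and additionally diffuses codeword errors; this sketch only shows the reversible, keyed-permutation core.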

  6. MPEG-2 video coding with an adaptive selection of scanning path and picture structure

    Science.gov (United States)

    Zhou, Minhua; De Lameillieure, Jan L.; Schaefer, Ralf

    1996-09-01

    In MPEG-2 video coding, an interlaced frame can be encoded as either a frame-picture or two field-pictures. The selection of picture structure (frame/field) has a strong impact on picture quality. In order to achieve the best possible picture quality, an adaptive scheme is proposed in this paper to select the optimal picture structure on a frame-by-frame basis. The selection of picture structure is performed in connection with that of the optimal scanning path. First, the scanning path (zig-zag scan/alternate scan) is chosen based on a post-analysis of DCT coefficients. Secondly, the optimal picture structure is selected for the next frame according to the chosen scanning path, i.e. a zig-zag scan corresponds to frame picture structure, while an alternate scan corresponds to field picture structure. Furthermore, the TM5 buffer control algorithm is extended to support coding with an adaptive frame/field picture structure. Finally, simulation results verify the proposed adaptive scheme.
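One way to picture the post-analysis step, under the assumption that a scan is better when it reaches the last non-zero coefficient earlier, is the toy rule below; the 4x4 block and scan tables are stand-ins for MPEG-2's 8x8 case, not the paper's actual decision criterion:

```python
# Toy post-analysis: pick the scan whose last non-zero coefficient
# appears earliest, then tie the scan choice to the picture structure
# (zig-zag -> frame, alternate -> field). 4x4 tables stand in for 8x8.
ZIGZAG    = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
ALTERNATE = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def last_nonzero(coeffs, scan):
    idx = -1
    for pos, src in enumerate(scan):
        if coeffs[src] != 0:
            idx = pos
    return idx

def choose_structure(coeffs):
    zz, alt = last_nonzero(coeffs, ZIGZAG), last_nonzero(coeffs, ALTERNATE)
    return ("zigzag", "frame") if zz <= alt else ("alternate", "field")

# energy only in the first column (strong vertical detail, typical of
# interlaced motion) favours the alternate scan / field structure
col = [1 if i % 4 == 0 else 0 for i in range(16)]
print(choose_structure(col))
```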

  7. Thoracic surgeons' perception of frail behavior in videos of standardized patients.

    Science.gov (United States)

    Ferguson, Mark K; Thompson, Katherine; Huisingh-Scheetz, Megan; Farnan, Jeanne; Hemmerich, Josh A; Slawinski, Kris; Acevedo, Julissa; Lee, Sang Mee; Rojnica, Marko; Small, Stephen

    2014-01-01

    Frailty is a predictor of poor outcomes following many types of operations. We measured thoracic surgeons' accuracy in assessing patient frailty using videos of standardized patients demonstrating signs of physical frailty. We compared their performance to that of geriatrics specialists. We developed an anchored scale for rating degree of frailty. Reference categories were assigned to 31 videos of standardized patients trained to exhibit five levels of activity ranging from "vigorous" to "frail." Following an explanation of frailty, thoracic surgeons and geriatrics specialists rated the videos. We evaluated inter-rater agreement and tested differences between ratings and reference categories. The influences of clinical specialty, clinical experience, and self-rated expertise were examined. Inter-rater rank correlation among all participants was high (Kendall's W 0.85) whereas exact agreement (Fleiss' kappa) was only moderate (0.47). Better inter-rater agreement was demonstrated for videos exhibiting extremes of behavior. Exact agreement was better for thoracic surgeons (n = 32) than geriatrics specialists (n = 9; p = 0.045), whereas rank correlation was similar for both groups. More clinical years of experience and self-reported expertise were not associated with better inter-rater agreement. Videos of standardized patients exhibiting varying degrees of frailty are rated with internal consistency by thoracic surgeons as accurately as geriatrics specialists when referenced to an anchored scale. Ratings were less consistent for moderate degrees of frailty, suggesting that physicians require training to recognize early frailty. Such videos may be useful in assessing and teaching frailty recognition.

  8. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    Science.gov (United States)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-motivated bit errors. Thus, the robust performance of 3D-MVV transmission schemes over wireless channels has become an important research issue due to the restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot by several cameras around a single object, simultaneously. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It is used to reduce the channel effects on the transmitted bit streams and it also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
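A discretized Baker map (in the well-known form given by Fridrich) conveys the flavour of the chaotic interleaving step; the block size and key below are illustrative, not the paper's parameters:

```python
# Sketch of a discretized Baker-map interleaver applied to an N x N
# block of bytes. The key (n1..nk) must sum to N and each ni must
# divide N; key and block size here are invented for illustration.
def baker_permutation(n, key):
    assert sum(key) == n and all(n % k == 0 for k in key)
    perm, base = {}, 0
    for ni in key:
        q = n // ni
        for x in range(base, base + ni):
            for y in range(n):
                nx = q * (x - base) + y % q      # stretch the strip
                ny = (y - y % q) // q + base     # and lay it flat
                perm[(x, y)] = (nx, ny)
        base += ni
    return perm

def interleave(block, n, key):
    perm = baker_permutation(n, key)
    out = bytearray(n * n)
    for (x, y), (nx, ny) in perm.items():
        out[nx * n + ny] = block[x * n + y]
    return bytes(out)

data = bytes(range(16))
mixed = interleave(data, 4, (2, 2))
assert sorted(mixed) == sorted(data)   # a permutation: nothing is lost
assert mixed != data                   # but the order is scrambled
```

Because the map is a bijection, the receiver can invert the permutation with the same key, which is what makes the interleaving double as lightweight encryption.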

  9. Telemetry Standards, RCC Standard 106-17, Chapter 4, Pulse Code Modulation Standards

    Science.gov (United States)

    2017-07-01

    item c) or supercommutation (Subsection 4.3.2 item d). f. Format changes (Section 4.4). g. Asynchronous embedded formats ( Paragraph 4.5). h. Tagged...however, more than one counter may be used if needed. This paragraph assumes the use of one SFID counter. The SFID counter shall be located in a...Words The following paragraphs describe the formatting of time words within a PCM stream. A 16-bit standardized time word format and a method to

  10. Multiview video codec based on KTA techniques

    Science.gov (United States)

    Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon

    2011-03-01

    Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because the resolutions of multi-view video are getting higher for a more realistic 3D effect, a high-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes temporal redundancy, and inter-view prediction reduces inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency, because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on the Key Technology Area (KTA) is proposed. KTA is a high-efficiency video codec by the Video Coding Experts Group (VCEG), developed to push coding efficiency beyond H.264/AVC. The KTA software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and inter-view prediction are implemented in the proposed codec, which showed high coding gain compared with the view-independent coding result by KTA. The results show that inter-view prediction can achieve higher efficiency in a multi-view video codec based on a high-performance video codec such as HEVC.

  11. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
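The memory advantage of lifting can be seen in one dimension with the integer CDF 5/3 wavelet, a simpler stand-in for the (9, 7) bank used in the article: each lifting step touches only a couple of neighbouring samples and is exactly invertible.

```python
# Integer CDF 5/3 lifting, one decomposition level, even-length signals.
# Two lifting steps (predict, then update) with symmetric extension,
# illustrating why lifting needs so little intermediate storage.
def fwd53(x):
    n = len(x)
    def px(i):                        # whole-sample symmetric extension
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * (n - 1) - i
        return x[i]
    d = [px(2*i+1) - (px(2*i) + px(2*i+2)) // 2 for i in range(n // 2)]
    def pd(i):
        return d[min(max(i, 0), len(d) - 1)]
    s = [px(2*i) + (pd(i-1) + pd(i) + 2) // 4 for i in range(n // 2)]
    return s, d                       # lowpass and highpass subbands

def inv53(s, d):
    def pd(i):
        return d[min(max(i, 0), len(d) - 1)]
    even = [s[i] - (pd(i-1) + pd(i) + 2) // 4 for i in range(len(s))]
    def pe(i):                        # mirror extension on even samples
        return even[i] if i < len(even) else even[2 * len(even) - 1 - i]
    x = [0] * (2 * len(s))
    x[::2] = even
    x[1::2] = [d[i] + (pe(i) + pe(i+1)) // 2 for i in range(len(d))]
    return x

sig = [5, 2, 7, 3, 9, 1, 4, 8]
assert inv53(*fwd53(sig)) == sig      # lifting is exactly invertible
```

The (9, 7) bank used in the article adds two more lifting step pairs and a scaling stage, but the dataflow, and hence the pipelining opportunity, is the same.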

  12. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, the simulation results show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods in low-bitrate transmission.

  13. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2010-01-01

    Full Text Available We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies from human perception. We propose a novel SVA model, where multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized, to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA model based ROI extraction scheme outperforms the schemes using only spatial or/and temporal visual attention clues. Finally, by using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented to allocate more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve a 20-30% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by 0.46-0.61 dB at the cost of insensitive image quality degradation of the background image.
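The final allocation step reduces, in caricature, to weighting each region's share of the frame bit budget by its saliency; the weights, region names, and budget below are invented, and the paper's actual optimization is rate-distortion driven:

```python
# Caricature of regional bit allocation: each region's share of the
# frame budget is proportional to its saliency weight. All numbers and
# region names are invented for illustration.
def allocate_bits(weights, budget):
    total = sum(weights)
    alloc = [budget * w // total for w in weights]
    top = max(range(len(weights)), key=weights.__getitem__)
    alloc[top] += budget - sum(alloc)  # leftover bits go to the top ROI
    return alloc

regions = {"roi": 8, "mid": 3, "background": 1}
bits = allocate_bits(list(regions.values()), 12000)
print(dict(zip(regions, bits)))
```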

  14. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Science.gov (United States)

    Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai

    2010-12-01

    We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies from human perception. We propose a novel SVA model, where multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized, to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA model based ROI extraction scheme outperforms the schemes using only spatial or/and temporal visual attention clues. Finally, by using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented to allocate more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve a 20-30% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by 0.46-0.61 dB at the cost of insensitive image quality degradation of the background image.

  15. Stereo side information generation in low-delay distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a technique that allows shifting the computational complexity from the encoder to the decoder. One of the core elements of the decoder is the creation of the Side Information (SI), which is a hypothesis of what the to-be-decoded frame looks like. Much work on DVC has been carried out: often the decoder can use future and past frames in order to obtain the SI by exploiting the temporal redundancy. Other work has addressed a multiview scenario; by exploiting the frames coming from cameras close to the one being decoded (usually a left and a right camera), it is possible to create SI exploiting the inter-view spatial redundancy. A careful fusion of the two SIs should be done in order to use the best part of each. In this work we study a stereo low-delay scenario using only two views. Due to the delay constraint, we use only past frames of the sequence being decoded.

  16. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    Science.gov (United States)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams are processed in real time by the developed BsPU at a core clock speed under 250 MHz.
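The EPB removal the abstract refers to is defined by H.264/AVC itself: every 0x03 byte that follows two zero bytes in the coded stream is an inserted escape and is simply dropped. A single-pass reference version (the hardware instruction does the equivalent on the fly):

```python
# H.264/AVC emulation-prevention-byte removal: a 0x03 that follows two
# zero bytes is an inserted escape and is dropped in one pass, with no
# extra buffering -- the property the BsPU instruction exploits.
def remove_epb(nal):
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0                  # skip the escape byte itself
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0 else 0
    return bytes(out)

raw = bytes([0x00, 0x00, 0x03, 0x01, 0x42, 0x00, 0x00, 0x03, 0x00])
print(remove_epb(raw).hex())
```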

  17. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Dai Qionghai

    2010-01-01

    Full Text Available We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies from human perception. We propose a novel SVA model, where multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized, to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA model based ROI extraction scheme outperforms the schemes using only spatial or/and temporal visual attention clues. Finally, by using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented to allocate more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve a 20-30% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by 0.46-0.61 dB at the cost of insensitive image quality degradation of the background image.

  18. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    Directory of Open Access Journals (Sweden)

    Bezan Scott

    2006-01-01

    Full Text Available To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant-rate-coded frame of video. We perform an analysis of the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  19. Using standardized patients versus video cases for representing clinical problems in problem-based learning.

    Science.gov (United States)

    Yoon, Bo Young; Choi, Ikseon; Choi, Seokjin; Kim, Tae-Hee; Roh, Hyerin; Rhee, Byoung Doo; Lee, Jong-Tae

    2016-06-01

    The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypotheses generation. SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.

  20. Final Report. An Integrated Partnership to Create and Lead the Solar Codes and Standards Working Group

    Energy Technology Data Exchange (ETDEWEB)

    Rosenthal, Andrew [New Mexico State Univ., Las Cruces, NM (United States)

    2013-12-30

    The DOE grant, “An Integrated Partnership to Create and Lead the Solar Codes and Standards Working Group,” to New Mexico State University created the Solar America Board for Codes and Standards (Solar ABCs). From 2007 to 2013, with funding from this grant, Solar ABCs identified current issues, established a dialogue among key stakeholders, and catalyzed appropriate activities to support the development of codes and standards that facilitated the installation of high-quality, safe photovoltaic systems. Solar ABCs brought the following resources to the PV stakeholder community: formal coordination in the planning or revision of interrelated codes and standards, removing “stove pipes” that have only roofing experts working on roofing codes, PV experts on PV codes, fire enforcement experts working on fire codes, etc.; a conduit through which all interested stakeholders were able to see the steps being taken in the development or modification of codes and standards and participate directly in the processes; a central clearing house for new documents, standards, proposed standards, analytical studies, and recommendations of best practices available to the PV community; a forum of experts that invites and welcomes all interested parties into the process of performing studies, evaluating results, and building consensus on standards and code-related topics that affect all aspects of the market; and a biennial gap analysis to formally survey the PV community to identify needs that are unmet and inhibiting the market and necessary technical developments.

  1. Effects of Expanded and Standard Captions on Deaf College Students' Comprehension of Educational Videos

    Science.gov (United States)

    Stinson, Michael S.; Stevenson, Susan

    2013-01-01

    Twenty-two college students who were deaf viewed one instructional video with standard captions and a second with expanded captions, in which key terms were expanded in the form of vocabulary definitions, labeled illustrations, or concept maps. The students performed better on a posttest after viewing either type of caption than on a pretest;…

  2. Panel mounted time code reader. [Day of year, hour, minute, second from standard Inter Range Instrumentation Group time codes

    Energy Technology Data Exchange (ETDEWEB)

    Shaum, R. L.

    1978-02-01

    The time code reader described is composed of an assembly of commonly available electronic components logically arranged to minimize component count while reliably achieving the basic function of decoding and displaying the time of year information which is available in each of several standard Inter Range Instrumentation Group (IRIG) time codes. The time code reader omits all subsidiary functions of the code except that of retrieving a readable time of year (day, hour, minute, second). The reader can be mounted on any equipment panel having an available flat surface that is 2 by 6 inches in dimensions. IRIG time codes A, B, E, and G can be read without the necessity of switching, and a relatively wide range of input voltages is accommodated.

  3. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficient entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
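For contrast, the conventional RNC decode that MATIN tries to cheapen can be sketched over GF(2), where coefficient mixing is XOR and decoding is Gauss-Jordan elimination; the block values, packet counts, and RNG seed are illustrative (real systems typically work over GF(2^8)):

```python
import random

# Baseline random network coding over GF(2) with Gauss-Jordan decoding,
# the scheme MATIN improves on. Blocks are ints treated as bit vectors;
# all sizes and the seed are invented for illustration.
def encode(blocks, n_coded, rng):
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in blocks]
        mix = 0
        for c, b in zip(coeffs, blocks):
            if c:
                mix ^= b              # GF(2) addition is XOR
        coded.append((coeffs, mix))
    return coded

def decode(coded, k):
    rows = [list(c) + [p] for c, p in coded]
    for col in range(k):              # Gauss-Jordan elimination over GF(2)
        piv = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if piv is None:
            return None               # rank deficient: wait for more packets
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

rng = random.Random(7)
blocks = [0b1010, 0b0111, 0b1100]
result = None
while result is None:                 # "retransmit" until full rank
    result = decode(encode(blocks, 4, rng), 3)
assert result == blocks
```

The per-packet coefficient vector and the elimination loop are exactly the two costs (header overhead and decoding complexity) that the abstract identifies.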

  4. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficient entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  5. Are you ready for an office code blue? : online video to prepare for office emergencies.

    Science.gov (United States)

    Moore, Simon

    2015-01-01

    Medical emergencies occur commonly in offices of family physicians, yet many offices are poorly prepared for emergencies. An Internet-based educational video discussing office emergencies might improve the responses of physicians and their staff to emergencies, yet such a tool has not been previously described. To use evidence-based practices to develop an educational video detailing preparation for emergencies in medical offices, disseminate the video online, and evaluate the attitudes of physicians and their staff toward the video. A 6-minute video was created using a review of recent literature and Canadian regulatory body policies. The video describes recommended emergency equipment, emergency response improvement, and office staff training. Physicians and their staff were invited to view the video online at www.OfficeEmergencies.ca. Viewers' opinions of the video format and content were assessed by survey (n = 275). Survey findings indicated the video was well presented and relevant, and the Web-based format was considered convenient and satisfactory. Participants would take other courses using this technology, and agreed this program would enhance patient care. Copyright © the College of Family Physicians of Canada.

  6. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    Science.gov (United States)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
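
    The optimality criterion named in this abstract, algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). A minimal sketch of computing it for a toy graph (not the paper's phenotypic graphs), assuming NumPy is available:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the Laplacian L = D - A of an
    undirected graph given by its adjacency matrix."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # Laplacian
    eigenvalues = np.linalg.eigvalsh(L)  # sorted ascending for symmetric L
    return eigenvalues[1]

# 4-cycle: its Fiedler value is known to be 2
cycle4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
print(round(algebraic_connectivity(cycle4), 6))
```

    Lower algebraic connectivity means the phenotypic graph is easier to disconnect, which is the sense in which the abstract calls a code error-correcting optimal.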

  7. Assessment of Codes and Standards Applicable to a Hydrogen Production Plant Coupled to a Nuclear Reactor

    Energy Technology Data Exchange (ETDEWEB)

    M. J. Russell

    2006-06-01

    This is an assessment of codes and standards applicable to a hydrogen production plant to be coupled to a nuclear reactor. The result of the assessment is a list of codes and standards that are expected to be applicable to the plant during its design and construction.

  8. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    Science.gov (United States)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal

  9. 77 FR 67628 - National Fire Codes: Request for Public Input for Revision of Codes and Standards

    Science.gov (United States)

    2012-11-13

    ... 820--2012 Standard for Fire 7/8/2013 Protection in Wastewater Treatment and Collection Facilities... Storage and Handling of Cellulose Nitrate Film. NFPA 45--2011 Standard on Fire 1/4/2013 Protection for...

  10. Codes and standards and other guidance cited in regulatory documents. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Ankrum, A.; Nickolaus, J.; Vinther, R.; Maguire-Moffitt, N.; Hammer, J.; Sherfey, L.; Warner, R. [Pacific Northwest Lab., Richland, WA (United States)

    1994-08-01

    As part of the US Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program, Pacific Northwest Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. In addition to updating previous information, Revision 1 adds citations from the NRC Inspection Manual and the Improved Standard Technical Specifications. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Generic Letters, Policy Statements, Regulatory Guides, and the Standard Review Plan (NUREG-0800).

  11. Low complexity HEVC intra coding

    OpenAIRE

    Ruiz Coll, José Damián

    2016-01-01

    Over the last few decades, much research has focused on the development and optimization of video codecs for media distribution to end-users via the Internet, broadcasts or mobile networks, but also for videoconferencing and for the recording on optical disks for media distribution. Most of the video coding standards for delivery are characterized by using a high efficiency hybrid scheme, based on inter-prediction coding for temporal picture decorrelation, and intra-prediction coding for spat...

  12. A Smart Video Coding Method for Time Lag Reduction in Telesurgery

    National Research Council Canada - National Science Library

    Sun, Mingui; Liu, Qiang; Xu, Jian; Kassam, Amin; Enos, Sharon E; Marchessault, Ronald; Gilbert, Gary; Sclabassi, Robert J

    2006-01-01

    .... These advances have made remotely operable telemedicine possible. However, a key technology that rapidly encodes, transmits, and decodes surgical video with the minimum round-trip delay and the least influence by network jitter...

  13. Analysis of Packet-Loss-Induced Distortion in View Synthesis Prediction-Based 3D Video Coding.

    Science.gov (United States)

    Gao, Pan; Peng, Qiang; Xiang, Wei

    2017-06-01

    View synthesis prediction (VSP) is a crucial coding tool for improving compression efficiency in the next generation 3D video systems. However, VSP is susceptible to catastrophic error propagation when multi-view video plus depth (MVD) data are transmitted over lossy networks. This paper aims at accurately modeling the transmission errors propagated in the inter-view direction caused by VSP. Toward this end, we first study how channel errors gradually propagate along the VSP-based inter-view prediction path. Then, a new recursive model is formulated to estimate the expected end-to-end distortion caused by those channel losses. For the proposed model, the compound impact of the transmission distortions of both the texture video and depth map on the quality of the synthetic reference view is mathematically analyzed. Especially, the expected view synthesis distortion due to depth errors is characterized in the frequency domain using a new approach, which combines the energy densities of the reconstructed texture image and the channel errors. The proposed model also explicitly considers the disparity rounding operation invoked for the sub-pixel precision rendering of the synthesized reference view. Experimental results are presented to demonstrate that the proposed analytic model is capable of effectively modeling the channel-induced distortion for MVD-based 3D video transmission.
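
    A heavily simplified, generic recursive channel-distortion model of the kind this abstract describes might look like the following; the recursion and its coefficients are illustrative assumptions for a single prediction chain, not the paper's actual VSP formulation:

```python
def expected_distortion(loss_prob, loss_distortion, attenuation, frames):
    """Toy recursion E[D_n] = p*D_loss + (1-p)*a*E[D_(n-1)]: a lost frame
    contributes D_loss; a received frame inherits an attenuated fraction of
    the previous frame's expected error through inter-frame prediction."""
    d, history = 0.0, []
    for _ in range(frames):
        d = loss_prob * loss_distortion + (1 - loss_prob) * attenuation * d
        history.append(d)
    return history

# with 10% loss, expected distortion settles near p*D_loss / (1 - (1-p)*a)
trace = expected_distortion(0.1, 100.0, 0.9, 200)
```

    The paper's model additionally splits the error between texture and depth and accounts for the disparity rounding of the synthesized view, which this sketch omits.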

  14. HEVC performance and complexity for 4K video

    OpenAIRE

    Bross, Benjamin; George, Valeri; Álvarez-Mesa, Mauricio; Mayer, Tobias; Chi, Chi Ching; Brandenburg, Jens; Schierl, Thomas; Marpe, Detlev; Juurlink, Ben

    2013-01-01

    The recently finalized High-Efficiency Video Coding (HEVC) standard was jointly developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to improve the compression performance of current video coding standards by 50%. Especially when it comes to transmitting high resolution video like 4K over the internet or in broadcast, the 50% bitrate reduction is essential. This paper shows that real-time decoding of 4K video with a frame-level parallel deco...

  15. 76 FR 22383 - National Fire Codes: Request for Proposals for Revision of Codes and Standards

    Science.gov (United States)

    2011-04-21

    ... Facilities. NFPA 140-2008 Standard on Motion 5/23/2011 Picture and Television Production Studio Soundstages... Fireplaces, Vents, and Solid Fuel-Burning Appliances. NFPA 225-2009 Model Manufactured Home 5/23/2011... Standard for Fire 5/23/2011 Safety Criteria for Manufactured Home Installations, Sites, and Communities...

  16. 77 FR 34020 - National Fire Codes: Request for Public Input for Revision of Codes and Standards

    Science.gov (United States)

    2012-06-08

    ... Fire Protection in Wastewater 7/8/2013. Treatment and Collection Facilities. NFPA 850--2010 Recommended... the Storage and Handling of 6/22/2012. Cellulose Nitrate Film. NFPA 45--2011 Standard on Fire...

  17. Codes and Standards Requirements for Deployment of Emerging Fuel Cell Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Burgess, R.; Buttner, W.; Riykin, C.

    2011-12-01

    The objective of this NREL report is to provide information on codes and standards for two emerging hydrogen fuel cell technology markets, forklift trucks and backup power units, in order to ease the implementation of emerging fuel cell technologies. This information should help project developers, project engineers, code officials and other interested parties in developing and reviewing permit applications for regulatory compliance.

  18. NODC Standard Format Marine Toxic Substances and Pollutants (F144) chemical identification codes (NODC Accession 9200273)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This archival information package contains a listing of codes and chemical names that were used in NODC Standard Format Marine Toxic Substances and Pollutants (F144)...

  19. Spatial resolution enhancement residual coding using hybrid ...

    Indian Academy of Sciences (India)

    the increasing demands of video communication that motivate researchers to develop cutting-edge algorithms. All the video coding standards, to date, make use of various ... quantization and entropy coding to minimize spatio-temporal, intra-frame, visual, and statistical redundancies, respectively. Intra and inter prediction.

  20. Accelerating Wavelet-Based Video Coding on Graphics Hardware using CUDA

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Roerdink, Jos B.T.M.; Jalba, Andrei C.; Zinterhof, P; Loncaric, S; Uhl, A; Carini, A

    2009-01-01

    The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. This transform, by means of the lifting scheme, can be performed in a memory and computation efficient way on modern, programmable GPUs, which can be regarded as massively
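
    The lifting scheme mentioned in this abstract can be illustrated with the integer Haar wavelet, the simplest lifting factorization; this toy stands in for the idea only and is not the transform or the GPU/CUDA mapping the paper accelerates. Lifting splits the transform into a cheap predict step and a cheap update step, and is exactly invertible in integer arithmetic:

```python
def haar_lift(x):
    """One level of forward integer Haar lifting; len(x) must be even."""
    detail = [x[2*i + 1] - x[2*i] for i in range(len(x) // 2)]       # predict
    approx = [x[2*i] + detail[i] // 2 for i in range(len(x) // 2)]   # update
    return approx, detail

def haar_unlift(approx, detail):
    """Inverse lifting: undo the update step, then the predict step."""
    x = []
    for s, d in zip(approx, detail):
        even = s - d // 2
        x += [even, even + d]
    return x

signal = [5, 7, 3, 1, 8, 8, 2, 6]
a, d = haar_lift(signal)
assert haar_unlift(a, d) == signal  # perfect reconstruction
```

    Because each output sample depends only on a small local neighborhood, the predict and update steps map naturally onto data-parallel hardware, which is what makes lifting attractive for GPUs.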

  1. Observational Review and Analysis of Concussion: a Method for Conducting a Standardized Video Analysis of Concussion in Rugby League

    National Research Council Canada - National Science Library

    Gardner, Andrew J; Levi, Christopher R; Iverson, Grant L

    2017-01-01

    ... published. The aim of this study is to evaluate whether independent raters reliably agreed on the injury characterization when using a standardized observational instrument to record video footage of National Rugby League (NRL...

  2. Accounting Education Approach in the Context of New Turkish Commercial Code and Turkish Accounting Standards

    Directory of Open Access Journals (Sweden)

    Cevdet Kızıl

    2014-08-01

    Full Text Available The aim of this article is to investigate the impact of the new Turkish commercial code and Turkish accounting standards on accounting education. This study takes advantage of the survey method for gathering information and running the research analysis. For this purpose, questionnaire forms were distributed to university students personally and via the internet. This paper includes significant research questions such as “Are accounting academicians informed and knowledgeable on the new Turkish commercial code and Turkish accounting standards?”, “Do accounting academicians integrate the new Turkish commercial code and Turkish accounting standards into their lectures?”, “How do modern accounting education methodology and technology coincide with the teaching of the new Turkish commercial code and Turkish accounting standards?”, “Do universities offer mandatory and elective courses which cover the new Turkish commercial code and Turkish accounting standards?” and “If such courses are offered, what are their names, percentage in the curriculum and degree of coverage?” The research contributes to the literature in several ways. Firstly, the new Turkish commercial code and Turkish accounting standards are significant current topics for the accounting profession. Furthermore, accounting education provides a basis for implementations in the public and private sector. Besides, one of the intentions of the new Turkish commercial code and Turkish accounting standards is to foster transparency, which is a critical concept in terms of mergers, acquisitions and investments. Stakeholders of today’s business world, such as investors, shareholders, entrepreneurs, auditors and government, are in need of more standardized global accounting principles. Thus, the revision and redesign of accounting education plays an important role. These points clearly demonstrate the necessity and relevance of this research.

  3. Safety, codes and standards for hydrogen installations. Metrics development and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Aaron P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dedrick, Daniel E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Angela Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); San Marchi, Christopher W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-04-01

    Automakers and fuel providers have made public commitments to commercialize light duty fuel cell electric vehicles and fueling infrastructure in select US regions beginning in 2014. The development, implementation, and advancement of meaningful codes and standards is critical to enable the effective deployment of clean and efficient fuel cell and hydrogen solutions in the energy technology marketplace. Metrics pertaining to the development and implementation of safety knowledge, codes, and standards are important to communicate progress and inform future R&D investments. This document describes the development and benchmarking of metrics specific to the development of hydrogen specific codes relevant for hydrogen refueling stations. These metrics will be most useful as the hydrogen fuel market transitions from pre-commercial to early-commercial phases. The target regions in California will serve as benchmarking case studies to quantify the success of past investments in research and development supporting safety codes and standards R&D.

  4. Impact of H.264/AVC and H.265/HEVC Compression Standards on the Video Quality for 4K Resolution

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2014-01-01

    Full Text Available This article deals with the impact of the H.264/AVC and H.265/HEVC compression standards on the video quality for 4K resolution. The first part gives a short characterization of both compression standards. The second part focuses on the well-known objective metrics that were used for evaluating the video quality. In the third part, the measurements and the experimental results are described.

  5. International standard problem (ISP) no. 41 follow up exercise: Containment iodine computer code exercise: parametric studies

    Energy Technology Data Exchange (ETDEWEB)

    Ball, J.; Glowa, G.; Wren, J. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Ewig, F. [GRS Koln (Germany); Dickenson, S. [AEAT, (United Kingdom); Billarand, Y.; Cantrel, L. [IPSN (France); Rydl, A. [NRIR (Czech Republic); Royen, J. [OECD/NEA (France)

    2001-11-01

    This report describes the results of the second phase of International Standard Problem (ISP) 41, an iodine behaviour code comparison exercise. The first phase of the study, which was based on a simple Radioiodine Test Facility (RTF) experiment, demonstrated that all of the iodine behaviour codes had the capability to reproduce iodine behaviour for a narrow range of conditions (single temperature, no organic impurities, controlled pH steps). The current phase, a parametric study, was designed to evaluate the sensitivity of iodine behaviour codes to boundary conditions such as pH, dose rate, temperature and initial I⁻ concentration. The codes used in this exercise were IODE(IPSN), IODE(NRIR), IMPAIR(GRS), INSPECT(AEAT), IMOD(AECL) and LIRIC(AECL). The parametric study described in this report identified several areas of discrepancy between the various codes. In general, the codes agree regarding qualitative trends, but their predictions regarding the actual amount of volatile iodine varied considerably. The largest source of the discrepancies between code predictions appears to be their different approaches to modelling the formation and destruction of organic iodides. A recommendation arising from this exercise is that an additional code comparison exercise be performed on organic iodide formation, against data obtained from intermediate-scale studies (two RTF (AECL, Canada) and two CAIMAN facility (IPSN, France) experiments have been chosen). This comparison will allow each of the code users to realistically evaluate and improve the organic iodide behaviour sub-models within their codes. (author)

  6. 76 FR 70414 - National Fire Protection Association (NFPA) Proposes To Revise Codes and Standards

    Science.gov (United States)

    2011-11-14

    ... Document--Edition Document title closing date NFPA 1--2012 Fire Code 6/22/2012 NFPA 2--2011 Hydrogen... Road Vehicles. NFPA 610--2009 Guide for Emergency and Safety Operations at 11/25/2011 Motorsports...--2009 Standard for Wildland Fire Management 11/25/2011 NFPA 1192--2011 Standard on Recreational Vehicles...

  7. Defining the cognitive enhancing properties of video games: Steps Towards Standardization and Translation

    OpenAIRE

    Goodwin, Shikha Jain; Dziobek, Derek

    2016-01-01

    Ever since video games were available to the general public, they have intrigued brain researchers for many reasons. There is an enormous amount of diversity in the video game research, ranging from types of video games used, the amount of time spent playing video games, the definition of video gamer versus non-gamer to the results obtained after playing video games. In this paper, our goal is to provide a critical discussion of these issues, along with some steps towards generalization using...

  8. Prediction accuracy in estimating joint angle trajectories using a video posture coding method for sagittal lifting tasks.

    Science.gov (United States)

    Chang, Chien-Chi; McGorry, Raymond W; Lin, Jia-Hua; Xu, Xu; Hsiang, Simon M

    2010-08-01

    This study investigated the prediction accuracy of a video posture coding method for lifting joint trajectory estimation. From three filming angles, the coder selected four key snapshots, identified joint angles and then a prediction program estimated the joint trajectories over the course of a lift. Results revealed a limited range of differences in joint angles (elbow, shoulder, hip, knee, ankle) between the manual coding method and the electromagnetic motion tracking system approach. Lifting range significantly affected estimate accuracy for all joints, and camcorder filming angle had a significant effect on all joints but the hip. Joint trajectory predictions were more accurate for knuckle-to-shoulder lifts than for floor-to-shoulder or floor-to-knuckle lifts, with average root mean square errors (RMSE) of 8.65 degrees, 11.15 degrees and 11.93 degrees, respectively. Accuracy was also greater for the filming angle orthogonal to the participant's sagittal plane (RMSE = 9.97 degrees) than for filming angles of 45 degrees (RMSE = 11.01 degrees) or 135 degrees (RMSE = 10.71 degrees). The effects of lifting speed and loading conditions were minimal. To further increase prediction accuracy, improved prediction algorithms and/or better posture matching methods should be investigated. STATEMENT OF RELEVANCE: Observation and classification of postures are common steps in risk assessment of manual materials handling tasks. The ability to accurately predict lifting patterns through video coding can provide ergonomists with greater resolution in characterising or assessing the lifting tasks than evaluation based solely on sampling with a single lifting posture event.
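
    The RMSE figures quoted above follow the standard root-mean-square-error definition; a minimal sketch with made-up angle values (not the study's data):

```python
import math

def rmse(predicted, measured):
    """Root mean square error between two equal-length angle sequences."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(predicted))

# hypothetical coder-predicted vs motion-tracker elbow angles (degrees)
print(rmse([30, 45, 60, 75], [32, 44, 63, 70]))  # about 3.12
```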

  9. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    Directory of Open Access Journals (Sweden)

    James W. Modestino

    2007-01-01

    Full Text Available We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy ζ². The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
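
    The rate-compatible puncturing idea behind RCPC codes can be sketched as follows. Starting from a mother code (here rate 1/2), higher rates are obtained by deleting output bits according to a periodic pattern; rate-compatible means each higher-rate pattern transmits a subset of the lower-rate bits. The mother-code bits and the pattern below are illustrative, and the convolutional encoder itself is omitted:

```python
def puncture(coded_bits, pattern):
    """Keep coded_bits[i] where pattern[i % len(pattern)] == 1."""
    return [b for i, b in enumerate(coded_bits)
            if pattern[i % len(pattern)] == 1]

mother = [1, 0, 1, 1, 0, 0, 1, 1]          # rate-1/2 output: 8 bits for 4 info bits
rate_2_3 = puncture(mother, [1, 1, 1, 0])  # keep 3 of every 4 -> rate 2/3
print(rate_2_3)                            # [1, 0, 1, 0, 0, 1]
```

    The UEP in the abstract comes from sending the base layer with little or no puncturing and the enhancement layers with more aggressive puncturing (higher rate, weaker protection).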

  10. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    Directory of Open Access Journals (Sweden)

    Modestino James W

    2007-01-01

    Full Text Available We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy ζ². The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  11. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study.

    Science.gov (United States)

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with a LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed.

  12. A repository of codes of ethics and technical standards in health informatics.

    Science.gov (United States)

    Samuel, Hamman W; Zaïane, Osmar R

    2014-01-01

    We present a searchable repository of codes of ethics and standards in health informatics. It is built using state-of-the-art search algorithms and technologies. The repository will be potentially beneficial for public health practitioners, researchers, and software developers in finding and comparing ethics topics of interest. Public health clinics, clinicians, and researchers can use the repository platform as a one-stop reference for various ethics codes and standards. In addition, the repository interface is built for easy navigation, fast search, and side-by-side comparative reading of documents. Our selection criteria for codes and standards are twofold: first, to maintain intellectual property rights, we index only codes and standards freely available on the internet; second, major international, regional, and national health informatics bodies across the globe are surveyed with the aim of understanding the landscape in this domain. We also look at prevalent technical standards in health informatics from major bodies such as the International Standards Organization (ISO) and the U.S. Food and Drug Administration (FDA). Our repository contains codes of ethics from the International Medical Informatics Association (IMIA), the iHealth Coalition (iHC), the American Health Information Management Association (AHIMA), the Australasian College of Health Informatics (ACHI), the British Computer Society (BCS), and the UK Council for Health Informatics Professions (UKCHIP), with room for adding more in the future. Our major contribution is enhancing the findability of codes and standards related to health informatics ethics by compilation and unified access through the health informatics ethics repository.

  13. Republic of Lithuania; Report on the Observance of Standards and Codes-Fiscal Transparency Module

    OpenAIRE

    International Monetary Fund

    2002-01-01

    This report examines the Observance of Standards and Codes on Fiscal Transparency for the Republic of Lithuania. Lithuania’s fiscal institutional framework meets many requirements of the Code of Good Practices on Fiscal Transparency. Important strengths are clearly defined roles and responsibilities of the three branches of government; limited scope for quasi-fiscal activity at the central government level; and binding debt rules for all levels of government. The reforms that are being impl...

  14. Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy

    Energy Technology Data Exchange (ETDEWEB)

    Nakarado, Gary L. [Regulatory Logic LLC, Golden, CO (United States)

    2017-02-22

    The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and to support and facilitate the completion by 2012 of necessary codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.

  15. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    Science.gov (United States)

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games to be an effective method for increasing content knowledge about the risks of drug and alcohol use among adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.
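
The inter-coder agreement reported above (Cohen's kappa from 0.66 to 1.00) can be computed directly from two coders' item-level judgments. A minimal sketch follows; the codings below are hypothetical examples, not the study's data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters' categorical codings of the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Chance agreement: product of each category's marginal proportions.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 games on one yes/no instrument item
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "yes", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.78
```

Values near 1.0, as in the study, indicate agreement well beyond chance.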

  17. Use of a standardized code status explanation by residents among hospitalized patients

    Directory of Open Access Journals (Sweden)

    Kriti Mittal

    2014-04-01

    Objectives: There is wide variability in the discussion of code status by residents among hospitalized patients. The primary objective of this study was to determine the effect of a scripted code status explanation on patient understanding of choices pertaining to code status and end-of-life care. Methods: This was a single-center, randomized trial in a teaching hospital. Patients were randomized to a control arm (questionnaire alone) or an intervention arm (standardized explanation + questionnaire). A composite score was generated based on patient responses to assess comprehension. Results: The composite score was 5.27 in the intervention arm compared to 4.93 in the control arm (p=0.066). The score was lower in older patients (p<0.001), patients with multiple comorbidities (p≤0.001), patients with a KATZ score <6 (p=0.008), and those living in an assisted living facility or nursing home (p=0.005). There were significant differences in patient understanding of the ability to receive chest compressions, intravenous fluids, and tube feeds by code status. Conclusion: The scripted code status explanation did not significantly impact the composite score. Age, comorbidities, performance status, and type of residence demonstrated a significant association with patient understanding of code status choices. Practice implications: Standardized discussion of code status and training in communication of end-of-life care merit further research.

  18. Analyses in support of risk-informed natural gas vehicle maintenance facility codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Ekoto, Isaac W.; Blaylock, Myra L.; LaFleur, Angela Christine; LaChance, Jeffrey L.; Horne, Douglas B.

    2014-03-01

    Safety standards development for maintenance facilities of liquid and compressed gas fueled large-scale vehicles is required to ensure proper facility design and operation envelopes. Standards development organizations are utilizing risk-informed concepts to develop natural gas vehicle (NGV) codes and standards so that maintenance facilities meet acceptable risk levels. The present report summarizes Phase I work on existing NGV repair facility code requirements and highlights inconsistencies that require quantitative analysis of their effectiveness. A Hazard and Operability (HAZOP) study was performed to identify key scenarios of interest. Finally, scenario analyses were performed using detailed simulations and modeling to estimate the overpressure hazards from the HAZOP-defined scenarios. The results from Phase I will be used to identify significant risk contributors at NGV maintenance facilities and are expected to form the basis for follow-on quantitative risk analysis work to address specific code requirements and identify effective accident prevention and mitigation strategies.

  19. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Pamala C.; Halverson, Mark A.

    2013-09-01

    The U.S. Department of Energy’s (DOE) Building America program implemented a new Codes and Standards Innovation (CSI) Team in 2013. The Team’s mission is to assist Building America (BA) research teams and partners in identifying and resolving conflicts between Building America innovations and the various codes and standards that govern the construction of residences. A CSI Roadmap was completed in September 2013. This guidance document was prepared using the information in the CSI Roadmap to provide BA research teams and partners with specific information and approaches to identifying and overcoming potential barriers to BA innovations arising in and/or stemming from codes, standards, and rating methods. For more information on the BA CSI team, please email: CSITeam@pnnl.gov

  20. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.
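
The 1.5 dB gain cited above is measured in PSNR, the standard objective quality metric in such comparisons. A minimal sketch of how PSNR is computed between a reference and a reconstructed frame follows; the two 8-sample pixel arrays are toy data, not the paper's sequences:

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit pixel values: 10*log10(peak^2 / MSE)."""
    assert len(reference) == len(reconstructed)
    mse = sum((r - c) ** 2 for r, c in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak * peak / mse)

# Toy reference and reconstructed pixel values
ref = [52, 55, 61, 59, 79, 61, 76, 41]
rec = [50, 56, 60, 61, 77, 62, 75, 43]
print(round(psnr(ref, rec), 2))  # → 44.15
```

A 1.5 dB PSNR improvement at the same bitrate is a clearly visible quality difference.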

  1. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    One of the key requirements for mobile devices is to provide high-performance computing at lower power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to the system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up memory access and data transfer among memories. Moreover, an efficient data reuse design is proposed for the deblocking filter video processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder due to the proposed techniques.

  2. Multiple description coding for SNR scalable video transmission over unreliable networks

    NARCIS (Netherlands)

    Choupani, R.; Wong, S.; Tolun, M.

    2012-01-01

    Streaming multimedia data on best-effort networks such as the Internet requires measures against bandwidth fluctuations and frame loss. Multiple Description Coding (MDC) methods are used to overcome the jitter and delay problems arising from frame losses by making the transmitted data more error resilient.

  3. A Model for Video Quality Assessment Considering Packet Loss for Broadcast Digital Television Coded in H.264

    Directory of Open Access Journals (Sweden)

    Jose Joskowicz

    2014-01-01

    This paper presents a model to predict the video quality perceived by the broadcast digital television (DTV) viewer. We present how noise on DTV can introduce individual transport stream (TS) packet losses at the receiver. The type of these errors is different from those produced on IP networks. Different scenarios of TS packet loss are analyzed, including uniform and burst distributions. The results show that there is high variability in the perceived quality for a given percentage of packet loss and type of error. This implies that there is practically no correlation between the type of error or the percentage of packet loss and the perceived degradation. A new metric is introduced, the weighted percentage of slice loss, which takes into account the affected slice type in each lost TS packet. We show that this metric is correlated with the video quality degradation. A novel parametric model for video quality estimation is proposed, designed, and verified based on the results of subjective tests in SD and HD. The results were compared to a standard model used in IP transmission scenarios. The proposed model improves the Pearson correlation and root mean square error between the subjective and the predicted MOS.
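
The weighted percentage of slice loss described above can be sketched as a weighted count of lost slices by slice type. The per-type weights below are hypothetical placeholders for illustration, not the values fitted in the paper:

```python
def weighted_slice_loss(lost_slices, slice_weights, total_slices):
    """Weighted percentage of slice loss: each lost slice counts with a
    weight reflecting how much its slice type degrades the decoded video."""
    weighted = sum(slice_weights[s] for s in lost_slices)
    return 100.0 * weighted / total_slices

# Hypothetical weights: losing an I slice hurts more than a P or B slice,
# because errors in reference frames propagate to dependent frames.
weights = {"I": 1.0, "P": 0.5, "B": 0.2}
print(round(weighted_slice_loss(["I", "P", "B", "B"], weights, 100), 2))  # → 1.9
```

Unlike a raw packet-loss percentage, this metric distinguishes four lost B slices from four lost I slices, which is why it correlates better with perceived degradation.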

  4. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
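
Part of the computational appeal of binary descriptors noted above is that matching reduces to Hamming distance, computable with a XOR and a popcount. A minimal brute-force matching sketch follows; the 16-bit descriptors are made up for illustration (real binary descriptors are typically 256 or 512 bits):

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors given as ints."""
    return bin(d1 ^ d2).count("1")

def match(query, database, max_dist=10):
    """Brute-force nearest neighbor of a binary descriptor by Hamming
    distance, rejected if the best match is farther than max_dist."""
    best = min(database, key=lambda d: hamming(query, d))
    return best if hamming(query, best) <= max_dist else None

# Toy 16-bit descriptors
q = 0b1011001110001111
db = [0b1011001110000111, 0b0000111100001111, 0b1111000011110000]
print(format(match(q, db), "016b"))  # → 1011001110000111
```

On real hardware this inner loop maps to a few instructions per comparison, which is what makes binary features attractive for bandwidth- and power-constrained visual sensor nodes.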

  5. Compiled reports on the applicability of selected codes and standards to advanced reactors

    Energy Technology Data Exchange (ETDEWEB)

    Benjamin, E.L.; Hoopingarner, K.R.; Markowski, F.J.; Mitts, T.M.; Nickolaus, J.R.; Vo, T.V.

    1994-08-01

    The following papers were prepared for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission under contract DE-AC06-76RLO-1830, NRC FIN L2207. This project, Applicability of Codes and Standards to Advanced Reactors, reviewed selected mechanical and electrical codes and standards to determine their applicability to the construction, qualification, and testing of advanced reactors and to develop recommendations as to where it might be useful and practical to revise them to suit the (design certification) needs of the NRC.

  6. Review of application code and standards for mechanical and piping design of HANARO fuel test loop

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. Y.

    1998-02-01

    The design and installation of the irradiation test facility for verification testing of fuel performance are very important for maximizing the utilization of HANARO. The HANARO fuel test loop (FTL) was designed in accordance with the same codes and standards as a nuclear power plant, because the FTL will operate at high pressures and temperatures equivalent to nuclear power plant operating conditions. The objective of this study is to confirm the suitability of the codes and standards applied to the mechanical and piping design of the HANARO fuel test loop and to determine the technical specifications of the FTL systems. (author). 18 refs., 8 tabs., 6 figs.

  7. Impact of GoP on the Video Quality of VP9 Compression Standard for Full HD Resolution

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2016-01-01

    In recent years, interest in multimedia services has increased significantly, which leads to requirements for quality assessment, especially in the video domain. Compression and transmission link imperfections are the two main factors that influence quality. This paper deals with the assessment of the impact of the Group of Pictures (GoP) structure on the video quality of the VP9 compression standard. The evaluation was done using selected objective and subjective methods on two types of Full HD sequences, depending on content. These results are part of a new model, still under development, that will be used to predict video quality in IP-based networks.

  8. Optimal modulation and coding scheme allocation of scalable video multicast over IEEE 802.16e networks

    Directory of Open Access Journals (Sweden)

    Tsai Chia-Tai

    2011-01-01

    With the rapid development of wireless communication technology and the rapid increase in demand for network bandwidth, IEEE 802.16e is an emerging network technique that has been deployed in many metropolises. In addition to the features of high data rate and large coverage, it also enables scalable video multicasting, which is a potentially promising application over an IEEE 802.16e network. How to optimally assign the modulation and coding scheme (MCS) of the scalable video stream for the mobile subscriber stations to improve spectral efficiency and maximize utility is a crucial task. We formulate this MCS assignment problem as an optimization problem, called the total utility maximization problem (TUMP). This article transforms the TUMP into a precedence constraint knapsack problem, which is an NP-complete problem. Then, a branch and bound method, which is based on two dominance rules and a lower bound, is presented to solve the TUMP. The simulation results show that the proposed branch and bound method can find the optimal solution efficiently.
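
The branch and bound strategy above can be illustrated on the plain 0/1 knapsack problem. This sketch uses the standard fractional (LP-relaxation) upper bound for pruning and deliberately omits the paper's precedence constraints and dominance rules; it is a simplified stand-in, not the authors' algorithm:

```python
def knapsack_bb(values, weights, capacity):
    """0/1 knapsack by depth-first branch and bound with a fractional
    upper bound; subtrees that cannot beat the incumbent are pruned."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(idx, value, room):
        # Optimistic bound: fill the remaining room fractionally by density.
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def dfs(idx, value, room):
        nonlocal best
        if idx == len(order):
            best = max(best, value)
            return
        if bound(idx, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        i = order[idx]
        if weights[i] <= room:
            dfs(idx + 1, value + values[i], room - weights[i])  # take item i
        dfs(idx + 1, value, room)                               # skip item i

    dfs(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # → 220
```

In the MCS setting, "items" would be (stream layer, MCS) choices whose utilities depend on which subscribers can decode them, with precedence constraints linking scalable layers.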

  9. Using High-Fidelity Simulation and Video-Assisted Debriefing to Enhance Obstetrical Hemorrhage Mock Code Training.

    Science.gov (United States)

    Jacobs, Peggy J

    The purpose of this descriptive, one-group posttest study was to explore the nursing staff's perception of the benefits of using high-fidelity simulation during mandated obstetrical hemorrhage mock code training. In addition, video-assisted debriefing was used to enhance the nursing staff's evaluation of their communication and teamwork processes during a simulated obstetrical crisis. The convenience sample of 84 members of the nursing staff consented to completing data collection forms and being videotaped during the simulation. Quantitative results from the postsimulation survey showed that 93% of participants agreed or totally agreed that the use of SimMan made the simulation more realistic and enhanced learning, and that debriefing and the use of videotaped playback improved their evaluation of team communication. Participants derived the greatest benefit from reviewing their performance on videotape and discussing it during postsimulation debriefing. Simulation with video-assisted debriefing offers hospital educators the ability to evaluate team processes and offer support to improve teamwork, with the ultimate goal of improving patient outcomes during obstetrical hemorrhage.

  10. The Effects of Standard and Reversed Code Mixing on L2 Vocabulary Recognition and Recall

    Directory of Open Access Journals (Sweden)

    Abbas Ali Zarei

    2012-09-01

    To investigate the effects of two code mixing conventions on L2 vocabulary recognition and recall, 87 female Iranian lower-intermediate EFL learners were divided into three groups. One group received vocabulary instruction through standard code mixing, in which an L1 lexical item was incorporated within an L2 context; another received the same instruction through reversed code mixing, which involved the use of an L2 lexical item within an L1 context. The third group was a comparison group that was presented with the same words in English sentences without any code mixing. At the end of the treatment, multiple-choice and fill-in-the-blanks vocabulary tests were administered to all three groups. The gathered data were analyzed using two separate one-way ANOVA procedures. Results indicated that code mixing conventions had no significant effect on the learners' vocabulary recognition. As to vocabulary production, the comparison group outperformed the standard code mixing group in a statistically significant way. The findings of the present study may have implications for learners, teachers, and syllabus designers.
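
The one-way ANOVA procedure used above rests on the F statistic: between-group variance divided by within-group variance. A from-scratch sketch follows; the three groups of scores are hypothetical, not the study's data:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across independent groups:
    (SS_between / (k-1)) / (SS_within / (n-k))."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    k = len(groups)
    n = len(all_vals)
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical vocabulary test scores for three instruction groups
standard_mix = [14, 15, 13, 16]
reversed_mix = [12, 11, 13, 12]
comparison = [17, 18, 16, 17]
print(round(one_way_anova_f(standard_mix, reversed_mix, comparison), 2))  # → 25.0
```

The F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to decide significance.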

  11. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    Science.gov (United States)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

    This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It attempts to take advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, by merely modifying the scan order of the intra frame scalable coding framework of H.264, we can obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and against DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus, image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.

  12. End-of-life decisions in Malaysia: Adequacies of ethical codes and developing legal standards.

    Science.gov (United States)

    Kassim, Puteri Nemie Jahn; Alias, Fadhlina

    2015-06-01

    End-of-life decision-making is an area of medical practice in which ethical dilemmas and legal interventions have become increasingly prevalent. Decisions are no longer confined to clinical assessments; rather, they involve wider considerations such as a patient's religious and cultural beliefs, financial constraints, and the wishes and needs of family members. These decisions affect everyone concerned, including members of the community as a whole. Therefore it is imperative that clear ethical codes and legal standards are developed to help guide the medical profession on the best possible course of action for patients. This article considers the relevant ethical codes and legal provisions in Malaysia governing certain aspects of end-of-life decision-making. It highlights the lack of judicial decisions in this area as well as the limitations of the Malaysian regulatory system. The article recommends the development of comprehensive ethical codes and legal standards to guide end-of-life decision-making in Malaysia.

  13. Up to code: does your company's conduct meet world-class standards?

    Science.gov (United States)

    Paine, Lynn; Deshpandé, Rohit; Margolis, Joshua D; Bettcher, Kim Eric

    2005-12-01

    Codes of conduct have long been a feature of corporate life. Today, they are arguably a legal necessity--at least for public companies with a presence in the United States. But the issue goes beyond U.S. legal and regulatory requirements. Sparked by corruption and excess of various types, dozens of industry, government, investor, and multisector groups worldwide have proposed codes and guidelines to govern corporate behavior. These initiatives reflect an increasingly global debate on the nature of corporate legitimacy. Given the legal, organizational, reputational, and strategic considerations, few companies will want to be without a code. But what should it say? Apart from a handful of essentials spelled out in Sarbanes-Oxley regulations and NYSE rules, authoritative guidance is sorely lacking. In search of some reference points for managers, the authors undertook a systematic analysis of a select group of codes. In this article, they present their findings in the form of a "codex," a reference source on code content. The Global Business Standards Codex contains a set of overarching principles as well as a set of conduct standards for putting those principles into practice. The GBS Codex is not intended to be adopted as is, but is meant to be used as a benchmark by those wishing to create their own world-class code. The provisions of the codex must be customized to a company's specific business and situation; individual companies' codes will include their own distinctive elements as well. What the codex provides is a starting point grounded in ethical fundamentals and aligned with an emerging global consensus on basic standards of corporate behavior.

  14. Codes and standards for structural wood products and their use in the United States

    Science.gov (United States)

    David W. Green; Roland. Hernandez

    1998-01-01

    The system of model building codes and voluntary product standards used in the United States for structural lumber and engineered wood products can appear complicated and confusing to those introduced to it for the first time. This paper is a discussion of the various types of structural wood products commonly used in U.S. residential and commercial construction and...

  15. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-09-01

    This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.

  16. Standard PREanalytical Codes: A New Paradigm for Environmental Biobanking Sectors Explored in Algal Culture Collections.

    Science.gov (United States)

    Benson, Erica E; Betsou, Fotini; Amaral, Raquel; Santos, Lília M A; Harding, Keith

    2011-12-01

    The Standard PREanalytical Code (SPREC) was developed by the medical/clinical biobanking sector motivated by the need to harmonize biospecimen traceability in preanalytical processes and enable interconnectivity and interoperability between different biobanks, research consortia, and infrastructures. The clinical SPREC (01) consists of standard preanalytical variable options (7-code elements), which comprise published and (ideally) validated methodologies. Although the SPREC has been designed to facilitate clinical research, the concept could have utility in biorepositories and culture collections that service environmental and biodiversity communities. The SPREC paradigm can be applied to different storage regimes across all types of biorepository. The objective of this article is to investigate adapting the code in nonclinical biobanks using algal culture collections and their cryostorage as a case study. The SPREC (01) is recalibrated as a putative code that might be adopted for biobanks holding different types of biodiversity; it is extended to include optional coding from the point of sample collection to postcryostorage manipulations, with the caveat that the processes are undertaken by biorepository personnel.

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  18. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and streaming video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats.

  19. Technical Skills Training for Veterinary Students: A Comparison of Simulators and Video for Teaching Standardized Cardiac Dissection.

    Science.gov (United States)

    Allavena, Rachel E; Schaffer-White, Andrea B; Long, Hanna; Alawneh, John I

    2017-06-05

    The goal of the study was to evaluate alternative student-centered approaches that could replace autopsy sessions and live demonstration, and to explore refinements in assessment procedures for standardized cardiac dissection. Simulators and videos were identified as feasible, economical, student-centered teaching methods for technical skills training in medical contexts, and a direct comparison was undertaken. A low-fidelity, anatomically correct simulator approximately the size of a horse's heart with embedded dissection pathways was constructed and used with a series of laminated photographs of standardized cardiac dissection. A video of a standardized cardiac dissection of a normal horse's heart was recorded and presented with audio commentary. Students were allowed to nominate a preference for learning method, and students who indicated no preference were randomly allocated to keep group numbers even. Objective performance data from an objective structured assessment criterion and student perception data on confidence and competency from surveys showed both innovations were similarly effective. Evaluator reflections as well as usage logs to track patterns of student use were both recorded. A strong selection preference was identified, with kinesthetic learners choosing the simulator and visual learners choosing the video. Students in the video cohort were better at articulating the reasons for dissection procedures and sequence due to the audio commentary, and student satisfaction was higher with the video. The major conclusion of this study was that both methods are effective tools for technical skills training, but consideration should be given to the preferred learning style of adult learners to maximize educational outcomes.

  20. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, also known as MPEG-4 Part 10 or Advanced Video Coding (AVC), allows new flexibility in the use of video on the battlefield. The standard necessitates encoder chips to effectively utilize its increased capabilities; such chips are designed to cover the full range of the standard, with designers of individual products selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters. The user can thereby adjust video bandwidth (and video quality) along four dimensions of quality, on the fly, without stopping video transmission. The four dimensions are: 1) spatial: change from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal: change from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5 to 1 range; and 4) Group of Pictures (GOP) length, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow RAVC to be applied at any point in the communication chain by discarding preselected packets.
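
    The four-dimensional parameter selection described above can be sketched as a simple tier lookup. This is a minimal illustration, not the product's logic: the tier names, bit-rate thresholds, and quantizer values are invented assumptions; only the resolution and frame-rate steps come from the abstract.

```python
# Hedged sketch: choosing encoding parameters along the four quality
# dimensions (spatial, temporal, transform quality, GOP) for a bit-rate
# budget. Thresholds and quantizer/GOP values are illustrative only.

def select_parameters(target_kbps):
    """Pick a parameter set for a target bit-rate budget in kbit/s."""
    tiers = [
        # (min_kbps, resolution,  fps, quant_scale, gop_length)
        (1500, (720, 480), 30, 1, 30),   # full quality
        (500,  (320, 360), 30, 2, 30),   # reduced spatial resolution
        (150,  (320, 360), 5,  3, 15),   # reduced temporal resolution
        (0,    (160, 180), 5,  5, 15),   # minimum quality
    ]
    for min_kbps, res, fps, quant, gop in tiers:
        if target_kbps >= min_kbps:
            return {"resolution": res, "fps": fps, "quant": quant, "gop": gop}

print(select_parameters(2000))   # full-quality tier
print(select_parameters(200))    # reduced temporal tier
```

    Because each tier only changes one or two dimensions relative to its neighbor, switching tiers mid-stream does not require restarting the encoder, which matches the "on the fly" adaptation the abstract describes.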

  1. Video surveillance system based on MPEG-4

    Science.gov (United States)

    Ge, Jing; Zhang, Guoping; Yang, Zongkai

    2008-03-01

    Multimedia technology and network protocols are the basic technologies of a video surveillance system. A networked remote video surveillance system based on the MPEG-4 video coding standard is designed and implemented in this paper. The advantages of MPEG-4 in the surveillance field are analyzed in detail, and the Real-time Transport Protocol and RTP Control Protocol (RTP/RTCP) are chosen as the network transmission protocols. The whole system includes a video coding control module, a playback module, a network transmission module and a network receiver module. Schemes for the management, control and storage of video data are discussed. DirectShow technology is used to play back the video data. The transmission of digital video over networks, including RTP packetization of the MPEG-4 video stream, is discussed, as are the receiver-side handling of video data and the buffering mechanism. Most of the functions are implemented in software, except the video coding control module, which is implemented in hardware. The experimental results show that the system provides good video quality and real-time performance, and can be applied in a wide range of fields.
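
    RTP packetization of a video stream, as mentioned above, amounts to prefixing each payload chunk with the fixed 12-byte RTP header defined in RFC 3550. The sketch below builds that header; the payload type 96 is a common dynamic-range choice for MPEG-4 video, an assumption here rather than a value from the paper.

```python
import struct

# Hedged sketch of RTP packetization (RFC 3550 header layout).
# Payload type 96 is an assumed dynamic payload type for MPEG-4 video.

def rtp_packet(payload, seq, timestamp, ssrc, marker=False, pt=96):
    """Build a minimal 12-byte RTP header followed by the payload."""
    vpxcc = 0x80                               # version=2, no padding/extension/CSRC
    m_pt = (0x80 if marker else 0) | pt        # marker bit + payload type
    header = struct.pack("!BBHII", vpxcc, m_pt, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

# MPEG-4 VOP start code followed by dummy payload bytes
payload = b"\x00\x00\x01\xb6" + b"\x00" * 10
pkt = rtp_packet(payload, seq=1, timestamp=90000, ssrc=0x1234)
print(len(pkt))  # -> 26 (12-byte header + 14-byte payload)
```

    The receiver side would use the sequence number to reorder packets in its buffer and the 90 kHz timestamp to schedule playback, which is the buffering mechanism the abstract refers to.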

  2. Supported eText in Captioned Videos: A Comparison of Expanded versus Standard Captions on Student Comprehension of Educational Content

    Science.gov (United States)

    Anderson-Inman, Lynne; Terrazas-Arellanes, Fatima E.

    2009-01-01

    Expanded captions are designed to enhance the educational value of captioned video by linking unfamiliar words to one of three types of information: vocabulary definitions, labeled illustrations, or concept maps. This study investigated the effects of expanded captions versus standard captions on the comprehension of educational video materials on DVD by secondary…

  3. High-Penetration Photovoltaics Standards and Codes Workshop, Denver, Colorado, May 20, 2010: Workshop Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Coddington, M.; Kroposki, B.; Basso, T.; Lynn, K.; Herig, C.; Bower, W.

    2010-09-01

    Effectively interconnecting high penetration levels of photovoltaic (PV) systems requires careful technical attention to ensuring compatibility with electric power systems. Standards, codes, and implementation have been cited as major impediments to widespread use of PV within electric power systems. On May 20, 2010, in Denver, Colorado, the National Renewable Energy Laboratory, in conjunction with the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), held a workshop to examine the key technical issues and barriers associated with high PV penetration levels, with an emphasis on codes and standards. The workshop built upon the results of the High Penetration of Photovoltaic (PV) Systems into the Distribution Grid workshop held in Ontario, California on February 24-25, 2009, and upon presentations from a diverse range of stakeholders.

  4. Regulations, Codes, and Standards (RCS) Template for California Hydrogen Dispensing Stations

    Energy Technology Data Exchange (ETDEWEB)

    Rivkin, C.; Blake, C.; Burgess, R.; Buttner, W.; Post, M.

    2012-11-01

    This report explains the Regulations, Codes, and Standards (RCS) requirements for hydrogen dispensing stations in the State of California. The report shows the basic components of a hydrogen dispensing station in a simple schematic drawing; the permits and approvals that would typically be required for the construction and operation of a hydrogen dispensing station; and a basic permit that might be employed by an Authority Having Jurisdiction (AHJ).

  5. France; Report on the Observance of Standards and Codes-Monetary and Financial Policies Transparency and Fiscal Transparency-Updates

    OpenAIRE

    International Monetary Fund

    2002-01-01

    This report evaluates the Observance of Standards and Codes on Monetary and Financial Policies Transparency and Fiscal Transparency for France. Up to mid-2001, different rules were applied to insurance firms regulated by the Insurance Code and to establishments regulated by the Code de la Mutualité. Moving toward the consolidation of these rules, a new Code de la Mutualité was ratified by Parliament in July 2001. Now, prudential rules concerning authorizations for new entrants in the insuranc...

  6. Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications

    Science.gov (United States)

    Roser, Miguel; Villegas, Paulo

    1994-05-01

    In this paper we present work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, 'Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s', widely known as MPEG-1. The main intention in the definition of the MPEG-1 standard was to provide a large degree of flexibility for use in many different applications. The interest of this paper is to adapt the MPEG-1 scheme for low bit-rate operation and optimize it for special situations such as a talking head with low movement, a usual situation in videotelephony applications. An adapted and compatible MPEG-1 scheme, previously developed and able to operate at p x 8 kbit/s, is used in this work. Looking for a low-complexity scheme, and taking into account that the most expensive step in terms of computer time is the motion estimation process (almost 80% of the total computer time is spent on motion estimation), an improvement of the motion estimation module based on the use of a new search pattern is presented in this paper.
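
    To illustrate what a reduced-complexity motion-estimation search pattern looks like, the sketch below implements the classic three-step search (TSS), a well-known pattern of the same family; the paper's own optimized pattern is not reproduced here. Instead of evaluating every candidate in the search window, TSS probes a coarse 3x3 grid and halves the step size around the best match.

```python
import numpy as np

# Hedged sketch: three-step search (TSS) block matching over a
# sum-of-absolute-differences (SAD) cost, a classic reduced-complexity
# alternative to exhaustive search.

def sad(block, ref, y, x, n):
    """Sum of absolute differences between block and the n x n patch at (y, x)."""
    return int(np.abs(block - ref[y:y + n, x:x + n]).sum())

def three_step_search(cur, ref, by, bx, n=8, step=4):
    """Estimate the motion of the n x n block of `cur` at (by, bx) within `ref`."""
    cy, cx = by, bx
    block = cur[by:by + n, bx:bx + n]
    while step >= 1:
        best = (sad(block, ref, cy, cx, n), cy, cx)
        for dy in (-step, 0, step):           # probe the 3x3 grid around center
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                    best = min(best, (sad(block, ref, y, x, n), y, x))
        _, cy, cx = best                       # recenter on the best candidate
        step //= 2                             # refine with a halved step
    return cy - by, cx - bx                    # displacement of best match

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.int32)
cur = np.roll(ref, (2, 3), axis=(0, 1))        # reference frame shifted by (2, 3)
print(three_step_search(cur, ref, 16, 16))
```

    With a step sequence of 4, 2, 1, TSS evaluates at most 25 candidates instead of the 225 of a full search over a +/-7 window, which is the kind of complexity reduction the paper targets.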

  7. Using video-taped examples of standardized patient to teach medical students taking informed consent

    Directory of Open Access Journals (Sweden)

    SHIRIN HABIBI KHORASANI

    2015-04-01

    Full Text Available Introduction: Medical students should be trained in medical ethics, and one of the most essential issues in this field is taking informed consent. In this research, we compared the effectiveness of teaching methods on students' ability to take informed consent from patients. Methods: This semi-experimental study was carried out on fifty-eight 4th-year students of Shiraz University of Medical Sciences who attended a medical ethics course before their clinical clerkship training. The sampling method was census, and students were randomly allocated into two groups: a control group (B, n=28) trained in a traditional lecture-based class, and a case group (A1, n=22) taught using video-taped examples of a standardized patient. The A1 group then also attended the traditional lecture-based classes, and is referred to as A2 thereafter. The groups' ability to recognize ethical issues was evaluated through a scenario-based ethics examination before and after each training; the scenarios concerned informed consent. Data were analyzed with SPSS 14 using descriptive statistics and ANOVA; a p-value less than 0.05 was considered significant. Results: The mean scores of the A2, A1 and B groups were 7.21, 5.91 and 5.73 out of 8, respectively. Comparison between the groups demonstrated that the ability to take informed consent was significantly higher in the A2 group (p<0.001), followed by the A1 group (p<0.05), and was lowest in the B group (p=0.875). Conclusion: According to this research, lecture-based teaching is still of great value in teaching medical ethics, but when combined with standardized patients the outcome is much better. Mixed teaching methods should be used together for better results.

  8. Digital Signatures Using QR Codes with the Advanced Encryption Standard Method

    Directory of Open Access Journals (Sweden)

    Abdul Gani Putra Suratma

    2017-04-01

    Full Text Available A digital signature is a mathematical scheme that uniquely identifies a sender and proves the authenticity of the owner of a message or digital document, so that an authentic (valid) digital signature is sufficient reason for the recipient to believe that a received message or document originates from a known sender. Advances in technology make it possible to use digital signatures for mathematical proof, so that information obtained by one party from another can be identified to confirm its authenticity. A digital signature is an authentication mechanism that allows the creator of a message to attach a code that acts as their signature. This research applies a QR (Quick Response) Code together with the AES (Advanced Encryption Standard) algorithm as a digital signature; the result of applying QR Codes with the AES algorithm as a digital signature can serve to authenticate a leader's signature and to verify legitimate goods-collection documents. In this research, the classification accuracy of the QR Code using a naïve Bayes classifier was 90%, with a positive precision of 80% and a negative precision of 100%.
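
    The verification flow described above can be sketched end to end: a keyed cryptographic tag over the document is embedded in the QR payload, and the receiver recomputes it to authenticate the signature. The paper uses AES for the cryptographic step; since Python's standard library has no AES, HMAC-SHA256 stands in for it here, and the key and field names are invented for illustration.

```python
import hmac, hashlib, json

# Hedged sketch of a sign/verify flow for a QR-embedded document token.
# HMAC-SHA256 is a stand-in for the paper's AES step; key and fields are
# hypothetical.

SECRET_KEY = b"leader-signing-key"              # hypothetical shared secret

def sign_document(doc):
    """Produce the text payload that would be rendered as a QR code."""
    payload = json.dumps(doc, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "|" + tag

def verify_document(qr_text):
    """Recompute the tag from the scanned QR text and compare."""
    payload, _, tag = qr_text.rpartition("|")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

qr = sign_document({"doc": "pickup-042", "signer": "head-of-unit"})
print(verify_document(qr))                                    # -> True
print(verify_document(qr.replace("pickup-042", "pickup-999")))  # tampered payload
```

    Rendering the resulting string as an actual QR image would require a QR library; the sketch stops at the text payload, which is where the cryptographic guarantee lives.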

  9. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WiMAX) is a good candidate for delivering video signals, because through WiMAX the delivery quality can be guaranteed based on the quality-of-service (QoS) settings. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism for multiresolution video coding structures over WiMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can simply be mapped to the network requirements by a mapping table, and the end-to-end QoS is thereby achieved. We performed experiments with multiresolution MPEG coding over WiMAX networks. In addition to the QoP parameters, video characteristics such as the picture activity and the video mobility also affect the QoS significantly.
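
    The mapping-table idea above can be sketched directly: a user-facing QoP request (resolution, frame rate) is looked up to obtain network-level QoS requirements. All numbers and service-class labels below are illustrative assumptions, not values from the paper; "rtPS"/"nrtPS" are the real-time and non-real-time polling service classes defined by WiMAX.

```python
# Hedged sketch of a QoP-to-QoS mapping table. Bandwidth and delay figures
# are invented for illustration.

QOP_TO_QOS = {
    # (resolution, fps): (min_bandwidth_kbps, max_delay_ms, service_class)
    ("1080p", 30): (8000, 100, "rtPS"),
    ("720p",  30): (4000, 100, "rtPS"),
    ("480p",  30): (1500, 150, "rtPS"),
    ("480p",  15): (800,  200, "nrtPS"),
}

def qos_for(resolution, fps):
    """Translate a presentation-quality request into network requirements."""
    try:
        bw, delay, svc = QOP_TO_QOS[(resolution, fps)]
    except KeyError:
        raise ValueError("unsupported QoP request")
    return {"bandwidth_kbps": bw, "max_delay_ms": delay, "service_class": svc}

print(qos_for("720p", 30))
```

    In a real deployment the table entries would be calibrated per content type, since, as the abstract notes, picture activity and mobility shift the bandwidth needed for a given QoP.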

  10. A new decay heat standard proposition based on a technical specifications guide for computation codes

    Energy Technology Data Exchange (ETDEWEB)

    Laugier, Frederic; Garzenne, Claude [EDF R and D SINETICS, 1 av. du Gal de Gaulle, 92141 Clamart Cedex (France); Diop, Cheikh [CEA Saclay, 91191 Gif-sur-Yvette (France); Ebalard, Sylvie [AREVA, 92084 Paris La Defense (France); Sargeni, Antonio [IRSN, B.P. 17, 92262 Fontenay aux Roses (France)

    2008-07-01

    The existing ISO international decay heat standard provides the basis for calculating the decay heat power of non-recycled nuclear fuel of light water reactors. Computing decay heat with this standard can be very efficient for standard uranium fuels. However, for mixed-oxide (MOX) fuels, high-burnup uranium fuels, or non-standard irradiation sequences, decay heat can only be estimated by more complex decay heat computation codes. Therefore, a new ISO international standard, a 'Technical specifications guide for decay heat computation', has been proposed to reflect the international practice of computing decay heat in light water reactors. The aim of this article is to justify the methods, to be included in this standard, that lead to a simplified modeling of the decay heat. These methods are useful for the rapid and precise determination of reaction rates and for the simplification of nuclide chains. We also propose a simple method to evaluate the sensitivity of decay heat computations with respect to nuclear data. (authors)

  11. ISO 639-1 and ISO 639-2: International Standards for Language Codes. ISO 15924: International Standard for Names of Scripts.

    Science.gov (United States)

    Byrum, John D.

    This paper describes two international standards for the representation of the names of languages. The first (ISO 639-1), published in 1988, provides two-letter codes for 136 languages and was produced primarily to meet terminological needs. The second (ISO 639-2) appeared in late 1998 and includes three-letter codes for 460 languages. This list…

  12. Endoscopic trimodal imaging detects colonic neoplasia as well as standard video endoscopy.

    Science.gov (United States)

    Kuiper, Teaco; van den Broek, Frank J C; Naber, Anton H; van Soest, Ellert J; Scholten, Pieter; Mallant-Hent, Rosalie Ch; van den Brande, Jan; Jansen, Jeroen M; van Oijen, Arnoud H A M; Marsman, Willem A; Bergman, Jacques J G H M; Fockens, Paul; Dekker, Evelien

    2011-06-01

    Endoscopic trimodal imaging (ETMI) is a novel endoscopic technique that combines high-resolution endoscopy (HRE), autofluorescence imaging (AFI), and narrow-band imaging (NBI); it has previously been studied only in academic settings. We performed a randomized, controlled trial in a nonacademic setting to compare ETMI with standard video endoscopy (SVE) in the detection and differentiation of colorectal lesions. The study included 234 patients scheduled to receive colonoscopy who were randomly assigned to undergo a colonoscopy in tandem with either ETMI or SVE. In the ETMI group (n=118), the first examination was performed using HRE, followed by AFI. In the other group, both examinations were performed using SVE (n=116). In the ETMI group, detected lesions were differentiated using AFI and NBI. In the ETMI group, 87 adenomas were detected in the first examination (with HRE), and then 34 adenomas were detected during the second inspection (with AFI). In the SVE group, 79 adenomas were detected during the first inspection, and then 33 adenomas were detected during the second inspection. Adenoma detection rates did not differ significantly between the 2 groups (ETMI: 1.03 vs SVE: 0.97, P=.360). The adenoma miss-rate was 29% for HRE and 28% for SVE. The sensitivity, specificity, and accuracy of NBI in differentiating adenomas from nonadenomatous lesions were 87%, 63%, and 75%, respectively; corresponding values for AFI were 90%, 37%, and 62%, respectively. In a nonacademic setting, ETMI did not improve the detection rate for adenomas compared with SVE. NBI and AFI each differentiated colonic lesions with high levels of sensitivity but low levels of specificity. Copyright © 2011 AGA Institute. Published by Elsevier Inc. All rights reserved.
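
    The diagnostic metrics reported above follow directly from a 2x2 confusion matrix. The counts below are invented to make the arithmetic visible; they are chosen only so the resulting percentages match the NBI figures quoted in the abstract (87% sensitivity, 63% specificity, 75% accuracy), not taken from the study.

```python
# Hedged illustration of sensitivity, specificity and accuracy from a
# confusion matrix. Counts are invented for illustration.

def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall agreement
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=87, fn=13, tn=63, fp=37)
print(f"{sens:.0%} {spec:.0%} {acc:.0%}")  # -> 87% 63% 75%
```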

  13. Standardized Cardiovascular Quality Assurance Forms with Multilingual Support, UMLS Coding and Medical Concept Analyses.

    Science.gov (United States)

    Varghese, Julian; Schulze Sünninghausen, Sarah; Dugas, Martin

    2015-01-01

    Standardized quality assurance (QA) plays an important role in maintaining and developing the success of cardiovascular procedures (CP). Well-established QA models from Germany could be shared in a form repository for world-wide reuse and exchange. We therefore collected the complete set of QA forms for CP that all German health service providers are obliged to fill out. The original forms were converted into standardized study forms according to ODM (Operational Data Model) and translated into English. Common medical concepts and clusters of medical concepts were identified based on UMLS coding of form items. All forms are available on the web as multilingual ODM documents. The UMLS concept coverage analysis indicates 88% coverage, with few but critically important definition gaps, which need to be addressed by UMLS.

  14. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-01-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
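
    The central computation the abstract describes, obtaining the probability of undetectable error from a code's weight distribution, can be shown concretely. On a binary symmetric channel with bit-error rate p, an error goes undetected exactly when the error pattern is a nonzero codeword, so P_ud(p) = sum over i of A_i * p^i * (1-p)^(n-i). The tiny (7,4) Hamming code is used below as a stand-in, since the weight distributions of the much longer shortened codes of IEEE 802.3 are not reproduced in the abstract.

```python
# Hedged sketch: probability of undetected error on a binary symmetric
# channel from a code's weight distribution, illustrated with the (7,4)
# Hamming code (A_3 = 7, A_4 = 7, A_7 = 1).

def undetected_error_prob(weights, n, p):
    """weights maps nonzero codeword weight i -> count A_i."""
    return sum(a * p**i * (1 - p)**(n - i) for i, a in weights.items())

hamming_7_4 = {3: 7, 4: 7, 7: 1}    # A_0 = 1 omitted: no error is not a failure

for p in (1e-5, 1e-3, 0.5):          # the abstract's range of bit-error rates
    print(p, undetected_error_prob(hamming_7_4, 7, p))
# At p = 1/2 every nonzero pattern is equally likely, so P_ud = 15/128.
```

    The p = 1/2 value is a useful sanity check: it equals (number of nonzero codewords) / 2^n, the classic worst-case bound for error-detecting codes.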

  15. A standardized Code Blue Team eliminates variable survival from in-hospital cardiac arrest.

    Science.gov (United States)

    Qureshi, Sultana A; Ahern, Terence; O'Shea, Ryan; Hatch, Lorien; Henderson, Sean O

    2012-01-01

    Recent studies suggest that time of day affects survival from in-hospital cardiac arrest. Lower survival rates are observed during nights and on weekends, except in areas with consistent physician care, such as the Emergency Department. Since 1997, our hospital has utilized a standard, hospital-wide "Code Blue Team" (CBT) to respond to cardiac arrests at any time. This team is always led by an emergency physician, and includes specially trained nurses. To assess if time of day or week affects survival from in-hospital cardiac arrest when a trained, consistent, emergency physician-led CBT is implemented. This is an analysis of prospectively collected data on initial survival rates (return of spontaneous circulation >20 min) of all cardiac arrests that were managed by the CBT from 2000 to 2008. Cardiac arrests were also subcategorized based on initial cardiac rhythm. Survival rates were compared according to time of day or week. A total of 1692 cardiac arrests were included. There was no significant difference in the overall rate of initial survival between day/evening vs. night hours (odds ratio [OR] 1.04, 95% confidence interval [CI] 0.83-1.29), or between weekday vs. weekend hours (OR 1.10, 95% CI 0.85-1.38). This held true for all cardiac rhythms. At our institution, there is no significant difference in survival from cardiac arrest when a standardized "Code Blue Team" is utilized, regardless of the time of day or week. Copyright © 2012. Published by Elsevier Inc.

  16. Observational Review and Analysis of Concussion: a Method for Conducting a Standardized Video Analysis of Concussion in Rugby League.

    Science.gov (United States)

    Gardner, Andrew J; Levi, Christopher R; Iverson, Grant L

    2017-12-01

    Several professional contact and collision sports have recently introduced the use of sideline video review for club medical staff to help identify and manage concussions. As such, reviewing video footage on the sideline has become increasingly relied upon to assist with improving the identification of possible injury. However, as yet, a standardized method for reviewing such video footage in rugby league has not been published. The aim of this study is to evaluate whether independent raters reliably agreed on the injury characterization when using a standardized observational instrument to record video footage of National Rugby League (NRL) concussions. Video footage of 25 concussions were randomly selected from a pool of 80 medically diagnosed concussions from the 2013-2014 NRL seasons. Four raters (two naïve and two expert) independently viewed video footage of 25 NRL concussions and completed the Observational Review and Analysis of Concussion form for the purpose of this inter-rater reliability study. The inter-rater reliability was calculated using Cohen's kappa (κ) and intra-class correlation (ICC) statistics. The two naïve raters and the two expert raters were compared with one another separately. A considerable number of components for the naïve and expert raters had almost perfect agreement (κ or ICC value ≥ 0.9), 9 of 22 (41%) components for naïve raters and 21 of 22 (95%) components for expert raters. For the concussion signs, however, the majority of the rating agreement was moderate (κ value 0.6-0.79); both the naïve and expert raters had 4 of 6 (67%) concussion signs with moderate agreement. The most difficult concussion sign to achieve agreement on was blank or vacant stare, which had weak (κ value 0.4-0.59) agreement for both naïve and expert raters. There appears to be value in expert raters, but less value for naive raters, in using the new Observational Review and Analysis of Concussion (ORAC) Form. The ORAC Form has high inter
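
    The agreement statistic used above, Cohen's kappa, corrects raw percent agreement for the agreement expected by chance. The sketch below computes it for two raters over categorical judgments; the ratings are invented for illustration and are not data from the study.

```python
from collections import Counter

# Hedged sketch: Cohen's kappa for two raters over categorical ratings.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / n**2   # chance agreement
    return (observed - expected) / (1 - expected)

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "yes", "no", "no", "no", "no", "yes", "yes"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # -> 0.5
```

    On the scale used in the abstract, 0.5 would fall in the moderate band (0.4-0.59 weak, 0.6-0.79 moderate, >= 0.9 almost perfect, per the cutoffs quoted there).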

  17. The AutoProof Verifier: Usability by Non-Experts and on Standard Code

    Directory of Open Access Journals (Sweden)

    Carlo A. Furia

    2015-08-01

    Full Text Available Formal verification tools are often developed by experts for experts; as a result, their usability by programmers with little formal methods experience may be severely limited. In this paper, we discuss this general phenomenon with reference to AutoProof: a tool that can verify the full functional correctness of object-oriented software. In particular, we present our experiences of using AutoProof in two contrasting contexts representative of non-expert usage. First, we discuss its usability by students in a graduate course on software verification, who were tasked with verifying implementations of various sorting algorithms. Second, we evaluate its usability in verifying code developed for programming assignments of an undergraduate course. The first scenario represents usability by serious non-experts; the second represents usability on "standard code", developed without full functional verification in mind. We report our experiences and lessons learnt, from which we derive some general suggestions for furthering the development of verification tools with respect to improving their usability.

  18. SECOND ATLAS DOMESTIC STANDARD PROBLEM (DSP-02) FOR A CODE ASSESSMENT

    Directory of Open Access Journals (Sweden)

    YEON-SIK KIM

    2013-12-01

    Full Text Available KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400 plant. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted to transfer the database to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. This 2nd ATLAS DSP (DSP-02) exercise aims at an effective utilization of the integral effect database obtained from ATLAS, the establishment of a cooperation framework within the domestic nuclear industry, a better understanding of the thermal-hydraulic phenomena, and an investigation into the possible limitations of the existing best-estimate safety analysis codes. A small-break loss-of-coolant accident with a 6-inch break at the cold leg was chosen as the target scenario in view of its technical importance and the interests of the participants. The DSP exercise was performed in an open calculation environment, in which the integral effect test data were open to participants prior to the code calculations. This paper presents the major information of the DSP-02 exercise as well as comparisons between the calculations and the experimental data.

  19. Experimental video signals distribution MMF network based on IEEE 802.11 standard

    Science.gov (United States)

    Kowalczyk, Marcin; Maksymiuk, Lukasz; Siuzdak, Jerzy

    2014-11-01

    This article presents experimental research on the transmission of digital video streams over a purpose-built RoF (Radio over Fiber) network. The network was constructed by merging a wireless IEEE 802.11 network, popularly referred to as Wi-Fi, with a passive optical network (PON) based on multimode fibers (MMF). The proposed approach may be an interesting solution for extensive monitoring systems that must cover a large area while ensuring a relatively high degree of immunity to interference for the signals transmitted from IP video cameras to the monitoring center, together with high configuration flexibility (easy redeployment of cameras).

  20. On the Impact of Zero-padding in Network Coding Efficiency with Internet Traffic and Video Traces

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2016-01-01

    compiled by Arizona State University. Our numerical results show the dependence of the zero-padding overhead on the number of packets combined in a generation using RLNC. Surprisingly, medium and large TCP generations are strongly affected, with more than 100% padding overhead. Although all video...
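
    The padding overhead measured above arises because RLNC linearly combines all packets of a generation, so every packet must first be zero-padded to the generation's maximum packet size. The sketch below computes that overhead for one generation; the packet sizes are invented (a mix of full-MTU segments and small TCP ACKs) and are not taken from the ASU traces.

```python
# Hedged sketch: zero-padding overhead of one RLNC generation. Before
# coding, each packet is padded to the size of the largest packet in the
# generation; the overhead is padding bytes relative to real payload bytes.

def padding_overhead(sizes):
    """Fraction of extra padded bytes over real payload bytes for a generation."""
    target = max(sizes)
    payload = sum(sizes)
    padding = sum(target - s for s in sizes)
    return padding / payload

gen = [1500, 60, 60, 1500, 400, 60]   # hypothetical mix of MTU-sized data and ACKs
print(f"{padding_overhead(gen):.0%}")  # -> 151%
```

    The example reproduces the qualitative finding quoted above: a generation mixing large and small packets can easily exceed 100% overhead, since every small packet is padded up to the MTU.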

  1. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with a low-complexity envelope detection solution is investigated. We present both experimental studies and simulation of high quality high-definition compressed...

  2. Inventory of Safety-related Codes and Standards for Energy Storage Systems with some Experiences related to Approval and Acceptance

    Energy Technology Data Exchange (ETDEWEB)

    Conover, David R.

    2014-09-11

    The purpose of this document is to identify laws, rules, model codes, codes, standards, regulations, and specifications (CSR) related to safety that could apply to stationary energy storage systems (ESS), and experiences to date in securing approval of ESS in relation to CSR. This information is intended to assist in securing approval of ESS under current CSR and in identifying new CSR, or revisions to existing CSR, and the necessary supporting research and documentation that can foster the deployment of safe ESS.

  3. [The evolution of ethical standards in the practice of psychology: a reflection on the APA Code of Ethics].

    Science.gov (United States)

    Sabourin, M

    1999-04-01

    After briefly describing the need for ethics in the development of professional regulation and analyzing the historical emergence of codes of ethics, the goal of this paper is to scrutinize the process by which the American Psychological Association developed its own Code of Ethics and proceeded to revise it periodically. Different lessons can be derived from these efforts and from the criticisms that were formulated. The need for international standards in professional and research ethics is then considered, and the results of a recent study on this subject are presented. Five major conclusions can be derived from the preceding analysis: (1) Codes of ethics can help professional recognition by stressing the importance given to the protection of the public, (2) the development of a code of ethics is usually related to the advancement of professional practice, (3) ethical standards should be in tune with the cultural values and the belief system of a given community, (4) a well-balanced code should incorporate both general aspirational principles and enforceable standards, and (5) the method used to define principles and standards should be empirically based.

  4. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  5. Impact of developing a multidisciplinary coded dataset standard on administrative data accuracy for septoplasty, septorhinoplasty and nasal trauma surgery.

    Science.gov (United States)

    Nouraei, S A R; Hudovsky, A; Virk, J S; Saleh, H A

    2017-04-01

    This study aimed to develop a multidisciplinary coded dataset standard for nasal surgery and to assess its impact on data accuracy. An audit of 528 patients undergoing septal and/or inferior turbinate surgery, rhinoplasty and/or septorhinoplasty, and nasal fracture surgery was undertaken. A total of 200 septoplasties, 109 septorhinoplasties, 57 complex septorhinoplasties and 116 nasal fractures were analysed. There were 76 (14.4 per cent) changes to the primary diagnosis. Septorhinoplasties were the most commonly amended procedures. The overall audit-related income change for nasal surgery was £8.78 per patient. Use of a multidisciplinary coded dataset standard revealed that nasal diagnoses were under-coded; a significant proportion of patients received more precise diagnoses following the audit. There was also significant under-coding of both morbidities and revision surgery. The multidisciplinary coded dataset standard approach can improve the accuracy of both data capture and information flow, and thus ultimately create a more reliable dataset for use in outcomes assessment and health planning.

  6. Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions

    Science.gov (United States)

    2012-07-01

    CAT II – The Designer will ensure the web application assigns the character set on all web pages. (Page 48) Secure Coding Guidance None SEP...Designer will ensure the application does not have XSS vulnerabilities. (Page 52) Secure Coding Guidance None SEP TEP SDP STP CMU/SEI-2012...connection from a trusted SSL web server using a DoD or trusted commercial SSL server certificate. 3. The mobile code was downloaded over a TLS

  7. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  8. Network Coding to Enhance Standard Routing Protocols in Wireless Mesh Networks

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2013-01-01

This paper introduces a design and simulation of a locally optimized network coding protocol, called PlayNCool, for wireless mesh networks. PlayNCool is easy to implement and compatible with existing routing protocols and devices. This allows the system to gain from network coding capabilities...
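
The abstract does not detail PlayNCool's coding operations, so as a generic illustration only, the sketch below implements the basic random linear network coding idea such protocols build on, here in systematic form over GF(2) (real deployments typically use GF(2^8)); the function names are ours, not the paper's.

```python
import random

def rlnc_encode(packets, num_extra, rng):
    """Systematic RLNC over GF(2): emit the k originals with unit coefficient
    vectors, then num_extra random XOR combinations as redundancy."""
    k = len(packets)
    coded = [([1 if j == i else 0 for j in range(k)], packets[i]) for i in range(k)]
    for _ in range(num_extra):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero combination
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Recover the k originals by Gaussian elimination over GF(2); returns
    None if the received coefficient vectors do not span GF(2)^k."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # rank deficient: need more coded packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pcoef, ppay = rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rcoef, rpay = rows[r]
                for j in range(k):
                    rcoef[j] ^= pcoef[j]
                for j in range(len(rpay)):
                    rpay[j] ^= ppay[j]
    return [bytes(rows[i][1]) for i in range(k)]
```

Any k linearly independent coded packets suffice to decode, which is what lets intermediate nodes in a mesh recombine and retransmit without coordinating which specific packets were lost.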

  9. NODC Standard Product: NODC Taxonomic Code on CD-ROM (NODC Accession 0050418)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The content of the NODC Taxonomic Code, Version 8 CD-ROM (CD-ROM NODC-68) distributed by NODC is archived in this accession. Version 7 of the NODC Taxonomic Code...

  10. Digital video technologies and their network requirements

    Energy Technology Data Exchange (ETDEWEB)

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering, and network bandwidth to the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
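
The 2-D DCT at the heart of the JPEG and MPEG coding mechanisms discussed above can be written directly from its definition. The following is a plain, unoptimized sketch of the forward 8x8 DCT-II; production encoders use fast factorizations and integer approximations instead.

```python
import math

def dct2_8x8(block):
    """Forward 8x8 2-D DCT-II (the transform underlying JPEG/MPEG coding)."""
    def c(k):
        return 1.0 / math.sqrt(2.0) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out
```

For a flat 8x8 block of value 128, all the energy compacts into the single DC coefficient (value 1024) while every AC coefficient is essentially zero; this energy compaction is what allows the high-frequency coefficients to be quantized coarsely, which is the main source of compression in these standards.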

  11. Application of coupled code technique to a safety analysis of a standard MTR research reactor

    Energy Technology Data Exchange (ETDEWEB)

Hamidouche, Tewfik [Division de l'Environnement, de la Surete et des Dechets Radioactifs, Centre de Recherche Nucleaire d'Alger (CRNA), Alger (Algeria); Laboratoire de Mecanique des Fluides Theorique et Appliquee, Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene (USTHB), Bab-Ezzouar, Alger (Algeria)], E-mail: t.hamidouche@crna.dz; Bousbia-Salah, Anis [Dipartimento di Ingegneria Meccanica, Nucleari e della Produzione, Facolta di Ingegneria, Universita di Pisa, Pisa (Italy)], E-mail: b.salah@ing.unipi.it; Si-Ahmed, El Khider [Laboratoire de Mecanique des Fluides Theorique et Appliquee, Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene (USTHB), Bab-Ezzouar, Alger (Algeria)], E-mail: esi-ahmed@usthb.dz; Mokeddem, Mohamed Yazid [Division de la Physique et des Applications Nucleaires, Centre de Recherche Nucleaire de Draria (CRND) (Algeria); D'Auria, Francesco [Dipartimento di Ingegneria Meccanica, Nucleari e della Produzione, Facolta di Ingegneria, Universita di Pisa, Pisa (Italy)]

    2009-10-15

Accident analyses in nuclear research reactors have, until now, been performed using simple computational tools based on conservative physical models. These codes, developed to focus on specific phenomena in the reactor, were widely used for licensing purposes. Nowadays, advances in computer technology make it possible to switch to a new generation of computational tools that provide a more realistic description of the phenomena occurring in a nuclear research reactor. Recent International Atomic Energy Agency (IAEA) activities have emphasized the maturity of Best Estimate (BE) codes in the analysis of accidents in research reactors. Indeed, some assessments have already been performed using BE thermal-hydraulic system codes such as RELAP5/Mod3. The challenge today lies in applying coupled code techniques to research reactor safety analyses. Within the framework of the current study, a Three-Dimensional Neutron Kinetics Thermal-Hydraulic Model (3D-NKTH) based on the coupled PARCS and RELAP5/Mod3.3 codes has been developed for the IAEA High Enriched Uranium (HEU) benchmark core. The results of the steady-state calculations are assessed by comparison with the tabulated results of IAEA-TECDOC-643, which were obtained using conventional diffusion codes as well as Monte Carlo codes. The transient analysis, on the other hand, was compared against conventional coupled point-kinetics/thermal-hydraulic channel codes such as stand-alone RELAP5, RETRAC-PC, and PARET. Through this study, the applicability of the coupled code technique is demonstrated, with an outline of some remaining challenges.
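
For orientation, the conventional channel codes cited above (PARET, RETRAC-PC, stand-alone RELAP5) reduce the neutronics to the standard point-kinetics equations with delayed-neutron precursors, which in the usual notation read:

```latex
\frac{dn(t)}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t),
\qquad
\frac{dC_i(t)}{dt} = \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t),
```

where \(n\) is the neutron population, \(\rho\) the reactivity, \(\beta = \sum_i \beta_i\) the delayed-neutron fraction, \(\Lambda\) the neutron generation time, and \(C_i\) the precursor concentrations. A coupled 3-D kinetics code such as PARCS instead solves the space-dependent neutron diffusion equations, so local flux and feedback effects that point kinetics averages away can be resolved.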

  12. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

Over the past two decades, the rapid evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which now constitutes the bulk of data traffic. Because video accounts for so much of the data on the World Wide Web, reducing the bandwidth it consumes would ease the burden on the Internet and make video data easier for users to access. To this end, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the question of which is the better technology in terms of rate distortion and coding standard. This paper addresses the difficulty of achieving low delay in video compression for applications such as ad hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques on subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach that divides a video file into several segments for compression and reassembles them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.
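
The paper's exact segmentation pipeline is not given in the abstract; as a generic illustration of the idea, the sketch below splits a byte stream into fixed-size segments and compresses each one independently (using zlib as a stand-in for a real video codec), so the segments can be encoded or decoded in parallel and reassembled losslessly.

```python
import zlib

def compress_segments(data: bytes, seg_size: int):
    """Split a byte stream into fixed-size segments and compress each
    independently, so segments can be processed in parallel."""
    segments = [data[i:i + seg_size] for i in range(0, len(data), seg_size)]
    return [zlib.compress(s) for s in segments]

def decompress_segments(compressed):
    """Reassemble the original stream from independently coded segments."""
    return b"".join(zlib.decompress(c) for c in compressed)
```

The trade-off this sketch makes visible: independent segments enable parallelism and random access at segment boundaries, but each segment loses the chance to exploit redundancy that spans segments, so per-segment compression ratios are usually slightly worse than whole-file compression.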

  13. “First-person view” of pathogen transmission and hand hygiene – use of a new head-mounted video capture and coding tool

    Directory of Open Access Journals (Sweden)

    Lauren Clack

    2017-10-01

Abstract Background Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. Methods A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Results Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW's own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. Conclusions The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of

  14. "First-person view" of pathogen transmission and hand hygiene - use of a new head-mounted video capture and coding tool.

    Science.gov (United States)

    Clack, Lauren; Scotoni, Manuela; Wolfensberger, Aline; Sax, Hugo

    2017-01-01

    Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW's own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care that may help to design

  15. A Code of Ethics and Standards for Outer-Space Commerce

    Science.gov (United States)

    Livingston, David M.

    2002-01-01

Now is the time to put forth an effective code of ethics for businesses in outer space. A successful code would be voluntary and would actually promote the growth of individual companies, not hinder their efforts to provide products and services. A properly designed code of ethics would ensure the development of space commerce unfettered by government-created barriers. Indeed, if the commercial space industry does not develop its own professional code of ethics, government-imposed regulations would probably be instituted. Should this occur, there is a risk that the development of off-Earth commerce would become more restricted. The code presented in this paper seeks to avoid the imposition of new barriers to space commerce as well as make new commercial space ventures easier to develop. The proposed code consists of a preamble, which underscores basic values, followed by a number of specific principles. For the most part, these principles set forth broad commitments to fairness and integrity with respect to employees, consumers, business transactions, political contributions, natural resources, off-Earth development, designated environmental protection zones, as well as relevant national and international laws. As acceptance of this code of ethics grows within the industry, general modifications will be necessary to accommodate the different types of businesses entering space commerce. This uniform applicability will help to assure that the code will not be perceived as foreign in nature, potentially restrictive, or threatening. Companies adopting this code of ethics will find less resistance to their space development plans, not only in the United States but also from nonspacefaring nations. Commercial space companies accepting and refining this code would demonstrate industry leadership and an understanding that will serve future generations living, working, and playing in space.
Implementation of the code would also provide an off-Earth precedent for a modified

  16. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  17. Origin of Life: Pathways of the 20 Standard Amino Acids of the Genetic Code

    Science.gov (United States)

    Hall, A.; Acker-Moorehead, M.; Onyilagha, J.

    2017-11-01

How nature used four nucleotides to build its proteins and form the genetic code is intriguing. Stereochemical, Coevolution, and Adaptive theories have been propounded. We update the biosynthesis pathways of the 20 standard amino acids and offer insight into ancient evolutionary events.

  18. Maryland Public School Standards for Telecommunications Distribution Systems: Infrastructure Design for Voice, Video, and Data Communications.

    Science.gov (United States)

    Maryland State Dept. of Education, Baltimore. School Facilities Branch.

    Telecommunications infrastructure has the dual challenges of maintaining quality while accommodating change, issues that have long been met through a series of implementation standards. This document is designed to ensure that telecommunications systems within the Maryland public school system are also capable of meeting both challenges and…

  19. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  20. Instructional DVD video lesson with code switching: its effect on the performance level in dancing social dance among grade 10 special program in the art students of the Philippines

    Directory of Open Access Journals (Sweden)

    Capilitan Fernando T.

    2017-01-01

This paper shows that the experimental group, which was exposed to a DVD video lesson using code-switching language, had an average pretest mean score of 1.56, which increased to an average mean of 3.50 in the posttest. The control group, which used a DVD video lesson delivered purely in English, had an average pretest mean of 1.06, which increased to 1.53 in the posttest. Based on the results of the performance posttest taken by the two groups, the experimental group showed a dramatic increase in scores from pretest to posttest. Although both groups improved their performance scores from pretest to posttest, the experimental (code-switching) group performed better in the posttest than the control group. As these findings reveal, there is a significant difference in posttest scores between the experimental group, exposed to the DVD lesson using code switching as the medium of instruction, and the control group, exposed to the DVD lesson in English. Students exposed to the video lesson that used code switching performed better than students exposed to the DVD video lesson delivered purely in English. A DVD video lesson that uses code switching as the medium of instruction in teaching social dance is a useful approach for teaching Grade 10 Special Program in the Arts students. Code switching is a powerful medium of instruction that enhances students' learning outcomes. This paper could be an eye-opener for the Department of Education to promote the use of the first or local language, or MTB-MLE, not only in Grades I to III but at all levels of the K to 12 program, since education is a key factor in building a better nation.

1. What's new in codes and standards - Office of Building Technologies (OBT): Appliance and lighting standards

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

US homeowners spend $110 billion each year to power such home appliances as refrigerators, freezers, water heaters, furnaces, air conditioners, and lights. These uses account for about 70% of all the primary energy consumed in homes. During its typical 10-15-year lifetime, an appliance's operating costs may exceed its initial purchase price several times over. Nevertheless, many consumers do not consider energy efficiency when making purchases. And manufacturers are reluctant to invest in more efficient technology that may not be accepted in the highly competitive marketplace. Recognizing the great potential for energy savings, many states began prescribing minimum energy efficiencies for appliances. Anticipating the burden of complying with differing state standards, manufacturers supported developing federal standards that would preempt state standards.

  2. Overview of Development and Deployment of Codes, Standards and Regulations Affecting Energy Storage System Safety in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Conover, David R.

    2014-08-22

This report acquaints stakeholders and interested parties involved in the development and/or deployment of energy storage systems (ESS) with the subject of safety-related codes, standards and regulations (CSRs). It is hoped that users of this document gain a more in-depth and uniform understanding of safety-related CSR development and deployment that can foster improved communications among all ESS stakeholders and the collaboration needed to realize more timely acceptance and approval of safe ESS technology through appropriate CSRs.

  3. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

for the modern paradigm of data compression based on a modelling and a coding stage. One advantage of contexts is their flexibility; e.g., choosing a two-dimensional (2-D) context facilitates efficient image coding. The area of image coding has been greatly influenced by context adaptive coding, applied e.g. in the lossless JBIG bi-level image coding standard, and in the entropy coding of contemporary lossless and lossy image and video coding standards and schemes. The theoretical work and analysis of universal context-based coding has addressed sequences of data and finite memory models as Markov chains and sources... to 2-D of a finite memory source. Further developments of causal image models, e.g. to approximate MRF, lead to considering hidden states in the context formation. These causal image models provide image coding models, and they are here related to context-based image coding. The entropy of the image...
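
As a minimal illustration of the context modelling described above, the sketch below uses a 3-pixel causal template (west, north, north-west — a small cousin of JBIG's 10-pixel template, chosen here for brevity) to estimate per-context pixel probabilities for a bi-level image; the resulting conditional entropy bounds the rate a context-adaptive arithmetic coder could achieve.

```python
import math
from collections import defaultdict

def conditional_entropy(image):
    """Estimate bits/pixel of a bi-level image under a 3-pixel causal
    context (west, north, north-west) versus no context at all.
    Returns (context-conditioned entropy, order-0 entropy)."""
    h, w = len(image), len(image[0])
    counts = defaultdict(lambda: [0, 0])   # context -> [zeros seen, ones seen]
    order0 = [0, 0]
    for y in range(h):
        for x in range(w):
            west = image[y][x - 1] if x > 0 else 0
            north = image[y - 1][x] if y > 0 else 0
            nw = image[y - 1][x - 1] if x > 0 and y > 0 else 0
            ctx = (west << 2) | (north << 1) | nw   # 3-bit context index
            pix = image[y][x]
            counts[ctx][pix] += 1
            order0[pix] += 1

    def entropy(pair):
        n = sum(pair)
        return -sum(c / n * math.log2(c / n) for c in pair if c)

    total = h * w
    cond = sum(sum(pair) / total * entropy(pair) for pair in counts.values())
    return cond, entropy(order0)
```

On a structured image, e.g. vertical stripes, the order-0 entropy is a full 1 bit/pixel while the context-conditioned entropy collapses to nearly zero, because the causal neighbours almost fully determine each pixel; that gap is exactly what context adaptive coding exploits.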

  4. 75 FR 66735 - National Fire Protection Association (NFPA): Request for Comments on NFPA's Codes and Standards

    Science.gov (United States)

    2010-10-29

    ... Chemical Process Areas. NFPA 499 Recommended Practice P for the Classification of Combustible Dusts and of Hazardous (Classified) Locations for Electrical Installations in Chemical Process Areas. NFPA 550 Guide to... safety related issues. NFPA's National Fire Codes , which holds over 290 documents, are administered by...

  5. Appalachian and Standard English Code-Switching Strategies among Primary Classroom Teachers

    Science.gov (United States)

    McConnell, Michele S.

    2011-01-01

    Students who grow up speaking regional dialects benefit from learning code switching (CS) strategies to allow bidialectal communication across their social worlds. This rationale proposes that students' home language of Appalachian English is acceptable at home and should be preserved; however, another set of language patterns, those of Standard…

  6. A Comparison of Visual Recognition of the Laryngopharyngeal Structures Between High and Standard Frame Rate Videos of the Fiberoptic Endoscopic Evaluation of Swallowing.

    Science.gov (United States)

    Aghdam, Mehran Alizadeh; Ogawa, Makoto; Iwahashi, Toshihiko; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori

    2017-04-29

The purpose of this study was to assess whether or not high frame rate (HFR) videos recorded using high-speed digital imaging (HSDI) improve the visual recognition of the motions of the laryngopharyngeal structures during pharyngeal swallow in fiberoptic endoscopic evaluation of swallowing (FEES). Five healthy subjects were asked to swallow 0.5 ml water under fiberoptic nasolaryngoscopy. The endoscope was connected to a high-speed camera, which recorded the laryngopharyngeal view throughout the swallowing process at 4000 frames/s (fps). Each HFR video was then copied and downsampled into a standard frame rate (SFR) video version (30 fps). Fifteen otorhinolaryngologists observed all of the HFR/SFR videos in random order and rated each on a four-point ordinal scale reflecting the degree of visual recognition of the rapid laryngopharyngeal structure motions just before the 'white-out' phenomenon. Significantly higher scores, reflecting better visibility, were seen for the HFR videos compared with the SFR videos for the following laryngopharyngeal structures: the posterior pharyngeal wall (p = 0.001), left pharyngeal wall (p = 0.015), right lateral pharyngeal wall (p = 0.035), tongue base (p = 0.005), and epiglottis tilting (p = 0.005). However, when visualized with HFR and SFR, 'certainly clear observation' of the laryngeal structures was achieved in <50% of cases, because not all of the motions were necessarily captured in each video. These results demonstrate that the use of HSDI in FEES makes the motion perception of the laryngopharyngeal structures during pharyngeal swallow easier in comparison to SFR videos with equivalent image quality, due to the ability of HSDI to depict the laryngopharyngeal motions in a continuous manner.
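
The quantitative gap between the two frame rates is easy to make concrete. The sketch below (our own illustration; the 50 ms event duration is a hypothetical figure, not from the study) counts how many whole frames fall inside a brief laryngopharyngeal event at each capture rate.

```python
def frames_captured(event_ms: float, fps: float) -> int:
    """Number of whole video frames that land inside an event of the
    given duration at a given capture rate."""
    return int(event_ms / 1000.0 * fps)
```

A hypothetical 50 ms motion spans 200 frames at 4000 fps but only a single frame at 30 fps; downsampling 4000 fps to 30 fps keeps roughly one frame in 133, which is why rapid motions can vanish entirely from the SFR version.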

  7. Portable Video Media Versus Standard Verbal Communication in Surgical Information Delivery to Nurses: A Prospective Multicenter, Randomized Controlled Crossover Trial.

    Science.gov (United States)

    Kam, Jonathan; Ainsworth, Hannah; Handmer, Marcus; Louie-Johnsun, Mark; Winter, Matthew

    2016-10-01

Continuing education of health professionals is important for delivery of quality health care. Surgical nurses are often required to understand surgical procedures. Nurses need to be aware of the expected outcomes and recognize potential complications of such procedures during their daily work. Traditional educational methods, such as conferences and tutorials or informal education at the bedside, have many drawbacks for delivery of this information in a universal, standardized, and timely manner. The rapid uptake of portable media devices makes portable video media (PVM) a potential alternative to current educational methods. To compare PVM to standard verbal communication (SVC) for surgical information delivery and educational training for nurses and evaluate its impact on knowledge acquisition and participant satisfaction. Prospective, multicenter, randomized controlled crossover trial. Two hospitals: Gosford District Hospital and Wyong Hospital. Seventy-two nursing staff (36 at each site). Information delivery via PVM (a 7-minute video) compared to information delivery via SVC. Knowledge acquisition was measured by a 32-point questionnaire, and satisfaction with the method of education delivery was measured using the validated Client Satisfaction Questionnaire (CSQ-8). Knowledge acquisition was higher via PVM compared to SVC: 25.9 (95% confidence interval [CI] 25.2-26.6) versus 24.3 (95% CI 23.5-25.1), p = .004. Participant satisfaction was higher with PVM: 29.5 (95% CI 28.3-30.7) versus 26.5 (95% CI 25.1-27.9), p = .003. Following information delivery via SVC, participants had a 6% increase in knowledge scores, 24.3 (95% CI 23.5-25.1) versus 25.7 (95% CI 24.9-26.5), p = .001, and a 13% increase in satisfaction scores, 26.5 (95% CI 25.1-27.9) versus 29.9 (95% CI 28.8-31.0), p < .001, when they crossed over to information delivery via PVM. PVM provides a novel method for providing education to nurses that improves knowledge retention and satisfaction with the

  8. Spacelabs Innovative Project Award winner--2008. Megacode simulation workshop and education video--a megatonne of care and Code blue: live and interactive.

    Science.gov (United States)

    Loucks, Lynda; Leskowski, Jessica; Fallis, Wendy

    2010-01-01

    Skill acquisition and knowledge translation of best practices can be successfully facilitated using simulation methods. The 2008 Spacelabs Innovative Project Award was awarded for a unique training workshop that used simulation in the area of cardiac life support and resuscitation to train multiple health care personnel in basic and advanced skills. The megacode simulation workshop and education video was an educational event held in 2007 in Winnipeg, MB, for close to 60 participants and trainers from multiple disciplines across the provinces of Manitoba and Northwestern Ontario. The event included lectures, live simulation of a megacode, and hands-on training in the latest techniques in resuscitation. The goals of this project were to promote efficiency and better outcomes related to resuscitation measures, to foster teamwork, to emphasize the importance of each team member's role, and to improve knowledge and skills in resuscitation. The workshop was filmed to produce a training DVD that could be used for future knowledge enhancement and introductory training of health care personnel. Substantial positive feedback was received and evaluations indicated that participants reported improvement and expansion of their knowledge of advanced cardiac life support. Given their regular participation in cardiac arrest codes and the importance of staying up-to-date on best practice, the workshop was particularly useful to health care staff and nurses working in critical care areas. In addition, those who participate less frequently in cardiac resuscitation will benefit from the educational video for ongoing competency. Through accelerating knowledge translation from the literature to the bedside, it is hoped that this event contributed to improved patient care and outcomes with respect to advanced cardiac life support.

  9. Report on the Observance of Standards and Codes, Accounting and Auditing

    OpenAIRE

    World Bank

    2017-01-01

    The quality of financial reporting depends to a great extent on the quality of the Accounting and Auditing (A&A) standards on which the reporting is based. Accounting standards are seen as a critical language of business. In countries seeking to improve their business environment to attract foreign direct investment and mobilize savings and finance to support productive and job-creating ac...

  10. 75 FR 66725 - National Fire Protection Association (NFPA) Proposes To Revise Codes and Standards

    Science.gov (United States)

    2010-10-29

    ... and Television Production Studio Soundstages, Approved Production Facilities, and Production Locations. NFPA 225--2009 Model Manufactured Home 5/23/2011 Installation Standard. NFPA 259--2008 Standard Test... Fire Safety 5/23/2011 Criteria for Manufactured Home Installations, Sites, and Communities. NFPA 501...

  11. 76 FR 70413 - National Fire Protection Association (NFPA): Request for Comments on NFPA's Codes and Standards

    Science.gov (United States)

    2011-11-14

    ... Tests and Classification System P for Cigarette Ignition Resistance of Components of Upholstered... Systems (PASS)..... P NFPA 1989 Standard on Breathing Air Quality for Emergency P Services Respiratory... of Standpipe and Hose P Systems. NFPA 17 Standard for Dry Chemical Extinguishing Systems...... P NFPA...

  12. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, L.M.; Jordon, W.C. [Oak Ridge National Lab., TN (United States); Edwards, A.L. [Oak Ridge National Lab., TN (United States)]|[Lawrence Livermore National Lab., CA (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  13. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries.

  14. Transmission of object based fine-granular-scalability video over networks

    Science.gov (United States)

    Shi, Xu-li; Jin, Zhi-cheng; Teng, Guo-wei; Zhang, Zhao-yang; An, Ping; Xiao, Guang

    2006-05-01

    How to transmit video streams over the Internet and wireless networks is a central question in current video-standards research. One key method, supported by MPEG-4, is FGS (Fine Granularity Scalability), which can continuously adapt to varying network bandwidth at some cost in coding efficiency. MPEG-4 was also the first standard to include object-based video coding, which enables interactive video; however, real-time segmentation of the VOP (video object plane) is difficult, which limits the application of MPEG-4 to interactive video. H.264/AVC is the most recent video coding standard; it enhances compression performance and provides a network-friendly video representation. In this paper, we propose a new Object-Based FGS (OBFGS) coding algorithm embedded in H.264/AVC that differs from the MPEG-4 approach. After optimization of the H.264 encoder, the base layer is coded first. Moving VOPs are then extracted using the base-layer motion vectors and DCT coefficients. The sparse base-layer motion vector field of each P-frame, composed of 4*4, 4*8, and 8*4 blocks, is interpolated, and the DCT coefficients of each I-frame are calculated from the spatial intra-prediction information. After forward-projecting each P-frame vector onto the immediately adjacent I-frame, the method extracts moving VOPs using a recursive 4*4 block classification process. Only the blocks that belong to the moving VOP, at 4*4 block-level accuracy, are coded to produce the enhancement-layer stream. Experimental results show that the proposed system obtains high quality for the VOP of interest at the cost of some coding efficiency.
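    The final classification step described above can be illustrated compactly. The following is a hypothetical sketch (ours, not the authors' code): given base-layer motion vectors already interpolated to a dense 4*4-block grid, blocks whose motion exceeds a threshold are marked as belonging to the moving VOP and earmarked for enhancement-layer coding. The threshold, array shapes, and function name are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of the block-classification idea: reuse base-layer
# motion vectors (interpolated to a dense 4x4-block grid) to mark which
# 4x4 blocks belong to a moving object, so that only those blocks receive
# enhancement-layer bits. Threshold and shapes are our assumptions.
def moving_vop_mask(mv: np.ndarray, thresh: float = 1.0) -> np.ndarray:
    """mv: (H/4, W/4, 2) motion vectors per 4x4 block -> boolean VOP mask."""
    mag = np.hypot(mv[..., 0], mv[..., 1])
    return mag > thresh

mv = np.zeros((4, 4, 2))
mv[1:3, 1:3] = (3.0, -2.0)        # a small moving region, zero elsewhere
mask = moving_vop_mask(mv)
print(mask.sum(), "of", mask.size, "blocks coded in the enhancement layer")
```

    A real implementation would refine this mask recursively and combine it with the projected DCT information, but the thresholding above captures the core selection step.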

  15. Application of the Coastal and Marine Ecological Classification Standard to ROV Video Data for Enhanced Analysis of Deep-Sea Habitats in the Gulf of Mexico

    Science.gov (United States)

    Ruby, C.; Skarke, A. D.; Mesick, S.

    2016-02-01

    The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists otherwise unfamiliar with the specific content of existing video data to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for application of CMECS to deep-sea video and environmental data are presented.

  16. More Than Bar Codes: Integrating Global Standards-Based Bar Code Technology Into National Health Information Systems in Ethiopia and Pakistan to Increase End-to-End Supply Chain Visibility.

    Science.gov (United States)

    Hara, Liuichi; Guirguis, Ramy; Hummel, Keith; Villanueva, Monica

    2017-12-28

    The United Nations Population Fund (UNFPA) and the United States Agency for International Development (USAID) DELIVER PROJECT work together to strengthen public health commodity supply chains by standardizing bar coding under a single set of global standards. Beginning in 2015, UNFPA and USAID collaborated to pilot test how tracking and tracing of bar coded health products could be operationalized in the public health supply chains of Ethiopia and Pakistan and inform the ecosystem needed to begin full implementation. Pakistan had been using proprietary bar codes for inventory management of contraceptive supplies but transitioned to global standards-based bar codes during the pilot. The transition allowed Pakistan to leverage the original bar codes that were preprinted by global manufacturers as opposed to printing new bar codes at the central warehouse. However, barriers at lower service delivery levels prevented full realization of end-to-end data visibility. Key barriers at the district level were the lack of a digital inventory management system and the absence of bar codes at the primary packaging level, such as on single blister packs. The team in Ethiopia developed an open-source smartphone application that allowed the team to scan bar codes using the mobile phone's camera and to push the captured data to the country's data mart. Real-time tracking and tracing occurred from the central warehouse to the Addis Ababa distribution hub and to 2 health centers. These pilots demonstrated that standardized product identification and bar codes can significantly improve accuracy over manual stock counts while significantly streamlining the stock-taking process, resulting in efficiencies. The pilots also showed that bar coding technology by itself is not sufficient to ensure data visibility. Rather, by using global standards for identification and data capture of pharmaceuticals and medical devices, and integrating the data captured into national and global tracking systems
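    The "global standards" referred to here are the GS1 identifiers that concatenate fields such as GTIN, expiry date, and lot number into one element string. As a hedged illustration (our sketch, not project code, and covering only four common Application Identifiers), such a string can be split into labeled fields like this; the sample data values are invented:

```python
# Illustrative parser for a GS1 element string of the kind encoded in
# standards-based bar codes on health commodities. The Application
# Identifiers (AIs) below follow the GS1 General Specifications; the
# field names and sample data are our own.
GS = "\x1d"  # FNC1 group separator that terminates variable-length fields

# AI -> (field name, fixed length, or None for variable length)
AIS = {
    "01": ("gtin", 14),      # Global Trade Item Number
    "17": ("expiry", 6),     # expiration date, YYMMDD
    "10": ("lot", None),     # batch/lot number (variable length)
    "21": ("serial", None),  # serial number (variable length)
}

def parse_gs1(data: str) -> dict:
    """Split a concatenated GS1 element string into labeled fields."""
    out, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        name, length = AIS[ai]
        i += 2
        if length is not None:          # fixed-length field
            out[name] = data[i:i + length]
            i += length
        else:                           # variable-length: runs to FNC1 or end
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            out[name] = data[i:end]
            i = end + 1
    return out

print(parse_gs1("0109501101530003172305311010A22B" + GS + "21S123"))
```

    A production implementation would also validate the GTIN check digit and handle the full AI table, but the fixed/variable split above is the essential mechanism.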

  17. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Alonso JoséI

    2008-01-01

    Full Text Available New convergent services are becoming possible thanks to the expansion of IP networks, the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements while providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive overview of the subject, as several technologies are involved: the medium access protocol of IEEE 802.11, the H.264 advanced video coding standard, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model for conversational video over wireless LAN. The WLAN is analyzed from the perspectives of optimal network throughput and video quality. The maximum number of simultaneous users derived from throughput is limited by the collisions taking place in the shared medium under the statistical contention protocol, while video quality is conditioned by the packet loss the contention protocol introduces. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, leading to the conclusion that dimensioning conversational video on network throughput alone is not enough to ensure a satisfactory user experience; video quality must also be taken into account. Finally, the proposed model is applied to a real office scenario.

  18. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Raquel Perez Leal

    2007-12-01

    Full Text Available New convergent services are becoming possible thanks to the expansion of IP networks, the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements while providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive overview of the subject, as several technologies are involved: the medium access protocol of IEEE 802.11, the H.264 advanced video coding standard, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model for conversational video over wireless LAN. The WLAN is analyzed from the perspectives of optimal network throughput and video quality. The maximum number of simultaneous users derived from throughput is limited by the collisions taking place in the shared medium under the statistical contention protocol, while video quality is conditioned by the packet loss the contention protocol introduces. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, leading to the conclusion that dimensioning conversational video on network throughput alone is not enough to ensure a satisfactory user experience; video quality must also be taken into account. Finally, the proposed model is applied to a real office scenario.
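    The throughput side of such a dimensioning exercise reduces to a simple division. The sketch below is a back-of-the-envelope illustration of that bound only; all numbers are our assumptions (not values from the paper): an 802.11g cell with roughly 22 Mbit/s of usable MAC throughput and H.264 conversational calls at 384 kbit/s per direction.

```python
# Illustrative throughput-only bound for conversational video over WLAN.
# All figures are assumptions for the sketch, not values from the paper.
def max_video_users(mac_throughput_bps: float,
                    stream_rate_bps: float,
                    overhead_factor: float = 1.3) -> int:
    """Upper bound on simultaneous two-way calls from raw throughput alone.

    overhead_factor lumps RTP/UDP/IP/MAC headers into one multiplier; a
    real model (as the paper argues) must additionally cap users by the
    packet loss that contention imposes on video quality, not just by
    throughput.
    """
    per_call = 2 * stream_rate_bps * overhead_factor  # uplink + downlink
    return int(mac_throughput_bps // per_call)

print(max_video_users(22e6, 384e3))  # throughput-only user bound
```

    The paper's point is precisely that this bound is optimistic: collisions degrade video quality before the throughput ceiling is reached, so the quality-based limit is typically lower.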

  19. Standards, Best Practices and Codes of Ethics Impact on IT Service Quality – The Case of Slovenian IT Departments

    Directory of Open Access Journals (Sweden)

    Dzangir Kolar

    2017-03-01

    Full Text Available The purpose of this paper is to explore the critical success factors for implementing standards, best practices and codes of ethics, the benefits they bring when implemented, and how they impact the quality of information technology (IT) services. Through an extensive literature review and interviews with experts in the field we identified the instrumental determinants. A structural equation model (SEM) was used on the case of IT departments in large Slovenian companies to test the hypotheses presented. The study is based on 102 responses from IT managers in large Slovenian companies. The research findings confirmed a positive correlation between the factors used.

  20. An Overview of the Coding Standard MPEG-4 Audio Amendments 1 and 2: HE-AAC, SSC, and HE-AAC v2

    Directory of Open Access Journals (Sweden)

    Ekstrand P

    2009-01-01

    Full Text Available In 2003 and 2004, the ISO/IEC MPEG standardization committee added two amendments to its MPEG-4 audio coding standard. These amendments concern parametric coding techniques and encompass Spectral Band Replication (SBR), Sinusoidal Coding (SSC), and Parametric Stereo (PS). In this paper, we give an overview of the basic ideas behind these techniques, with references to more detailed information. Furthermore, the results of listening tests performed during the final stages of the MPEG-4 standardization process are presented in order to illustrate the performance of these techniques.

  1. Standardizing training for psoriasis measures: effectiveness of an online training video on Psoriasis Area and Severity Index assessment by physician and patient raters.

    Science.gov (United States)

    Armstrong, April W; Parsi, Kory; Schupp, Clayton W; Mease, Philip J; Duffin, Kristina C

    2013-05-01

    Because the Psoriasis Area and Severity Index (PASI) is the most commonly used and validated disease severity measure for clinical trials, it is imperative to standardize training to ensure reliability in PASI scoring for accurate assessment of disease severity. To evaluate whether an online PASI training video improves scoring accuracy among patients with psoriasis and physicians on first exposure to PASI. This equivalency study compared PASI assessment performed by patients and PASI-naive physicians with that of PASI-experienced physicians at baseline and after standardized video training. The study was conducted from March 15, 2011, to September 1, 2011. Outpatient psoriasis clinic at University of California, Davis. Forty-two psoriasis patients and 14 PASI-naive physicians participated in the study. The scores from 12 dermatologists experienced in PASI evaluation were used as the criterion standard against which other scores were compared. Aggregate and component PASI scores from image sets corresponding to mild, moderate, and severe psoriasis. After viewing the training video, PASI-naive physicians produced equivalent scores for all components of PASI; patients provided equivalent scores for most PASI components, with the exception of area scores for moderate-to-severe psoriasis images. After the online video training, the PASI-naive physicians and patients exhibited improved accuracy in assigning total PASI scores for mild (Mean(experienced physician) - Mean(PASI-naive physician): 1.2; Mean(experienced physician) - Mean(patient): -2.1), moderate (Mean(experienced physician) - Mean(PASI-naive physician): 0; Mean(experienced physician) - Mean(patient): -5.7), and severe (Mean(experienced physician) - Mean(PASI-naive physician): -5.1; Mean(experienced physician) - Mean(patient): -10.4) psoriasis, respectively. Use of an online PASI training video represents an effective tool in improving accuracy in PASI scoring by both health care professionals and patients

  2. The need for standards and codes to ensure an acoustically comfortable environment in multifamily housing buildings in Mexico City

    Science.gov (United States)

    Kotasek Gonzalez, Eduardo; Rodriguez Manzo, Fausto

    2002-11-01

    It is clear that almost all kinds of buildings require protection against noise and undesirable sounds; however, there are some countries where this is not yet regulated, and such is the case of Mexico. Mexico City, the biggest city in the world, could also be the noisiest. This problem is being debated today, and there is no doubt that it has an important influence on the acoustic comfort of dwellings, alongside the habits and culture of the inhabitants, which are very different from those in Anglo-Saxon countries. All of these details must be taken into account in the design of an acoustic comfort standard for buildings in cities like Mexico City. In this paper we address this problem and suggest some recommendations to consider in a proposed acoustic comfort standard or code to be applied in the design of multifamily housing buildings in Mexico City.

  3. The safety relief valve handbook design and use of process safety valves to ASME and International codes and standards

    CERN Document Server

    Hellemans, Marc

    2009-01-01

    The Safety Valve Handbook is a professional reference for design, process, instrumentation, plant and maintenance engineers who work with fluid flow and transportation systems in the process industries, which covers the chemical, oil and gas, water, paper and pulp, food and bio products and energy sectors. It meets the need of engineers who have responsibilities for specifying, installing, inspecting or maintaining safety valves and flow control systems. It will also be an important reference for process safety and loss prevention engineers, environmental engineers, and plant and process designers who need to understand the operation of safety valves in a wider equipment or plant design context. . No other publication is dedicated to safety valves or to the extensive codes and standards that govern their installation and use. A single source means users save time in searching for specific information about safety valves. . The Safety Valve Handbook contains all of the vital technical and standards informat...

  4. Standardized quality assurance forms for organ transplantations with multilingual support, open access and UMLS coding.

    Science.gov (United States)

    Varghese, Julian; Sünninghausen, Sarah Schulze; Dugas, Martin

    2015-01-01

    Quality assurance (QA) is a key factor in evaluating the success of organ transplantations. In Germany, QA documentation is progressively developed and enforced by law. Our objective is to share QA models from Germany in a standardized format within a form repository for worldwide reuse and exchange. Original QA forms were converted into standardized study forms according to the Operational Data Model (ODM) and shared for open access in an international forms repository. Form elements were translated into English and semantically enriched with Concept Unique Identifiers from the Unified Medical Language System (UMLS) based on medical expert decision. All forms are available on the web as multilingual ODM documents. UMLS concept coverage analysis indicates 92% coverage, with few but critically important definition gaps. New content and infrastructure for harmonized documentation forms are provided in the domain of organ transplantations, enabling worldwide reuse and exchange.
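    The pairing of ODM form elements with UMLS codes typically uses the ODM `Alias` element, which carries a coding context and a code. The snippet below is a minimal sketch of that pattern; the OID, item name, and CUI are illustrative placeholders, not values taken from the actual transplantation forms.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of how an ODM ItemDef can carry a UMLS concept code via
# the ODM Alias element (Context/Name attributes). OID, item, and CUI are
# illustrative placeholders, not taken from the actual QA forms.
item = ET.Element("ItemDef", OID="I.CREA", Name="Serum creatinine",
                  DataType="float")
ET.SubElement(item, "Alias", Context="UMLS", Name="C0201976")
print(ET.tostring(item, encoding="unicode"))
```

    Because the semantic annotation rides along inside the standard ODM document, any ODM-aware tool can exchange the forms while concept-aware tools can additionally match items across languages via the CUI.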

  5. Calculation of low-cycle fatigue in accordance with the national standard and strength codes

    Science.gov (United States)

    Kontorovich, T. S.; Radin, Yu. A.

    2017-08-01

    Over the past 15 years, the Russian power industry has largely relied on imported equipment manufactured in compliance with foreign standards and procedures. This inevitably necessitates their harmonization with the regulatory documents of the Russian Federation, which include calculations of strength and low-cycle fatigue and assessment of the equipment service life. An important regulatory document providing the engineering foundation for cyclic strength and life assessment of high-load components of the boiler and steam lines of a water/steam circuit is RD 10-249-98:2000: Standard Method of Strength Estimation in Stationary Boilers and Steam and Water Piping. In January 2015, the National Standard of the Russian Federation GOST R 55682.3-2013/EN 12952-3:2001 was introduced, regulating the design and calculation of the pressure parts of water-tube boilers and auxiliary installations. Thus, two documents are simultaneously valid in the same field, using different methods for calculating low-cycle fatigue strength, which leads to different results. The current situation can therefore produce incorrect ideas about the cyclic strength and service life of high-temperature boiler parts. The article shows that the results of calculations performed in accordance with GOST R 55682.3-2013/EN 12952-3:2001 are less conservative than the results of RD 10-249-98. Since the calculation of the expected service life of boiler parts should use GOST R 55682.3-2013/EN 12952-3:2001, it becomes necessary to establish the applicability scope of each of the above documents.

  6. low bit rate video coding low bit rate video coding

    African Journals Online (AJOL)

    eobe

    The first step of the bottom-up merging procedure is to find the motion vectors of the current frame by any kind of motion estimation algorithm. Once the motion vectors are available to the motion compensation module, the bottom-up merging process is implemented in two steps. First, the VBMC merges macro-blocks into bigger blocks, and ...
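    One bottom-up merging pass of the kind sketched in this excerpt can be illustrated as follows. This is a hedged reconstruction of the general variable block-size motion compensation (VBMC) idea, not the paper's algorithm: four neighbouring blocks are merged into one larger block when their motion vectors are near-identical, so that a single vector suffices for the merged block. The similarity test and tolerance are our assumptions.

```python
import numpy as np

# Hedged sketch of one bottom-up merging step in variable block-size
# motion compensation: four neighbouring blocks merge into one larger
# block when their motion vectors are (near-)identical, so one vector can
# be coded for the merged block. The similarity test is our assumption.
def merge_step(mv: np.ndarray, tol: float = 0.5):
    """mv: (H, W, 2) per-block vectors -> (H//2, W//2, 2) vectors + merge mask."""
    h, w = mv.shape[0] // 2, mv.shape[1] // 2
    merged = np.zeros((h, w, 2))
    can_merge = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            quad = mv[2*i:2*i+2, 2*j:2*j+2].reshape(4, 2)
            # mergeable if each vector component varies by at most tol
            can_merge[i, j] = bool(np.all(np.ptp(quad, axis=0) <= tol))
            merged[i, j] = quad.mean(axis=0)
    return merged, can_merge

mv = np.ones((4, 4, 2))           # uniform motion: everything merges
merged, ok = merge_step(mv)
print(ok.all())
```

    Applying the same step repeatedly on the merged field yields the full bottom-up hierarchy, with unmerged quads keeping their individual vectors.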

  7. Overview of the U.S. DOE Hydrogen Safety, Codes and Standards Program. Part 4: Hydrogen Sensors; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Buttner, William J.; Rivkin, Carl; Burgess, Robert; Brosha, Eric; Mukundan, Rangachary; James, C. Will; Keller, Jay

    2016-12-01

    Hydrogen sensors are recognized as a critical element in the safety design for any hydrogen system. In this role, sensors can perform several important functions including indication of unintended hydrogen releases, activation of mitigation strategies to preclude the development of dangerous situations, activation of alarm systems and communication to first responders, and to initiate system shutdown. The functionality of hydrogen sensors in this capacity is decoupled from the system being monitored, thereby providing an independent safety component that is not affected by the system itself. The importance of hydrogen sensors has been recognized by DOE and by the Fuel Cell Technologies Office's Safety and Codes Standards (SCS) program in particular, which has for several years supported hydrogen safety sensor research and development. The SCS hydrogen sensor programs are currently led by the National Renewable Energy Laboratory, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory. The current SCS sensor program encompasses the full range of issues related to safety sensors, including development of advance sensor platforms with exemplary performance, development of sensor-related code and standards, outreach to stakeholders on the role sensors play in facilitating deployment, technology evaluation, and support on the proper selection and use of sensors.

  8. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    Science.gov (United States)

    Predoi, Mihai Valentin

    2014-09-01

    The dispersion curves for hollow multilayered cylinders are prerequisites in any practical guided waves application on such structures. The equations for homogeneous isotropic materials have been established more than 120 years ago. The difficulties in finding numerical solutions to analytic expressions remain considerable, especially if the materials are orthotropic visco-elastic as in the composites used for pipes in the last decades. Among other numerical techniques, the semi-analytical finite elements method has proven its capability of solving this problem. Two possibilities exist to model a finite elements eigenvalue problem: a two-dimensional cross-section model of the pipe or a radial segment model, intersecting the layers between the inner and the outer radius of the pipe. The last possibility is here adopted and distinct differential problems are deduced for longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three modes classes, offering explicit forms of each coefficient for the matrices used in an available general purpose finite elements code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid or visco-elastic coatings as well as for a fully orthotropic hollow cylinder are all proving the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
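    The algebraic core of the semi-analytical finite element approach described above is, at each fixed angular frequency, a quadratic eigenvalue problem in the wavenumber k. The sketch below shows only the standard companion linearization of such a problem, with tiny random stand-in matrices; the actual assembly of the stiffness and mass matrices from the radial-segment mesh is the paper's subject and is not reproduced here.

```python
import numpy as np

# Schematic of the SAFE algebraic core: at a fixed frequency, wavenumbers
# k solve the quadratic eigenproblem (A0 + i*k*A1 + k^2*A2) u = 0, where
# A0..A2 would be assembled from the finite element mesh. The matrices
# here are small stand-ins just to demonstrate the linearization.
def safe_wavenumbers(A0, A1, A2):
    """Linearize the quadratic eigenproblem and return the eigen-wavenumbers."""
    n = A0.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    # Companion form: [[0, I], [-A0, -i*A1]] x = k [[I, 0], [0, A2]] x
    L = np.block([[Z, I], [-A0, -1j * A1]])
    R = np.block([[I, Z], [Z, A2]])
    return np.linalg.eigvals(np.linalg.solve(R, L))

rng = np.random.default_rng(0)
A0 = rng.standard_normal((3, 3))
A1 = rng.standard_normal((3, 3))
A2 = 2.0 * np.eye(3)              # leading block kept invertible
k = safe_wavenumbers(A0, A1, A2)
print(k.shape)                    # 2n wavenumbers for an n-dof model
```

    Sweeping the frequency and collecting the real (propagating) wavenumbers at each step traces out the dispersion curves that are the prerequisite for guided-wave applications on pipes.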

  9. Validation of Indian standard code provisions for fire resistance of flexural elements

    Directory of Open Access Journals (Sweden)

    Aneesha Balaji

    2015-04-01

    Full Text Available Fire resistance provisions in Indian codes are prescriptive in nature and provide only tabulated fire ratings for structural members. Eurocode EN 1992-1-2:2004 suggests simplified methods, which include explicit equations, for fire-resistant design. The aim of this paper is to introduce one such simplified method, the 500°C isotherm method. The procedure is customized for Indian conditions, and a parametric study is carried out to determine the fire rating of flexural elements. The fire ratings recommended in IS 456:2000 are compared against the strength criterion evaluated with the 500°C isotherm method, and against the thermal criterion obtained from heat transfer analysis of a finite element model. These studies show that for most cross-sections the fire rating obtained from the two methods is higher than that given in the IS 456:2000 provisions, and that increasing the cover has a significant effect on fire rating only for lower values of cover to reinforcement.
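    The 500°C isotherm method reduces to an ordinary section-capacity calculation on a shrunken cross-section: concrete hotter than 500°C is discarded, the remaining core keeps its room-temperature strength, and the reinforcement strength is reduced by a temperature factor. The numbers and the plain rectangular-stress-block expression below are our illustrative assumptions, not the paper's worked example or the full code procedure.

```python
# Hedged numerical sketch of the 500 C isotherm idea for a singly
# reinforced beam: concrete hotter than 500 C is discarded (reduced
# width b_red), the core keeps full strength, and the rebar strength is
# scaled by a temperature factor. Dimensions and strengths are made up;
# the expression is the usual rectangular stress-block moment capacity.
def moment_capacity(b_red_mm, d_mm, As_mm2, fck, fy, k_steel_temp):
    """Ultimate moment (kN*m) of the reduced section."""
    fyd = k_steel_temp * fy                     # temperature-reduced steel
    a = As_mm2 * fyd / (0.85 * fck * b_red_mm)  # stress-block depth
    return As_mm2 * fyd * (d_mm - a / 2) / 1e6

cold = moment_capacity(300, 450, 1256, 30, 500, 1.0)
hot = moment_capacity(260, 450, 1256, 30, 500, 0.6)  # after fire exposure, say
print(round(cold), round(hot))  # capacity drops as the isotherm moves inward
```

    The fire rating then follows by stepping the isotherm inward (and the steel factor downward) with exposure time until the reduced capacity falls below the applied moment.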

  10. New Standard Evaluated Neutron Cross Section Libraries for the GEANT4 Code and First Verification

    CERN Document Server

    Mendoza, Emilio; Koi, Tatsumi; Guerrero, Carlos

    2014-01-01

    The Monte Carlo simulation of the interaction of neutrons with matter relies on evaluated nuclear data libraries and models. The evaluated libraries are compilations of measured physical parameters (such as cross sections) combined with predictions of nuclear model calculations which have been adjusted to reproduce the experimental data. The results obtained from the simulations depend largely on the accuracy of the underlying nuclear data used, and thus it is important to have access to the available nuclear data libraries, either of general use or compiled for specific applications, and to perform exhaustive validations covering the wide scope of application of the simulation code. In this paper we describe the work performed in order to extend the capabilities of the GEANT4 toolkit for the simulation of the interaction of neutrons with matter at neutron energies up to 20 MeV, along with a first verification of the results obtained. Such work is of relevance for applications as diverse as the simulation of a n...

  11. Universal Design Criteria in Standards and Codes About Accessibility of Built Environments in Brazil.

    Science.gov (United States)

    Guimarães, Marcelo Pinto

    2016-01-01

    This paper offers a critical analysis of the development and implementation of the national standard for accessibility of the built environment in Brazil, NBR9050. The current 2015 version resembles an encyclopaedia, including a variety of exotic contributions gathered historically from different sources; that characteristic makes it work like a puzzle that keeps prejudices about users' needs and disabilities alive. In addition, there are conflicts between recommended ideas and requirements carried over from older versions. A definition of Universal Design has been published since 2004, but there is still no indication of how to make its principles work in practice. It is therefore very hard for city officials to assess the quality of environments, and professionals face serious constraints in exploring their skills while addressing users' diversified needs. Certainly, the current NBR9050 requires further editorial work. Nevertheless, an important decision is necessary: information should be organized so that readers can identify, in each topic, whether Universal Design can be achieved or whether the proposed technical solution leads to limited spatial adaptation and only poor accommodation of users with uncommon needs. By presenting examples in the context of socially inclusive environments, the newly revised version of NBR9050 needs to explain the pitfalls of bad accessibility design for disabled users who face discrimination.

  12. M-health medical video communication systems: an overview of design approaches and recent advances.

    Science.gov (United States)

    Panayides, A S; Pattichis, M S; Constantinides, A G; Pattichis, C S

    2013-01-01

    The emergence of the new, High Efficiency Video Coding (HEVC) standard, combined with wide deployment of 4G wireless networks, will provide significant support toward the adoption of mobile-health (m-health) medical video communication systems in standard clinical practice. For the first time since the emergence of m-health systems and services, medical video communication systems can be deployed that can rival the standards of in-hospital examinations. In this paper, we provide a thorough overview of today's advancements in the field, discuss existing approaches, and highlight the future trends and objectives.

  13. SPECIAL REPORT: Creating Conference Video

    Directory of Open Access Journals (Sweden)

    Noel F. Peden

    2008-12-01

    Full Text Available Capturing video at a conference is easy; doing it so the product is useful is another matter. Many subtle problems come into play before the video and audio obtained can be used to create a final product. This article discusses what the author learned in two years of shooting and editing video for the Code4Lib conference.

  14. High definition colonoscopy combined with i-Scan is superior in the detection of colorectal neoplasias compared with standard video colonoscopy: a prospective randomized controlled trial.

    Science.gov (United States)

    Hoffman, A; Sar, F; Goetz, M; Tresch, A; Mudter, J; Biesterfeld, S; Galle, P R; Neurath, M F; Kiesslich, R

    2010-10-01

    Colonoscopy is the accepted gold standard for the detection of colorectal cancer. The aim of the current study was to prospectively compare high definition plus (HD+) colonoscopy with I-Scan functionality (electronic staining) vs. standard video colonoscopy. The primary endpoint was the detection of patients having colon cancer or at least one adenoma. A total of 220 patients due to undergo screening colonoscopy, postpolypectomy surveillance or with a positive occult blood test were randomized in a 1 : 1 ratio to undergo HD+ colonoscopy in conjunction with I-Scan surface enhancement (90i series, Pentax, Tokyo, Japan) or standard video colonoscopy (EC-3870FZK, Pentax). Detected colorectal lesions were judged according to type, location, and size. Lesions were characterized in the HD+ group by using further I-Scan functionality (p- and v-modes) to analyze pattern and vessel architecture. Histology was predicted and biopsies or resections were performed on all identified lesions. HD+ colonoscopy with I-Scan functionality detected significantly more patients with colorectal neoplasia (38 %) compared with standard resolution endoscopy (13 %) (200 patients finally analyzed; 100 per arm). Significantly more neoplastic (adenomatous and cancerous) lesions and more flat adenomas could be detected using high definition endoscopy with surface enhancement. Final histology could be predicted with high accuracy (98.6 %) within the HD+ group. HD+ colonoscopy with I-Scan is superior to standard video colonoscopy in detecting patients with colorectal neoplasia based on this prospective, randomized, controlled trial. © Georg Thieme Verlag KG Stuttgart · New York.

  15. On the evolution of the standard genetic code: vestiges of critical scale invariance from the RNA world in current prokaryote genomes.

    Directory of Open Access Journals (Sweden)

    Marco V José

    Full Text Available Herein, two genetic codes from which the primeval RNA code could have given rise to the standard genetic code (SGC) are derived. One of them, called extended RNA code type I, consists of all codons of the type RNY (purine-any base-pyrimidine) plus codons obtained by considering the RNA code in the second (NYR type) and third (YRN type) reading frames. The extended RNA code type II comprises all codons of the type RNY plus codons that arise from transversions of the RNA code in the first (YNY type) and third (RNR type) nucleotide bases. In order to test whether putative nucleotide sequences in the RNA World and in both extended RNA codes share the same scaling and statistical properties as those encountered in current prokaryotes, we used the genomes of four Eubacteria and three Archaea. For each prokaryote, we obtained the subsequences of its genome obeying the RNA code or the extended RNA codes types I and II. In each case, we estimated the scaling properties of triplet sequences via a renormalization group approach, and we calculated the frequency distributions of distances for each codon. Remarkably, the scaling properties of the distance series of some codons from the RNA code and most codons from both extended RNA codes turned out to be identical or very close to the scaling properties of codons of the SGC. To test the robustness of these results, we show, via computer simulation experiments, that random mutations of current genomes, at a rate of 10(-10) per site per year over three billion years, were not enough to destroy the observed patterns. Therefore, we conclude that most current prokaryotes may still contain relics of the primeval RNA World and that both extended RNA codes may well represent two plausible evolutionary paths between the RNA code and the current SGC.
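
    The codon sets named in this abstract are small enough to enumerate directly. A minimal sketch (my own illustration, not code from the paper), using R = purine (A, G), Y = pyrimidine (C, U), N = any base:

    ```python
    # Enumerate the codon sets described in the abstract.
    from itertools import product

    R, Y, N = "AG", "CU", "ACGU"

    def codons(pattern):
        """All codons matching a pattern given as three base alphabets."""
        return {"".join(p) for p in product(*pattern)}

    rny = codons((R, N, Y))                                  # primeval RNA code
    type_i = rny | codons((N, Y, R)) | codons((Y, R, N))     # frame-shifted readings
    type_ii = rny | codons((Y, N, Y)) | codons((R, N, R))    # first/third transversions

    print(len(rny), len(type_i), len(type_ii))
    ```

    The counts fall out immediately: the RNY code has 2·4·2 = 16 codons, and each extended code adds two disjoint 16-codon sets, for 48 in total.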

  16. Performance Based Plastic Design of Concentrically Braced Frame attuned with Indian Standard code and its Seismic Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Sejal Purvang Dalal

    2015-12-01

    Full Text Available In the Performance Based Plastic Design method the failure mechanism is predetermined, which has made the method popular throughout the world; however, due to a lack of proper guidelines and a simple stepwise methodology, it is not widely used in India. In this paper, a stepwise design procedure for Performance Based Plastic Design of a Concentrically Braced Frame attuned with the Indian Standard code is presented. A comparative seismic performance evaluation of a six-storey concentrically braced frame designed using the displacement-based Performance Based Plastic Design (PBPD) method and the currently used force-based Limit State Design (LSD) method has also been carried out by nonlinear static pushover analysis and time history analysis under three different ground motions. Results show that the Performance Based Plastic Design method is superior to the current design in terms of displacement and acceleration response. Total collapse of the frame is also prevented in the PBPD frame.

  17. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  18. Adaptive and ubiquitous video streaming over Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Malik Mubashir Hassan

    2016-10-01

    Full Text Available In recent years, the dramatic improvement in the scalability of the H.264/MPEG-4 standard and the growing demand for new multimedia services have spurred research on scalable video streaming over wireless networks in both industry and academia. Video streaming applications are increasingly being deployed in Wireless Mesh Networks (WMNs). However, robust streaming of video over WMNs poses many challenges due to the varying nature of wireless networks. Bit errors, packet losses and burst packet losses are very common in such networks, and they severely influence the perceived video quality at the receiving end. Therefore, a carefully designed error recovery scheme must be employed. In this paper, we propose an interactive and ubiquitous video streaming scheme for Scalable Video Coding (SVC) based video streaming over WMNs towards heterogeneous receivers. Intelligently taking advantage of path diversity, the proposed scheme first estimates the quality of all candidate paths and then, based on path quality, adaptively decides the size and level of error protection for all packets in order to combat the effect of losses on the perceived quality of the reconstructed video at the receiving end. Our experimental results show that the proposed streaming approach can react to varying channel conditions with less degradation in video quality.
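
    The path-quality-driven unequal error protection idea can be sketched roughly as follows (a hypothetical illustration; the function names, margins, and heuristic are mine, not the paper's actual scheme):

    ```python
    import math

    def fec_parity_count(k, loss_rate, margin):
        """Parity packets for k source packets: expected losses times a
        safety margin (a simple heuristic, not the paper's scheme)."""
        return math.ceil(k * loss_rate * margin)

    def allocate_protection(paths, k=10):
        """paths maps path name -> estimated packet-loss rate. The base layer
        rides the best path with extra redundancy (unequal error protection);
        enhancement layers take the remaining paths with lighter protection."""
        ranked = sorted(paths, key=paths.get)          # lowest loss first
        plan = {}
        for i, name in enumerate(ranked):
            layer = "base" if i == 0 else f"enh{i}"
            margin = 2.0 if i == 0 else 1.2            # base layer protected more
            plan[layer] = (name, fec_parity_count(k, paths[name], margin))
        return plan

    plan = allocate_protection({"pathA": 0.02, "pathB": 0.10, "pathC": 0.05})
    print(plan)
    ```

    The base layer lands on the lowest-loss path with the largest margin, which reflects the SVC property that enhancement layers are useless if the base layer is lost.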

  19. Pilot study comparing changes in postural control after training using a video game balance board program and 2 standard activity-based balance intervention programs.

    Science.gov (United States)

    Pluchino, Alessandra; Lee, Sae Yong; Asfour, Shihab; Roos, Bernard A; Signorile, Joseph F

    2012-07-01

    To compare the impacts of Tai Chi, a standard balance exercise program, and a video game balance board program on postural control and perceived falls risk. Randomized controlled trial. Research laboratory. Independent seniors (N=40; mean age 72.5±8.40y) began the training; 27 completed. Tai Chi, a standard balance exercise program, and a video game balance board program. The following were used as measures: Timed Up & Go, One-Leg Stance, functional reach, Tinetti Performance Oriented Mobility Assessment, force plate center of pressure (COP) and time to boundary, dynamic posturography (DP), Falls Risk for Older People-Community Setting, and Falls Efficacy Scale. No significant differences were seen between groups for any outcome measure at baseline, nor were there significant time or group × time differences for any field test or questionnaire. No group × time differences were seen for any COP measure; however, significant time differences were seen across the entire sample for total COP, for 3 of 4 anterior/posterior displacement measures and both velocity measures, and for 1 medial/lateral displacement and 1 velocity measure. For DP, significant improvements were seen for the sample in the overall score (dynamic movement analysis score) and in 2 of the 3 linear and angular measures. The video game balance board program, which can be performed at home, was as effective as Tai Chi and the standard balance exercise program in improving postural control and balance as dictated by the force plate postural sway and DP measures. This finding may have implications for exercise adherence because the at-home nature of the intervention eliminates many obstacles to exercise training. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  20. A Peer-Reviewed Instructional Video is as Effective as a Standard Recorded Didactic Lecture in Medical Trainees Performing Chest Tube Insertion: A Randomized Control Trial.

    Science.gov (United States)

    Saun, Tomas J; Odorizzi, Scott; Yeung, Celine; Johnson, Marjorie; Bandiera, Glen; Dev, Shelly P

    Online medical education resources are becoming an increasingly used modality and many studies have demonstrated their efficacy in procedural instruction. This study sought to determine whether a standardized online procedural video is as effective as a standard recorded didactic teaching session for chest tube insertion. A randomized control trial was conducted. Participants were taught how to insert a chest tube with either a recorded didactic teaching session or a New England Journal of Medicine (NEJM) video. Participants filled out a questionnaire before and after performing the procedure on a cadaver, which was filmed and assessed by 2 blinded evaluators using a standardized tool. Western University, London, Ontario. Level of clinical care: institutional. A total of 30 fourth-year medical students from 2 graduating classes at the Schulich School of Medicine & Dentistry were screened for eligibility. Two students did not complete the study and were excluded. There were 13 students in the NEJM group and 15 students in the didactic group. The NEJM group's average score was 45.2% (±9.56) on the prequestionnaire, 67.7% (±12.9) for the procedure, and 60.1% (±7.65) on the postquestionnaire. The didactic group's average score was 42.8% (±10.9) on the prequestionnaire, 73.7% (±9.90) for the procedure, and 46.5% (±7.46) on the postquestionnaire. There was no difference between the groups on the prequestionnaire (Δ +2.4%; 95% CI: -5.16 to 9.99) or the procedure (Δ -6.0%; 95% CI: -14.6 to 2.65). The NEJM group had better scores on the postquestionnaire (Δ +11.15%; 95% CI: 3.74-18.6). The NEJM video was as effective as video-recorded didactic training for teaching the knowledge and technical skills essential for chest tube insertion. Participants expressed high satisfaction with this modality. It may prove to be a helpful adjunct to standard instruction on the topic. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc.

  1. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  2. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights.
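
    As a rough illustration of the kind of model involved (far simpler than the optimization-based algorithms the thesis develops), a local backlight zone can be driven by its peak luminance, with pixel values compensated by the inverse of the local backlight:

    ```python
    # Minimal local backlight dimming sketch: the zone's LED level is the
    # peak pixel luminance, and pixels are scaled up so that the displayed
    # luminance (backlight * pixel) is preserved.

    def dim_backlight(zone_pixels):
        """zone_pixels: luminance values in [0, 1] for one backlight zone.
        Returns (backlight level, compensated pixel values)."""
        backlight = max(zone_pixels) if zone_pixels else 0.0
        if backlight == 0.0:
            return 0.0, [0.0 for _ in zone_pixels]
        compensated = [min(1.0, p / backlight) for p in zone_pixels]
        return backlight, compensated

    level, pixels = dim_backlight([0.1, 0.2, 0.5])
    print(level, pixels)
    ```

    Driving the zone at 0.5 instead of full power saves energy and deepens blacks, while the compensated pixels keep the same displayed luminance; real algorithms must also handle light diffusion between zones, which this sketch ignores.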

  3. Sharing code.

    Science.gov (United States)

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  4. Qualitative assessment of cause-of-injury coding in U.S. military hospitals: NATO standardization agreement (STANAG) 2050.

    Science.gov (United States)

    Amoroso, P J; Smith, G S; Bell, N S

    2000-04-01

    Accurate injury cause data are essential for injury prevention research. U.S. military hospitals, unlike civilian hospitals, use the NATO STANAG system for cause-of-injury coding. Reported deficiencies in civilian injury cause data suggested a need to specifically evaluate the STANAG. The Total Army Injury and Health Outcomes Database (TAIHOD) was used to evaluate worldwide Army injury hospitalizations, especially STANAG Trauma, Injury, and Place of Occurrence coding. We conducted a review of hospital procedures at Tripler Army Medical Center (TAMC) including injury cause and intent coding, potential crossover between acute injuries and musculoskeletal conditions, and data for certain hospital patients who are not true admissions. We also evaluated the use of free-text injury comment fields in three hospitals. Army-wide review of injury records coding revealed full compliance with cause coding, although nonspecific codes appeared to be overused. A small but intensive single hospital records review revealed relatively poor intent coding but good activity and cause coding. Data on specific injury history were present on most acute injury records and 75% of musculoskeletal conditions. Place of Occurrence coding, although inherently nonspecific, was over 80% accurate. Review of text fields produced additional details of the injuries in over 80% of cases. STANAG intent coding specificity was poor, while coding of cause of injury was at least comparable to civilian systems. The strengths of military hospital data systems are an exceptionally high compliance with injury cause coding, the availability of free text, and capture of all population hospital records without regard to work-relatedness. Simple changes in procedures could greatly improve data quality.

  5. Efficient coding unit partition strategy for HEVC intracoding

    Science.gov (United States)

    Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Wang, Yi; Yu, Daoyin

    2017-07-01

    As the newest international video compression standard, high efficiency video coding (HEVC) achieves a higher compression ratio and better video quality than the previous standard, H.264/advanced video coding. However, the higher compression efficiency comes at the cost of an extraordinary computational load, which obstructs the implementation of the HEVC encoder in real-time applications and mobile devices. Intracoding is one of the most computationally intensive stages due to the flexible coding unit (CU) sizes and the high density of angular prediction modes. This paper presents an intraencoding technique to speed up the process, composed of an early coding tree unit (CTU) depth-interval prediction and an efficient CU partition method. The encoded CU depth information in the already encoded surrounding CTUs is used to predict the CU search depth interval of the current CTU. By analyzing the textural features of a CU, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs. The experimental results indicate that the proposed algorithm outperforms the reference software HM16.7, decreasing the coding time by up to 53.67% with a negligible bit-rate increase of 0.52% and a peak signal-to-noise ratio loss below 0.06 dB.
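
    The early-termination idea can be sketched with a simple variance test (an illustrative stand-in; the paper's actual criterion uses encoded depths of neighboring CTUs and more refined textural features, and the thresholds below are made up):

    ```python
    # Early CU split termination by texture: flat blocks stop splitting,
    # textured blocks are decomposed into four sub-CUs.

    def cu_variance(block):
        n = len(block) * len(block[0])
        mean = sum(sum(row) for row in block) / n
        return sum((p - mean) ** 2 for row in block for p in row) / n

    def should_split(block, depth, thresholds=(100.0, 50.0, 25.0)):
        """Split only if the CU is textured enough for its depth."""
        if depth >= len(thresholds):      # reached the smallest CU size
            return False
        return cu_variance(block) > thresholds[depth]

    flat = [[128] * 8 for _ in range(8)]                 # uniform block
    edge = [[0] * 4 + [255] * 4 for _ in range(8)]       # strong vertical edge
    print(should_split(flat, 0), should_split(edge, 0))
    ```

    Skipping the recursive rate-distortion search for flat CUs is where the encoding-time savings come from.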

  6. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira

    2016-07-28

    One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC and Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.
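
    For a concrete, minimal relative of the Generalized Reed-Muller family, here is the first-order binary code RM(1,3), built from evaluations of affine Boolean functions over {0,1}^3 (a textbook construction for illustration; the paper concerns q-ary Generalized Reed-Muller codes and their locality):

    ```python
    # RM(1,3): 16 codewords of length 8, minimum distance 4.
    from itertools import product

    points = list(product([0, 1], repeat=3))   # the 8 evaluation points

    def encode(msg):
        """msg = (a0, a1, a2, a3): evaluate a0 + a1*x1 + a2*x2 + a3*x3 mod 2
        at every point of {0,1}^3."""
        a0, a1, a2, a3 = msg
        return tuple((a0 + a1*x1 + a2*x2 + a3*x3) % 2 for x1, x2, x3 in points)

    codewords = {encode(m) for m in product([0, 1], repeat=4)}
    dmin = min(sum(c) for c in codewords if any(c))   # minimum nonzero weight
    print(len(codewords), dmin)
    ```

    Because every codeword is an evaluation of a low-degree polynomial, individual symbols can be recovered from small subsets of other symbols, which is the locality property the paper exploits for progressive decoding over erasure channels.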

  7. Analysis of Potential Benefits and Costs of Adopting ASHRAE Standard 90.1-1999 as a Commercial Building Energy Code in Illinois Jurisdictions

    Energy Technology Data Exchange (ETDEWEB)

    Belzer, David B.; Cort, Katherine A.; Winiarski, David W.; Richman, Eric E.; Friedrich, Michele

    2002-05-01

    ASHRAE Standard 90.1-1999 was developed in an effort to set minimum requirements for the energy-efficient design and construction of new commercial buildings. This report assesses the benefits and costs of adopting this standard as the building energy code in Illinois. Energy and economic impacts are estimated using BLAST simulations combined with a Life-Cycle Cost approach to assess the corresponding economic costs and benefits.

  8. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  9. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  10. Analysis of Potential Benefits and Costs of Adopting ASHRAE Standard 90.1-2001 as the Commercial Building Energy Code in Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Cort, Katherine A.; Winiarski, David W.; Belzer, David B.; Richman, Eric E.

    2004-09-30

    ASHRAE Standard 90.1-2001 Energy Standard for Buildings except Low-Rise Residential Buildings (hereafter referred to as ASHRAE 90.1-2001 or 90.1-2001) was developed in an effort to set minimum requirements for the energy efficient design and construction of new commercial buildings. The State of Tennessee is considering adopting ASHRAE 90.1-2001 as its commercial building energy code. In an effort to evaluate whether or not this is an appropriate code for the state, the potential benefits and costs of adopting this standard are considered in this report. Both qualitative and quantitative benefits and costs are assessed. Energy and economic impacts are estimated using the Building Loads Analysis and System Thermodynamics (BLAST) simulations combined with a Life-Cycle Cost (LCC) approach to assess corresponding economic costs and benefits. Tennessee currently has ASHRAE Standard 90A-1980 as the statewide voluntary/recommended commercial energy standard; however, it is up to the local jurisdiction to adopt this code. Because 90A-1980 is the recommended standard, many of the requirements of ASHRAE 90A-1980 were used as a baseline for simulations.
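
    The life-cycle cost logic in these assessments reduces to comparing discounted energy savings with the incremental first cost of building to the stricter standard. A generic sketch with made-up numbers (the report's actual inputs, discount rates, and study periods differ):

    ```python
    # Net present-value comparison for a code upgrade (illustrative only).

    def present_value(annual_saving, rate, years):
        """PV of a constant annual saving at a real discount rate."""
        return annual_saving * (1 - (1 + rate) ** -years) / rate

    def net_lcc_benefit(annual_energy_saving, discount_rate, years, extra_first_cost):
        """Positive result means the stricter code pays for itself."""
        return present_value(annual_energy_saving, discount_rate, years) - extra_first_cost

    # Hypothetical building: $1,200/yr energy savings, 5% real discount
    # rate, 25-year study period, $10,000 incremental construction cost.
    benefit = net_lcc_benefit(1200, 0.05, 25, 10000)
    print(round(benefit, 2))
    ```

    A positive net benefit under such an analysis is the quantitative basis for recommending adoption of the standard.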

  11. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    Science.gov (United States)

    Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.

    2016-12-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total
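
    The Sobol' method itself can be illustrated on a toy model whose first-order index is known analytically. The sketch below (illustrative only, unrelated to Noah-MP's parameters) applies a Saltelli-style estimator to Y = 4·X1 + 2·X2 with X1, X2 ~ U(0,1), for which the analytic index of X1 is 16/20 = 0.8:

    ```python
    # Monte Carlo estimate of a first-order Sobol' sensitivity index.
    import random

    random.seed(42)
    N = 20000
    f = lambda x1, x2: 4 * x1 + 2 * x2

    A = [(random.random(), random.random()) for _ in range(N)]
    B = [(random.random(), random.random()) for _ in range(N)]
    fA = [f(*a) for a in A]
    fB = [f(*b) for b in B]
    # AB1: sample matrix A with its first column replaced by B's.
    fAB1 = [f(b[0], a[1]) for a, b in zip(A, B)]

    mean = sum(fA) / N
    var = sum((y - mean) ** 2 for y in fA) / N
    # Saltelli (2010) estimator: S_i = mean(f(B) * (f(AB_i) - f(A))) / Var(Y)
    S1 = sum(yb * (yab - ya) for yb, ya, yab in zip(fB, fA, fAB1)) / N / var
    print(round(S1, 2))
    ```

    The estimate converges to 0.8; in the study, the same machinery is applied to Noah-MP's 71 standard and 139 hard-coded parameters to rank their influence on the output fluxes.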

  12. Real-time video codec using reversible wavelets

    Science.gov (United States)

    Huang, Gen Dow; Chiang, David J.; Huang, Yi-En; Cheng, Allen

    2003-04-01

    This paper describes the hardware implementation of a real-time video codec using reversible Wavelets. The TechSoft (TS) real-time video system employs Wavelet differencing for inter-frame compression, based on independent Embedded Block Coding with Optimized Truncation (EBCOT) of the embedded bit stream. This high-performance scalable image compression using EBCOT has been selected as part of the new ISO image compression standard, JPEG2000. The TS real-time video system can process up to 30 frames per second (fps) of the DVD format. In addition, audio signals are processed by the same design for cost reduction. Reversible Wavelets are used not only for cost reduction but also for lossless applications. Design and implementation issues of the TS real-time video system are discussed.
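
    Reversibility is what distinguishes these Wavelets: an integer-to-integer lifting scheme reconstructs the input exactly, enabling lossless coding. A generic Haar lifting sketch (a textbook construction, not the TS codec's actual filter):

    ```python
    # Integer-to-integer Haar wavelet via lifting: forward transform and
    # exact inverse, using arithmetic shifts for floor division.

    def haar_forward(x):
        """Pairwise integer Haar: returns (approximation, detail) lists."""
        approx, detail = [], []
        for a, b in zip(x[0::2], x[1::2]):
            d = b - a             # predict step: detail coefficient
            s = a + (d >> 1)      # update step: approximation coefficient
            approx.append(s)
            detail.append(d)
        return approx, detail

    def haar_inverse(approx, detail):
        x = []
        for s, d in zip(approx, detail):
            a = s - (d >> 1)      # undo update
            b = a + d             # undo predict
            x += [a, b]
        return x

    sig = [10, 12, 7, 3, 255, 0, 16, 17]
    s, d = haar_forward(sig)
    print(haar_inverse(s, d) == sig)
    ```

    Because every lifting step is inverted exactly in integer arithmetic, the round trip is lossless, which is the property that lets one codec serve both lossy and lossless applications.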

  13. Student Views on the Use of 2 Styles of Video-Enhanced Feedback Compared to Standard Lecture Feedback During Clinical Skills Training.

    Science.gov (United States)

    Nesbitt, Craig; Phillips, Alex W; Searle, Roger; Stansby, Gerard

    2015-01-01

    Feedback plays an important role in the learning process. However, often this may be delivered in an unstructured fashion that can detract from its potential benefit. Further, students may have different preferences in how feedback should be delivered, which may be influenced by which method they feel will lead to the most effective learning. The aim of this study was to evaluate student views on 3 different modes of feedback particularly in relation to the benefit each conferred. Undergraduate medical students participating in a surgical suturing study were asked to give feedback using a semi-structured questionnaire. Discrete questions using a Likert scale and open responses were solicited. Students received either standard lecture feedback (SLF), individualized video feedback (IVF), or enhanced unsupervised video feedback (UVF). Students had a strong preference for IVF over UVF or SLF. These responses correlated with their perception of how much each type of feedback improved their performance. However, there was no statistical difference in suturing skill improvement between IVF and UVF, which were both significantly better than SLF. Students have a strong preference for IVF. This relates to a perception that this will lead to the greatest level of skill improvement. However, an equal effect in improvement can be achieved by using less resource-demanding UVF. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  14. Edge-based intramode selection for depth-map coding in 3D-HEVC.

    Science.gov (United States)

    Park, Chun-Su

    2015-01-01

    The 3D video extension of High Efficiency Video Coding (3D-HEVC) is the state-of-the-art video coding standard for the compression of the multiview video plus depth format. In the 3D-HEVC design, new depth-modeling modes (DMMs) are utilized together with the existing intraprediction modes for depth intracoding. The DMMs can provide more accurate prediction signals and thereby achieve better compression efficiency. However, testing the DMMs in the intramode decision process causes a drastic increase in the computational complexity. In this paper, we propose a fast mode decision algorithm for depth intracoding. The proposed algorithm first performs a simple edge classification in the Hadamard transform domain. Then, based on the edge classification results, the proposed algorithm selectively omits unnecessary DMMs in the mode decision process. Experimental results demonstrate that the proposed algorithm speeds up the mode decision process by up to 37.65% with negligible loss of coding efficiency.
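
    The Hadamard-domain edge test can be sketched as follows (an illustrative classifier; the paper's actual rules and thresholds differ): transform a 4×4 block and compare the energies of the first row and first column of AC coefficients, which capture horizontal-frequency and vertical-frequency content respectively.

    ```python
    # Edge classification of a 4x4 block in the Hadamard domain.

    H = [[1,  1,  1,  1],
         [1,  1, -1, -1],
         [1, -1, -1,  1],
         [1, -1,  1, -1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    def hadamard2d(block):
        # This H is symmetric, so H * block * H^T == H * block * H.
        return matmul(matmul(H, block), H)

    def classify(block):
        c = hadamard2d(block)
        row_energy = sum(c[0][j] ** 2 for j in range(1, 4))  # horizontal frequencies
        col_energy = sum(c[i][0] ** 2 for i in range(1, 4))  # vertical frequencies
        if row_energy > 2 * col_energy:
            return "vertical-edge"    # strong variation across each row
        if col_energy > 2 * row_energy:
            return "horizontal-edge"  # strong variation down each column
        return "other"

    horizontal_edge = [[0, 0, 0, 0], [0, 0, 0, 0], [9, 9, 9, 9], [9, 9, 9, 9]]
    vertical_edge = [list(row) for row in zip(*horizontal_edge)]
    print(classify(horizontal_edge), classify(vertical_edge))
    ```

    Once the dominant edge direction is known, DMMs whose partition orientation cannot match it can be skipped, which is where the mode-decision speedup comes from.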

  15. Analysis of Potential Benefits and Costs of Adopting ASHRAE Standard 90.1-1999 as a Commercial Building Energy Code in Michigan

    Energy Technology Data Exchange (ETDEWEB)

    Cort, Katherine A.; Belzer, David B.; Halverson, Mark A.; Richman, Eric E.; Winiarski, David W.

    2002-09-30

    The state of Michigan is considering adopting ASHRAE 90.1-1999 as its commercial building energy code. In an effort to evaluate whether this is an appropriate code for the state, the potential benefits and costs of adopting the standard are considered. Both qualitative and quantitative benefits are assessed. The energy simulation and economic results suggest that adopting ASHRAE 90.1-1999 would provide positive net benefits to the state relative to the building and design requirements currently in place.

  16. Revision of Ethical Standard 3.04 of the "Ethical Principles of Psychologists and Code of Conduct" (2002, as amended 2010).

    Science.gov (United States)

    2016-12-01

    The following amendment to Ethical Standard 3.04 of the 2002 "Ethical Principles of Psychologists and Code of Conduct" as amended, 2010 (the Ethics Code; American Psychological Association, 2002, 2010) was adopted by the APA Council of Representatives at its August 2016 meeting. The amendment will become effective January 1, 2017. Following is an explanation of the change, a clean version of the revision, and a version indicating changes from the 2002 language (inserted text is in italics). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Video-Assisted Thoracoscopy is Superior to Standard Computer Tomography of the Thorax for Selection of Patients With Spontaneous Pneumothorax for Bullectomy

    Directory of Open Access Journals (Sweden)

    Julius P. Janssen

    1995-01-01

    Full Text Available Background: Spontaneous pneumothorax (SP) is a common disease of unknown cause often attributed to rupture of a subpleural bulla or bleb [in this study described as emphysema-like changes (ELC)]. Treatment of SP varies from conservative (rest) to aggressive (surgery). Patients with bullae >2 cm in diameter, found either by chest roentgenogram or during thoracoscopy, are often treated surgically (bullectomy and pleurectomy, or abrasion). Thoracoscopy is frequently used as the method of choice to select patients for surgery. With the recent introduction of video-assisted thoracoscopy (VAT), it is now possible to combine a diagnostic and a therapeutic procedure. However, to do this, general anesthesia and a fully equipped operating theater are needed. Proper selection of patients for this costly and time-consuming procedure is necessary. We evaluated whether standard computed tomography (CT) is appropriate for selection of patients with SP who are candidates for surgical intervention.

  18. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...
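
    A "predetermined scan order with possible scan positions" over transform coefficients is exemplified by the zigzag scan used in many block-transform codecs (shown here purely for illustration; this is not necessarily the order the patent claims):

    ```python
    # Zigzag scan order for an n x n coefficient block: traverse
    # anti-diagonals in increasing order, alternating direction, so
    # low-frequency coefficients come first.

    def zigzag_order(n):
        return sorted(((i, j) for i in range(n) for j in range(n)),
                      key=lambda p: (p[0] + p[1],
                                     p[0] if (p[0] + p[1]) % 2 else p[1]))

    print(zigzag_order(3))
    ```

    Ordering coefficients from low to high frequency concentrates significant values at the front of the scan, which is what makes truncating the stream at a scan position yield a graceful quality-scalable representation.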

  19. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    For example, humans `see` more white-to-black (luminance) detail than red, green, or blue color detail. Also, the eye is most sensitive to green colors. Taking advantage of this, both composite and component video allocate more bandwidth to the luma (Y`) signal than to the chroma signals. Y`601 is composed of 59% green`, 30% red`, and 11% blue` (the prime symbol denotes gamma-corrected colors). This luma signal also maintains compatibility with black-and-white television receivers. Component digital video converts R`G`B` signals (either from a camera or a computer) to a monochromatic brightness signal Y` (referred to here as luma to distinguish it from the CIE luminance linear-light quantity) and two color-difference signals, Cb and Cr. These last two are the blue and red signals with the luma component subtracted out. As you know, computer graphic images are composed of red, green, and blue elements defined in a linear color space. Color monitors do not display RGB linearly. A linear RGB color space image must be gamma corrected to be displayed properly on a CRT. Gamma correction, which is approximately a 0.45 power function, must also be employed before converting an RGB image to video color space. Gamma correction for video is defined in the international standard ITU-R Rec. BT.709-4. The gamma correction transform is the same for red, green, and blue. The color coding standard for component digital video and high-definition video symbolizes gamma-corrected luma by Y`, the blue color-difference signal by Cb (Cb = B` - Y`), and the red color-difference signal by Cr (Cr = R` - Y`). Component analog HDTV uses Y`PbPr. To reduce conversion errors, clip in R`G`B` space, not in Y`CbCr space. View video on a video monitor; computer monitor phosphors are wrong. Use a large word size (double precision) to avoid wrap-around, then round the results to values between 0 and 255. And finally, recall that multiplying two 8-bit numbers results in a 16-bit number, so values need to be clipped to 8 bits.
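
    The conversion described above can be sketched in a few lines. This is a minimal Python sketch, assuming linear RGB inputs in [0, 1], BT.601 luma weights, and the standard 8-bit studio ranges; the transfer function is approximated by a plain 0.45 power, omitting the standard's linear segment near black:

```python
def gamma_correct(linear):
    # Approximate gamma correction (~0.45 power) as described in the text;
    # the real BT.709 curve has a linear segment near black, omitted here.
    return linear ** 0.45

def rgb_to_ycbcr(r, g, b):
    """Convert linear RGB in [0, 1] to 8-bit Y'CbCr (BT.601 luma weights)."""
    rp, gp, bp = gamma_correct(r), gamma_correct(g), gamma_correct(b)
    y  = 0.299 * rp + 0.587 * gp + 0.114 * bp   # luma: ~30% R', 59% G', 11% B'
    cb = (bp - y) * 0.564                        # scaled B' - Y'
    cr = (rp - y) * 0.713                        # scaled R' - Y'
    # Scale to the standard 8-bit video ranges and clip, as the text advises.
    y8  = min(max(round(16 + 219 * y), 0), 255)
    cb8 = min(max(round(128 + 224 * cb), 0), 255)
    cr8 = min(max(round(128 + 224 * cr), 0), 255)
    return y8, cb8, cr8
```

    Black maps to (16, 128, 128) and white to (235, 128, 128), the nominal 8-bit studio-range extremes.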

  20. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  1. Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    Science.gov (United States)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2015-02-01

    The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has been previously evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and of the overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs; moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences that are sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of the users. The results from this study show the impact of different network impairments and different video segments on users' QoE, and further analysis and study may help in optimizing system performance.
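
    The MOS figures reported in such a study are simple sample means over the panel's ratings. A sketch of that computation follows; the 1-5 ACR scale and the normal-approximation 95% confidence interval are illustrative assumptions, not details taken from the paper:

```python
import math

def mean_opinion_score(ratings):
    """MOS with a normal-approximation 95% confidence interval.

    `ratings` are subjective scores, e.g. on the common 1-5 ACR scale;
    the exact scale and procedure used in a given test plan may differ.
    """
    n = len(ratings)
    mos = sum(ratings) / n
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)  # sample variance
    ci = 1.96 * math.sqrt(var / n)                        # half-width of 95% CI
    return mos, ci
```

    For example, eight ratings of [4, 5, 3, 4, 4, 5, 3, 4] yield a MOS of 4.0 with a confidence half-width of roughly 0.52.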

  2. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and representative pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  3. An implement of fast hiding data into H.264 bitstream based on intra-prediction coding

    Science.gov (United States)

    Hua, Cao; Jingli, Zhou; Shengsheng, Yu

    2005-10-01

    Digital watermarking is a technique which embeds an invisible signal, including owner identification and copy control information, into multimedia data such as images, audio and video for copyright protection. A blind, robust algorithm for rapidly hiding data in an H.264 video stream is proposed in this paper; copyright protection is achieved by embedding the robust watermark during intra-prediction encoding, which is characteristic of the H.264 standard. This scheme is fully compatible with the H.264 video coding standard and can extract the embedded data directly from the watermarked H.264 compressed video stream without using the original video. Experimental results demonstrate that this scheme is computationally efficient during watermark embedding and extraction, and that the embedded data do not significantly increase the bit-rate of the H.264 bit-stream. This algorithm is feasible for real-time system implementation.

  4. Research on key technologies in multiview video and interactive multiview video streaming

    OpenAIRE

    Xiu, Xiaoyu

    2011-01-01

    Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which makes it necessary to develop advanced technologies on multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a ratedistortion (RD) optimal manner by exploiting both temporal ...

  5. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    This paper introduces a mutually beneficial interplay between network coding and scalable video source coding in order to propose an energy-efficient video streaming approach accommodating multiple heterogeneous receivers, for which current solutions are either inefficient or insufficient. State...... support of multi-resolution video streaming....

  6. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques, and that is often used interchangeably with speech coding, is the term voice coding. This term is more generic in the sense that the
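
    A concrete instance of the waveform-coding approach mentioned above is G.711-style mu-law companding, which maps each sample through a logarithmic curve before uniform quantization so that quiet passages keep more precision. A simplified sketch, assuming samples normalized to [-1, 1] and plain 8-bit rounding rather than the standard's segmented encoding:

```python
import math

MU = 255  # mu-law constant used by North American G.711

def mu_law_encode(x):
    """Compand a sample in [-1, 1] and quantize to an 8-bit code."""
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return int(round((y + 1) / 2 * 255))

def mu_law_decode(code):
    """Expand an 8-bit code back to an approximate sample in [-1, 1]."""
    y = code / 255 * 2 - 1
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

    Round-tripping a sample through encode/decode reproduces it to within a few percent, with finer effective resolution near zero, which is the point of companding.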

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.

  9. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    Directory of Open Access Journals (Sweden)

    Rached Tourki

    2010-01-01

    Full Text Available In this paper, we propose a secure video codec based on the discrete wavelet transform (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with the DWT and encryption using AES are each well known; however, linking these two designs to achieve secure video coding is novel. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding for the JPEG part and the DWT for the JPEG2000 part. Furthermore, an improved motion estimation algorithm is proposed. Second, the encryption-decryption effects are achieved by the AES processor, which encrypts groups of LL bands. The prominent feature of this method is encryption of the LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of the LL bands. Our approach provides considerable levels of security (key size, partial encryption, mode of encryption) and has very limited adverse impact on compression efficiency. The proposed codec can provide up to 9 cipher schemes within a reasonable software cost. Latency, correlation, PSNR and compression rate results are analyzed and shown.
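
    The LL band that such a scheme encrypts is the output of the DWT's low-pass/low-pass path. A one-level Haar approximation band can be sketched as below; this is illustrative only, and neither the paper's particular wavelet nor the AES step itself is reproduced here:

```python
def haar_ll(image):
    """One level of a 2-D Haar DWT, returning only the LL band.

    Averages each 2x2 neighbourhood, i.e. low-pass filtering plus
    downsampling in both directions, which (up to normalization) is
    the LL approximation band of a Haar transform. `image` is a list
    of equal-length rows with even dimensions.
    """
    h, w = len(image), len(image[0])
    return [[(image[2 * r][2 * c] + image[2 * r][2 * c + 1] +
              image[2 * r + 1][2 * c] + image[2 * r + 1][2 * c + 1]) / 4
             for c in range(w // 2)]
            for r in range(h // 2)]
```

    In a partial-encryption design like the one described, only this quarter-size LL band would be passed through AES, leaving the detail bands in the clear.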

  10. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonably priced, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files also are outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  11. Dynamic infrared thermography (DIRT) for assessment of skin blood perfusion in cranioplasty: a proof of concept for qualitative comparison with the standard indocyanine green video angiography (ICGA).

    Science.gov (United States)

    Rathmann, P; Chalopin, C; Halama, D; Giri, P; Meixensberger, J; Lindner, D

    2017-11-15

    Complications in wound healing after neurosurgical operations often occur due to scarred dehiscence with disturbed skin blood perfusion. The standard imaging method for intraoperative skin perfusion assessment is the invasive indocyanine green video angiography (ICGA). The noninvasive dynamic infrared thermography (DIRT) is a promising alternative modality that was evaluated by comparison with ICGA. The study was carried out in two parts: (1) investigation of technical conditions for intraoperative use of DIRT for its comparison with ICGA, and (2) visual and quantitative comparison of both modalities in a proof of concept on nine patients. Time-temperature curves in DIRT and time-intensity curves in ICGA for defined regions of interest were analyzed. New perfusion parameters were defined in DIRT and compared with the usual perfusion parameters in ICGA. The visual observation of the image data in DIRT and ICGA showed that operation material, anatomical structures and skin perfusion are represented similarly in both modalities. Although the analysis of the curves and perfusion parameter values showed differences between patients, no complications were observed clinically. These differences were represented in DIRT and ICGA equivalently. DIRT has shown great potential for intraoperative use, with several advantages over ICGA. The technique is passive, contactless and noninvasive. The practicability of intraoperative recording of the same operation field section with ICGA and DIRT has been demonstrated. The promising results of this proof of concept provide a basis for a trial with a larger number of patients.

  12. Using a Standardized Video-Based Assessment in a University Teacher Education Program to Examine Preservice Teachers Knowledge Related to Effective Teaching

    Science.gov (United States)

    Wiens, Peter D.; Hessberg, Kevin; LoCasale-Crouch, Jennifer; DeCoster, Jamie

    2013-01-01

    The Video Assessment of Interactions and Learning (VAIL), a video-based assessment of teacher understanding of effective teaching strategies and behaviors, was administered to preservice teachers. Descriptive and regression analyses were conducted to examine trends among participants and identify predictors at the individual level and program…

  13. Content-based image and video compression

    Science.gov (United States)

    Du, Xun; Li, Honglin; Ahalt, Stanley C.

    2002-08-01

    The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content, and while compression is not the primary focus of MPEG-7, the descriptors defined by MPEG-7 can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that could be accomplished with MPEG-7 using descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or a video in order to ensure that the recipient is able to reconstruct the image or video to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer which minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression is able to achieve more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.
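
    The idea of replacing the quantizer with a content classifier can be illustrated with a toy coder in which a crude "classifier" (block variance, standing in for a real object classifier) selects the quantizer step, so that likely objects of interest are coded finely and background coarsely. The threshold and step sizes below are invented for illustration:

```python
def quantize_block(block, step):
    """Uniform scalar quantization of a flat list of sample values."""
    return [round(v / step) * step for v in block]

def content_based_quantize(blocks, threshold=100.0):
    """Toy content-based coder: block variance acts as the classifier
    that chooses the quantizer step. Thresholds and steps are invented
    values for illustration, not taken from any standard.
    """
    out = []
    for block in blocks:
        mean = sum(block) / len(block)
        var = sum((v - mean) ** 2 for v in block) / len(block)
        step = 4 if var > threshold else 32  # fine for "objects", coarse for background
        out.append(quantize_block(block, step))
    return out
```

    A flat background block collapses to a single coarse level, while a high-variance block survives nearly intact, mirroring the paper's goal of suppressing the background while leaving objects of interest nearly intact.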

  14. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity of TV images has increased with the growing application of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block-predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit-rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.
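
    The lossless graphics path rests on run-length coding, which exploits the long flat runs typical of rendered graphics. A minimal sketch of the run-length stage (the arithmetic-coding stage that would follow in such a system is omitted):

```python
def rle_encode(pixels):
    """Run-length encode a scanline as (value, run) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Invert rle_encode: expand (value, run) pairs back to samples."""
    return [v for v, n in runs for _ in range(n)]
```

    A scanline such as [7, 7, 7, 0, 0, 255] becomes three pairs, and decoding reproduces the input exactly, hence losslessness.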

  15. One Primer To Rule Them All: Universal Primer That Adds BBa_B0034 Ribosomal Binding Site to Any Coding Standard 10 BioBrick

    OpenAIRE

    Bryksin, Anton V.; Bachman, Haylee N.; Cooper, Spencer W.; Balavijayan, Tilak; Blackstone, Rachael M.; Du, Haoli; Jenkins, Jackson P.; Haynes, Casey L.; Siemer, Jessica L.; Fiore, Vincent F.; Barker, Thomas H.

    2014-01-01

    Here, we present a universal, simple, efficient, and reliable way to add small BioBrick parts to any BioBrick via PCR that is compatible with BioBrick assembly standard 10. As a proof of principle, we have designed a universal primer, rbs_B0034, that contains a ribosomal binding site (RBS; BBa_B0034) and that can be used in PCR to amplify any coding BioBrick that starts with ATG. We performed test PCRs with rbs_B0034 on 31 different targets and found it to be 93.6% efficient. Moreover, when s...

  16. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  17. Wireless medical ultrasound video transmission through noisy channels.

    Science.gov (United States)

    Panayides, A; Pattichis, M S; Pattichis, C S

    2008-01-01

    Recent advances in video compression, such as the current state-of-the-art H.264/AVC standard, in conjunction with the increasing bitrates available through new technologies like 3G and WiMax, have brought mobile health (m-Health) healthcare systems and services closer to reality. Despite this momentum towards m-Health systems, and especially e-Emergency systems, wireless channels remain error-prone, while the absence of objective quality metrics limits the ability to provide medical video of adequate diagnostic quality at a required bitrate. In this paper we investigate different encoding schemes and loss rates in medical ultrasound video transmission and draw conclusions involving efficiency and the trade-off between bitrate and quality, while highlighting the relationship linking video quality and the error ratio of corrupted P and B frames. More specifically, we investigate IPPP, IBPBP and IBBPBBP coding structures under packet loss rates of 2%, 5%, 8% and 10% and find that the last attains higher SNR ratings in all tested cases. A preliminary clinical evaluation shows that for SNR ratings higher than 30 dB, video diagnostic quality may be adequate, while above 30.5 dB the diagnostic information available in the reconstructed ultrasound video is close to that of the original.
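
    Quality thresholds like the 30 dB figure above are computed from the reconstruction error between original and decoded frames. A sketch of the peak-referenced variant, PSNR (the paper reports SNR, a closely related signal-referenced measure):

```python
import math

def psnr(original, degraded, peak=255):
    """PSNR in dB between two equally sized 8-bit frames (flattened lists)."""
    mse = sum((o - d) ** 2 for o, d in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: error-free reconstruction
    return 10 * math.log10(peak ** 2 / mse)
```

    A mean squared error of 1 on 8-bit samples corresponds to roughly 48.1 dB; packet losses that corrupt P and B frames drive the MSE up and the dB figure down.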

  18. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by the High Efficiency Video Coding (HEVC) scheme. The assessment is performed without access to the bitstream. The proposed analysis is based on the transform coefficients...

  19. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding.

    Directory of Open Access Journals (Sweden)

    Manoranjan Paul

    Full Text Available A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG2000) to "residual"-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of pixel-vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression.
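
    The band-prediction idea can be illustrated with a far simpler predictor than the paper's Gaussian-mixture model: a least-squares gain/offset fit of the current band to the previous one, whose residual a video coder would then compress. This substitution is ours, purely for illustration:

```python
def predict_band(prev_band, cur_band):
    """Least-squares gain/offset prediction of the current spectral band
    from the previous one (both flattened lists of pixel values).
    Returns the predicted band and the residual a coder would compress.
    """
    n = len(prev_band)
    mx = sum(prev_band) / n
    my = sum(cur_band) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(prev_band, cur_band))
    var = sum((x - mx) ** 2 for x in prev_band)
    gain = cov / var if var else 0.0
    offset = my - gain * mx
    predicted = [gain * x + offset for x in prev_band]
    residual = [y - p for y, p in zip(cur_band, predicted)]
    return predicted, residual
```

    When adjacent bands are strongly correlated, as the paper's analysis finds, the residual is near zero and far cheaper to encode than the raw band.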

  20. On metrics for objective and subjective evaluation of high dynamic range video

    Science.gov (United States)

    Minoo, Koohyar; Gu, Zhouye; Baylon, David; Luthra, Ajay

    2015-09-01

    In high dynamic range (HDR) video, it is possible to represent a wider range of intensities and contrasts compared to the current standard dynamic range (SDR) video. HDR video can simultaneously preserve details in very bright and very dark areas of a scene whereas these details become lost or washed out in SDR video. Because the perceived quality due to this increased fidelity may not fit the same model of perceived quality in the SDR video, it is not clear whether the objective metrics that have been widely used and studied for SDR visual experience are reasonably accurate for HDR cases, in terms of correlation with subjective measurement for HDR video quality. This paper investigates several objective metrics and their correlation to subjective quality for a variety of HDR video content. Results are given for the case of HDR content compressed at different bit rates. In addition to rating the relevance of each objective metric in terms of its correlation to the subjective measurements, comparisons are also presented to show how closely different objective metrics can predict the results obtained by subjective quality assessment in terms of coding efficiency provided by different coding processes.
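
    Correlation with subjective measurement is typically quantified with the Pearson coefficient between an objective metric's scores and the corresponding MOS values. A sketch of that computation (no example scores are included, since any would be made up):

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation between paired score lists, in [-1, 1].
    In metric validation, `xs` would be objective-metric scores and `ys`
    the subjective MOS values for the same test sequences.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    A coefficient near +1 means the metric tracks subjective quality well for the tested HDR content; values near zero indicate the SDR-era model of perceived quality is failing, which is exactly the question this paper investigates.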

  1. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-01-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character…

  2. Uruguay; Report on Observance of Standards and Codes-Data Module and the Response by the Authorities

    OpenAIRE

    International Monetary Fund

    2001-01-01

    The paper provides a summary of Uruguay's practices with respect to the coverage, periodicity, and timeliness of the Special Data Dissemination Standard (SDDS) data categories, and an assessment of the quality of national accounts, prices, fiscal, monetary and financial, and external sector statistics. Uruguay has made good progress recently in improving the dissemination of statistical information. The Internet pages of the Central Bank of Uruguay (BCU) and the National Institute of Statisti...

  3. Encoder power consumption comparison of Distributed Video Codec and H.264/AVC in low-complexity mode

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Forchhammer, Søren

    2010-01-01

    This paper presents a power consumption comparison of a novel approach to video compression based on distributed video coding (DVC) and the widely used video compression based on the H.264/AVC standard. We have used a low-complexity configuration for the H.264/AVC codec. It is well known that motion estimation...... (ME) and the CABAC entropy coder consume much power, so we eliminate ME from the codec and use CAVLC instead of CABAC. Some investigations show that low-complexity DVC outperforms other algorithms in terms of encoder-side energy consumption. However, estimations of power consumption for H.264/AVC and DVC

  4. Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    Science.gov (United States)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2014-05-01

    Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system: a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies of DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system by implementing a real-world test bed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an OpenHEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
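
    The client-side quality switching described above reduces, in its simplest form, to picking the highest-bitrate representation that fits within the measured throughput. A sketch of that per-segment heuristic; the safety margin is an invented parameter, not a value from the paper:

```python
def choose_representation(bitrates_kbps, measured_bandwidth_kbps, safety=0.8):
    """Pick the highest representation whose bitrate fits within a safety
    margin of the measured throughput; fall back to the lowest rung when
    even that does not fit. This is the basic pull-based adaptation step
    a DASH client applies before fetching each segment.
    """
    budget = measured_bandwidth_kbps * safety
    feasible = [b for b in sorted(bitrates_kbps) if b <= budget]
    return feasible[-1] if feasible else min(bitrates_kbps)
```

    With a bitrate ladder of [500, 1500, 3000, 6000] kbps and 4000 kbps of measured bandwidth, the client fetches the 3000 kbps representation; when bandwidth collapses to 400 kbps it falls back to the 500 kbps rung, which is what drives the quality fluctuations such a test bed measures.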

  5. On the Development and Optimization of HEVC Video Decoders Using High-Level Dataflow Modeling

    OpenAIRE

    Jerbi, Khaled; Yviquel, Hervé; Sanchez, Alexandre; Renzi, Daniele; De Saint Jorre, Damien; Alberti, Claudio; Mattavelli, Marco; Raulet, Mickael

    2017-01-01

With the emergence of the High Efficiency Video Coding (HEVC) standard, a dataflow description of the decoder was developed as part of the MPEG-B standard. This dataflow description initially delivered modest frame-rate results, which led us to propose methodologies to improve the performance. In this paper, we introduce architectural improvements that expose more parallelism using YUV- and frame-based parallel decoding. We also present platform optimizations based on the use...

  6. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. It first analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties that are inherent to video games, in order to find the reason why the cultural elite considers video games as i...

  7. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical results for a video encoder based on the H.264/AVC standard are also given.

  8. Low-latency video transmission over high-speed WPANs based on low-power video compression

    OpenAIRE

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical results for a video encoder based on the H.264/AVC standard are also given.
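The MINMAX quality criterion mentioned in records 7 and 8 can be illustrated with a toy selection problem: choose one rate-distortion operating point per frame so that the worst-case frame distortion is minimized under a total bit budget. The brute-force search and the operating points below are a hedged sketch of the idea, not the authors' algorithm.

```python
# MINMAX-style selection sketch: minimize the maximum per-frame distortion
# subject to a total rate budget, by scanning candidate distortion ceilings
# from best to worst and taking the first feasible one.

def minmax_select(options_per_frame, budget):
    """options_per_frame: list (one entry per frame) of (rate, distortion)
    operating points. Returns the chosen points, or None if infeasible."""
    thresholds = sorted({d for opts in options_per_frame for _, d in opts})
    for d_max in thresholds:
        choice, total, feasible = [], 0, True
        for opts in options_per_frame:
            ok = [(r, d) for r, d in opts if d <= d_max]
            if not ok:
                feasible = False
                break
            best = min(ok)          # cheapest point meeting the ceiling
            choice.append(best)
            total += best[0]
        if feasible and total <= budget:
            return choice
    return None

# Two frames, each with two hypothetical (rate, distortion) points:
print(minmax_select([[(10, 5), (20, 2)], [(8, 4), (15, 1)]], budget=30))
```

With a budget of 30 bits the ceiling 2 is infeasible (would cost 35), so the sketch settles on a worst-case distortion of 4, spending 28 bits. A latency-constrained encoder applies the same idea per group of frames so no single frame's quality collapses.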

  9. Lesion Explorer: a video-guided, standardized protocol for accurate and reliable MRI-derived volumetrics in Alzheimer's disease and normal elderly.

    Science.gov (United States)

    Ramirez, Joel; Scott, Christopher J M; McNeely, Alicia A; Berezuk, Courtney; Gao, Fuqiang; Szilagyi, Gregory M; Black, Sandra E

    2014-04-14

    Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. Leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests(1,2). However, LE's accuracy and reliability is highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.

  10. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  11. Video Malware - Behavioral Analysis

    Directory of Open Access Journals (Sweden)

    Rajdeepsinh Dodia

    2015-04-01

Full Text Available Abstract The number of malware attacks exploiting the Internet is increasing day by day and has become a serious threat. Recent malware spreads through media players, embedded in video clips that are funny in nature so as to lure end users. Once executed and installed, the malware's behaviour is entirely in the malware author's hands. The malware propagates through the Internet, through USB drives, and through the sharing of files and folders, keeping its presence concealed. The sample analysed here, a funny video named after a film celebrity, was collected from the laptop of a terror outfit organization. It runs in the background and contains malicious code that steals sensitive user information, such as banking credentials (username and password), and sends it to a remote command-and-control host. The stolen data is directed to an email address encapsulated in the malicious code. The malware can also spread through USB and other devices. In summary, the analysis reveals the presence of malicious code in an executable video file and characterizes its behaviour.

  12. Controlled-Release Oxycodone as "Gold Standard" for Postoperative Pain Therapy in Patients Undergoing Video-Assisted Thoracic Surgery or Thoracoscopy: A Retrospective Evaluation of 788 Cases.

    Science.gov (United States)

    Kampe, Sandra; Weinreich, Gerhard; Darr, Christopher; Stamatis, Georgios; Hachenberg, Thomas

    2015-09-01

To assess the clinical efficacy of controlled-release oxycodone for postoperative analgesia after video-assisted thoracic surgery (VATS) or thoracoscopy. Pain therapy is standardized in our thoracic center throughout the complete postoperative stay. Immediately after surgery, patients receive a standardized oral analgesic protocol with controlled-release oxycodone (Oxy Group) or oxycodone with naloxone (Targin Group) plus a nonopioid every 6 h. We switched the opioid protocol from controlled-release oxycodone to Targin in January 2012. All patients are visited daily by a pain specialist throughout the whole stay. Data of 788 patients undergoing VATS (n = 367) or thoracoscopy (n = 421) from January 2011 until March 2013 were analyzed. In VATS, patients with Targin had higher pain scores at rest (p < 0.02) and on coughing (p < 0.001) than patients with oxycodone alone, and more patients with Targin were discharged with an oral opioid dose than patients with oxycodone alone (p < 0.001). No differences in pain scores on POD 5 and 6, or in length of hospital stay, incidence of nausea, time to first dejection or opioid dose after discharge were found between controlled-release oxycodone and Targin. After conventional thoracoscopy, 209 patients received controlled-release oxycodone and 212 Targin. Patients with Targin had higher pain scores at rest (p < 0.004) and on coughing (p < 0.01) than patients with oxycodone alone, and more patients with Targin were discharged with an oral opioid dose than patients with oxycodone alone (p < 0.004). There were no differences in pain scores on POD 5 and 6, or in length of hospital stay, incidence of nausea, time to first dejection or opioid dose after discharge. Oral opioid analgesia with controlled-release oxycodone is an effective postoperative regimen after VATS and thoracoscopies. Our retrospective data indicate that Targin might be a less effective analgesic than oxycodone after VATS and thoracoscopies with no

  13. Data Partitioning Technique for Improved Video Prioritization

    Directory of Open Access Journals (Sweden)

    Ismail Amin Ali

    2017-07-01

Full Text Available A compressed video bitstream can be partitioned according to the coding priority of the data, allowing prioritized wireless communication or selective dropping in a congested channel; in the H.264/Advanced Video Coding (AVC) codec this is known as data partitioning. This paper introduces a further sub-partition of one of the H.264/AVC codec’s three data partitions. Results show a 5 dB improvement in Peak Signal-to-Noise Ratio (PSNR) through this innovation. In particular, the data partition containing intra-coded residuals is sub-divided into data from those macroblocks (MBs) naturally intra-coded and those MBs forcibly inserted for non-periodic intra-refresh. Interactive user-to-user video streaming can benefit, since in that setting HTTP adaptive streaming is inappropriate and the High Efficiency Video Coding (HEVC) codec is too energy demanding.
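The prioritization idea in this record can be sketched as a classification step: H.264/AVC data partitioning splits a slice into partitions A (headers and motion vectors), B (intra-coded residuals) and C (inter-coded residuals), and the paper further splits B by whether a macroblock was naturally intra-coded or forcibly intra-refreshed. The dictionary layout and field names below are illustrative assumptions, not the codec's actual syntax.

```python
# Sketch of priority classification for data partitioning with the paper's
# proposed sub-partition of partition B. Macroblock records here are
# hypothetical dicts, not real H.264/AVC syntax elements.

def partition_macroblocks(mbs):
    parts = {"A": [], "B_natural": [], "B_forced": [], "C": []}
    for mb in mbs:
        parts["A"].append(mb["header"])  # headers/MVs: highest priority
        if mb["mode"] == "intra":
            key = "B_forced" if mb.get("forced_refresh") else "B_natural"
            parts[key].append(mb["residual"])
        else:
            parts["C"].append(mb["residual"])
    return parts

mbs = [
    {"header": "h0", "mode": "intra", "forced_refresh": False, "residual": "r0"},
    {"header": "h1", "mode": "inter", "residual": "r1"},
    {"header": "h2", "mode": "intra", "forced_refresh": True, "residual": "r2"},
]
print(partition_macroblocks(mbs))
```

Under congestion, a scheduler could then drop the lowest-priority bucket first (e.g. the forced-refresh residuals) while protecting partition A, which is the selective-dropping benefit the abstract describes.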

  14. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.

  15. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    We investigate lossless coding of video using predictive coding andmotion compensation. The methods incorporate state-of-the-art lossless techniques such ascontext based prediction and bias cancellation, Golomb coding, high resolution motion field estimation,3d-dimensional predictors, prediction ...
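Golomb coding, one of the lossless building blocks this abstract names, is easy to show concretely in its Rice (power-of-two) form: a non-negative residual n is split by a parameter m = 2**k into a unary-coded quotient and a k-bit binary remainder. A minimal sketch (not the authors' implementation, which combines this with context modeling and motion compensation):

```python
# Golomb-Rice coding sketch: small residuals get short codes, which suits
# the near-zero prediction residuals produced by motion compensation.

def rice_encode(n, k):
    """Encode non-negative integer n with Rice parameter k (m = 2**k)."""
    q, r = n >> k, n & ((1 << k) - 1)
    out = "1" * q + "0"                      # unary quotient, 0-terminated
    if k:
        out += format(r, "0{}b".format(k))   # k-bit binary remainder
    return out

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":                    # read the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

print(rice_encode(9, 2))   # q=2, r=1 -> "11001"
```

In practice k is chosen adaptively from the local residual statistics; signed residuals are first mapped to non-negative integers (e.g. by zigzag mapping).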

  16. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show...

  17. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from an NR image quality assessment method used frame by frame. We also present methods to identify the video coding standard and estimate the video coding parameters for MPEG-2 and H.264/AVC, which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is performed without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...
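One family of pixel-domain features such no-reference methods rely on is blockiness: block-based codecs like MPEG-2 and H.264/AVC quantize 8x8 (or similar) blocks independently, so coarse quantization leaves visible discontinuities at block boundaries. The metric below is a generic illustration of that idea, not the feature set from this paper.

```python
# Toy blockiness feature: mean absolute luma difference across vertical
# 8x8 block boundaries. Rises with coarse quantization in block codecs.

def blockiness(frame, block=8):
    """frame: 2D list of luma values (rows of equal length)."""
    diffs = [abs(row[x] - row[x - 1])
             for row in frame
             for x in range(block, len(row), block)]
    return sum(diffs) / len(diffs) if diffs else 0.0

smooth = [[10] * 16 for _ in range(4)]          # no boundary discontinuity
blocky = [[0] * 8 + [100] * 8 for _ in range(4)]  # sharp edge at column 8
print(blockiness(smooth), blockiness(blocky))
```

A real NR method would normalize this against the in-block gradient (to avoid penalizing genuine edges) and pool the per-frame values over time before regression against subjective scores.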

  18. [Standards for treatment in forensic committment according to § 63 and § 64 of the German criminal code : Interdisciplinary task force of the DGPPN].

    Science.gov (United States)

    Müller, J L; Saimeh, N; Briken, P; Eucker, S; Hoffmann, K; Koller, M; Wolf, T; Dudeck, M; Hartl, C; Jakovljevic, A-K; Klein, V; Knecht, G; Müller-Isberner, R; Muysers, J; Schiltz, K; Seifert, D; Simon, A; Steinböck, H; Stuckmann, W; Weissbeck, W; Wiesemann, C; Zeidler, R

    2017-08-01

    People who have been convicted of a crime due to a severe mental disorder and continue to be dangerous as a result of this disorder may be placed in a forensic psychiatric facility for improvement and safeguarding according to § 63 and § 64 of the German Criminal Code (StGB). In Germany, approximately 9000 patients are treated in clinics for forensic psychiatry and psychotherapy on the basis of § 63 of the StGB and in withdrawal centers on the basis of § 64 StGB. The laws for treatment of patients in forensic commitment are passed by the individual States, with the result that even the basic conditions differ in the individual States. While minimum requirements have already been published for the preparation of expert opinions on liability and legal prognosis, consensus standards for the treatment in forensic psychiatry have not yet been published. Against this background, in 2014 the German Society for Psychiatry and Psychotherapy, Psychosomatics and Neurology (DGPPN) commissioned an interdisciplinary task force to develop professional standards for treatment in forensic psychiatry. Legal, ethical, structural, therapeutic and prognostic standards for forensic psychiatric treatment should be described according to the current state of science. After 3 years of work the results of the interdisciplinary working group were presented in early 2017 and approved by the board of the DGPPN. The standards for the treatment in the forensic psychiatric commitment aim to initiate a discussion in order to standardize the treatment conditions and to establish evidence-based recommendations.

  19. LASIP-III, a generalized processor for standard interface files. [For creating binary files from BCD input data and printing binary file data in BCD format (devised for fast reactor physics codes)

    Energy Technology Data Exchange (ETDEWEB)

    Bosler, G.E.; O' Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field format, BCD data into well-defined binary files and providing for printing and punching data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables.

  20. The Henry Ford Production System: reduction of surgical pathology in-process misidentification defects by bar code-specified work process standardization.

    Science.gov (United States)

    Zarbo, Richard J; Tuthill, J Mark; D'Angelo, Rita; Varney, Ruan; Mahar, Beverly; Neuman, Cheryl; Ormsby, Adrian

    2009-04-01

    Misidentification defects are a potential patient safety issue in medicine, including in the surgical pathology laboratory. In addressing the Joint Commission's national patient safety goal of accurate patient and specimen identification, we focused our lens internally on our own laboratory processes, with measurement tools designed to identify potential misidentification defects and their root causes. Based on this knowledge, aligned with our lean work culture in the Henry Ford Production System, we redesigned our surgical pathology laboratory workflow with simplified connections and pathways reinforced by a bar code technology innovation to specify and standardize work processes. We also adopted just-in-time prestain slide labeling with solvent-impervious, bar-coded slide labels at the microtome station, eliminating the loop-back pathway of poststain, batch slide matching, and labeling with adhesive paper labels. These changes have enabled us to dramatically reduce the overall misidentification case rate by approximately 62% with an approximate 95% reduction in the more common histologic slide misidentification defects while increasing technical throughput at the histology microtomy station by 125%.

  1. 4K Video-Laryngoscopy and Video-Stroboscopy: Preliminary Findings.

    Science.gov (United States)

    Woo, Peak

    2016-01-01

    4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high definition (HD) video. Magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3 chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format. The video was played back on a 4K monitor and compared to standard video. Pathological conditions included: polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky. The examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video. Stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had diagnosis change after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K quality recordings. Both continuous and stroboscopic light can be used for visualization. Its clinical utility is feasible, but usefulness must be proven. © The Author(s) 2015.
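The resolution claim in this abstract is simple arithmetic worth making explicit: 3840 x 2160 carries exactly four times the pixels of 1920 x 1080, which is why a 2x digital magnification of a 4K frame can still fill an HD view without loss of resolution.

```python
# Pixel-count comparison behind "4 times the resolution of 1080 HD".
uhd_pixels = 3840 * 2160      # 4K UHD
hd_pixels = 1920 * 1080       # 1080p HD
print(uhd_pixels, hd_pixels, uhd_pixels // hd_pixels)  # 8294400 2073600 4
```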

  2. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  3. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model.

    Science.gov (United States)

    Byrne, Michael F; Chapados, Nicolas; Soudan, Florian; Oertel, Clemens; Linares Pérez, Milagros; Kelly, Raymond; Iqbal, Nadeem; Chandelier, Florent; Rex, Douglas K

    2017-10-24

    In general, academic but not community endoscopists have demonstrated adequate endoscopic differentiation accuracy to make the 'resect and discard' paradigm for diminutive colorectal polyps workable. Computer analysis of video could potentially eliminate the obstacle of interobserver variability in endoscopic polyp interpretation and enable widespread acceptance of 'resect and discard'. We developed an artificial intelligence (AI) model for real-time assessment of endoscopic video images of colorectal polyps. A deep convolutional neural network model was used. Only narrow band imaging video frames were used, split equally between relevant multiclasses. Unaltered videos from routine exams not specifically designed or adapted for AI classification were used to train and validate the model. The model was tested on a separate series of 125 videos of consecutively encountered diminutive polyps that were proven to be adenomas or hyperplastic polyps. The AI model works with a confidence mechanism and did not generate sufficient confidence to predict the histology of 19 polyps in the test set, representing 15% of the polyps. For the remaining 106 diminutive polyps, the accuracy of the model was 94% (95% CI 86% to 97%), the sensitivity for identification of adenomas was 98% (95% CI 92% to 100%), specificity was 83% (95% CI 67% to 93%), negative predictive value 97% and positive predictive value 90%. An AI model trained on endoscopic video can differentiate diminutive adenomas from hyperplastic polyps with high accuracy. Additional study of this programme in a live patient clinical trial setting to address resect and discard is planned. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
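The reported figures (sensitivity, specificity, NPV, PPV) all derive from one confusion matrix over the confidently classified polyps. As a hedged illustration of how they relate, the counts below are hypothetical and chosen only to give numbers of similar magnitude; they are not the study's data.

```python
# Standard diagnostic metrics from a 2x2 confusion matrix
# (tp/fp/tn/fn counts below are hypothetical, not from the trial).

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = diagnostic_metrics(tp=49, fp=5, tn=40, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```

Note how a high NPV with a more modest specificity is exactly the profile needed for 'resect and discard': a hyperplastic call must be trustworthy before a polyp is left out of pathology review.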

  4. The use of portable video media vs standard verbal communication in the urological consent process: a multicentre, randomised controlled, crossover trial.

    Science.gov (United States)

    Winter, Matthew; Kam, Jonathan; Nalavenkata, Sunny; Hardy, Ellen; Handmer, Marcus; Ainsworth, Hannah; Lee, Wai Gin; Louie-Johnsun, Mark

    2016-11-01

To determine if portable video media (PVM) improves patients' knowledge and satisfaction acquired during the consent process for cystoscopy and insertion of a ureteric stent compared to standard verbal communication (SVC), as informed consent is a crucial component of patient care and PVM is an emerging technology that may help improve the consent process. In this multi-centre randomised controlled crossover trial, patients requiring cystoscopy and stent insertion were recruited from two major teaching hospitals in Australia over a 15-month period (July 2014-December 2015). Patient information delivery was via PVM and SVC. The PVM consisted of an audio-visual presentation with cartoon animation presented on an iPad. Patient satisfaction was assessed using the validated Client Satisfaction Questionnaire 8 (CSQ-8; maximum score 32) and knowledge was tested using a true/false questionnaire (maximum score 28). Questionnaires were completed after the first intervention and after crossover. Scores were analysed using the independent samples t-test and the Wilcoxon signed-rank test for the crossover analysis. In all, 88 patients were recruited. A significant 3.1-point (15.5%) increase in understanding was demonstrated favouring the use of PVM (P < 0.001). There was no difference in patient satisfaction between the groups as judged by the CSQ-8. A significant 3.6-point (17.8%) increase in knowledge score was seen when the SVC group was crossed over to the PVM arm. A total of 80.7% of patients preferred PVM and 19.3% preferred SVC. Limitations include the lack of a validated questionnaire to test knowledge acquired from the interventions. This study demonstrates patients' preference for PVM in the urological consent process for cystoscopy and ureteric stent insertion. PVM improves patients' understanding compared with SVC and is a more effective means of content delivery to patients in terms of overall preference and knowledge gained during the consent process. © 2016 The

  5. Video Game Based Learning in English Grammar

    Science.gov (United States)

    Singaravelu, G.

    2008-01-01

    The study enlightens the effectiveness of Video Game Based Learning in English Grammar at standard VI. A Video Game package was prepared and it consisted of self-learning activities in play way manner which attracted the minds of the young learners. Chief objective: Find out the effectiveness of Video-Game based learning in English grammar.…

  6. The revised APTA code of ethics for the physical therapist and standards of ethical conduct for the physical therapist assistant: theory, purpose, process, and significance.

    Science.gov (United States)

    Swisher, Laura Lee; Hiller, Peggy

    2010-05-01

    In June 2009, the House of Delegates (HOD) of the American Physical Therapy Association (APTA) passed a major revision of the APTA Code of Ethics for physical therapists and the Standards of Ethical Conduct for the Physical Therapist Assistant. The revised documents will be effective July 1, 2010. The purposes of this article are: (1) to provide a historical, professional, and theoretical context for this important revision; (2) to describe the 4-year revision process; (3) to examine major features of the documents; and (4) to discuss the significance of the revisions from the perspective of the maturation of physical therapy as a doctoring profession. PROCESS OF REVISION: The process for revision is delineated within the context of history and the Bylaws of APTA. FORMAT, STRUCTURE, AND CONTENT OF REVISED CORE ETHICS DOCUMENTS: The revised documents represent a significant change in format, level of detail, and scope of application. Previous APTA Codes of Ethics and Standards of Ethical Conduct for the Physical Therapist Assistant have delineated very broad general principles, with specific obligations spelled out in the Ethics and Judicial Committee's Guide for Professional Conduct and Guide for Conduct of the Physical Therapist Assistant. In contrast to the current documents, the revised documents address all 5 roles of the physical therapist, delineate ethical obligations in organizational and business contexts, and align with the tenets of Vision 2020. The significance of this revision is discussed within historical parameters, the implications for physical therapists and physical therapist assistants, the maturation of the profession, societal accountability and moral community, potential regulatory implications, and the inclusive and deliberative process of moral dialogue by which changes were developed, revised, and approved.

  7. A video to improve patient and surrogate understanding of cardiopulmonary resuscitation choices in the ICU: a randomized controlled trial.

    Science.gov (United States)

    Wilson, Michael E; Krupa, Artur; Hinds, Richard F; Litell, John M; Swetz, Keith M; Akhoundi, Abbasali; Kashyap, Rahul; Gajic, Ognjen; Kashani, Kianoush

    2015-03-01

    To determine if a video depicting cardiopulmonary resuscitation and resuscitation preference options would improve knowledge and decision making among patients and surrogates in the ICU. Randomized, unblinded trial. Single medical ICU. Patients and surrogate decision makers in the ICU. The usual care group received a standard pamphlet about cardiopulmonary resuscitation and cardiopulmonary resuscitation preference options plus routine code status discussions with clinicians. The video group received usual care plus an 8-minute video that depicted cardiopulmonary resuscitation, showed a simulated hospital code, and explained resuscitation preference options. One hundred three patients and surrogates were randomized to usual care. One hundred five patients and surrogates were randomized to video plus usual care. Median total knowledge scores (0-15 points possible for correct answers) in the video group were 13 compared with 10 in the usual care group, p value of less than 0.0001. Video group participants had higher rates of understanding the purpose of cardiopulmonary resuscitation and resuscitation options and terminology and could correctly name components of cardiopulmonary resuscitation. No statistically significant differences in documented resuscitation preferences following the interventions were found between the two groups, although the trial was underpowered to detect such differences. A majority of participants felt that the video was helpful in cardiopulmonary resuscitation decision making (98%) and would recommend the video to others (99%). A video depicting cardiopulmonary resuscitation and explaining resuscitation preference options was associated with improved knowledge of in-hospital cardiopulmonary resuscitation options and cardiopulmonary resuscitation terminology among patients and surrogate decision makers in the ICU, compared with receiving a pamphlet on cardiopulmonary resuscitation. 
Patients and surrogates found the video helpful in decision making.

  8. Raising the standard: changes to the Australian Code of Good Manufacturing Practice (cGMP) for human blood and blood components, human tissues and human cellular therapy products.

    Science.gov (United States)

    Wright, Craig; Velickovic, Zlatibor; Brown, Ross; Larsen, Stephen; Macpherson, Janet L; Gibson, John; Rasko, John E J

    2014-04-01

    In Australia, manufacture of blood, tissues and biologicals must comply with the federal laws and meet the requirements of the Therapeutic Goods Administration (TGA) Manufacturing Principles as outlined in the current Code of Good Manufacturing Practice (cGMP). The Therapeutic Goods Order (TGO) No. 88 was announced concurrently with the new cGMP, as a new standard for therapeutic goods. This order constitutes a minimum standard for human blood, tissues and cellular therapeutic goods aimed at minimising the risk of infectious disease transmission. The order sets out specific requirements relating to donor selection, donor testing and minimisation of infectious disease transmission from collection and manufacture of these products. The Therapeutic Goods Manufacturing Principles Determination No. 1 of 2013 references the human blood and blood components, human tissues and human cellular therapy products 2013 (2013 cGMP). The name change for the 2013 cGMP has allowed a broadening of the scope of products to include human cellular therapy products. It is difficult to directly compare versions of the code as deletion of some clauses has not changed the requirements to be met, as they are found elsewhere amongst the various guidelines provided. Many sections that were specific for blood and blood components are now less prescriptive and apply to a wider range of cellular therapies, but the general overall intent remains the same. Use of 'should' throughout the document instead of 'must' allows flexibility for alternative processes, but these systems will still require justification by relevant logical argument and validation data to be acceptable to TGA. The cGMP has seemingly evolved so that specific issues identified at audit over the last decade have now been formalised in the new version. There is a notable risk management approach applied to most areas that refer to process justification and decision making. 
These requirements commenced on 31 May 2013 and a 12 month

  9. Scan converting video tape recorder

    Science.gov (United States)

    Holt, N. I. (Inventor)

    1971-01-01

    A video tape recorder is disclosed of sufficient bandwidth to record monochrome television signals or standard NTSC field sequential color at current European and American standards. The system includes scan conversion means for instantaneous playback at scanning standards different from those at which the recording is being made.

  10. One primer to rule them all: universal primer that adds BBa_B0034 ribosomal binding site to any coding standard 10 BioBrick.

    Science.gov (United States)

    Bryksin, Anton V; Bachman, Haylee N; Cooper, Spencer W; Balavijayan, Tilak; Blackstone, Rachael M; Du, Haoli; Jenkins, Jackson P; Haynes, Casey L; Siemer, Jessica L; Fiore, Vincent F; Barker, Thomas H

    2014-12-19

    Here, we present a universal, simple, efficient, and reliable way to add small BioBrick parts to any BioBrick via PCR that is compatible with BioBrick assembly standard 10. As a proof of principle, we have designed a universal primer, rbs_B0034, that contains a ribosomal binding site (RBS; BBa_B0034) and that can be used in PCR to amplify any coding BioBrick that starts with ATG. We performed test PCRs with rbs_B0034 on 31 different targets and found it to be 93.6% efficient. Moreover, when supplemented with a complementary primer, addition of RBS can be accomplished via whole plasmid site-directed mutagenesis, thus reducing the time required for further assembly of composite parts. The described method brings simplicity to the addition of small parts, such as regulatory elements to existing BioBricks. The final product of the PCR assembly is indistinguishable from the standard or 3A BioBrick assembly.

  11. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera-control...

  12. Real-time data compression of broadcast video signals

    Science.gov (United States)

    Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)

    1991-01-01

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
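
    The DPCM pipeline summarised above can be sketched as follows. This is illustrative Python, not the patented implementation: the level table is a made-up example of a nonuniform quantizer (fine steps near zero, coarse steps for large prediction errors), and the Huffman stage is represented only by the index stream it would consume.

```python
# Non-adaptive previous-sample predictor feeding a nonuniform quantizer;
# the quantizer indices would then be entropy-coded (Huffman stage omitted).
LEVELS = [-48, -24, -10, -3, 0, 3, 10, 24, 48]  # illustrative level table

def quantize(error):
    """Pick the reconstruction level closest to the prediction error."""
    return min(LEVELS, key=lambda lev: abs(lev - error))

def dpcm_encode(samples):
    recon = 0  # previous *reconstructed* sample, mirroring the decoder
    indices = []
    for s in samples:
        level = quantize(s - recon)
        indices.append(LEVELS.index(level))
        recon = max(0, min(255, recon + level))  # clip to 8-bit range
    return indices

def dpcm_decode(indices):
    recon, out = 0, []
    for i in indices:
        recon = max(0, min(255, recon + LEVELS[i]))
        out.append(recon)
    return out
```

    Because the encoder predicts from the reconstructed value rather than the original, encoder and decoder stay in lockstep and quantization error does not accumulate; in the system above, the index stream would then pass to the multi-level Huffman coder.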

  13. Verification testing of the compression performance of the HEVC screen content coding extensions

    Science.gov (United States)

    Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng

    2017-09-01

    This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Lowdelay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
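
    The Bjøntegaard-delta (BD) bit-rate metric used above can be approximated in a few lines. The standard method fits a cubic polynomial through four (rate, PSNR) points; this sketch substitutes piecewise-linear interpolation of log10(rate) versus PSNR, which keeps the code short and agrees with the cubic fit whenever the log-rate gap between the curves is constant.

```python
import math

def bd_rate_linear(anchor, test):
    """Simplified BD-rate: anchor and test are lists of (rate_kbps, psnr_db)
    points. Returns the average percent rate change of `test` against
    `anchor` over the overlapping PSNR range (negative = savings)."""
    def curve(points):
        pts = sorted((p, math.log10(r)) for r, p in points)
        def lograte(psnr):
            for (p0, l0), (p1, l1) in zip(pts, pts[1:]):
                if p0 <= psnr <= p1:
                    t = (psnr - p0) / (p1 - p0)
                    return l0 + t * (l1 - l0)
            raise ValueError("PSNR outside curve range")
        return lograte, pts[0][0], pts[-1][0]

    f_a, lo_a, hi_a = curve(anchor)
    f_t, lo_t, hi_t = curve(test)
    lo, hi = max(lo_a, lo_t), min(hi_a, hi_t)
    n = 100  # trapezoidal average of the log-rate gap
    gaps = [f_t(lo + (hi - lo) * i / n) - f_a(lo + (hi - lo) * i / n)
            for i in range(n + 1)]
    mean_gap = (sum(gaps) - 0.5 * (gaps[0] + gaps[-1])) / n
    return (10 ** mean_gap - 1) * 100
```

    A codec that needs half the rate of the anchor at every quality level reports -50% under this metric.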

  14. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  15. SDC: Scalable description coding for adaptive streaming media

    OpenAIRE

    Quinlan, Jason J.; Zahran, Ahmed H.; Sreenan, Cormac J.

    2012-01-01

    Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique to compromise the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-percei...

  16. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise in using (digital) video for research communication, not least online. Video has long been used in research for data collection and for communicating research. With digitisation and the internet, however, new opportunities and challenges have emerged for conveying and distributing research results to different target groups via video. At the same time, classic methodological problems, such as the researcher's positioning in relation to what is studied, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented, with tools for planning...

  17. Using MPEG DASH SRD for zoomable and navigable video

    NARCIS (Netherlands)

    D'Acunto, L.; Berg, J. van den; Thomas, E.; Niamut, O.A.

    2016-01-01

    This paper presents a video streaming client implementation that makes use of the Spatial Relationship Description (SRD) feature of the MPEG-DASH standard, to provide a zoomable and navigable video to an end user. SRD allows a video streaming client to request spatial subparts of a particular video

  18. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  19. Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2013-02-01

    The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next-generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) for the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC-encoded video stream to meet bandwidth limitations in a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one NAL unit per RTP packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header. Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and the clean random access (CRA) pictures were given the
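
    The first prioritisation scheme described above can be sketched directly from the HEVC NAL unit syntax (Rec. ITU-T H.265): nal_unit_type occupies bits 1-6 of the first header byte and nuh_temporal_id_plus1 the last three bits of the second. The priority mapping itself is an illustrative choice, not the paper's exact rule.

```python
# HEVC nal_unit_type values per Rec. ITU-T H.265
IDR_W_RADL, IDR_N_LP, CRA_NUT = 19, 20, 21
VPS_NUT, SPS_NUT, PPS_NUT = 32, 33, 34
KEY_TYPES = {IDR_W_RADL, IDR_N_LP, CRA_NUT, VPS_NUT, SPS_NUT, PPS_NUT}

def nal_unit_type(nal):
    """nal_unit_type sits in bits 1-6 of the first NAL header byte."""
    return (nal[0] >> 1) & 0x3F

def rtp_priority(nal):
    """Drop priority for an RTP packet carrying one NAL unit:
    0 = never drop (parameter sets, IDR and CRA pictures),
    larger = drop first. Higher temporal layers get higher priorities,
    since no other picture depends on them."""
    if nal_unit_type(nal) in KEY_TYPES:
        return 0
    temporal_id = (nal[1] & 0x07) - 1  # nuh_temporal_id_plus1 - 1
    return 1 + temporal_id

# Example: an SPS header (33 << 1 = 0x42) is top priority.
assert rtp_priority(bytes([0x42, 0x01])) == 0
```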

  20. Moving traffic object retrieval in H.264/MPEG compressed video

    Science.gov (United States)

    Shi, Xu-li; Xiao, Guang; Wang, Shuo-zhong; Zhang, Zhao-yang; An, Ping

    2006-05-01

    Moving object retrieval in the compressed domain plays an important role in many real-time applications, e.g. vehicle detection and classification. A number of retrieval techniques that operate in the compressed domain have been reported in the literature. H.264/AVC is the up-to-date video-coding standard that is likely to lead to the proliferation of retrieval techniques in the compressed domain, yet few studies on H.264/AVC compressed video have been reported. Compared with the MPEG standards, H.264/AVC employs several new coding block types and a different entropy coding method, which makes moving object retrieval in H.264/AVC compressed video a new and challenging task. In this paper, an approach to extract and retrieve moving traffic objects in H.264/AVC compressed video is proposed. Our algorithm first interpolates the sparse motion vectors of P-frames, which are composed of 4*4, 4*8 and 8*4 blocks, among others. After forward-projecting each P-frame vector to the immediately adjacent I-frame and calculating the DCT coefficients of the I-frame using spatial intra-prediction information, the method extracts moving VOPs (video object planes) using an iterative 4*4 block classification process. In vehicle detection applications, a VOP segmented at 4*4 block accuracy is insufficient. Once the target VOP is located, the actual edges of the VOP at 4*4 block accuracy can be extracted by applying Canny edge detection only on the moving VOP. The VOP at pixel accuracy is then achieved by decompressing the DCT blocks of the VOPs, and an edge-tracking algorithm is applied to find the missing edge pixels. After the segmentation process, a retrieval algorithm based on CSS (Curvature Scale Space) is used to search for the vehicle shape of interest in the H.264/AVC compressed video sequence. Experiments show that our algorithm can extract and retrieve moving vehicles efficiently and robustly.
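
    The 4*4 block classification step can be reduced to a toy form: threshold the (interpolated) motion-vector magnitudes, then group moving blocks into connected regions as candidate vehicle VOPs. The threshold and 4-connectivity below are illustrative choices, not the paper's parameters.

```python
def moving_mask(mv_field, thresh=1.5):
    """mv_field: 2-D grid of (mvx, mvy) per 4x4 block -> boolean mask."""
    return [[(x * x + y * y) ** 0.5 > thresh for (x, y) in row]
            for row in mv_field]

def connected_blocks(mask):
    """4-connected component labelling over the moving-block mask."""
    h, w = len(mask), len(mask[0])
    seen, regions = set(), []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and (i, j) not in seen:
                stack, region = [(i, j)], []
                seen.add((i, j))
                while stack:
                    a, b = stack.pop()
                    region.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and \
                           mask[na][nb] and (na, nb) not in seen:
                            seen.add((na, nb))
                            stack.append((na, nb))
                regions.append(region)
    return regions
```

    Each returned region is a candidate block-accuracy VOP, which the method above would then refine to pixel accuracy.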

  1. Evaluating and Implementing JPEG XR Optimized for Video Surveillance

    OpenAIRE

    Yu, Lang

    2010-01-01

    This report describes both evaluation and implementation of the new coming image compression standard JPEG XR. The intention is to determine if JPEG XR is an appropriate standard for IP based video surveillance purposes. Video surveillance, especially IP based video surveillance, currently has an increasing role in the security market. To be a good standard for surveillance, the video stream generated by the camera is required to be low bit-rate, low latency on the network and at the same tim...

  2. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  3. Advanced single-stream HDR coding using JVET JEM with backward compatibility options

    Science.gov (United States)

    Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu

    2017-09-01

    This paper presents a state-of-the-art approach to HDR/WCG video coding developed at FastVDO, called FVHDR, based on the JEM 6 Test Model of the Joint Exploration Team (JVET), a joint committee of ITU|ISO/IEC. A fully automatic adaptive video process is used that differs from a known HDR video processing chain (analogous to HDR10, herein called the "anchor") developed recently in the JVET standards committee. FVHDR works entirely within the framework of the JEM software model but adds additional tools. These tools can become an integral part of a future video coding standard, or be extracted as additional pre- and post-processing chains. Reconstructed video sequences using FVHDR show a subjective visual quality superior to the output of the anchor. Moreover, the resultant SDR content generated by the data-adaptive grading process is backward compatible. Representative objective results for the system include rate changes of -13.4% for DE100 and -3.8% for PSNRL100.

  4. Layer-Optimized Streaming of Motion-Compensated Orthogonal Video

    OpenAIRE

    SHEN, Wenjie

    2013-01-01

    This report presents a layer-optimized streaming technique for delivering video content over the Internet using quality-scalable motion-compensated orthogonal video. We use Motion-Compensated Orthogonal Transforms (MCOT) to remove temporal and spatial redundancy. The resulting subbands are quantized and entropy coded by Embedded Block Coding with Optimized Truncations (EBCOT). Therefore, we are able to encode the input video into multiple quality layers with sequential decoding dependency. A ...

  5. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one artificial sequence containing uncompressible data, all the 4:2:2, 8-bit test video material easily compresses losslessly to a rate below 125 Mbit/s. At this rate, video plus overhead can be contained in a single telecom 4th-order PDH channel or a single STM-1 channel. Difficult 4:2:2, 10-bit test material...

  6. Video Primal Sketch: A Unified Middle-Level Representation for Video

    OpenAIRE

    Han, Zhi; Xu, Zongben; Zhu, Song-Chun

    2015-01-01

    This paper presents a middle-level video representation named Video Primal Sketch (VPS), which integrates two regimes of models: i) sparse coding model using static or moving primitives to explicitly represent moving corners, lines, feature points, etc., ii) FRAME /MRF model reproducing feature statistics extracted from input video to implicitly represent textured motion, such as water and fire. The feature statistics include histograms of spatio-temporal filters and velocity distributions. T...

  7. FTV standardization for super-multiview and free navigation in MPEG

    Science.gov (United States)

    Tanimoto, Masayuki

    2015-05-01

    FTV (Free-viewpoint Television) is the ultimate 3DTV with an infinite number of views and ranks as the top of visual media. It enables users to view 3D scenes by freely changing the viewpoint. MPEG has been developing FTV standards since 2001. MVC (Multiview Video Coding) is the first phase of FTV, which enables efficient coding of multiview video. 3DV (3D Video) is the second phase of FTV, which enables the efficient coding of multiview video and depth data for multiview displays. Views in between linearly arranged cameras are synthesized from the multiview video and depth data in 3DV. Based on recent development of 3D technology, MPEG has started the third phase of FTV, targeting super multiview and free navigation applications. This new FTV standardization will achieve more flexible camera arrangement, more efficient coding and new functionality. Users can enjoy very realistic 3D viewing and walkthrough/ fly-through experience of 3D scenes in the super multiview and free navigation applications of FTV.

  8. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    numbers of video streams on a single server. The focus of the work is on using the information in coded video streams to reduce the computational complexity and memory requirements, which translates into reduced hardware requirements and costs. The devised algorithm detects and segments activity based...

  9. Fragile watermarking scheme for H.264 video authentication

    Science.gov (United States)

    Wang, Chuen-Ching; Hsu, Yu-Chang

    2010-02-01

    A novel H.264 advanced video coding fragile watermarking method is proposed that enables the authenticity and integrity of the video streams to be verified. The effectiveness of the proposed scheme is demonstrated by way of experimental simulations. The results show that by embedding the watermark information in the last nonzero-quantized coefficient in each discrete cosine transform block, the proposed scheme induces no more than a minor distortion of the video content. In addition, we show that the proposed scheme is able to detect unauthorized changes in the watermarked video content without the original video. The experimental results demonstrate the feasibility of the proposed video authentication system.
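
    A toy version of the embedding rule above can be sketched on a zigzag-ordered list of quantized coefficients: force the parity of the last nonzero coefficient to carry one watermark bit. The DCT and the H.264 bitstream handling are out of scope here, and the parity convention is an assumption, not the paper's exact scheme.

```python
def embed_bit(coeffs, bit):
    """Return a copy whose last nonzero coefficient has parity == bit."""
    out = list(coeffs)
    for i in range(len(out) - 1, -1, -1):
        if out[i] != 0:
            if (abs(out[i]) & 1) != bit:
                # push the magnitude up by one, never through zero,
                # so the coefficient stays the last nonzero one
                out[i] += 1 if out[i] > 0 else -1
            return out
    raise ValueError("all-zero block cannot carry a bit")

def extract_bit(coeffs):
    for c in reversed(coeffs):
        if c != 0:
            return abs(c) & 1
    raise ValueError("all-zero block")
```

    Any tampering that alters the block changes the recovered bit pattern, which is what makes the watermark fragile.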

  10. Performance Evaluation of Concurrent Multipath Video Streaming in Multihomed Mobile Networks

    Directory of Open Access Journals (Sweden)

    James Nightingale

    2013-01-01

    Full Text Available High-quality real-time video streaming to users in mobile networks is challenging due to the dynamically changing nature of the network paths, particularly the limited bandwidth and varying end-to-end delay. In this paper, we empirically investigate the performance of multipath streaming in the context of multihomed mobile networks. Existing schemes that make use of the aggregated bandwidth of multiple paths can overcome bandwidth limitations on a single path but suffer an efficiency penalty caused by retransmission of lost packets in reliable transport schemes or path switching overheads in unreliable transport schemes. This work focuses on the evaluation of schemes to permit concurrent use of multiple paths to deliver video streams. A comprehensive streaming framework for concurrent multipath video streaming is proposed and experimentally evaluated, using the current state-of-the-art H.264 Scalable Video Coding (H.264/SVC) and the next-generation High Efficiency Video Coding (HEVC) standards. It provides a valuable insight into the benefit of using such schemes in conjunction with encoder-specific packet prioritisation mechanisms for quality-aware packet scheduling and scalable streaming. The remaining obstacles to deployment of concurrent multipath schemes are identified, and the challenges in realising HEVC-based concurrent multipath streaming are highlighted.

  11. Skalabilitas Signal to Noise Ratio (SNR pada Pengkodean Video dengan Derau Gaussian

    Directory of Open Access Journals (Sweden)

    Agus Purwadi

    2015-04-01

    Full Text Available In video transmission, there is a possibility of packet loss and large load variations on the bandwidth. These are sources of network congestion, which can interfere with the communication data rate. This study discusses a system to overcome congestion with a signal-to-noise ratio (SNR) scalability-based approach, in which the video sequence is encoded into two layers; this provides a way to adapt the encoding mode for each packet and the channel coding rate. The goal is to minimize any distortion from the source to the destination. The coding system used is a standard video codec, MPEG-2 or H.263, with SNR scalability. The algorithms used for motion compensation and for reducing temporal and spatial redundancy are the Discrete Cosine Transform (DCT) and quantization. Transmission errors are simulated by adding Gaussian noise (errors) to the motion vectors. From the simulation results, the SNR and Peak Signal to Noise Ratio (PSNR) of the noisy video frames decline by averages of 3 dB and 4 dB, respectively.
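
    The PSNR figure reported above follows the textbook definition, sketched here for flattened 8-bit frames:

```python
import math

def psnr(original, degraded, peak=255):
    """10 * log10(peak^2 / MSE); infinite for identical frames."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, degraded)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)
```

    For instance, a uniform error of 16 grey levels gives MSE = 256 and hence roughly 24 dB.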

  12. Submissions of stakeholders on voluntary codes of conduct, guidelines and best practices, and/or standards in relation to access and benefit-sharing for all subsectors of genetic resources for food and agriculture

    NARCIS (Netherlands)

    Martyniuk, E.; Berger, B.; Bojkovski, D.; Bouchel, D.; Hiemstra, S.J.

    2014-01-01

    The Commission, at its Fourteenth Regular Session, requested its Secretary to invite stakeholder groups to report on voluntary codes of conduct, guidelines and best practices, and/or standards in relation to access and benefit-sharing for all subsectors of genetic resources for food and agriculture,

  13. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SVC...

  14. Codes of Conduct

    Science.gov (United States)

    Million, June

    2004-01-01

    Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision for instilling codes of conduct in students, based on work quality, respect, safety, and courtesy. She suggests that communicating the code…

  15. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  16. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  17. High-definition video display based on the FPGA and THS8200

    Science.gov (United States)

    Qian, Jia; Sui, Xiubao

    2014-11-01

    This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder chip from TI with three 10-bit DAC channels; it accepts video data in both 4:2:2 and 4:4:4 formats, and its data synchronization can be achieved either through the dedicated synchronization signals HSYNC and VSYNC, or extracted from the SAV/EAV codes embedded in the video stream. In this paper, we utilize address and control signals generated by the FPGA to access a data-storage array, and the FPGA then generates the corresponding digital video signals YCbCr. These signals, combined with the synchronization signals HSYNC and VSYNC that are also generated by the FPGA, act as the input signals of the THS8200. In order to meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2×10-bit interface. The THS8200 is controlled by the FPGA over an I2C bus to set its internal registers; as a result, it can generate synchronization signals that comply with the relevant SMPTE standard and convert the digital YCbCr signals into analog YPbPr signals. Hence, the composite analog output signals YPbPr consist of the image data signal and the synchronization signal, superimposed inside the THS8200. The experimental research indicates that the method presented in this paper is a viable solution for high-definition video display, which conforms to the input requirements of new high-definition display devices.
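
    The SAV/EAV embedded synchronization mentioned above follows the ITU-R BT.656 pattern (HD interfaces use the analogous BT.1120 codes). A sketch of how the four-byte timing references are built, assuming the BT.656 bit layout:

```python
# Each timing reference is FF 00 00 XY, where XY packs the field bit F,
# the vertical-blanking bit V, H (0 = SAV, 1 = EAV) and four
# protection bits P3..P0 derived from F, V and H (per ITU-R BT.656).
def timing_reference(f, v, h):
    p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
    xy = (0x80 | f << 6 | v << 5 | h << 4
          | p3 << 3 | p2 << 2 | p1 << 1 | p0)
    return bytes([0xFF, 0x00, 0x00, xy])

# SAV of an active line in field 1 (F=0, V=0, H=0) gives XY = 0x80.
```

    The protection bits let a receiver detect and correct single-bit errors in the XY word, which is why sync can be recovered from the data stream alone.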

  18. Study on multi-description coding for ROI medical image based on EBCOT

    Science.gov (United States)

    Hou, Alin; Zhang, Lihong; Shi, Dongcheng; Cui, Guangming; Xu, Kun; Zhou, Wen; Liu, Jie

    2008-03-01

    Embedded block coding with optimized truncation (EBCOT) is more flexible than wavelet tree-based coding structures because it decomposes each subband into code blocks and encodes each code block independently, so that the resulting embedded code streams support quality scalability, hierarchical resolution and random access. However, the algorithm's resilience to losses over a network is poor. Multiple description coding (MDC) of images and video divides the source signal into several code streams so that it can be transferred over unreliable transmission channels. Region-of-interest (ROI) coding gives priority to the region of the doctor's interest, which generally occupies a small part of the entire medical image. In this paper, ROI coding combining MDC and EBCOT is applied to medical images according to the JPEG2000 ROI coding standard. The algorithm not only retains EBCOT's hierarchical spatial resolution and random access to the ROI, but also improves loss resilience over the network and forms a robust code stream. The experimental results demonstrate that the coding method improves the system compression ratio without affecting medical diagnosis.

  19. Energy-Efficient Bandwidth Allocation for Multiuser Scalable Video Streaming over WLAN

    Directory of Open Access Journals (Sweden)

    Lafruit Gauthier

    2008-01-01

    Full Text Available We consider the problem of packet scheduling for the transmission of multiple video streams over a wireless local area network (WLAN). A cross-layer optimization framework is proposed to minimize the wireless transceiver energy consumption while meeting the user-required visual quality constraints. The framework relies on the IEEE 802.11 standard and on the embedded bitstream structure of the scalable video coding scheme. It integrates an application-level video quality metric as the QoS constraint (instead of a communication-layer quality metric) with energy consumption optimization through link-layer scaling and sleeping. Both energy minimization and min-max energy optimization strategies are discussed. Simulation results demonstrate significant energy gains compared to the state-of-the-art approaches.

  20. Game theoretic wireless resource allocation for H.264 MGS video transmission over cognitive radio networks

    Science.gov (United States)

    Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2015-03-01

    We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio system network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
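
    A brute-force toy of the Nash Bargaining Solution allocation described above, for a handful of OFDMA subchannels and users. The utility function and disagreement points in the example are illustrative, not the paper's; a real system would use the PSO search the abstract describes instead of exhaustive enumeration.

```python
from itertools import product

def nash_allocate(n_channels, utilities, minima):
    """utilities[u](k): video quality of user u given k subchannels;
    minima[u]: minimum acceptable quality (disagreement point).
    Returns the assignment (user index per subchannel) maximising the
    Nash product of quality gains, or None if no split satisfies every
    user's minimum."""
    best, best_prod = None, 0.0
    users = range(len(utilities))
    for assign in product(users, repeat=n_channels):
        gains = [utilities[u](assign.count(u)) - minima[u] for u in users]
        if all(g > 0 for g in gains):  # everyone beats their minimum
            prod = 1.0
            for g in gains:
                prod *= g
            if prod > best_prod:
                best, best_prod = assign, prod
    return best
```

    With two identical users, four subchannels and a concave square-root-like utility, the NBS splits the channels evenly, which is the fairness property the framework relies on.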

  1. Energy-Efficient Bandwidth Allocation for Multiuser Scalable Video Streaming over WLAN

    Directory of Open Access Journals (Sweden)

    Xin Ji

    2008-01-01

    Full Text Available We consider the problem of packet scheduling for the transmission of multiple video streams over a wireless local area network (WLAN). A cross-layer optimization framework is proposed to minimize the wireless transceiver energy consumption while meeting the user-required visual quality constraints. The framework relies on the IEEE 802.11 standard and on the embedded bitstream structure of the scalable video coding scheme. It integrates an application-level video quality metric as the QoS constraint (instead of a communication-layer quality metric) with energy consumption optimization through link-layer scaling and sleeping. Both energy minimization and min-max energy optimization strategies are discussed. Simulation results demonstrate significant energy gains compared to the state-of-the-art approaches.

  2. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    Science.gov (United States)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
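The DPCM principle behind such a codec can be sketched in a few lines. This toy encoder/decoder (previous-pixel predictor, uniform quantizer; the actual codec uses a non-adaptive predictor, a non-uniform quantizer and Huffman coding) shows how predicting from the *reconstructed* signal keeps the error bounded:

```python
def dpcm_encode(pixels, step=8):
    """Toy DPCM: predict each pixel from the reconstructed previous one,
    then quantize the prediction error with a uniform step size."""
    codes, prev = [], 0
    for p in pixels:
        err = p - prev
        q = round(err / step)        # quantized prediction error
        codes.append(q)
        prev = prev + q * step       # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=8):
    out, prev = [], 0
    for q in codes:
        prev = prev + q * step
        out.append(prev)
    return out

pixels = [100, 104, 110, 108, 90]
rec = dpcm_decode(dpcm_encode(pixels), step=8)
# Because the encoder predicts from the decoder's reconstruction, the
# error never accumulates and stays within half the quantizer step.
assert all(abs(p - r) <= 4 for p, r in zip(pixels, rec))
```

In a real codec the quantized errors would then be entropy coded (here, Huffman coded) to reach the reported 1.8 bits/pixel average.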

  3. Codes That Support Smart Growth Development

    Science.gov (United States)

    Provides examples of local zoning codes that support smart growth development, categorized by: unified development code, form-based code, transit-oriented development, design guidelines, street design standards, and zoning overlay.

  4. Cyclone Codes

    OpenAIRE

    Schindelhauer, Christian; Jakoby, Andreas; Köhler, Sven

    2016-01-01

    We introduce Cyclone codes which are rateless erasure resilient codes. They combine Pair codes with Luby Transform (LT) codes by computing a code symbol from a random set of data symbols using bitwise XOR and cyclic shift operations. The number of data symbols is chosen according to the Robust Soliton distribution. XOR and cyclic shift operations establish a unitary commutative ring if data symbols have a length of $p-1$ bits, for some prime number $p$. We consider the graph given by code sym...
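The core encoding step described above, XORing a random set of data symbols, can be sketched as follows. Robust Soliton degree sampling and the cyclic-shift ring structure that distinguishes Cyclone codes from plain LT codes are omitted for brevity; the symbols are small integers standing in for bit strings:

```python
import random

def lt_symbol(data, degree, rng):
    """Form one rateless code symbol as the bitwise XOR of `degree`
    randomly chosen data symbols (plain LT-style encoding; Cyclone codes
    additionally apply cyclic shifts before XORing)."""
    picked = rng.sample(range(len(data)), degree)
    sym = 0
    for i in picked:
        sym ^= data[i]
    return picked, sym

rng = random.Random(42)
data = [0b1010, 0b0110, 0b1111, 0b0001]
picked, sym = lt_symbol(data, 2, rng)
# XOR is self-inverse: once one of the two picked symbols is known,
# XORing it back out of the code symbol recovers the other.
assert sym ^ data[picked[0]] == data[picked[1]]
```

This self-inverse property is what drives belief-propagation decoding of erasure codes: degree-1 symbols reveal data directly, which peels further symbols down to degree 1.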

  5. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
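The Unique Decipherability property that coding partitions weaken can be decided for a finite code with the classical Sardinas-Patterson test. A compact sketch (illustrative standard algorithm, not code from the paper):

```python
def is_uniquely_decipherable(code):
    """Sardinas-Patterson test for unique decipherability (UD) of a
    finite code over strings."""
    code = set(code)

    def dangling(a, b):
        # Suffixes left over when a word of `a` is a proper prefix
        # of a word of `b`.
        return {y[len(x):] for x in a for y in b
                if y.startswith(x) and len(y) > len(x)}

    current = dangling(code, code)
    seen = set()
    while current:
        if current & code:
            return False   # a dangling suffix is itself a codeword: not UD
        if current <= seen:
            return True    # no new suffixes can appear: UD
        seen |= current
        current = dangling(current, code) | dangling(code, current)
    return True

assert is_uniquely_decipherable({"0", "10", "110"})      # prefix code: UD
assert not is_uniquely_decipherable({"a", "ab", "ba"})   # "aba" is ambiguous
```

For the ambiguous example, "aba" parses both as a·ba and ab·a, which is exactly the ambiguity the dangling-suffix iteration detects.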

  6. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  7. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    Summary of the most important points from the main report: Documentation and Evaluation of Coding Class.

  8. ALD: adaptive layer distribution for scalable video

    OpenAIRE

    Quinlan, Jason J.; Zahran, Ahmed H.; Sreenan, Cormac J.

    2013-01-01

    Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further bur...

  9. code {poems}

    Directory of Open Access Journals (Sweden)

    Ishac Bertran

    2012-08-01

    Full Text Available "Exploring the potential of code to communicate at the level of poetry," the code {poems} project solicited submissions from code-writers in response to the notion of a poem, written in a software language which is semantically valid. These selections reveal the inner workings, constitutive elements, and styles of both a particular software and its authors.

  10. Violent video games affecting our children.

    Science.gov (United States)

    Vessey, J A; Lee, J E

    2000-01-01

    Exposure to media violence is associated with increased aggression and its sequelae. Unfortunately, the majority of entertainment video games contain violence. Moreover, children of both genders prefer games with violent content. As there are no compulsory legislative standards to limit the type and amount of violence in video games, concerned adults must assume an oversight role.

  11. Music Videos: On Reality and Representation.

    Science.gov (United States)

    Tee, Ernie

    Music videos from the past few years have become a prominent phenomenon in our culture. They are critically compared by a small or large section of the public with the structures of, and relations within, social reality. These videos are considered to portray real situations that are, according to the standards of western culture, severely…

  12. Robotic video photogrammetry system

    Science.gov (United States)

    Gustafson, Peter C.

    1997-07-01

    For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive, but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis was also brought 'on-board' the RVPS, allowing shop floor acquisition and delivery of results. The RVPS has also been applied in other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.

  13. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  14. 4K Video Traffic Prediction using Seasonal Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    D. R. Marković

    2017-06-01

    Full Text Available From the perspective of the average viewer, high definition video streams such as HD (High Definition) and UHD (Ultra HD) are increasing their internet presence year over year. This is not surprising, given the expansion of HD streaming services such as YouTube, Netflix etc. Therefore, high definition video streams are starting to challenge network resource allocation with their bandwidth requirements and statistical characteristics. The analysis and modeling of this demanding video traffic is essential for better quality of service and experience support. In this paper we use an easy-to-apply statistical model for prediction of 4K video traffic. Namely, seasonal autoregressive modeling is applied in prediction of 4K video traffic, encoded with HEVC (High Efficiency Video Coding). Analysis and modeling were performed within the R programming environment using over 17,000 high definition video frames. It is shown that the proposed methodology provides good accuracy in high definition video traffic modeling.
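The idea of seasonal autoregression on frame sizes can be illustrated with a one-parameter toy model: predict each frame size from the size one full period earlier. The paper fits a full seasonal ARIMA model in R; the GOP-periodic frame sizes below are invented for illustration:

```python
def fit_seasonal_ar(x, s):
    """Fit the one-parameter seasonal AR model x[t] ~= a * x[t-s]
    by least squares: a = sum(x[t]*x[t-s]) / sum(x[t-s]**2)."""
    num = sum(x[t] * x[t - s] for t in range(s, len(x)))
    den = sum(x[t - s] ** 2 for t in range(s, len(x)))
    return num / den

# Synthetic GOP-periodic frame sizes (kB): a large I-frame every 4 frames
# followed by small inter-coded frames, repeated over 8 GOPs.
sizes = [100, 30, 32, 31] * 8
a = fit_seasonal_ar(sizes, s=4)
pred = a * sizes[-4]    # one-step-ahead prediction for the next frame
assert abs(a - 1.0) < 1e-9 and abs(pred - 100) < 1e-6
```

On perfectly periodic data the fitted coefficient is 1 and the predictor reduces to the seasonal-naive forecast; on real traces, the coefficient and additional AR/MA terms absorb the deviations from strict periodicity.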

  15. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    Science.gov (United States)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network. 
Real-time video stream packets

  16. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that evaluate the quality of video. The combination of the perceptual and compressive sensing approaches is outlined based on recent investigations. The performance and complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and the appropriate coding schemes are reviewed.

  17. The definitive guide to HTML 5 video

    CERN Document Server

    Pfeiffer, Silvia

    2010-01-01

    Plugins will soon be a thing of the past. The Definitive Guide to HTML5 Video is the first authoritative book on HTML5 video, the new web standard that allows browsers to support audio and video elements natively. This makes it very easy for web developers to publish audio and video, integrating both within the general presentation of web pages. For example, media elements can be styled using CSS (style sheets), integrated into SVG (scalable vector graphics), and manipulated in a Canvas. The book offers techniques for providing accessibility to media elements, enabling consistent handling of a

  18. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Summary form only given. We investigate lossless coding of video using predictive coding and motion compensation. The new coding methods combine state-of-the-art lossless techniques as JPEG (context based prediction and bias cancellation, Golomb coding), with high resolution motion field estimation......-predictors and intra-frame predictors as well. As proposed by Ribas-Corbera (see PhD thesis, University of Michigan, 1996), we use bi-linear interpolation in order to achieve sub-pixel precision of the motion field. Using more reference images is another way of achieving higher accuracy of the match. The motion...

  19. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available Embedded video for NEI YouTube Videos: Amblyopia.

  2. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use, with very little effort.

  3. An overview of new video techniques

    CERN Document Server

    Parker, R

    1999-01-01

    Current video transmission and distribution systems at CERN use a variety of analogue techniques which are several decades old. It will soon be necessary to replace this obsolete equipment, and the opportunity therefore exists to rationalize the diverse systems now in place. New standards for digital transmission and distribution are now emerging. This paper gives an overview of these new standards and of the underlying technology common to many of them. The paper reviews Digital Video Broadcasting (DVB), the Motion Picture Experts Group specifications (MPEG1, MPEG2, MPEG4, and MPEG7), videoconferencing standards (H.261 etc.), and packet video systems, together with predictions of the penetration of these standards into the consumer market. The digital transport mechanisms now available (IP, SDH, ATM) are also reviewed, and the implication of widespread adoption of these systems on video transmission and distribution is analysed.

  4. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity....

  5. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available Rheumatoid Arthritis Educational Video Series. This series of five videos was designed ...

  6. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.

  7. Jailed - Video

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual, and the students' true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  8. Interactive streaming of stored multiview video using redundant frame structures.

    Science.gov (United States)

    Cheung, Gene; Ortega, Antonio; Cheung, Ngai-Man

    2011-03-01

    While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients can interactively switch views during video playback. Thus, as a client is playing back successive frames (in time) for a given view, it can send a request to the server to switch to a different view while continuing uninterrupted temporal playback. Noting that standard tools for random access (i.e., I-frame insertion) can be bandwidth-inefficient for this application, we propose a redundant representation of I-, P-, and "merge" frames, where each original picture can be encoded into multiple versions, appropriately trading off expected transmission rate with storage, to facilitate view switching. We first present ad hoc frame structures with good performance when the view-switching probabilities are either very large or very small. We then present optimization algorithms that generate more general frame structures with better overall performance for the general case. We show in our experiments that we can generate redundant frame structures offering a range of tradeoff points between transmission and storage, e.g., outperforming simple I-frame insertion structures by up to 45% in terms of bandwidth efficiency at twice the storage cost.
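The rate/storage tradeoff at a potential view-switch point can be illustrated with a simple expected-cost model. This is illustrative arithmetic only; the paper's optimization covers general structures including merge frames:

```python
def expected_rate_and_storage(p_switch, size_I, size_P):
    """Compare two designs at a potential view-switch point:
    (a) always store/send an I-frame (plain random access);
    (b) redundantly store both a P-frame (served when the client does
        not switch) and an I-frame (served on a switch).
    Returns ((expected_rate, storage), (expected_rate, storage))."""
    i_only = (size_I, size_I)
    redundant = ((1 - p_switch) * size_P + p_switch * size_I,
                 size_P + size_I)
    return i_only, redundant

# Invented sizes: an I-frame of 10 units, a P-frame of 2, 10% switch rate.
(i_rate, i_store), (r_rate, r_store) = expected_rate_and_storage(0.1, 10.0, 2.0)
assert r_rate < i_rate     # bandwidth saved when view switches are rare
assert r_store > i_store   # at the price of extra server storage
```

With a 10% switch probability the redundant design transmits 2.8 units on average instead of 10, which is the kind of bandwidth gain (at extra storage cost) the optimized frame structures trade off.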

  9. Analog Coding.

    Science.gov (United States)

    CODING, ANALOG SYSTEMS, INFORMATION THEORY, DATA TRANSMISSION SYSTEMS, TRANSMITTER RECEIVERS, WHITE NOISE, PROBABILITY, ERRORS, PROBABILITY DENSITY FUNCTIONS, DIFFERENTIAL EQUATIONS, SET THEORY, COMPUTER PROGRAMS

  10. Software Optimization of Video Codecs on Pentium Processor with MMX Technology

    Directory of Open Access Journals (Sweden)

    Liu KJ Ray

    2001-01-01

    Full Text Available A key enabling technology for the proliferation of multimedia PCs is the availability of fast video codecs, which are the basic building blocks of many new multimedia applications. Since most industrial video coding standards (e.g., MPEG1, MPEG2, H.261, H.263) only specify the decoder syntax, there is a lot of room for optimization in a practical implementation. When considering a specific hardware platform like the PC, the algorithmic optimization must be considered in tandem with the architecture of the PC. Specifically, an algorithm that is optimal in the sense of the number of operations needed may not be the fastest implementation on the PC. This is because special instructions are available which can perform several operations at once under special circumstances. In this work, we describe a fast implementation of an H.263 video encoder for the Pentium processor with MMX technology. The described codec is adopted for video mail and video phone software used in the IBM ThinkPad.

  11. Robust Shot Boundary Detection from Video Using Dynamic Texture

    Directory of Open Access Journals (Sweden)

    Peng Taile

    2014-03-01

    Full Text Available Video boundary detection is a basic problem in computer vision and is important to video analysis and video understanding. Existing video boundary detection methods are typically effective only for certain types of video data and therefore have relatively low generalization ability. We present a novel shot boundary detection algorithm based on video dynamic texture. First, two adjacent frames are read from a given video and normalized to the same size. Second, the frames are divided into sub-domains on the same standard, the average gradient direction of each sub-domain is calculated, and the dynamic texture is formed. Finally, the dynamic textures of adjacent frames are compared. Experiments on different types of video data show that our method has high generalization ability: across different types of videos, our algorithm achieves higher average precision and average recall relative to comparable algorithms.
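The per-block average-gradient-direction descriptor outlined above might be sketched as follows. This is a crude stand-in for the paper's dynamic texture: the block size, the forward-difference gradient, and the tiny test frames are all invented for illustration.

```python
import math

def block_grad_dirs(frame, bs):
    """Average gradient direction (radians) for each bs x bs block of a
    grayscale frame (list of rows), using simple forward differences."""
    h, w = len(frame), len(frame[0])
    dirs = []
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            gx = gy = 0.0
            for y in range(by, by + bs - 1):
                for x in range(bx, bx + bs - 1):
                    gx += frame[y][x + 1] - frame[y][x]
                    gy += frame[y + 1][x] - frame[y][x]
            dirs.append(math.atan2(gy, gx))
    return dirs

def frame_distance(f1, f2, bs=2):
    """Mean absolute difference between the block descriptors of two frames."""
    d1, d2 = block_grad_dirs(f1, bs), block_grad_dirs(f2, bs)
    return sum(abs(a - b) for a, b in zip(d1, d2)) / len(d1)

flat = [[0, 1, 2, 3]] * 4            # horizontal intensity ramp
ramp = [[r] * 4 for r in range(4)]   # vertical ramp: very different texture
assert frame_distance(flat, flat) == 0.0
assert frame_distance(flat, ramp) > 1.0   # large jump suggests a shot cut
```

A shot boundary would then be declared wherever the distance between adjacent frames exceeds a threshold.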

  12. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  13. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  14. EEG-based classification of video quality perception using steady state visual evoked potentials (SSVEPs)

    Science.gov (United States)

    Acqualagna, Laura; Bosse, Sebastian; Porbadnigk, Anne K.; Curio, Gabriel; Müller, Klaus-Robert; Wiegand, Thomas; Blankertz, Benjamin

    2015-04-01

    Objective. Recent studies exploit the neural signal recorded via electroencephalography (EEG) to get a more objective measurement of perceived video quality. Most of these studies capitalize on the event-related potential component P3. We follow an alternative approach to the measurement problem investigating steady state visual evoked potentials (SSVEPs) as EEG correlates of quality changes. Unlike the P3, SSVEPs are directly linked to the sensory processing of the stimuli and do not require long experimental sessions to get a sufficient signal-to-noise ratio. Furthermore, we investigate the correlation of the EEG-based measures with the outcome of the standard behavioral assessment. Approach. As stimulus material, we used six gray-level natural images in six levels of degradation that were created by coding the images with the HM10.0 test model of the high efficiency video coding (H.265/MPEG-HEVC) using six different compression rates. The degraded images were presented in rapid alternation with the original images. In this setting, the presence of SSVEPs is a neural marker that objectively indicates the neural processing of the quality changes that are induced by the video coding. We tested two different machine learning methods to classify such potentials based on the modulation of the brain rhythm and on time-locked components, respectively. Main results. Results show high accuracies in classification of the neural signal over the threshold of the perception of the quality changes. Accuracies significantly correlate with the mean opinion scores given by the participants in the standardized degradation category rating quality assessment of the same group of images. Significance. The results show that neural assessment of video quality based on SSVEPs is a viable complement of the behavioral one and a significantly fast alternative to methods based on the P3 component.

  15. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    Science.gov (United States)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth-efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. 
Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.

  16. Divergence coding for convolutional codes

    Directory of Open Access Journals (Sweden)

    Valery Zolotarev

    2017-01-01

    Full Text Available In this paper we propose a new coding/decoding scheme based on the divergence principle. The new divergent multithreshold decoder (MTD) for convolutional self-orthogonal codes contains two threshold elements; the second threshold element decodes a code whose code distance is one greater than that of the first threshold element. The error-correcting capability of the new MTD modification is higher than that of the traditional MTD. Simulation results show that the divergent schemes bring their region of effective operation approximately 0.5 dB closer to channel capacity. Note that if a sufficiently effective Viterbi decoder is used in place of the first threshold element, the divergence principle can achieve even more. Index Terms — error-correcting coding, convolutional code, decoder, multithreshold decoder, Viterbi algorithm.

  17. Transnasal endoscopy with narrow-band imaging and Lugol staining to screen patients with head and neck cancer whose condition limits oral intubation with standard endoscope (with video).

    Science.gov (United States)

    Lee, Yi-Chia; Wang, Cheng-Ping; Chen, Chien-Chuan; Chiu, Han-Mo; Ko, Jenq-Yuh; Lou, Pei-Jen; Yang, Tsung-Lin; Huang, Hsin-Yi; Wu, Ming-Shiang; Lin, Jaw-Town; Hsiu-Hsi Chen, Tony; Wang, Hsiu-Po

    2009-03-01

    Early detection of esophageal cancer in patients with head and neck cancers may alter treatment planning and improve survival. However, standard endoscopic screening is not feasible for some patients with tumor-related airway compromise or postirradiation trismus. To evaluate a novel, sequential approach integrating ultrathin endoscopy with narrow-band imaging and Lugol chromoendoscopy. Cross-sectional study. Single center in Taiwan. Forty-four consecutive patients with transoral difficulty were screened for synchronous or metachronous esophageal cancer. Sensitivity, specificity, and accuracy in the detection of mucosal high-grade neoplasia or invasive cancer. Fifty-four endoscopic interpretations were obtained, and 11 cases of mucosal high-grade neoplasia and 7 invasive cancers were confirmed by histology. The mean examination time was 19.4 minutes (range 7.9-35.2 minutes), and all patients tolerated the procedure well. Sensitivity, specificity, and accuracy were 55.6% (95% CI, 33.5%-75.6%), 97.2% (95% CI, 85.8%-99.3%), and 83.3% (95% CI, 71.2%-90.9%), respectively, for standard endoscopy; 88.9% (95% CI, 66.9%-96.6%), 97.2% (95% CI, 85.8%-99.3%), and 94.4% (95% CI, 84.9%-97.9%), respectively, with the adjunct of narrow-band imaging; and 88.9% (95% CI, 66.9%-96.6%), 72.2% (95% CI, 55.9%-84.1%), and 77.8% (95% CI, 64.9%-86.8%), respectively, with the adjunct of Lugol chromoendoscopy. When all interpretations were integrated on the basis of the sequential approach, the estimated probability of false-negative findings was 1.2% (95% CI, 0.1%-4.6%). Limitations include the inherent shortcomings of ultrathin endoscopy, such as its resolution, light source, and lack of magnification. The use of ultrathin endoscopy in a sequential approach for multimodal detection is feasible in patients with transoral difficulty and substantially increases the detection rate of synchronous or metachronous neoplasms.
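    The reported proportions can be reproduced from a plausible 2x2 table. The counts below are back-calculated from the published percentages (an assumption; the abstract does not give raw counts), and the Wilson score interval is one common choice for binomial confidence intervals here; it closely approximates, though does not exactly match, the reported bounds.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Back-calculated counts for standard endoscopy: 18 confirmed lesions
# (11 high-grade neoplasia + 7 invasive cancers), 36 lesion-free interpretations.
tp, fn, tn, fp = 10, 8, 35, 1
sens = tp / (tp + fn)                    # 10/18 = 55.6%
spec = tn / (tn + fp)                    # 35/36 = 97.2%
acc = (tp + tn) / (tp + fn + tn + fp)    # 45/54 = 83.3%
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```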

  18. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is growing tremendously and has already reached dimensions that are hard to manage. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching it impractical. However, automatic evaluation of video material...

  19. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...... development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech......; alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market’s emptying out of possibilities for free...

  20. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed, and the reconstruction qualities using TwIST and GMM are compared.
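    The TCI forward model (T = 8 coded frames collapsing into one compressive measurement) can be sketched as follows. The moving-block scene and the independent random binary masks are illustrative assumptions, and reconstruction via TwIST or GMM is beyond the scope of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32                      # temporal compression ratio T = 8

# Hypothetical high-speed scene: a bright block moving one pixel per frame.
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 10:16, 4 + t:10 + t] = 1.0

# One binary coded mask per temporal frame (real systems often realize this
# with a single shifting mask; independent masks are a simplification).
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)

# Forward model: the sensor integrates the T masked frames into one frame.
y = np.sum(masks * video, axis=0)        # shape (H, W): one measurement

print("measurement shape:", y.shape)
```

    A reconstruction algorithm then inverts this many-to-one mapping, typically patch by patch (8×8 here, as in the paper) to keep memory and runtime manageable.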