WorldWideScience

Sample records for resilient video coding

  1. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    Full Text Available In distributed video coding the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of the transmission errors for both the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, both without error protection and with simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  2. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  3. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  4. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...... cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  5. Error and Congestion Resilient Video Streaming over Broadband Wireless

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2015-04-01

    Full Text Available In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) codec data-partitioned videos. A packetization strategy is an effective tool to control error rates and, in the paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme for doing this is applied to real-time streaming across a broadband wireless link. The advantages of rateless code rate adaptivity are then demonstrated in the paper. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent packet scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
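    The abstract above argues for packet-size dependent scheduling so that small, high-priority data-partition packets are not the ones lost to buffer overflow. The Python sketch below only illustrates that general idea under assumed packet fields and thresholds; it is not the paper's actual scheduler.

```python
# Minimal sketch (not the paper's scheduler): when the wireless send buffer is
# under pressure, prefer the smaller, higher-priority data-partition packets
# (partition A carries headers/motion vectors) so they are least likely to be
# dropped by overflow. Packet fields and thresholds are illustrative only.
from collections import namedtuple

Packet = namedtuple("Packet", ["partition", "size_bytes"])  # partition: 'A', 'B', or 'C'

PRIORITY = {"A": 0, "B": 1, "C": 2}  # A = most important data partition

def schedule(queue, buffer_free_bytes, congestion=False):
    """Return packets in transmission order for the current scheduling round."""
    if congestion:
        # Under congestion, order by (priority, size): small important packets first.
        order = sorted(queue, key=lambda p: (PRIORITY[p.partition], p.size_bytes))
    else:
        order = list(queue)  # plain FIFO when the buffer is not under pressure
    sent, used = [], 0
    for pkt in order:
        if used + pkt.size_bytes <= buffer_free_bytes:
            sent.append(pkt)
            used += pkt.size_bytes
    return sent

# Example: partition-A and -B packets go out first when the buffer is tight.
q = [Packet("C", 900), Packet("A", 120), Packet("B", 500)]
print(schedule(q, buffer_free_bytes=700, congestion=True))
```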

  6. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  7. Video over DSL with LDGM Codes for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2016-05-01

    Full Text Available Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
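    As a rough illustration of why LDGM codes keep application-layer encoding cheap, the following sketch builds each parity packet as the XOR of a few randomly chosen source packets, i.e. a sparse generator matrix applied directly to the source block. The degree, matrix construction, and packet format are illustrative assumptions, not the construction used in the paper.

```python
# Illustrative LDGM-style AL-FEC sketch (not the authors' exact construction):
# parity packets are XOR combinations of a few randomly chosen source packets,
# so encoding cost stays low and grows linearly with the chosen degree.
import random

def ldgm_encode(source_packets, n_parity, degree=3, seed=1):
    """Each parity packet is the XOR of `degree` randomly chosen source packets."""
    rng = random.Random(seed)
    parity = []
    for _ in range(n_parity):
        rows = rng.sample(range(len(source_packets)), degree)
        pkt = bytearray(len(source_packets[0]))
        for r in rows:
            for i, b in enumerate(source_packets[r]):
                pkt[i] ^= b
        parity.append((rows, bytes(pkt)))  # keep the chosen row indices for decoding
    return parity

src = [bytes([i] * 8) for i in range(10)]   # ten toy 8-byte source packets
print(ldgm_encode(src, n_parity=4))
```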

  8. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  9. Constrained motion estimation-based error resilient coding for HEVC

    Science.gov (United States)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors of motion vectors and can improve the robustness of the stream in bit error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.
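    The sketch below illustrates the general idea of constraining motion estimation for error resilience: motion vector candidates whose reference block falls outside a region assumed to be "safe" (for example, already covered by intra refresh) are rejected, cutting temporal error-propagation paths at some rate-distortion cost. The safe-region test and the zero-MV fallback are assumptions for illustration, not the authors' published algorithm.

```python
# Minimal sketch of constrained motion estimation for error resilience.
# The dependency test used here (a rectangular "safe" region of the reference
# frame) is an assumption; the idea is only to show how MV candidates that
# would import error-prone samples are excluded from the search.
def mv_allowed(x, y, block_w, block_h, mv, safe_region):
    """safe_region = (x0, y0, x1, y1) rectangle assumed free of propagated errors."""
    rx, ry = x + mv[0], y + mv[1]
    x0, y0, x1, y1 = safe_region
    return x0 <= rx and y0 <= ry and rx + block_w <= x1 and ry + block_h <= y1

def constrained_search(x, y, block_w, block_h, candidates, cost, safe_region):
    """Pick the lowest-cost MV among candidates that keep the reference 'safe'."""
    legal = [mv for mv in candidates if mv_allowed(x, y, block_w, block_h, mv, safe_region)]
    return min(legal, key=cost) if legal else (0, 0)  # fall back to the zero MV

# Example: only candidates referencing the refreshed left half of a 64x64 frame survive.
best = constrained_search(8, 8, 16, 16, [(-4, 0), (20, 0), (0, 4)],
                          cost=lambda mv: abs(mv[0]) + abs(mv[1]),
                          safe_region=(0, 0, 32, 64))
print(best)
```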

  10. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  11. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at similar compressed bit rates as for HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  12. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance in the same way...... the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation targeting to improve the side information of a single...

  13. On video formats and coding efficiency

    NARCIS (Netherlands)

    Bellers, E.B.; Haan, de G.

    2001-01-01

    This paper examines the efficiency of MPEG-2 coding for interlaced and progressive video, and compares de-interlacing and picture rate up-conversion before and after coding. We found receiver side de-interlacing and picture rate up-conversion (i.e. after coding) to give better image quality at a

  14. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  15. Skype resilience to high motion videos

    NARCIS (Netherlands)

    Exarchakos, G.; Druda, L.; Menkovski, V.; Bellavista, P.; Liotta, A.

    Skype is one of the most popular video call services in the current Internet world. One of its strengths is the use of an adaptive mechanism to match the constraints of the underlying network. This work is focused on how this mechanism can maximize the video quality as perceived by the viewers using

  16. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  17. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  18. Code domain steganography in video tracks

    Science.gov (United States)

    Rymaszewski, Sławomir

    2008-01-01

    This article deals with a practical method of hiding secret information in a video stream. The method is dedicated to MPEG-2 streams. The algorithm takes into consideration not only the MPEG video coding scheme described in the standard but also the bit-level PES-packet encapsulation in the MPEG-2 Program Stream (PS). This modification gives higher capacity and more effective bit rate control for the output stream than previously proposed methods.

  19. Coding Transparency in Object-Based Video

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation are...

  20. Coding visual features extracted from video sequences.

    Science.gov (United States)

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.

  1. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...

  2. Recent advances in multiview distributed video coding

    Science.gov (United States)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.

  3. The emerging High Efficiency Video Coding standard (HEVC)

    International Nuclear Information System (INIS)

    Raja, Gulistan; Khan, Awais

    2013-01-01

    High definition video (HDV) is becoming popular day by day. This paper describes the performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements of future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  4. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses computational complexity of High Efficiency Video Coding (HEVC) encoders with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools' compression efficiency and computational complexity. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  5. Expressing Youth Voice through Video Games and Coding

    Science.gov (United States)

    Martin, Crystle

    2017-01-01

    A growing body of research focuses on the impact of video games and coding on learning. The research often elevates learning the technical skills associated with video games and coding or the importance of problem solving and computational thinking, which are, of course, necessary and relevant. However, the literature less often explores how young…

  6. Strong Resilience of Topological Codes to Depolarization

    Directory of Open Access Journals (Sweden)

    H. Bombin

    2012-04-01

    Full Text Available The inevitable presence of decoherence effects in systems suitable for quantum computation necessitates effective error-correction schemes to protect information from noise. We compute the stability of the toric code to depolarization by mapping the quantum problem onto a classical disordered eight-vertex Ising model. By studying the stability of the related ferromagnetic phase via both large-scale Monte Carlo simulations and the duality method, we are able to demonstrate an increased error threshold of 18.9(3)% when noise correlations are taken into account. Remarkably, this result agrees within error bars with the result for a different class of codes—topological color codes—where the mapping yields interesting new types of interacting eight-vertex models.

  7. H.264/AVC error resilience tools suitable for 3G mobile video services

    Institute of Scientific and Technical Information of China (English)

    LIU Lin; YE Xiu-zi; ZHANG San-yuan; ZHANG Yin

    2005-01-01

    The emergence of the third generation mobile system (3G) makes video transmission in wireless environments possible, and the latest 3GPP/3GPP2 standards require that 3G terminals support H.264/AVC. Due to the high packet loss rate in wireless environments, error resilience for 3G terminals is necessary. Moreover, because of hardware restrictions, 3G mobile terminals support only part of the H.264/AVC error resilience tools. This paper analyzes various error resilience tools and their functions, and presents two error resilience strategies for 3G mobile streaming video services and mobile conversational services. Performances of the proposed error resilience strategies were tested using off-line common test conditions. Experiments showed that the proposed error resilience strategies can yield reasonably satisfactory results.

  8. The H.264/MPEG4 advanced video coding

    Science.gov (United States)

    Gromek, Artur

    2009-06-01

    H.264/MPEG4-AVC is the newest video coding standard recommended by the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG4-AVC has recently become the leading standard for generic audiovisual services since its deployment for digital television. Nowadays it is commonly used in a wide range of video applications such as mobile services, videoconferencing, IPTV, HDTV, video storage and many more. In this article, the author briefly describes the technology applied in the H.264/MPEG4-AVC video coding standard, approaches to real-time implementation, and directions for future development.

  9. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.
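    A minimal sketch of saliency-driven foveation filtering in the spirit described above: low-saliency regions are smoothed before encoding so they cost fewer bits, while salient regions stay sharp. The box blur and the linear blending rule are illustrative assumptions, not the authors' filter.

```python
# Toy foveation-style pre-filter: blend the original frame with a blurred copy,
# weighted by a per-pixel saliency map in [0, 1] (1 = keep sharp). The blur and
# blending are placeholders for whatever perceptual filter a real codec uses.
import numpy as np

def box_blur(frame, k=5):
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def foveation_filter(frame, saliency):
    """Smooth low-saliency regions; leave salient regions untouched."""
    blurred = box_blur(frame)
    return saliency * frame + (1.0 - saliency) * blurred

frame = np.random.default_rng(0).integers(0, 255, (32, 32)).astype(float)
sal = np.zeros((32, 32)); sal[8:24, 8:24] = 1.0      # a salient central region
out = foveation_filter(frame, sal)
print(out.shape, float(out[0, 0]), float(out[16, 16]) == float(frame[16, 16]))
```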

  10. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...

  11. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross-layer approach, over the satellite link is also simulated. The ne...

  12. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time...... Bødker's in 1996, three possible areas of expansion to Susanne Bødker's method for analyzing video data were found. Firstly, a technological expansion due to contemporary developments in sophisticated analysis software, since the mid-1990s. Secondly, a conceptual expansion, where the applicability...... of using Activity Theory outside of the context of human–computer interaction, is assessed. Lastly, a temporal expansion, by facilitating an organized method for tracking the development of activities over time, within the coding and analysis of video data. To expand on the above areas, a prototype coding...

  13. Film grain noise modeling in advanced video coding

    Science.gov (United States)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.

  14. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low complexity encoding by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Tran...... codec provides frame by frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding and do so as a scalable-to-lossless coding....

  15. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

    As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. The performance of DVC highly depends on the quality of side information. With a better side...... information generation method, fewer bits will be requested from the encoder and more reliable decoded frames will be obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. This algorithm...

  16. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
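    A toy sketch of the block classification step behind background-modeling-based adaptive prediction: each block is compared against the modeled background and routed to background reference prediction (BRP), background difference prediction (BDP), or conventional inter prediction. The mean-absolute-difference test and the thresholds are illustrative assumptions, not the published classifier.

```python
# Illustrative BMAP-style block classification (thresholds and the MAD test are
# assumptions): blocks matching the modeled background use it as a long-term
# reference (BRP); mixed blocks are coded in the background-difference domain
# (BDP); the rest fall back to conventional inter prediction.
import numpy as np

def classify_block(block, bg_block, t_bg=4.0, t_hybrid=20.0):
    mad = np.mean(np.abs(block.astype(np.int16) - bg_block.astype(np.int16)))
    if mad < t_bg:
        return "background"      # -> background reference prediction (BRP)
    if mad < t_hybrid:
        return "hybrid"          # -> background difference prediction (BDP)
    return "foreground"          # -> conventional inter prediction

def select_mode(block, bg_block):
    kind = classify_block(block, bg_block)
    return {"background": "BRP", "hybrid": "BDP", "foreground": "inter"}[kind]

rng = np.random.default_rng(0)
bg = rng.integers(0, 255, (16, 16), dtype=np.uint8)   # modeled background block
cur = bg.astype(np.int16)
cur[:8, :8] += 40                                     # a partly changed, hybrid-like block
print(classify_block(cur, bg), select_mode(cur, bg))
```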

  17. Method and device for decoding coded digital video signals

    NARCIS (Netherlands)

    2000-01-01

    The invention relates to a video coding method and system including a quantization and coding sub-assembly (38) in which a quantization parameter is controlled by another parameter defined as being in direct relation with the dynamic range value of the data contained in given blocks of pixels.

  18. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

    The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression ... solution for the optimum trade-off by applying rate-distortion theory has been ...

  19. High-resolution, low-delay, and error-resilient medical ultrasound video communication using H.264/AVC over mobile WiMAX networks.

    Science.gov (United States)

    Panayides, Andreas; Antoniou, Zinonas C; Mylonas, Yiannos; Pattichis, Marios S; Pitsillides, Andreas; Pattichis, Constantinos S

    2013-05-01

    In this study, we describe an effective video communication framework for the wireless transmission of H.264/AVC medical ultrasound video over mobile WiMAX networks. Medical ultrasound video is encoded using diagnostically-driven, error resilient encoding, where quantization levels are varied as a function of the diagnostic significance of each image region. We demonstrate how our proposed system allows for the transmission of high-resolution clinical video that is encoded at the clinical acquisition resolution and can then be decoded with low-delay. To validate performance, we perform OPNET simulations of mobile WiMAX Medium Access Control (MAC) and Physical (PHY) layers characteristics that include service prioritization classes, different modulation and coding schemes, fading channels conditions, and mobility. We encode the medical ultrasound videos at the 4CIF (704 × 576) resolution that can accommodate clinical acquisition that is typically performed at lower resolutions. Video quality assessment is based on both clinical (subjective) and objective evaluations.

  20. Context based Coding of Quantized Alpha Planes for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2002-01-01

    In object based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties....... Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4....

  1. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  2. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
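    The key property of least-square prediction is that the predictor weights are re-estimated from already-reconstructed (causal) data, so the decoder can repeat the estimation and no weights need to be transmitted. The sketch below illustrates this backward adaptation for a single pixel using spatial causal neighbours; the neighbour set and training window are illustrative choices, and the paper's temporal, motion-trajectory formulation is not reproduced here.

```python
# Backward-adaptive least-square prediction sketch (spatial illustration only):
# fit linear weights over causal neighbours from a local training window of
# already-reconstructed pixels, then predict the current pixel with them.
import numpy as np

NEIGHBOURS = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]   # causal spatial neighbours

def lsp_predict(img, y, x, train=6):
    """Predict img[y, x] from its causal neighbours with locally fitted weights."""
    rows, targets = [], []
    for dy in range(1, train + 1):                   # training samples from the causal past
        for dx in range(-train, train + 1):
            ty, tx = y - dy, x + dx
            if 1 <= ty and 1 <= tx < img.shape[1] - 1:
                rows.append([img[ty + oy, tx + ox] for oy, ox in NEIGHBOURS])
                targets.append(img[ty, tx])
    A, b = np.asarray(rows, float), np.asarray(targets, float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)        # least-squares weights
    cur = np.array([img[y + oy, x + ox] for oy, ox in NEIGHBOURS], float)
    return float(cur @ w)

img = np.tile(np.arange(32, dtype=float), (32, 1))   # smooth toy "frame" (ramp)
print(lsp_predict(img, 16, 16))                      # close to the true value 16
```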

  3. Random Linear Network Coding for 5G Mobile Video Delivery

    Directory of Open Access Journals (Sweden)

    Dejan Vukobratovic

    2018-03-01

    Full Text Available An exponential increase in mobile video delivery will continue with the demand for higher resolution, multi-view and large-scale multicast video services. The novel fifth generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing out the promising avenues for future research.
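    For readers unfamiliar with packet-level RLNC, the sketch below shows the basic encoding operation: each coded packet is a random linear combination of the source packets of a generation, with the coefficient vector carried alongside the payload. The sketch works over GF(2) purely for simplicity; practical systems typically use GF(2^8), and nothing here reflects 5G NR specifics.

```python
# Minimal random linear network coding (RLNC) sketch over GF(2): every coded
# packet is a random linear (here, XOR) combination of the generation's source
# packets, and the coefficient vector travels with the payload for decoding.
import random

def rlnc_encode(generation, seed=None):
    """Return (coefficients, payload) for one coded packet over GF(2)."""
    rng = random.Random(seed)
    coeffs = [rng.randint(0, 1) for _ in generation]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1       # avoid the all-zero combination
    payload = bytearray(len(generation[0]))
    for c, pkt in zip(coeffs, generation):
        if c:
            for i, b in enumerate(pkt):
                payload[i] ^= b
    return coeffs, bytes(payload)

gen = [bytes([i] * 4) for i in range(4)]             # a generation of 4 source packets
for s in range(3):
    print(rlnc_encode(gen, seed=s))
```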

  4. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach to fuse the estimated distributions of the SIs as opposed to a conventional fusion algorithm based on the fusion of pixel...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....

  5. Selective encryption for H.264/AVC video coding

    Science.gov (United States)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provide, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks of data: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblock within the same P slice, and all the luma and chroma DC coefficients belonging to the all the macroblocks within the same slice, (4) a block containing all the ac coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.

  6. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

    In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically the impact of different power allocation ratios on the received signal. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.
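    As a worked illustration of the reported base-to-enhancement power split of roughly four to one, the following sketch superposes two unit-power symbol streams with that ratio while keeping the total transmit power normalised to one. The QPSK alphabet, normalisation, and symbol counts are illustrative; the paper's SDR setup is not reproduced.

```python
# Toy superposition coding sketch: base and enhancement symbol streams are added
# with a base:enhancement power ratio of about 4:1, total power normalised to 1.
import numpy as np

def superpose(base_syms, enh_syms, ratio=4.0):
    """Superpose two unit-power symbol streams with base:enh power ratio `ratio`."""
    p_enh = 1.0 / (1.0 + ratio)          # total transmit power normalised to 1
    p_base = ratio * p_enh
    return np.sqrt(p_base) * base_syms + np.sqrt(p_enh) * enh_syms

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(0)
x = superpose(rng.choice(qpsk, 8), rng.choice(qpsk, 8))
print(np.round(x, 3), float(np.mean(np.abs(x) ** 2)))   # average power stays close to 1
```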

  7. Improved entropy encoding for high efficient video coding standard

    Directory of Open Access Journals (Sweden)

    B.S. Sunil Kumar

    2018-03-01

    Full Text Available The High Efficiency Video Coding (HEVC) standard has better coding efficiency, but its encoding performance has to be improved to meet the needs of growing multimedia applications. This paper improves the standard entropy encoding by introducing optimized weighing parameters, so that a higher rate of compression can be accomplished over the standard entropy encoding. The optimization is performed using the recently introduced firefly algorithm. The experimentation is carried out using eight benchmark video sequences and the PSNR for varying rates of data transmission is investigated. Comparative analysis based on the performance statistics is made with the standard entropy encoding. From the obtained results, it is clear that the originality of the decoded video sequence is preserved far better by the proposed method, even though the compression rate is increased. Keywords: Entropy, Encoding, HEVC, PSNR, Compression

  8. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  9. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  10. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic the physical layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  11. H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints

    Directory of Open Access Journals (Sweden)

    Ghandi MM

    2006-01-01

    Full Text Available This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.

  12. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  13. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify the unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure by reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high resolution sequences, for which video coding efficiency is crucial in real applications.
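    A sketch of early-termination logic in the spirit of the CIET scheme described above: the RD cost of each candidate partition is predicted by a model fitted offline, and a candidate is skipped when the lower bound of its predicted-cost confidence interval already exceeds the best cost found so far. The linear model, its features, and the stubbed encoder cost function are assumptions for illustration only.

```python
# Confidence-interval-based early termination sketch: skip a full RD check for
# any partition whose predicted cost is, with high confidence, already worse
# than the best cost seen so far. Model and features are placeholders.
import math

def predicted_rd(features, weights, bias):
    """Toy linear RD-cost model fitted offline (weights/bias are placeholders)."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def ciet_select(candidates, weights, bias, sigma, z=1.64, full_encode=None):
    best_mode, best_cost = None, math.inf
    for mode, features in candidates:
        lower = predicted_rd(features, weights, bias) - z * sigma
        if lower > best_cost:
            continue                      # early termination: skip the full RD check
        cost = full_encode(mode)          # only promising modes are fully encoded
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Hypothetical usage with a stubbed encoder cost function.
cands = [("2Nx2N", [0.2, 1.0]), ("quad-split", [0.9, 1.0]), ("binary-split", [0.6, 1.0])]
mode, cost = ciet_select(cands, weights=[100.0, 0.0], bias=10.0, sigma=5.0,
                         full_encode=lambda m: {"2Nx2N": 32.0, "quad-split": 95.0,
                                                "binary-split": 70.0}[m])
print(mode, cost)
```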

  14. Testing Video and Social Media for Engaging Users of the U.S. Climate Resilience Toolkit

    Science.gov (United States)

    Green, C. J.; Gardiner, N.; Niepold, F., III; Esposito, C.

    2015-12-01

    We developed a custom video production style and a method for analyzing social media behavior so that we may deliberately build and track audience growth for decision-support tools and case studies within the U.S. Climate Resilience Toolkit. The new style of video focuses quickly on decision processes; its 30-second format is well-suited for deployment through social media. We measured both traffic and engagement with video using Google Analytics. Each video included an embedded tag, allowing us to measure viewers' behavior: whether or not they entered the toolkit website; the duration of their session on the website; and the number of pages they visited in that session. Results showed that video promotion was more effective on Facebook than Twitter. Facebook links generated twice the number of visits to the toolkit. Videos also increased Facebook interaction overall. Because most Facebook users are return visitors, this campaign did not substantially draw new site visitors. We continue to research and apply these methods in a targeted engagement and outreach campaign that utilizes the theory of social diffusion and social influence strategies to grow our audience of "influential" decision-makers and people within their social networks. Our goal is to increase access and use of the U.S. Climate Resilience Toolkit.

  15. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU performs recursive splitting into four blocks with equal size, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC technology to reduce its computational complexity. In 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time-saving factor of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.

  16. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and c...

  17. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... in three layers: binary shape layer, opaque layer, and intermediate layer. Thus, the latter two layers replace the single transparency layer of MPEG-4 Part 2. Different encoding schemes are specifically designed for each layer, utilizing cross-layer correlations to reduce the bit rate. First, the binary...... demonstrating that the proposed techniques provide substantial bit rate savings coding shape and transparency when compared to the tools adopted in MPEG-4 Part 2....

  18. Video coding and decoding devices and methods preserving ppg relevant information

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a video encoding device (10) for encoding video data and a corresponding video decoding device, wherein during decoding PPG relevant information shall be preserved. For this purpose the video coding device (10) comprises a first encoder (20) for encoding input video

  19. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a technique in video coding to decrease the power consumption at a standard video decoder. Coupled with a small dedicated video internal memory cache on a decoder, the technique can substantially decrease the amount of data traffic to the external memory at the decoder. A decrease in data traffic to the external memory at the decoder will result in multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders for power savings with the best quality-power cost trade-off. Our simulation results show that with a relatively small amount of dedicated video internal memory cache, the technique may decrease the traffic between the CPU and external memory by over 50%.

  20. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  1. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    Science.gov (United States)

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since digital video usage is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into subjective quality estimation is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness) compared to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
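
    One way to picture a no-reference model of this kind is to pool normalised impairment scores into a single MOS-like value; the weights and mapping below are placeholders, not the coefficients of the proposed model:

      # Toy no-reference quality pooling of blockiness, blur and jerkiness.
      def pooled_quality(blockiness, blur, jerkiness, w=(0.5, 0.3, 0.2), mos_max=5.0):
          """Impairments are assumed normalised to [0, 1]; larger means worse."""
          penalty = w[0] * blockiness + w[1] * blur + w[2] * jerkiness
          return max(1.0, mos_max - 4.0 * penalty)  # clamp to the usual 1..5 MOS range

      if __name__ == "__main__":
          print(round(pooled_quality(0.2, 0.1, 0.05), 2))  # lightly impaired clip
          print(round(pooled_quality(0.8, 0.6, 0.7), 2))   # heavily impaired clip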

  2. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

    1997-12-01

    In this paper, we propose an efficient motion-adaptive 3-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to the motion in the frames. Each of the seven spatio-temporally split subbands is partitioned by a quadtree technique and coded with lattice vector quantization (LVQ). The simulation results show a 0.1-4.3 dB gain over H.261 in peak signal-to-noise ratio (PSNR) at a low bit rate (64 kbps). (author). 13 refs., 13 figs., 4 tabs.

  3. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
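
    The first decorrelation structure can be sketched in a few lines of numpy: compute a base layer, form the enhancement-layer residual, and subtract the residual's low-frequency part from the base layer. The filters here are simple placeholders, not the SVC filters, so this is only a conceptual illustration:

      import numpy as np

      def downsample(x):
          # smooth with a separable [1 2 1]/4 kernel, then keep every other sample
          k = np.array([1.0, 2.0, 1.0]) / 4.0
          y = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
          y = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, y)
          return y[::2, ::2]

      def upsample(x):
          return np.kron(x, np.ones((2, 2)))

      def lp_analysis_with_update(frame):
          base = downsample(frame)       # base-layer signal
          enh = frame - upsample(base)   # enhancement-layer residual
          base = base - downsample(enh)  # subtract its low-frequency component
          enh = frame - upsample(base)   # residual against the updated base layer
          return base, enh

      if __name__ == "__main__":
          frame = np.arange(64, dtype=float).reshape(8, 8)
          base, enh = lp_analysis_with_update(frame)
          print(base.shape, enh.shape)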

  4. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames; then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is thus coded using the matching pursuit algorithm, which decomposes the signal over a purpose-designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
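
    For readers unfamiliar with matching pursuit, the core loop is short: repeatedly pick the dictionary atom best correlated with the residual and peel it off. The dictionary below is a random toy, not the anisotropic dictionary designed in the paper:

      import numpy as np

      def matching_pursuit(signal, D, n_atoms):
          # D: matrix whose columns are unit-norm atoms of a redundant dictionary
          residual = signal.astype(float).copy()
          decomposition = []                 # list of (atom index, coefficient)
          for _ in range(n_atoms):
              correlations = D.T @ residual  # inner products with every atom
              k = int(np.argmax(np.abs(correlations)))
              coeff = correlations[k]
              decomposition.append((k, coeff))
              residual -= coeff * D[:, k]    # peel off the selected atom
          return decomposition, residual

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          D = rng.normal(size=(16, 64))
          D /= np.linalg.norm(D, axis=0)     # normalise the atoms
          signal = 2.0 * D[:, 3] - 0.5 * D[:, 40]
          atoms, res = matching_pursuit(signal, D, n_atoms=3)
          print(atoms[0][0], round(float(np.linalg.norm(res)), 4))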

  5. Depth-based Multi-View 3D Video Coding

    DEFF Research Database (Denmark)

    Zamarin, Marco

    techniques are used to extract dense motion information and generate improved candidate side information. Multiple candidates are merged employing multi-hypothesis strategies. Promising rate-distortion performance improvements compared with state-of-the-art Wyner-Ziv decoders are reported, both when texture......-view video. Depth maps are typically used to synthesize the desired output views, and the performance of view synthesis algorithms strongly depends on the accuracy of depth information. In this thesis, novel algorithms for efficient depth map compression in MVD scenarios are proposed, with particular focus...... on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge-preservation. Another solution proposes a new intra coding mode targeted...

  6. Perceptual coding of stereo endoscopy video for minimally invasive surgery

    Science.gov (United States)

    Bartoli, Guido; Menegaz, Gloria; Yang, Guang Zhong

    2007-03-01

    In this paper, we propose a compression scheme that is tailored for stereo-laparoscope sequences. The inter-frame correlation is modeled by the deformation field obtained by elastic registration between two subsequent frames and exploited for prediction of the left sequence. The right sequence is lossy encoded by prediction from the corresponding left images. Wavelet-based coding is applied to both the deformation vector fields and the residual images. The resulting system supports spatio-temporal scalability, while providing lossless performance. The implementation of the wavelet transform by integer lifting ensures a low computational complexity, thus reducing the required run-time memory allocation and enabling on-line implementation. Extensive psychovisual tests were performed for system validation and characterization with respect to the MPEG-4 standard for video coding. Results are very encouraging: the PSVC system features the functionalities making it suitable for PACS while providing a good trade-off between usability and performance in lossy mode.

  7. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

    Full Text Available The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.

  8. Vectorial Resilient PC(l) of Order k Boolean Functions from AG-Codes

    Institute of Scientific and Technical Information of China (English)

    Hao CHEN; Liang MA; Jianhua LI

    2011-01-01

    Propagation criteria and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1-4, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave a construction of Boolean functions satisfying PC(l) of order k from binary linear or nonlinear codes. In this paper, algebraic-geometric codes over GF(2^m) are used to modify the Carlet and Kurosawa-Satoh construction to give vectorial resilient Boolean functions satisfying the PC(l) of order k criterion. This new construction is compared with previously known results.

  9. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
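
    The complexity-to-mode mapping can be pictured as a simple budgeted selection: add candidate modes in an offline-derived priority order until the estimated relative encoding time reaches the target. The cost table and priority order below are invented for illustration only:

      # Hypothetical mapping from a complexity target to a prediction-mode set.
      MODE_COST = {"SKIP": 0.05, "MERGE": 0.08, "INTER_2Nx2N": 0.25,
                   "INTER_Nx2N": 0.20, "INTER_2NxN": 0.20, "INTRA": 0.22}

      def select_mode_set(target_fraction, priority):
          """Add modes in priority order until the estimated relative
          encoding time would exceed the complexity budget."""
          chosen, spent = [], 0.0
          for mode in priority:
              if spent + MODE_COST[mode] <= target_fraction:
                  chosen.append(mode)
                  spent += MODE_COST[mode]
          return chosen, spent

      if __name__ == "__main__":
          priority = ["SKIP", "MERGE", "INTER_2Nx2N", "INTRA", "INTER_Nx2N", "INTER_2NxN"]
          print(select_mode_set(0.40, priority))  # roughly a 40% complexity target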

  10. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at block temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding that combines the advantages of wavelet coding (performance, scalability) with acceptable, reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.
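
    The scan-based idea can be illustrated with the simplest temporal wavelet, a Haar lifting step applied to consecutive frame pairs as they arrive, so that only two frames need to be buffered instead of a whole 3D block (a conceptual stand-in, not the transform used in the paper):

      import numpy as np

      def temporal_haar_lift(frame_even, frame_odd):
          high = frame_odd - frame_even   # predict step
          low = frame_even + 0.5 * high   # update step (yields the pair average)
          return low, high

      def scan_based_temporal_transform(frames):
          lows, highs = [], []
          for t in range(0, len(frames) - 1, 2):  # stream through the sequence
              low, high = temporal_haar_lift(frames[t], frames[t + 1])
              lows.append(low)
              highs.append(high)
          return lows, highs

      if __name__ == "__main__":
          frames = [np.full((4, 4), float(t)) for t in range(8)]
          lows, highs = scan_based_temporal_transform(frames)
          print(len(lows), len(highs), float(lows[0][0, 0]), float(highs[0][0, 0]))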

  11. Texture side information generation for distributed coding of video-plus-depth

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Zamarin, Marco

    2013-01-01

    We consider distributed video coding in a monoview video-plus-depth scenario, aiming at coding textures jointly with their corresponding depth stream. Distributed Video Coding (DVC) is a video coding paradigm in which the complexity is shifted from the encoder to the decoder. The Side Information...... components) is strongly correlated, so the additional depth information may be used to generate more accurate SI for the texture stream, increasing the efficiency of the system. In this paper we propose various methods for accurate texture SI generation, comparing them with other state-of-the-art solutions...

  12. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats, the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, the H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  13. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    are successively updating the estimated noise residue for noise modeling and side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate- Distortion (RD) performance of a state-of-the-art Wyner Ziv video codec for the set of test sequences.......Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video...... coding does not match that of conventional video coding yet, mainly due to the quality of side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes...

  14. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  15. Adaptive distributed video coding with correlation estimation using expectation propagation

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  16. Impact of packet losses in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.

  17. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, which is a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.

  18. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, which is a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.

  19. Mutiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video...... codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter bitplane correlation to enhance the bitplane decoding performance. Experimental results...

  20. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects...... influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show...... that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length....

  1. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
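
    The unequal-protection idea can be sketched by covering the more important (earlier) data packets with more parity packets than the later ones; plain XOR parity is used here purely as a stand-in for the scalable parity packets of the paper:

      def xor_parity(packets):
          out = bytearray(len(packets[0]))
          for p in packets:
              for i, b in enumerate(p):
                  out[i] ^= b
          return bytes(out)

      def protect(data_packets, parity_groups):
          """parity_groups: list of index tuples; important packets appear in
          more groups, so they are covered by more parity packets."""
          parities = [xor_parity([data_packets[i] for i in g]) for g in parity_groups]
          return list(data_packets) + parities

      if __name__ == "__main__":
          pkts = [bytes([i] * 8) for i in range(4)]  # four equal-size data packets
          groups = [(0,), (0, 1), (0, 1, 2, 3)]      # packet 0 is the most protected
          print(len(protect(pkts, groups)))          # 4 data + 3 parity packets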

  2. Resilience

    Science.gov (United States)

    Resilience is an important framework for understanding and managing complex systems of people and nature that are subject to abrupt and nonlinear change. The idea of ecological resilience was slow to gain acceptance in the scientific community, taking thirty years to become widel...

  3. Adaptive modeling of sky for video processing and coding applications

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    Video content analysis for still- and moving images can be used for various applications, such as high-level semantic-driven operations or pixel-level contentdependent image manipulation. Within video content analysis, sky regions of an image form visually important objects, for which interesting

  4. Design considerations for view interpolation in a 3D video coding framework

    NARCIS (Netherlands)

    Morvan, Y.; Farin, D.S.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    A 3D video stream typically consists of a set of views capturing simultaneously the same scene. For an efficient transmission of the 3D video, a compression technique is required. In this paper, we describe a coding architecture and appropriate algorithms that enable the compression and

  5. Basic prediction techniques in modern video coding standards

    CERN Document Server

    Kim, Byung-Gyu

    2016-01-01

    This book discusses in detail the basic algorithms of video compression that are widely used in modern video codecs. The authors dissect complicated specifications and present the material in a way that gets readers quickly up to speed, describing video compression algorithms succinctly without going into the mathematical details and technical specifications. For accelerated learning, the hybrid codec structure and the inter- and intra-prediction techniques in MPEG-4, H.264/AVC, and HEVC are discussed together. In addition, the latest research in fast encoder design for HEVC and H.264/AVC is also included.

  6. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong; Jamshaid, K.; Shihada, Basem

    2013-01-01

    are conducted to gain a better understanding of its efficiency, specifically, the impact of the received signal due to different power allocation ratios. Our experimental results show that to maintain high video quality, the power allocated to the base layer

  7. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...... the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...... coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance....

  8. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  9. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, the side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error-resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well in terms of both SI index detection error and bit error rate. PMID:26018504

  10. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

    Full Text Available High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, the side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error-resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well in terms of both SI index detection error and bit error rate.

  11. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for effective and fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB and processing times of 0.139 and 0.124 sec/frame, respectively.

  12. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  13. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of him or her. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance by using an extremely compact code with only 128 bits.
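
    A stripped-down numpy sketch of the covariance-then-binarise pipeline (random hyperplanes stand in for the max-margin learned bits, and the feature dimensions are arbitrary):

      import numpy as np

      def track_covariance(features):
          # features: (n_frames, d) matrix of per-frame descriptors
          return np.cov(features, rowvar=False)

      def binary_code(cov, hyperplanes):
          v = cov[np.triu_indices_from(cov)]  # vectorise the symmetric matrix
          return (hyperplanes @ v > 0).astype(np.uint8)

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          track = rng.normal(size=(40, 16))   # 40 frames, 16-dim features
          cov = track_covariance(track)
          dim = cov[np.triu_indices_from(cov)].size
          planes = rng.normal(size=(128, dim))
          code = binary_code(cov, planes)
          print(code.shape, int(code.sum()))  # a 128-bit track signature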

  14. Bridging Inter-flow and Intra-flow Network Coding for Video Applications

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2013-01-01

    transmission approach to decide how much and when to send redundancy in the network, and a minimalistic feedback mechanism to guarantee delivery of generations of the different flows. Given the delay constraints of video applications, we proposed a simple yet effective coding mechanism, Block Coding On The Fly...

  15. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper...... proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion...

  16. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose, a special software program for the subjective assessment of the quality of all the tested video sequences is developed. It was developed in accordance with Recommendation ITU-T P.910, since it is suitable for the testing of multimedia applications. The obtained results show that in the proposed selective intra prediction and optimized inter prediction algorithm there is a small difference in picture quality (signal-to-noise ratio) between decoded original and modified video sequences.

  17. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    Science.gov (United States)

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.
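
    The ROI/non-ROI otherness can be pictured as a per-CTU QP map in which blocks overlapping the diagnostic ROI get a negative QP offset and the background a positive one; the block size and offsets below are assumptions, not the paper's values:

      def ctu_qp_map(roi_mask, base_qp=32, roi_offset=-6, bg_offset=4, ctu=64):
          """roi_mask: 2D list of 0/1 pixels; returns per-CTU QP values."""
          h, w = len(roi_mask), len(roi_mask[0])
          qp_map = []
          for y in range(0, h, ctu):
              row = []
              for x in range(0, w, ctu):
                  block = [roi_mask[j][x:x + ctu] for j in range(y, min(y + ctu, h))]
                  in_roi = any(any(v for v in line) for line in block)
                  row.append(base_qp + (roi_offset if in_roi else bg_offset))
              qp_map.append(row)
          return qp_map

      if __name__ == "__main__":
          mask = [[1 if 100 <= x < 220 and 60 <= y < 150 else 0 for x in range(320)]
                  for y in range(192)]
          for row in ctu_qp_map(mask):
              print(row)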

  18. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    Directory of Open Access Journals (Sweden)

    Yueying Wu

    Full Text Available High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  19. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau; Shihada, Basem; Pin-Han Ho

    2013-01-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However

  20. Spatial-Aided Low-Delay Wyner-Ziv Video Coding

    Directory of Open Access Journals (Sweden)

    Bo Wu

    2009-01-01

    Full Text Available In distributed video coding, the side information (SI) quality plays an important role in Wyner-Ziv (WZ) frame coding. Usually, SI is generated at the decoder by motion-compensated interpolation (MCI) from the past and future key frames, under the assumption that the motion trajectory between adjacent frames is translational with constant velocity. However, this assumption is not always true and thus the coding efficiency of WZ coding is often unsatisfactory for video with high and/or irregular motion. This situation becomes more serious in low-delay applications, since only motion-compensated extrapolation (MCE) can be applied to yield SI. In this paper, a spatial-aided Wyner-Ziv video coding (SA-WZVC) scheme for low-delay applications is proposed. In SA-WZVC, at the encoder, each WZ frame is coded as in the existing common Wyner-Ziv video coding scheme, and the auxiliary information is additionally coded with low-complexity DPCM. At the decoder, for WZ frame decoding, the auxiliary information is decoded first and then SI is generated with the help of this auxiliary information by spatial-aided motion-compensated extrapolation (SA-MCE). Theoretical analysis proves that, when a good tradeoff between auxiliary information coding and WZ frame coding is achieved, SA-WZVC is able to achieve better rate-distortion performance than conventional MCE-based WZVC without auxiliary information. Experimental results also demonstrate that SA-WZVC can efficiently improve the coding performance of WZVC in low-delay applications.

  1. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Andrey Norkin

    2007-02-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented in the form of a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by 3D-DCT or a hybrid LOT+DCT 3D transform. The coding scheme is targeted at mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

  2. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD based on extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing.
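
    A heavily simplified sketch of the linear-regression variant: predict a per-block JNQD threshold from a few handcrafted features plus the quantisation step, then suppress sub-threshold detail before encoding. The feature set, weights, and preprocessing rule are assumptions for illustration only:

      import numpy as np

      def block_features(block, q_step):
          return np.array([block.std(),                            # texture masking proxy
                           np.abs(np.diff(block, axis=0)).mean(),  # vertical activity
                           block.mean() / 255.0,                   # luminance masking proxy
                           q_step])

      def preprocess(block, q_step, weights, bias):
          threshold = float(weights @ block_features(block, q_step) + bias)
          mean = block.mean()
          detail = block - mean
          detail[np.abs(detail) < threshold] = 0.0  # drop imperceptible detail
          return mean + detail

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          blk = rng.integers(0, 256, size=(8, 8)).astype(float)
          out = preprocess(blk, q_step=20.0,
                           weights=np.array([0.1, 0.05, 2.0, 0.15]), bias=0.5)
          print(round(float(np.abs(out - blk).mean()), 3))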

  3. Investigating the structure preserving encryption of high efficiency video coding (HEVC)

    Science.gov (United States)

    Shahid, Zafar; Puech, William

    2013-02-01

    This paper presents a novel method for the real-time protection of the new emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from the CABAC entropy coding of H.264/AVC. In the CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for the binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture, and objects.
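
    The length-preserving property of the CFB step can be demonstrated with a few lines of Python using the third-party cryptography package; the buffer of "encryptable" bins is a toy byte string here, and the paper's context-aware bin-selection rules are not reproduced:

      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      def encrypt_bins(plaintext_bins: bytes, key: bytes, iv: bytes) -> bytes:
          cipher = Cipher(algorithms.AES(key), modes.CFB(iv))
          enc = cipher.encryptor()
          ciphertext = enc.update(plaintext_bins) + enc.finalize()
          assert len(ciphertext) == len(plaintext_bins)  # bit rate is preserved
          return ciphertext

      if __name__ == "__main__":
          key, iv = os.urandom(16), os.urandom(16)
          bins = bytes([0b10110010, 0b01111000, 0b11001100])  # toy binstring payload
          print(encrypt_bins(bins, key, iv).hex())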

  4. Motion Vector Sharing and Bitrate Allocation for 3D Video-Plus-Depth Coding

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-08-01

    Full Text Available The video-plus-depth data representation uses a regular texture video enriched with the so-called depth map, providing the depth distance for each pixel. The compression efficiency is usually higher for the smooth, gray-level data representing the depth map than for classical video texture. However, improvements in coding efficiency are still possible, taking into account the fact that the video and depth map sequences are strongly correlated. Classically, the correlation between the texture motion vectors and the depth map motion vectors is not exploited in the coding process. The aim of this paper is to reduce the amount of information needed to describe the motion of the texture video and of the depth map sequences by sharing one common motion vector field. Furthermore, in the literature, the bitrate control scheme generally fixes the depth map bitrate at 20% of the texture stream bitrate. However, this fixed percentage can affect the depth coding efficiency, and it should also depend on the content of each sequence. We propose a new bitrate allocation strategy between the texture and its associated per-pixel depth information. We provide a comparative analysis to measure the quality of the resulting 3D+t sequences.

  5. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  6. Programming Video Games and Simulations in Science Education: Exploring Computational Thinking through Code Analysis

    Science.gov (United States)

    Garneli, Varvara; Chorianopoulos, Konstantinos

    2018-01-01

    Various aspects of computational thinking (CT) could be supported by educational contexts such as simulations and video-games construction. In this field study, potential differences in student motivation and learning were empirically examined through students' code. For this purpose, we performed a teaching intervention that took place over five…

  7. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with the low-complexity envelope detection solution is investigated. We present both experimental studies and simulation of high-quality, high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding we satisfy low complexity and low delay constraints, while preserving superb video quality after a significantly extended wireless distance. © 2011 Optical Society of America.

  8. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is given as a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
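
    The bit-partitioning idea can be illustrated with a toy search over how many of the available bits go to source coding versus channel protection, minimizing an expected distortion. The rate-distortion and loss models below are made-up placeholders, not the paper's 3D subband codec.

        import numpy as np

        def expected_distortion(src_bits, chan_bits, p_bit_error=1e-3, d_loss=100.0):
            d_source = 1.0 / (1.0 + src_bits)                       # toy monotone R-D model
            p_fail = np.exp(-0.05 * chan_bits) * (1 - (1 - p_bit_error) ** src_bits)
            return p_fail * d_loss + (1 - p_fail) * d_source         # E[D] under the toy channel model

        def best_partition(total_bits, step=64):
            # Exhaustively try every split of the budget between source and channel bits.
            candidates = [(expected_distortion(s, total_bits - s), s)
                          for s in range(step, total_bits, step)]
            d, s = min(candidates)
            return s, total_bits - s, d

        src, chan, dist = best_partition(4096)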

  9. Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video

    Science.gov (United States)

    Boyce, Jill; Xu, Qian

    2017-09-01

    Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360°×180° scene, can be encoded using conventional video compression standards once it has been projection mapped to a 2D rectangular format. The equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding using HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experimental results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15 and 11.9% gain using JEM6.0, with an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
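
    A rough sketch of the pre-encoding rotation is shown below: each output pixel of the equirectangular frame is mapped to a unit-sphere direction, rotated by yaw and pitch, and resampled from the source frame. The rotation convention and nearest-neighbour resampling are simplifying assumptions; an encoder would also need to signal the inverse rotation (e.g., in an SEI message) as described above.

        import numpy as np

        def rotate_equirect(frame: np.ndarray, yaw_deg: float, pitch_deg: float) -> np.ndarray:
            h, w = frame.shape[:2]
            v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
            lon = (u / w) * 2 * np.pi - np.pi          # longitude in [-pi, pi)
            lat = np.pi / 2 - (v / h) * np.pi          # latitude in [-pi/2, pi/2]
            xyz = np.stack([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)], -1)
            yaw, pitch = np.deg2rad(yaw_deg), np.deg2rad(pitch_deg)
            Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0], [np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
            Ry = np.array([[np.cos(pitch), 0, np.sin(pitch)], [0, 1, 0], [-np.sin(pitch), 0, np.cos(pitch)]])
            x, y, z = np.moveaxis(xyz @ (Rz @ Ry).T, -1, 0)
            # Map the rotated directions back to source pixel coordinates (nearest neighbour).
            src_u = ((np.arctan2(y, x) + np.pi) / (2 * np.pi) * w).astype(int) % w
            src_v = ((np.pi / 2 - np.arcsin(np.clip(z, -1, 1))) / np.pi * h).astype(int) % h
            return frame[src_v, src_u]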

  10. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most online video resources are encoded in the H.264/AVC format. Smoother video transmission can be obtained if these resources are encoded in the newest international video coding standard, High Efficiency Video Coding (HEVC). In order to improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the intraprediction, interprediction, and motion vector (MV) information in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the interpredicted regions in HEVC overlap those in H.264/AVC. Therefore, intraprediction in HEVC can be skipped for regions that are interpredicted in H.264/AVC, reducing coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates motion estimation in HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with only a small rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.

  11. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Distributed video coding (DVC) is a new paradigm for video compression based on the information theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion, but it is still inferior to the Inter No Motion and Inter Motion modes.

  12. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms, proving their low complexity and usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within the MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
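
    A much simplified, compressed-domain version of this tracking idea is sketched below: the region-of-interest is shifted by the mean of the encoder's motion vectors whose blocks fall inside it. The motion-vector sign convention, block size, and the absence of ROI resizing are simplifying assumptions.

        import numpy as np

        def track_roi(roi, mv_field, block=16):
            # roi = (x, y, w, h); mv_field[j, i] = (dx, dy) for the block at row j, column i.
            x, y, w, h = roi
            i0, i1 = x // block, (x + w) // block + 1
            j0, j1 = y // block, (y + h) // block + 1
            inside = mv_field[j0:j1, i0:i1].reshape(-1, 2)
            if inside.size == 0:
                return roi                                   # no motion information: keep the region
            dx, dy = inside.mean(axis=0)
            return (int(round(x + dx)), int(round(y + dy)), w, h)

        # Example: an 8x8-block motion-vector field with uniform motion of (+3, -1) pixels.
        mvs = np.tile(np.array([3.0, -1.0]), (8, 8, 1))
        print(track_roi((32, 32, 48, 32), mvs))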

  13. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  14. Video coding standards AVS China, H.264/MPEG-4 PART 10, HEVC, VP6, DIRAC and VC-1

    CERN Document Server

    Rao, K R; Hwang, Jae Jeong

    2014-01-01

    Review by Ashraf A. Kassim, Professor, Department of Electrical & Computer Engineering, and Associate Dean, School of Engineering, National University of Singapore.     The book consists of eight chapters of which the first two provide an overview of various video & image coding standards, and video formats. The next four chapters present in detail the Audio & video standard (AVS) of China, the H.264/MPEG-4 Advanced video coding (AVC) standard, High efficiency video coding (HEVC) standard and the VP6 video coding standard (now VP10) respectively. The performance of the wavelet based Dirac video codec is compared with H.264/MPEG-4 AVC in chapter 7. Finally in chapter 8, the VC-1 video coding standard is presented together with VC-2 which is based on the intra frame coding of Dirac and an outline of a H.264/AVC to VC-1 transcoder.   The authors also present and discuss relevant research literature such as those which document improved methods & techniques, and also point to other related reso...

  15. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  16. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  17. A Complete Video Coding Chain Based on Multi-Dimensional Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2010-09-01

    Full Text Available The paper deals with a video compression method based on the multi-dimensional discrete cosine transform. In the text, the encoder and decoder architectures, including the definitions of all mathematical operations such as the forward and inverse 3-D DCT, quantization, and thresholding, are presented. According to the particular number of currently processed pictures, new quantization tables and entropy code dictionaries are proposed in the paper. The practical properties of the 3-D DCT coding chain compared with modern video compression methods (such as H.264 and WebM) and the computing complexity are presented as well. It is shown that the best compression properties are achieved by the more complex H.264 codec. On the other hand, the computing complexity, especially on the encoding side, is lower for the 3-D DCT method.
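
    As a toy illustration of a 3-D DCT coding step (not the paper's quantization tables or entropy dictionaries), the sketch below transforms a small group of pictures with a 3-D DCT, applies thresholding and flat uniform quantization, and reconstructs the frames.

        import numpy as np
        from scipy.fft import dctn, idctn

        def code_gop(gop: np.ndarray, qstep: float = 8.0, threshold: float = 4.0):
            # gop: array of shape (frames, height, width) holding luma samples.
            coeffs = dctn(gop.astype(float), norm="ortho")          # forward 3-D DCT
            coeffs[np.abs(coeffs) < threshold] = 0.0                # drop perceptually small coefficients
            q = np.round(coeffs / qstep)                            # uniform quantization (to be entropy coded)
            recon = idctn(q * qstep, norm="ortho")                  # decoder side: dequantize + inverse 3-D DCT
            return q, recon

        gop = np.random.default_rng(0).integers(0, 256, size=(4, 16, 16))
        quantized, reconstructed = code_gop(gop)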

  18. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Full Text Available Distributed video coding (DVC) is a video coding paradigm allowing low-complexity encoding for emerging applications such as wireless video surveillance. Side information (SI) generation is a key function in the DVC decoder and plays a key role in determining the performance of the codec. This paper proposes an improved SI generation method for DVC which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ) frames, based on initial SI obtained by motion-compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate-distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealment alone.

  19. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) the quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability-only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  20. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion. The three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB, respectively. For a GOP size of 2...

  1. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video, and rapid evolution of coding algorithm and VLSI technology. Video transmission will be part of the broadband-integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independency, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  2. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve noise modeling. The optical flow technique is exploited at the decoder side to compensate weaknesses of block based methods, when using motion-compensation to generate side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...

  3. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    Science.gov (United States)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into biregions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.

  4. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  5. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    Science.gov (United States)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches the good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement it in the hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
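
    To make the search structure concrete, the following single-block sketch performs a TZ-like zonal (expanding diamond) search around a predictor followed by a small refinement; the parallel scheduling across PU partitions and the redundant-point removal described above are omitted, and the SAD-only cost is a simplification.

        import numpy as np

        def sad(cur, ref, x, y, bs):
            # Sum of absolute differences between the current block and a candidate in the reference frame.
            h, w = ref.shape
            if x < 0 or y < 0 or x + bs > w or y + bs > h:
                return np.inf
            return np.abs(cur.astype(int) - ref[y:y+bs, x:x+bs].astype(int)).sum()

        def tz_like_search(cur, ref, px, py, bs=16, max_dist=64):
            best = (sad(cur, ref, px, py, bs), px, py)
            d = 1
            while d <= max_dist:                                   # zonal phase: expanding diamond
                for dx, dy in ((d, 0), (-d, 0), (0, d), (0, -d)):
                    best = min(best, (sad(cur, ref, px + dx, py + dy, bs), px + dx, py + dy))
                d *= 2
            _, bx, by = best
            for dy in range(-1, 2):                                # refinement phase around the best point
                for dx in range(-1, 2):
                    best = min(best, (sad(cur, ref, bx + dx, by + dy, bs), bx + dx, by + dy))
            return best[1] - px, best[2] - py, best[0]             # motion vector and its SAD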

  6. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.

  7. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K×4K video format at 132 fps.

  8. An Effective Transform Unit Size Decision Method for High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Chou-Chen Wang

    2014-01-01

    Full Text Available High efficiency video coding (HEVC) is the latest video coding standard. HEVC can achieve higher compression performance than previous standards such as MPEG-4, H.263, and H.264/AVC. However, HEVC requires enormous computational complexity in the encoding process due to its quadtree structure. In order to reduce the computational burden of the HEVC encoder, an early transform unit (TU) decision algorithm (ETDA) based on the number of nonzero DCT coefficients (called NNZ-ETDA) is adopted to prune the residual quadtree (RQT) at an early stage and accelerate the encoding process. However, the NNZ-ETDA cannot effectively reduce the computational load for sequences with active motion or rich texture. Therefore, in order to further improve the performance of NNZ-ETDA, we propose an adaptive RQT-depth decision for NNZ-ETDA (called ARD-NNZ-ETDA) by exploiting the characteristics of high temporal-spatial correlation that exist in natural video sequences. Simulation results show that the proposed method can achieve a time improvement ratio (TIR) of about 61.26%~81.48% when compared to the HEVC test model 8.1 (HM 8.1), with insignificant loss of image quality. Compared with the NNZ-ETDA, the proposed method can further achieve an average TIR of about 8.29%~17.92%.
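
    A minimal sketch of the underlying NNZ test is given below: the residual block is transformed and quantized, and further RQT splitting is pruned when the count of nonzero coefficients falls under a threshold. The fixed threshold and the use of a plain DCT stand in for the adaptive depth rule of ARD-NNZ-ETDA.

        import numpy as np
        from scipy.fft import dctn

        def nnz_early_termination(residual_block: np.ndarray, qstep: float, nnz_threshold: int = 3) -> bool:
            # Return True when RQT splitting of this block can be pruned early.
            coeffs = np.round(dctn(residual_block.astype(float), norm="ortho") / qstep)
            return int(np.count_nonzero(coeffs)) <= nnz_threshold

        # Example: a nearly flat residual block is pruned, a textured one is not.
        flat = np.ones((8, 8)) * 0.3
        textured = np.random.default_rng(1).normal(0, 20, (8, 8))
        print(nnz_early_termination(flat, qstep=8.0), nnz_early_termination(textured, qstep=8.0))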

  9. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    Science.gov (United States)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.

  10. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    Directory of Open Access Journals (Sweden)

    Xin Li

    2014-06-01

    Full Text Available Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.

  11. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    Science.gov (United States)

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

    In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in the H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8 , by referring to the best temporal prediction type of 16 × 16. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for the block partitions smaller than 8 × 8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8 × 8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden in calculating the BI prediction. As compared to the JSVM 9.11 software, our method saves the encoding time from 48% to 67% for a large variety of test videos over a wide range of coding bit-rates and has only a minor coding performance loss. © 2011 IEEE

  12. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...

  13. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients to be used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied on the stego images to enhance their visual quality. Experimental results show that the proposed technique provides larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
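
    The following sketch shows the basic bit-plane embedding step on an array of already quantized integer wavelet coefficients: secret bits overwrite the least-significant bit plane and are read back on extraction. Which bit planes are actually unused at a given JPEG2000 rate, and the OPAP post-processing, are outside this toy example.

        import numpy as np

        def embed_lsb(coeffs: np.ndarray, secret_bits: str) -> np.ndarray:
            flat = coeffs.astype(np.int32).ravel().copy()
            for i, bit in enumerate(secret_bits):
                flat[i] = (flat[i] & ~1) | int(bit)           # overwrite the LSB of coefficient i
            return flat.reshape(coeffs.shape)

        def extract_lsb(coeffs: np.ndarray, n_bits: int) -> str:
            return "".join(str(int(c) & 1) for c in coeffs.ravel()[:n_bits])

        cover = np.random.default_rng(2).integers(-64, 64, size=(8, 8))
        stego = embed_lsb(cover, "10110100")
        assert extract_lsb(stego, 8) == "10110100"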

  14. The future of 3D and video coding in mobile and the internet

    Science.gov (United States)

    Bivolarski, Lazar

    2013-09-01

    The current Internet success has already changed our social and economic world and is still continuing to revolutionize information exchange. The exponential increase in the amount and types of data that are currently exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status and trends in the design of solutions and research activities in the future Internet from the point of view of managing the growth of bandwidth requirements and the complexity of the multimedia that is being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the arrival, expected in the near future, of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling the new media architectures as well as the new applications and services. Common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.

  15. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    Directory of Open Access Journals (Sweden)

    Modestino James W

    2004-01-01

    Full Text Available Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC coding scheme employing Reed-Solomon (RS codes and rate-compatible punctured convolutional (RCPC codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.

  16. The science of ethics: Deception, the resilient self, and the APA code of ethics, 1966-1973.

    Science.gov (United States)

    Stark, Laura

    2010-01-01

    This paper has two aims. The first is to shed light on a remarkable archival source, namely survey responses from thousands of American psychologists during the 1960s in which they described their contemporary research practices and discussed whether the practices were "ethical." The second aim is to examine the process through which the American Psychological Association (APA) used these survey responses to create principles on how psychologists should treat human subjects. The paper focuses on debates over whether "deception" research was acceptable. It documents how members of the committee that wrote the principles refereed what was, in fact, a disagreement between two contemporary research orientations. The paper argues that the ethics committee ultimately built the model of "the resilient self" into the APA's 1973 ethics code. At the broadest level, the paper explores how prevailing understandings of human nature are written into seemingly universal and timeless codes of ethics. © 2010 Wiley Periodicals, Inc.

  17. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    Science.gov (United States)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and interview data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve its quality, spatial view compensation/prediction in the Zernike moment domain is applied. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated in terms of rate-distortion performance for different inter-view and temporal estimation quality conditions.

  18. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
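
    To illustrate the flavour of the optimization (with a brute-force search in place of an ILP solver), the toy sketch below assigns one MCS per SVC layer to maximize delivered utility under a time-slot budget; all rates, layer sizes, and user counts are invented illustrative numbers, not values from the paper.

        from itertools import product

        MCS_RATE = {0: 1.0, 1: 2.0, 2: 4.0}          # Mbit/s per time slot for each MCS (toy values)
        USERS_DECODING = {0: 100, 1: 70, 2: 30}       # users whose channel supports at least this MCS
        LAYER_BITS = [2.0, 3.0, 5.0]                  # Mbit per GOP for base + two enhancement layers
        SLOT_BUDGET = 6.0

        def best_assignment():
            best = (0.0, None)
            for mcs in product(MCS_RATE, repeat=len(LAYER_BITS)):
                slots = sum(b / MCS_RATE[m] for b, m in zip(LAYER_BITS, mcs))
                if slots > SLOT_BUDGET or list(mcs) != sorted(mcs):
                    continue                           # enhancement layers must not be more robust than the base
                # A layer is useful to a user only if all lower layers are also decodable.
                utility = sum(USERS_DECODING[max(mcs[: i + 1])] for i in range(len(mcs)))
                best = max(best, (utility, mcs))
            return best

        print(best_assignment())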

  19. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Full Text Available The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC with adaptive modulation and coding (AMC provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.

  20. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for rate, distortion, and the slope of the R-D curve for inter and intra frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  1. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    Science.gov (United States)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially with limited local memory resources. The reduction of memory access can substantially reduce the notorious power consumption. The key to reducing memory accesses is a center-biased algorithm that performs the motion vector (MV) search with the minimum search data. While considering data reusability, the proposed dual-search-windowing (DSW) approach uses the secondary windowing as an option per searching necessity. By doing so, the loading of search windows can be alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.

  2. Bit-depth scalable video coding with new inter-layer prediction

    Directory of Open Access Journals (Sweden)

    Chiang Jui-Chiu

    2011-01-01

    Full Text Available The rapid advances in the capture and display of high-dynamic range (HDR) image/video content make it imperative to develop efficient compression techniques to deal with the huge amounts of HDR data. Since HDR devices are not yet widespread, compatibility problems should be considered when rendering HDR content on conventional display devices. To this end, in this study, we propose three H.264/AVC-based bit-depth scalable video coding schemes, called the LH scheme (low bit-depth to high bit-depth), the HL scheme (high bit-depth to low bit-depth), and the combined LH-HL scheme, respectively. The schemes efficiently exploit the high correlation between the high and the low bit-depth layers at the macroblock (MB) level. Experimental results demonstrate that the HL scheme outperforms the other two schemes in some scenarios. Moreover, it achieves up to 7 dB improvement over the simulcast approach when the high and low bit-depth representations are 12 bits and 8 bits, respectively.

  3. Video coding and decoding devices and methods preserving PPG relevant information

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a video encoding device (10, 10', 10") and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder

  4. Video coding and decoding devices and methods preserving ppg relevant information

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a video encoding device (10, 10', 10'') and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder

  5. Efficient MPEG-2 to H.264/AVC Transcoding of Intra-Coded Video

    Directory of Open Access Journals (Sweden)

    Vetro Anthony

    2007-01-01

    Full Text Available This paper presents an efficient transform-domain architecture and corresponding mode decision algorithms for transcoding intra-coded video from MPEG-2 to H.264/AVC. Low complexity is achieved in several ways. First, our architecture employs direct conversion of the transform coefficients, which eliminates the need for the inverse discrete cosine transform (DCT and forward H.264/AVC transform. Then, within this transform-domain architecture, we perform macroblock-based mode decisions based on H.264/AVC transform coefficients, which is possible using a novel method of calculating distortion in the transform domain. The proposed method for distortion calculation could be used to make rate-distortion optimized mode decisions with lower complexity. Compared to the pixel-domain architecture with rate-distortion optimized mode decision, simulation results show that there is a negligible loss in quality incurred by the direct conversion of transform coefficients and the proposed transform-domain mode decision algorithms, while complexity is significantly reduced. To further reduce the complexity, we also propose two fast mode decision algorithms. The first algorithm ranks modes based on a simple cost function in the transform domain, then computes the rate-distortion optimal mode from a reduced set of ranked modes. The second algorithm exploits temporal correlations in the mode decision between temporally adjacent frames. Simulation results show that these algorithms provide additional computational savings over the proposed transform-domain architecture while maintaining virtually the same coding efficiency.

  6. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC multimedia bit streams, such as H.264. It is based on code word error diffusion and variable size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is ubiquitous to the end users and can be deployed at any node in the network. It provides different levels of security, with encrypted data volume fluctuating between 5.5–17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plain text attacks.
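
    Two of the building blocks mentioned above, a chaos-based pseudo-random number generator and a key-driven shuffle of variable-size segments, can be sketched as follows; the single logistic map and plain Fisher-Yates shuffle are stand-ins for the coupled chaotic maps and codeword diffusion of the actual scheme.

        def logistic_prng(seed: float, r: float = 3.99):
            # Chaotic logistic map x -> r*x*(1-x); yields an integer stream derived from its orbit.
            x = seed
            while True:
                x = r * x * (1.0 - x)
                yield int(x * 2**32) & 0xFFFFFFFF

        def shuffle_segments(segments, seed=0.61803):
            # Key-dependent permutation of variable-size bitstream segments (Fisher-Yates).
            rng = logistic_prng(seed)
            out = list(segments)
            for i in range(len(out) - 1, 0, -1):
                j = next(rng) % (i + 1)
                out[i], out[j] = out[j], out[i]
            return out

        segments = [b"\x01\x02", b"\x03", b"\x04\x05\x06", b"\x07"]
        print(shuffle_segments(segments))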

  7. A Novel Error Resilient Scheme for Wavelet-based Image Coding Over Packet Networks

    OpenAIRE

    WenZhu Sun; HongYu Wang; DaXing Qian

    2012-01-01

    This paper presents a robust transmission strategy for wavelet-based scalable bit streams over packet erasure channels. By taking advantage of bit plane coding and multiple description coding, the proposed strategy adopts layered multiple description coding (LMDC) for embedded wavelet coders to improve the error resilience of the important bit planes in the sense of the D(R) function. Then, the post-compression rate-distortion (PCRD) optimization process is used to impro...

  8. Flood-resilient waterfront development in New York City: bridging flood insurance, building codes, and flood zoning.

    Science.gov (United States)

    Aerts, Jeroen C J H; Botzen, W J Wouter

    2011-06-01

    Waterfronts are attractive areas for many, often competing, uses in New York City (NYC) and are seen as multifunctional locations for economic, environmental, and social activities on the interface between land and water. The NYC waterfront plays a crucial role as a first line of flood defense and in managing flood risk and protecting the city from future climate change and sea-level rise. The city of New York has embarked on a climate adaptation program (PlaNYC) outlining the policies needed to anticipate the impacts of climate change. As part of this policy, the Department of City Planning has recently prepared Vision 2020: New York City Comprehensive Waterfront Plan for the over 500 miles of NYC waterfront (NYC-DCP, 2011). An integral part of the vision is to improve resilience to climate change and sea-level rise. This study seeks to provide guidance for advancing the goals of NYC Vision 2020 by assessing how flood insurance, flood zoning, and building code policies can contribute to waterfront development that is more resilient to climate change. © 2011 New York Academy of Sciences.

  9. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    Science.gov (United States)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

    3D-High Efficiency Video Coding (3D-HEVC) has introduced tools to obtain higher efficiency in 3-D video coding, most of which are related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding of edge regions in depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both the encoder and decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements and a hardware design targeting the most efficient among these algorithms are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing up to 78.8% of the wedgelet memory without degrading the encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces almost 75% of the power dissipation when compared to the standard approach.

  10. Hide and Seek: Exploiting and Hardening Leakage-Resilient Code Randomization

    Science.gov (United States)

    2016-05-30

    HMACs generated using 128-bit AES encryption. We do not use AES encryption to generate HMACs due to its high overhead; the authors of CCFI report... execute-only permissions on memory accesses, (ii) code pointer hiding (e.g., indirection or encryption), and (iii) decoys (e.g., booby traps). Among... following techniques: they (a) enforce execute-only permissions on code pages to mitigate direct information leakage, (b) introduce an encryption or...

  11. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which relies on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods for low bitrate transmission.
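
    As a rough stand-in for the pipeline described above, the sketch below clusters flattened image patches with K-means and fits a per-cluster rank-1 PCA as the linear encoder/decoder pair (a trained linear autoencoder's optimum spans the same principal subspace); the cluster count and patch size are arbitrary choices for illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        def fit_patch_codec(patches: np.ndarray, n_clusters: int = 8):
            # patches: (N, patch_size) matrix of flattened patches.
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(patches)
            codecs = [PCA(n_components=1).fit(patches[km.labels_ == c]) for c in range(n_clusters)]
            return km, codecs

        def encode_decode(patch: np.ndarray, km, codecs):
            c = int(km.predict(patch.reshape(1, -1))[0])            # pick the cluster (side information)
            code = codecs[c].transform(patch.reshape(1, -1))         # 1-D representation of the patch
            return c, code, codecs[c].inverse_transform(code).ravel()

        patches = np.random.default_rng(3).normal(size=(512, 64))    # e.g. 8x8 patches, flattened
        km, codecs = fit_patch_codec(patches)
        cluster, code, recon = encode_decode(patches[0], km, codecs)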

  12. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Dai Qionghai

    2010-01-01

    Full Text Available We propose a Stereoscopic Visual Attention (SVA) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region of interest (ROI) is extracted based on the saliency maps of the SVA. Both objective and subjective evaluations of the extracted ROIs indicate that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention cues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by dB at the cost of insensitive image quality degradation of the background image.
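
    The regional bit allocation step can be pictured with a minimal sketch: given a per-macroblock saliency map produced by some visual attention model, macroblocks above a threshold form the ROI and receive a larger share of the frame budget. The threshold and weighting rule below are assumptions for illustration, not the paper's allocation formula.

```python
import numpy as np

def allocate_bits(saliency, frame_budget, roi_threshold=0.6, roi_weight=3.0):
    """Distribute a frame's bit budget over macroblocks.

    saliency      : 2-D array of per-macroblock saliency in [0, 1]
                    (e.g. from a stereoscopic visual attention model).
    frame_budget  : total bits available for the frame.
    roi_threshold : macroblocks above this saliency form the ROI.
    roi_weight    : how much more a ROI macroblock weighs than a background one.
    """
    roi = saliency >= roi_threshold
    weights = np.where(roi, roi_weight, 1.0)
    bits = frame_budget * weights / weights.sum()
    return bits, roi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sal = rng.random((18, 22))            # e.g. a 352x288 frame has 22x18 macroblocks
    bits, roi = allocate_bits(sal, frame_budget=200_000)
    print(f"ROI macroblocks: {roi.sum()}, bits per ROI MB: {bits[roi].mean():.0f}, "
          f"bits per background MB: {bits[~roi].mean():.0f}")
```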

  13. Using game theory for perceptual tuned rate control algorithm in video coding

    Science.gov (United States)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual property. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.

  14. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available, and it takes no account of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes into account the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
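
    For readers unfamiliar with the conventional formulation being contrasted here, rate-distortion optimized mode decision minimizes J = D + lambda * R over the candidate modes, with lambda conventionally tied to the quantization step. The sketch below shows that baseline; the H.264-style lambda(QP) rule is included purely as an illustration and is exactly the kind of quantizer-dependent formula that the proposed content-adaptive method replaces for wavelet-based SVC.

```python
def lagrange_multiplier(qp, c=0.85):
    """Conventional hybrid-codec rule of thumb: lambda grows with the quantization step.
    (H.264-style lambda = c * 2^((QP - 12) / 3); shown only for illustration.)"""
    return c * 2.0 ** ((qp - 12) / 3.0)

def rdo_mode_decision(candidates, lam):
    """Pick the coding mode minimizing J = D + lambda * R.

    candidates: list of (mode_name, distortion, rate_bits) tuples produced by
                trial-encoding a block in each mode.
    """
    return min(candidates, key=lambda m: m[1] + lam * m[2])

if __name__ == "__main__":
    lam = lagrange_multiplier(qp=27)
    modes = [("INTRA_4x4", 1500.0, 96), ("INTER_16x16", 1900.0, 40), ("SKIP", 2600.0, 1)]
    best = rdo_mode_decision(modes, lam)
    print(f"lambda = {lam:.2f}, chosen mode: {best[0]}")
```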

  15. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify ... of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on the power-distortion trade-off. We proposed an approach ... for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing a decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including ...

  16. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Saponara Sergio

    2004-01-01

    Full Text Available The advanced video codec (AVC standard, recently defined by a joint video team (JVT of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  17. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.
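
    Self-similarity compensated prediction can be illustrated as a block search constrained to the already-reconstructed area of the same frame, i.e. the spatial counterpart of motion compensation. The toy full search below is only meant to convey that idea; the block size, search range, and raster-scan availability rule are assumptions, not the authors' scheme.

```python
import numpy as np

def self_similarity_predict(recon, top, left, bsize, search=32):
    """Predict the block at (top, left) from previously reconstructed samples of
    the same frame by full search over a causal window above/left of the block.
    (In this toy, `recon` already contains the block; a real codec would not.)"""
    target = recon[top:top + bsize, left:left + bsize]
    best_cost, best_pred = None, None
    for r in range(max(0, top - search), top + 1):
        for c in range(max(0, left - search), left + search + 1):
            # Candidate must lie fully inside the already-coded (causal) area.
            if r + bsize > top and c + bsize > left:
                continue
            if r + bsize > recon.shape[0] or c + bsize > recon.shape[1]:
                continue
            cand = recon[r:r + bsize, c:c + bsize]
            cost = np.abs(cand.astype(int) - target.astype(int)).sum()  # SAD
            if best_cost is None or cost < best_cost:
                best_cost, best_pred = cost, cand
    return best_pred, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    pred, cost = self_similarity_predict(frame, top=40, left=40, bsize=8)
    print("best SAD within the causal search window:", cost)
```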

  18. A Joint Watermarking and ROI Coding Scheme for Annotating Traffic Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Su Po-Chyi

    2010-01-01

    Full Text Available We propose a new application of information hiding by employing the digital watermarking techniques to facilitate the data annotation in traffic surveillance videos. There are two parts in the proposed scheme. The first part is the object-based watermarking, in which the information of each vehicle collected by the intelligent transportation system will be conveyed/stored along with the visual data via information hiding. The scheme is integrated with H.264/AVC, which is assumed to be adopted by the surveillance system, to achieve an efficient implementation. The second part is a Region of Interest (ROI rate control mechanism for encoding traffic surveillance videos, which helps to improve the overall performance. The quality of vehicles in the video will be better preserved and a good rate-distortion performance can be attained. Experimental results show that this potential scheme works well in traffic surveillance videos.

  19. Efficient video coding integrating MPEG-2 and picture-rate conversion

    NARCIS (Netherlands)

    Bruin, de F.J.; Bruls, W.H.A.; Burazerovic, D.; Haan, de G.

    2002-01-01

    We present an MPEG-2 compliant video codec using picture-rate upconversion during decoding. The upconversion autonomously regenerates major parts of frames without vectorial and residual data. Consequently, the bitrate is greatly reduced.

  20. Accelerating Wavelet-Based Video Coding on Graphics Hardware using CUDA

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Roerdink, Jos B.T.M.; Jalba, Andrei C.; Zinterhof, P; Loncaric, S; Uhl, A; Carini, A

    2009-01-01

    The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. This transform, by means of the lifting scheme, can be performed in a memory and computation efficient way on modern, programmable GPUs, which can be regarded as massively

  1. Temporal signal energy correction and low-complexity encoder feedback for lossy scalable video coding

    NARCIS (Netherlands)

    Loomans, M.J.H.; Koeleman, C.J.; With, de P.H.N.

    2010-01-01

    In this paper, we address two problems found in embedded implementations of Scalable Video Codecs (SVCs): the temporal signal energy distribution and frame-to-frame quality fluctuations. The unequal energy distribution between the low- and high-pass band with integer-based wavelets leads to

  2. An experimental digital consumer recorder for MPEG-coded video signals

    NARCIS (Netherlands)

    Saeijs, R.W.J.J.; With, de P.H.N.; Rijckaert, A.M.A.; Wong, C.

    1995-01-01

    The concept and real-time implementation of an experimental home-use digital recorder is presented, capable of recording MPEG-compressed video signals. The system has small recording mechanics based on the DVC standard and it uses MPEG compression for trick-mode signals as well

  3. Accelerating wavelet-based video coding on graphics hardware using CUDA

    NARCIS (Netherlands)

    Laan, van der W.J.; Roerdink, J.B.T.M.; Jalba, A.C.; Zinterhof, P.; Loncaric, S.; Uhl, A.; Carini, A.

    2009-01-01

    The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. This transform, by means of the lifting scheme, can be performed in a memory and computation efficient way on modern, programmable GPUs, which can be regarded as massively

  4. Temporal Scalability through Adaptive -Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    Full Text Available This paper presents different structures that use adaptive -band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.
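
    The shuffling idea can be made concrete with a dyadic example: if frames are assigned to temporal layers and coded lowest layer first, a decoder can drop the highest layers to halve the frame rate repeatedly while the bitstream remains standard-compliant. The dyadic pattern below is only one of the many possible shuffling patterns mentioned above and is shown as an assumption-laden illustration, not the paper's exact construction.

```python
def temporal_layers(gop_size):
    """Assign each frame in a dyadic GOP to a temporal layer.
    Layer 0 alone gives 1/gop_size of the frame rate; dropping the highest
    remaining layer halves the frame rate each time."""
    layers = {0: 0}
    level, step = 1, gop_size
    while step > 1:
        for i in range(step // 2, gop_size, step):
            layers[i] = level
        level += 1
        step //= 2
    return [layers[i] for i in range(gop_size)]

def shuffle_order(gop_size):
    """Coding (shuffled) order: lower temporal layers first."""
    layers = temporal_layers(gop_size)
    return sorted(range(gop_size), key=lambda i: (layers[i], i))

if __name__ == "__main__":
    print(temporal_layers(8))   # [0, 3, 2, 3, 1, 3, 2, 3]
    print(shuffle_order(8))     # [0, 4, 2, 6, 1, 3, 5, 7]
```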

  5. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    Science.gov (United States)

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

    Abstract Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  6. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    Science.gov (United States)

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.
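
    The reliability figures quoted above are Cohen's kappa values, which measure agreement between the two coders after correcting for chance agreement: kappa = (p_o - p_e) / (1 - p_e). A minimal computation is sketched below; the example ratings are made up.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
    b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
    print(f"kappa = {cohens_kappa(a, b):.2f}")   # 0.47 for these made-up ratings
```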

  7. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    Full Text Available One of the key requirements for mobile devices is to provide high-performance computing at lower power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to the system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up memory access and data transfer among memories. Moreover, an efficient data reuse design is proposed for the deblocking filter video processing unit to reduce the memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of hardware processing cores. Finally, based on parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder due to the proposed techniques.

  8. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.

  9. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
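
    The intra/inter choice for binary descriptors can be sketched as follows: a descriptor is either coded on its own or as the XOR residual against the co-located descriptor of the previous frame, whichever is estimated to be cheaper. Using the empirical bit entropy as the cost is a simplification of the actual entropy coder, and the descriptor length is an arbitrary placeholder.

```python
import numpy as np

def entropy_bits(bits):
    """Estimated cost (in bits) of coding a binary vector with an ideal
    memoryless entropy coder."""
    p = bits.mean()
    if p in (0.0, 1.0):
        return 0.0
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return h * bits.size

def code_descriptor(current, previous=None):
    """Choose intra (the descriptor itself) or inter (XOR residual) coding."""
    intra_cost = entropy_bits(current)
    if previous is None:
        return "intra", intra_cost
    residual = np.bitwise_xor(current, previous)
    inter_cost = entropy_bits(residual)
    return ("inter", inter_cost) if inter_cost < intra_cost else ("intra", intra_cost)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.integers(0, 2, 512, dtype=np.uint8)    # e.g. a 512-bit binary descriptor
    curr = prev.copy()
    flip = rng.choice(512, size=20, replace=False)    # small temporal change
    curr[flip] ^= 1
    print(code_descriptor(curr, prev))                # inter coding wins here
```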

  10. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    Science.gov (United States)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of high-computational complexity decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally high-computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth that of the state-of-the-art DISCOVER decoding but also outperforms DISCOVER codec by up to 3 to 4 dB.
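
    The decoder-side estimation described above can be illustrated with a boundary-matching search: since the block itself is unknown at the decoder, candidate blocks from the reference frame are scored by how well their outer boundary matches the reconstructed pixels around the missing block. The sketch below is a simplified stand-in for the partial boundary matching algorithm, with a made-up search range and assumptions about which neighbours are available.

```python
import numpy as np

def boundary_match(ref, recon, top, left, bsize, search=8):
    """Estimate a block from `ref` using only reconstructed boundary pixels in `recon`.

    ref   : previous decoded frame (2-D array).
    recon : current frame in which the block at (top, left) is unknown, but its
            top and left neighbouring row/column are already reconstructed.
    """
    up = recon[top - 1, left:left + bsize]          # row just above the block
    lf = recon[top:top + bsize, left - 1]           # column just left of the block
    best = (None, None)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r - 1 < 0 or c - 1 < 0:
                continue
            if r + bsize > ref.shape[0] or c + bsize > ref.shape[1]:
                continue
            cost = (np.abs(ref[r - 1, c:c + bsize].astype(int) - up.astype(int)).sum()
                    + np.abs(ref[r:r + bsize, c - 1].astype(int) - lf.astype(int)).sum())
            if best[0] is None or cost < best[0]:
                best = (cost, ref[r:r + bsize, c:c + bsize].copy())
    return best[1]   # block used as side information / concealment

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = prev.copy()                                # pretend the scene is static
    est = boundary_match(prev, cur, top=24, left=24, bsize=8)
    print("estimated block shape:", est.shape)
```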

  11. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group of pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that lifting scheme minimises the storage requirement. The application specific integrated circuit implementation of the proposed architecture is done by synthesising it using 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
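
    The lifting principle the architecture builds on can be shown compactly with the integer 5/3 wavelet: the signal is split into even and odd samples, a predict step forms the high-pass band and an update step forms the low-pass band, all in place and perfectly invertible. The (9, 7) filter bank targeted by the design adds further lifting steps and scaling; the shorter 5/3 version below is used only to keep the sketch brief.

```python
def lift_53_forward(x):
    """One level of the integer 5/3 lifting transform of a 1-D signal.
    Returns (low-pass, high-pass) subbands. Signal length is assumed even."""
    even, odd = x[0::2], x[1::2]
    # Predict step: high-pass = odd sample minus the mean of its even neighbours.
    high = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
            for i in range(len(odd))]
    # Update step: low-pass = even sample plus a quarter of its high-pass neighbours.
    low = [even[i] + ((high[max(i - 1, 0)] + high[min(i, len(high) - 1)] + 2) >> 2)
           for i in range(len(even))]
    return low, high

def lift_53_inverse(low, high):
    """Undo the lifting steps in reverse order."""
    even = [low[i] - ((high[max(i - 1, 0)] + high[min(i, len(high) - 1)] + 2) >> 2)
            for i in range(len(low))]
    odd = [high[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
           for i in range(len(high))]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    sig = [12, 14, 15, 17, 40, 42, 41, 39]
    lo, hi = lift_53_forward(sig)
    assert lift_53_inverse(lo, hi) == sig   # lifting is perfectly invertible
    print(lo, hi)
```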

  12. Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels

    Directory of Open Access Journals (Sweden)

    Guillemot Christine

    2006-01-01

    Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of -ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.

  13. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    Three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the GOP. The GOP-level MVP scheme is then chosen as either S-MVP or AMDST-MVP, where AMDST-MVP is the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the S-MVP result for the current block is used to decide whether the MV of the co-located block in the previous frame is used to predict the current block. Experimental results show that AMDST-MVP can not only improve the coding efficiency but also reduce the computational complexity.

  14. Probable mode prediction for H.264 advanced video coding P slices using removable SKIP mode distortion estimation

    Science.gov (United States)

    You, Jongmin; Jeong, Jechang

    2010-02-01

    H.264/AVC (advanced video coding) is used in a wide variety of applications including digital broadcasting and mobile applications, because of its high compression efficiency. The variable block mode scheme in H.264/AVC contributes much to its high compression efficiency but causes a mode selection problem. In general, rate-distortion optimization (RDO) is the optimal mode selection strategy, but it is computationally intensive. For this reason, the H.264/AVC encoder requires a fast mode selection algorithm for use in applications that require low-power and real-time processing. A probable mode prediction algorithm for the H.264/AVC encoder is proposed. To reduce the computational complexity of RDO, the proposed method selects probable modes among all allowed block modes using removable SKIP mode distortion estimation. Removable SKIP mode distortion is used to estimate whether or not a further divided block mode is appropriate for a macroblock. It is calculated using a no-motion reference block with a few computations. Then the proposed method reduces complexity by performing the RDO process only for probable modes. Experimental results show that the proposed algorithm can reduce encoding time by an average of 55.22% without significant visual quality degradation or bit rate increase.
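
    The core shortcut can be sketched in a few lines: the distortion of the no-motion (co-located) reference block is computed cheaply and used to prune the block modes handed to full rate-distortion optimization. The thresholds and mode groupings below are invented for illustration and do not reproduce the paper's decision rules.

```python
import numpy as np

def skip_distortion(cur_mb, ref_frame, top, left):
    """Distortion of coding the 16x16 macroblock in SKIP-like fashion:
    SAD against the co-located (no-motion) block of the reference frame."""
    ref_mb = ref_frame[top:top + 16, left:left + 16]
    return int(np.abs(cur_mb.astype(int) - ref_mb.astype(int)).sum())

def probable_modes(cur_mb, ref_frame, top, left, t_low=800, t_high=4000):
    """Restrict the set of modes passed on to full rate-distortion optimization."""
    d = skip_distortion(cur_mb, ref_frame, top, left)
    if d < t_low:                       # almost static: large partitions suffice
        return ["SKIP", "INTER_16x16"]
    if d < t_high:                      # moderate change: medium partitions
        return ["INTER_16x16", "INTER_16x8", "INTER_8x16"]
    return ["INTER_8x8", "INTER_4x4", "INTRA_4x4"]   # strong change: fine partitions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = ref.astype(np.int64)
    cur[16:32, 16:32] += 1              # an almost static macroblock
    print(probable_modes(cur[16:32, 16:32], ref, top=16, left=16))
```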

  15. Progressive significance map and its application to error-resilient image transmission.

    Science.gov (United States)

    Hu, Yang; Pearlman, William A; Li, Xin

    2012-07-01

    Set partition coding (SPC) has shown tremendous success in image compression. Despite its popularity, the lack of error resilience remains a significant challenge to the transmission of images in error-prone environments. In this paper, we propose a novel data representation called the progressive significance map (prog-sig-map) for error-resilient SPC. It structures the significance map (sig-map) into two parts: a high-level summation sig-map and a low-level complementary sig-map (comp-sig-map). Such a structured representation of the sig-map allows us to improve its error-resilient property at the price of only a slight sacrifice in compression efficiency. For example, we have found that a fixed-length coding of the comp-sig-map in the prog-sig-map renders 64% of the coded bitstream insensitive to bit errors, compared with 40% with that of the conventional sig-map. Simulation results have shown that the prog-sig-map can achieve highly competitive rate-distortion performance for binary symmetric channels while maintaining low computational complexity. Moreover, we note that prog-sig-map is complementary to existing independent packetization and channel-coding-based error-resilient approaches and readily lends itself to other source coding applications such as distributed video coding.
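
    One way to picture the prog-sig-map is as a two-level description of the significance map: a small high-level summation map that counts significant coefficients per block, plus a low-level complementary part that can be written with fixed-length codes and is therefore tolerant to bit errors. The block layout in the sketch below is an assumption chosen for brevity, not the exact structure proposed in the paper.

```python
import numpy as np

def prog_sig_map(sig_map, block=4):
    """Split a binary significance map into a per-block summation map and a
    fixed-length-coded complementary map (one bit per coefficient, block by block)."""
    h, w = sig_map.shape
    assert h % block == 0 and w % block == 0
    blocks = sig_map.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    summation = blocks.sum(axis=(2, 3))              # significant coefficients per block
    complementary = [blk.flatten() for blk in blocks.reshape(-1, block, block)]
    return summation, complementary

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sig = (rng.random((8, 8)) > 0.8).astype(np.uint8)
    summ, comp = prog_sig_map(sig)
    print("summation map:\n", summ)
    print("fixed-length part:", sum(len(c) for c in comp), "bits")
```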

  16. Caregiver Resiliency.

    Science.gov (United States)

    Siebert, Al

    2002-01-01

    This article argues that school counselors cannot teach and preach resilient behavior if they are not models of resiliency themselves. Examines why some people come through challenging times more emotionally intact than others and suggests some tips for increasing one's resilience potential. (GCP)

  17. Low-complexity wavelet-based image/video coding for home-use and remote surveillance

    NARCIS (Netherlands)

    Loomans, M.J.H.; Koeleman, C.J.; Joosen, K.M.J.; With, de P.H.N.

    2011-01-01

    The availability of inexpensive cameras enables alternative applications beyond personal video communication. For example, surveillance of rooms and home premises is such an alternative application, which can be extended with remote viewing on hand-held battery-powered consumer devices. Scalable

  18. Understanding Resilience

    Directory of Open Access Journals (Sweden)

    Gang eWu

    2013-02-01

    Full Text Available Resilience is the ability to adapt successfully in the face of stress and adversity. Stressful life events, trauma and chronic adversity can have a substantial impact on brain function and structure, and can result in the development of PTSD, depression and other psychiatric disorders. However, most individuals do not develop such illnesses after experiencing stressful life events, and are thus thought to be resilient. Resilience as successful adaptation relies on effective responses to environmental challenges and ultimate resistance to the deleterious effects of stress, therefore a greater understanding of the factors that promote such effects is of great relevance. This review focuses on recent findings regarding genetic, epigenetic, developmental, psychosocial and neurochemical factors that are considered essential contributors to the development of resilience. Neural circuits and pathways involved in mediating resilience are also discussed. The growing understanding of resilience factors will hopefully lead to the development of new pharmacological and psychological interventions for enhancing resilience and mitigating the untoward consequences.

  19. On the Impact of Zero-padding in Network Coding Efficiency with Internet Traffic and Video Traces

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2016-01-01

    Random Linear Network Coding (RLNC) theoretical results typically assume that packets have equal sizes while in reality, data traffic presents a random packet size distribution. Conventional wisdom considers zero-padding of original packets as a viable alternative, but its effect can reduce the e...

  20. Conceptualizing Resilience

    Directory of Open Access Journals (Sweden)

    Thomas A. Birkland

    2016-12-01

    Full Text Available This commentary provides an overview of the idea of resilience, and acknowledges the challenges of defining and applying the idea in practice. The article summarizes a way of looking at resilience called a “resilience delta”, that takes into account both the shock done to a community by a disaster and the capacity of that community to rebound from that shock to return to its prior functionality. I show how different features of the community can create resilience, and consider how the developed and developing world addresses resilience. I also consider the role of focusing events in gaining attention to events and promoting change. I note that, while focusing events are considered by many in the disaster studies field to be major drivers of policy change in the United States disaster policy, most disasters have little effect on the overall doctrine of shared responsibilities between the national and subnational governments.

  1. Design of an H.264/SVC resilient watermarking scheme

    Science.gov (United States)

    Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter

    2010-01-01

    The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.

  2. Mapping Resilience

    DEFF Research Database (Denmark)

    Carruth, Susan

    2015-01-01

    ... the relationship between resilience and energy planning, suggesting that planning in, and with, time is a core necessity in this domain. It then reviews four examples of graphically mapping with time, highlighting some of the key challenges, before tentatively proposing a graphical language to be employed ... by planners when aiming to construct resilient energy plans. It concludes that a graphical language has the potential to be a significant tool, flexibly facilitating cross-disciplinary communication and decision-making, while emphasising that its role is to support imaginative, resilient planning rather than ...

  3. Water Resilience

    Science.gov (United States)

    The Drinking Water and Wastewater Resiliency site provides tools and resources for drinking water and wastewater utilities in the full spectrum of emergency management which includes prevention, mitigation, preparedness, response and recovery.

  4. Recognizing resilience

    Science.gov (United States)

    Erika S. Svendsen; Gillian Baine; Mary E. Northridge; Lindsay K. Campbell; Sara S. Metcalf

    2014-01-01

    In 2012, a year after a devastating tornado hit the town of Joplin, Missouri, leaving 161 people dead and leveling Joplin High School and St. John's Hospital, President Obama addressed the graduating seniors: "There are a lot of stories here in Joplin of unthinkable courage and resilience. . . . [People in Joplin] learned that we have the power to...

  5. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters....

  6. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  7. Quantifying resilience

    Science.gov (United States)

    Allen, Craig R.; Angeler, David G.

    2016-01-01

    The biosphere is under unprecedented pressure, reflected in rapid changes in our global ecological, social, technological and economic systems. In many cases, ecological and social systems can adapt to these changes over time, but when a critical threshold is surpassed, a system under stress can undergo catastrophic change and reorganize into a different state. The concept of resilience, introduced more than 40 years ago in the ecological sciences, captures the behaviour of systems that can occur in alternative states. The original definition of resilience forwarded by Holling (1973) is still the most useful. It defines resilience as the amount of disturbance that a system can withstand before it shifts into an alternative stable state. The idea of alternative stable states has clear and profound implications for ecological management. Coral reefs, for example, are high-diversity systems that provide key ecosystem services such as fisheries and coastal protection. Human impacts are causing significant, ongoing reef degradation, and many reefs have shifted from coral- to algal-dominated states in response to anthropogenic pressures such as elevated water temperatures and overfishing. Understanding and differentiating between the factors that help maintain reefs in coral-dominated states vs. those that facilitate a shift to an undesired algal-dominated state is a critical step towards sound management and conservation of these, and other, important social–ecological systems.

  8. Special issue on network coding

    Science.gov (United States)

    Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly

    2017-12-01

    Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes. These have created a lot of interest both in academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as multi-view video streaming or underwater sensor networks. One can find papers that show how NC makes data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the network (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.

  9. Resilience Thinking: Integrating Resilience, Adaptability and Transformability

    NARCIS (Netherlands)

    Folke, C.; Carpenter, S.R.; Walker, B.; Scheffer, M.; Chapin, T.; Rockstrom, J.

    2010-01-01

    Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES). Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of a SES to continually change

  10. Resilience thinking: integrating resilience, adaptability and transformability

    Science.gov (United States)

    Carl Folke; Stephen R. Carpenter; Brian Walker; Marten Scheffer; Terry Chapin; Johan. Rockstrom

    2010-01-01

    Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES). Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of a SES to continually change and adapt yet remain within critical thresholds. Adaptability is part...

  11. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications.This vol

  12. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  13. Resilience - A Concept

    Science.gov (United States)

    2016-04-05

    the assessment of the health of a network or system. The hypothesis is: resiliency is meaningful in the context of holistic assessments of... health, holistic, Resiliency Tier, Resiliency Tier Matrix, State of Resiliency ...upon who is speaking. Taking this one step further, consider resiliency as a concept that provides a holistic view of a system or capability, just

  14. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  15. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  16. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  17. Cross-Layer QoS Control for Video Communications over Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Pei Yong

    2005-01-01

    Full Text Available Assuming a wireless ad hoc network consisting of homogeneous video users with each of them also serving as a possible relay node for other users, we propose a cross-layer rate-control scheme based on an analytical study of how the effective video transmission rate is affected by the prevailing operating parameters, such as the interference environment, the number of transmission hops to a destination, and the packet loss rate. Furthermore, in order to provide error-resilient video delivery over such wireless ad hoc networks, a cross-layer joint source-channel coding (JSCC approach, to be used in conjunction with rate-control, is proposed and investigated. This approach attempts to optimally apply the appropriate channel coding rate given the constraints imposed by the effective transmission rate obtained from the proposed rate-control scheme, the allowable real-time video play-out delay, and the prevailing channel conditions. Simulation results are provided which demonstrate the effectiveness of the proposed cross-layer combined rate-control and JSCC approach.

  18. Resilience Thinking: Integrating Resilience, Adaptability and Transformability

    Directory of Open Access Journals (Sweden)

    Carl Folke

    2010-12-01

    Full Text Available Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES. Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of a SES to continually change and adapt yet remain within critical thresholds. Adaptability is part of resilience. It represents the capacity to adjust responses to changing external drivers and internal processes and thereby allow for development along the current trajectory (stability domain. Transformability is the capacity to cross thresholds into new development trajectories. Transformational change at smaller scales enables resilience at larger scales. The capacity to transform at smaller scales draws on resilience from multiple scales, making use of crises as windows of opportunity for novelty and innovation, and recombining sources of experience and knowledge to navigate social-ecological transitions. Society must seriously consider ways to foster resilience of smaller more manageable SESs that contribute to Earth System resilience and to explore options for deliberate transformation of SESs that threaten Earth System resilience.

  19. 'Resilience thinking' in transport planning

    OpenAIRE

    Wang, JYT

    2015-01-01

    Resilience has been discussed in ecology for over forty years. While some aspects of resilience have received attention in transport planning, there is no unified definition of resilience in transportation. To define resilience in transportation, I trace back to the origin of resilience in ecology with a view of revealing the essence of resilience thinking and its relevance to transport planning. Based on the fundamental concepts of engineering resilience and ecological resilience, I define "...

  20. Developing the resilience typology

    DEFF Research Database (Denmark)

    Simonsen, Daniel Morten

    2013-01-01

    There is a growing interest in resilience in internal crisis management and crisis communication. How an organization can build up resilience as a response to organisational crisis, at a time when the amount of crises seem only to increase, is more relevant than ever before. Nevertheless resilience...... is often perceived in the literature as something certain organisations have by definition, without further reflection on what it is that creates this resiliency. This article explores what it is that creates organisational resilience, and in view of the different understandings of the resilience...... phenomenon, develops a typology of resilience. Furthermore the resilience phenomenon is discussed against the definition of a crisis as a cosmological episode, and implications for future research is discussed and summarized....

  1. Resilience among Military Youth

    Science.gov (United States)

    Easterbrooks, M. Ann; Ginsburg, Kenneth; Lerner, Richard M.

    2013-01-01

    In this article, the authors present their approach to understanding resilience among military connected young people, and they discuss some of the gaps in their knowledge. They begin by defining resilience, and then present a theoretical model of how young people demonstrate resilient functioning. Next they consider some of the research on…

  2. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  3. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  4. Resilience in Adolescents with Cancer: Association of Coping with Positive and Negative Affect.

    Science.gov (United States)

    Murphy, Lexa K; Bettis, Alexandra H; Gruhn, Meredith A; Gerhardt, Cynthia A; Vannatta, Kathryn; Compas, Bruce E

    2017-10-01

    To examine the prospective association between adolescents' coping with cancer-related stress and observed positive and negative affect during a mother-adolescent interaction task involving discussion of cancer-related stressors. Adolescents (age 10-15 years) self-reported about their coping and affect approximately 2 months after cancer diagnosis. Approximately 3 months later, adolescents and mothers were video recorded having a discussion about cancer, and adolescents were coded for expression of positive affect (positive mood) and negative affect (sadness and anxiety). Adolescents' use of secondary control coping (i.e., acceptance, cognitive reappraisal, and distraction) in response to cancer-related stress predicted higher levels of observed positive affect, but not negative affect, over time. Findings provide support for the importance of coping in the regulation of positive emotions. The potential role of coping in preventive interventions to enhance resilience in adolescents facing cancer-related stress is highlighted.

  5. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids

  6. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    the quantization step used in the Intra coding is estimated. We map the obtained HEVC features using an Elastic Net to predict subjective video quality scores, Mean Opinion Scores (MOS). The performance is verified on a dataset consisting of HEVC coded 4 K UHD (resolution equal to 3840 x 2160) video sequences...
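
    The feature-to-MOS regression step mentioned in this record can be sketched with scikit-learn's ElasticNet; the feature values and scores below are random placeholders rather than data from the study.

        import numpy as np
        from sklearn.linear_model import ElasticNet

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 6))            # placeholder no-reference features per sequence
        mos = rng.uniform(1.0, 5.0, size=50)    # placeholder subjective scores (1-5 scale)

        model = ElasticNet(alpha=0.1, l1_ratio=0.5)   # combined L1/L2 regularized regression
        model.fit(X[:40], mos[:40])                   # fit on 40 "training" sequences
        print(model.predict(X[40:]))                  # predicted MOS for the remaining 10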

  7. Resilience in disaster research

    DEFF Research Database (Denmark)

    Dahlberg, Rasmus; Johannessen-Henry, Christine Tind; Raju, Emmanuel

    2015-01-01

    This paper explores the concept of resilience in disaster management settings in modern society. The diversity and relatedness of ‘resilience’ as a concept and as a process are reflected in its presentation through three ‘versions’: (i) pastoral care and the role of the church for victims...... of disaster trauma, (ii) federal policy and the US Critical Infrastructure Plan, and (iii) the building of resilient communities for disaster risk reduction practices. The three versions aim to offer characteristic expressions of resilience, as increasingly evident in current disaster literature....... In presenting resilience through the lens of these three versions, the article highlights the complexity in using resilience as an all-encompassing word. The article also suggests the need for understanding the nexuses between risk, vulnerability, and policy for the future of resilience discourse....

  8. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise in relation to using (digital) video for research communication, not least online. Video has long been used in research for data collection and for research communication. With digitization and the internet ...

  9. Systemic resilience model

    International Nuclear Information System (INIS)

    Lundberg, Jonas; Johansson, Björn JE

    2015-01-01

    It has been realized that resilience as a concept involves several contradictory definitions, both for instance resilience as agile adjustment and as robust resistance to situations. Our analysis of resilience concepts and models suggest that beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering, and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies

  10. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    OpenAIRE

    Rached Tourki; M. Machhout; B. Bouallegue; M. Atri; M. Zeghid; D. Dia

    2010-01-01

    In this paper, we propose a secure video codec based on the discrete wavelet transformation (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with DWT and encryption with AES are each well known; however, linking these two designs to achieve secure video coding is new. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, which is implemented using Huffm...

  11. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.
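
    A toy sketch of embedding a message directly in a compressed block stream (in the spirit of embedding in the DDS data, but not the scheme proposed in this record) might simply overwrite key-selected least-significant bits; the block contents, block size and key below are all invented.

        import random

        def embed(blocks, bits, key=42):
            """blocks: list of bytearray compressed-texture blocks; bits: 0/1 message bits."""
            rng = random.Random(key)
            marked = [bytearray(b) for b in blocks]
            for block, bit in zip(marked, bits):
                pos = rng.randrange(len(block))          # key-dependent byte position
                block[pos] = (block[pos] & 0xFE) | bit   # overwrite its least significant bit
            return marked

        def extract(blocks, n_bits, key=42):
            rng = random.Random(key)
            bits = []
            for block in blocks[:n_bits]:
                pos = rng.randrange(len(block))          # same key-dependent positions
                bits.append(block[pos] & 1)
            return bits

        # Fake 8-byte blocks standing in for DXT-compressed texture blocks.
        blocks = [bytearray(random.getrandbits(8) for _ in range(8)) for _ in range(16)]
        message = [1, 0, 1, 1, 0, 0, 1, 0]
        print(extract(embed(blocks, message), len(message)) == message)   # True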

  12. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
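
    A minimal waveform-coding example in the spirit of this record is mu-law companding followed by uniform quantization, as used in G.711 telephony; the test signal and the 8-bit setting below are illustrative only.

        import numpy as np

        MU = 255.0

        def mu_law_encode(x, bits=8):
            """x: samples in [-1, 1]; returns integer code words."""
            y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)   # compress
            levels = 2 ** bits
            return np.round((y + 1.0) / 2.0 * (levels - 1)).astype(int)

        def mu_law_decode(codes, bits=8):
            levels = 2 ** bits
            y = codes / (levels - 1) * 2.0 - 1.0
            return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU    # expand

        t = np.linspace(0, 0.02, 160)                 # 20 ms of a 200 Hz tone at 8 kHz
        x = 0.5 * np.sin(2 * np.pi * 200 * t)
        x_hat = mu_law_decode(mu_law_encode(x))
        print("SNR (dB):", 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2)))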

  13. Data Partitioning Technique for Improved Video Prioritization

    Directory of Open Access Journals (Sweden)

    Ismail Amin Ali

    2017-07-01

    Full Text Available A compressed video bitstream can be partitioned according to the coding priority of the data, allowing prioritized wireless communication or selective dropping in a congested channel. Known as data partitioning in the H.264/Advanced Video Coding (AVC) codec, this paper introduces a further sub-partition of one of the H.264/AVC codec’s three data-partitions. Results show a 5 dB improvement in Peak Signal-to-Noise Ratio (PSNR) through this innovation. In particular, the data partition containing intra-coded residuals is sub-divided into data from: those macroblocks (MBs) naturally intra-coded, and those MBs forcibly inserted for non-periodic intra-refresh. Interactive user-to-user video streaming can benefit, as then HTTP adaptive streaming is inappropriate and the High Efficiency Video Coding (HEVC) codec is too energy demanding.
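
    The sub-partitioning idea can be sketched as follows, splitting intra-coded macroblock residuals into naturally intra-coded and forced-refresh groups; the field names are illustrative and are not H.264/AVC syntax elements.

        from dataclasses import dataclass

        @dataclass
        class Macroblock:
            index: int
            residual: bytes
            intra: bool           # intra-coded at all?
            forced_refresh: bool  # inserted for non-periodic intra-refresh?

        def split_intra_partition(macroblocks):
            """Return (natural_intra, forced_refresh) sub-partitions of the intra residual data."""
            natural, forced = [], []
            for mb in macroblocks:
                if not mb.intra:
                    continue
                (forced if mb.forced_refresh else natural).append(mb.residual)
            return natural, forced

        mbs = [Macroblock(0, b"\x10", True, False),
               Macroblock(1, b"\x22", True, True),
               Macroblock(2, b"\x05", False, False)]
        natural, forced = split_intra_partition(mbs)
        print(len(natural), len(forced))   # 1 1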

  14. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  15. Video games.

    Science.gov (United States)

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  16. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...

  17. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM......). Good quality reproduction of (low-resolution) coded video of an animated facial mask at rates as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated....

  18. The Resilient Brain

    Science.gov (United States)

    Brendtro, Larry K.; Longhurst, James E.

    2005-01-01

    Brain research opens new frontiers in working with children and youth experiencing conflict in school and community. Blending this knowledge with resilience science offers a roadmap for reclaiming those identified as "at risk." This article applies findings from resilience research and recent brain research to identify strategies for reaching…

  19. How Resilience Works.

    Science.gov (United States)

    Coutu, Diane L.

    2002-01-01

    Looks at coping skills that carry people through life and why some have them and others do not. Suggests that resilience is a reflex, a way of facing and understanding the world, and that resilient people and companies face reality with staunchness, make meaning out of hardship, and improvise. (JOW)

  20. Multifractal resilience and viability

    Science.gov (United States)

    Tchiguirinskaia, I.; Schertzer, D. J. M.

    2017-12-01

    The term resilience has become extremely fashionable and there have been many attempts to provide an operational definition and in fact metrics going beyond a set of more or less ad-hoc indicators. Viability theory (Aubin and Saint-Pierre, 2011) has been used to give a rather precise mathematical definition of resilience (Deffuant and Gilbert, 2011). However, it does not grasp the multiscale nature of resilience, which is rather fundamental as particularly stressed by Folke et al. (2010). In this communication, we first recall a preliminary attempt (Tchiguirinskaia et al., 2014) to define multifractal resilience with the help of the maximal probable singularity. Then we extend this multifractal approach to the capture basin of the viability, therefore the resilient basin. Aubin, J.P., Bayen, A., and Saint-Pierre, P. (2011). Viability Theory: New Directions. Springer, Berlin. Deffuant, G. and Gilbert, N. (eds) (2011). Viability and Resilience of Complex Systems. Springer, Berlin. Folke, C., Carpenter, S.R., Walker, B., Sheffer, M., Chapin, T., and Rockstroem, J. (2010). Resilience thinking: integrating resilience, adaptability and transformability. Ecology and Society, 14(4):20. Tchiguirinskaia, I., Schertzer, D., Giangola-Murzyn, A., and Hoang, T.C. (2014). Multiscale resilience metrics to assess flood. Proceedings of ICCSA 2014, Normandie University, Le Havre, France.

  1. Building Inner Resilience

    Science.gov (United States)

    Lantieri, Linda

    2008-01-01

    The capacity to be in control of one's thoughts, emotions, and physiology can form an internal safety net preparing children to face the challenges and opportunities of life. This is the goal of the Inner Resilience Program in the New York City Schools. Teachers in the Inner Resilience Program's intervention are exposed to calming and focusing…

  2. Building Resilience through Humor.

    Science.gov (United States)

    Berg, Debra Vande; Van Brockern, Steve

    1995-01-01

    Research on resilience suggests that a sense of humor helps to stress-proof children in conflict. Reports on a workshop for educators and youth workers convened to explore ways humor is being used to foster positive development and resilience with troubled youth. Describes applications of humor front-line professionals report as useful in their…

  3. Resilient Renewable Energy Microgrids

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Katherine H [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DiOrio, Nicholas A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Butt, Robert S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cutler, Dylan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Richards, Allison [Unaffiliated]

    2017-11-14

    This presentation for the Cable-Tec Expo 2017 offers information about how renewable microgrids can be used to increase resiliency. It includes information about why renewable energy-battery-diesel hybrid microgrids should be considered for backup power, how to estimate the economic savings of microgrids, how to quantify the resiliency gain of microgrids, and where renewable microgrids will be successful.

  4. Zoogeomorphology and resilience theory

    Science.gov (United States)

    Butler, David R.; Anzah, Faisal; Goff, Paepin D.; Villa, Jennifer

    2018-03-01

    Zoogeomorphology, the study of animals as geomorphic agents, has been largely overlooked in the context of resilience theory and biogeomorphic systems. In this paper, examples are provided of the interactions between external landscape disturbances and zoogeomorphological agents. We describe cases in which naturally occurring zoogeomorphological agents occupy a landscape, and examine whether those zoogeomorphic agents provide resilience to a landscape or instead serve as a landscape stress capable of inducing a phase-state shift. Several cases are described whereby the presence of exotic (introduced) zoogeomorphic agents overwhelms a landscape and induces collapse. The impact of climate change on species with zoogeomorphological importance is discussed in the context of the resilience of a landscape. We conclude with a summary diagram illustrating the relationships existing between zoogeomorphic impacts and landscape resilience in the context of our case studies, and speculate about the future of the study of zoogeomorphology in the framework of resilience theory.

  5. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckeler micrometer equipped with a digital readout, and a second feature can be aligned with the reference line and the distance moved obtained from the digital display.

  6. Understanding Infrastructure Resiliency in Chennai, India Using Twitter’s Geotags and Texts: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Wai K. Chong

    2018-04-01

    Full Text Available Geotagging is the process of labeling data and information with geographical identification metadata, and text mining refers to the process of deriving information from text through data analytics. Geotagging and text mining are used to mine rich sources of social media data, such as video, website, text, and Quick Response (QR) code. They have been frequently used to model consumer behaviors and market trends. This study uses both techniques to understand the resilience of infrastructure in Chennai, India using data mined from the 2015 flood. This paper presents a conceptual study on the potential use of social media (Twitter in this case) to better understand infrastructure resiliency. Using feature-extraction techniques, the research team extracted Twitter data from tweets generated by the Chennai population during the flood. First, this study shows that these techniques are useful in identifying locations, defects, and failure intensities of infrastructure using the location metadata from geotags, words containing the locations, and the frequencies of tweets from each location. However, more efforts are needed to better utilize the texts generated from the tweets, including a better understanding of the cultural contexts of the words used in the tweets, the contexts of the words used to describe the incidents, and the least frequently used words. Keywords: Social media, Flooding, Engineering design
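
    The geotag aggregation step can be sketched by bucketing tweet coordinates and counting tweets per cell; the sample tweets below are invented.

        from collections import Counter

        tweets = [
            {"lat": 13.0418, "lon": 80.2341, "text": "Road flooded near T. Nagar"},
            {"lat": 13.0418, "lon": 80.2341, "text": "Power outage reported"},
            {"lat": 13.0827, "lon": 80.2707, "text": "Bridge closed due to flooding"},
        ]

        def location_key(tweet, precision=2):
            """Bucket coordinates so nearby tweets map to the same location cell."""
            return (round(tweet["lat"], precision), round(tweet["lon"], precision))

        counts = Counter(location_key(t) for t in tweets)
        for cell, n in counts.most_common():
            print(cell, n)       # higher counts hint at higher reported failure intensity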

  7. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WIMAX) is a good candidate for delivering video signals because through WIMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WIMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WIMAX networks. In addition to the QoP parameters, the video characteristics, such as the picture activity and the video mobility, also affect the QoS significantly.
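
    The QoP-to-QoS mapping table can be sketched as a simple lookup from presentation requirements to network requirements; all table entries below are illustrative placeholders, not values from the paper.

        QOP_TO_QOS = {
            ("CIF",  15): {"min_bitrate_kbps":  192, "max_delay_ms": 300},
            ("CIF",  30): {"min_bitrate_kbps":  384, "max_delay_ms": 300},
            ("SD",   30): {"min_bitrate_kbps": 1500, "max_delay_ms": 150},
            ("HD",   30): {"min_bitrate_kbps": 4000, "max_delay_ms": 150},
        }

        def qos_for(resolution, frame_rate):
            """Return the network QoS requirement for a requested presentation quality."""
            try:
                return QOP_TO_QOS[(resolution, frame_rate)]
            except KeyError:
                raise ValueError("no QoS mapping defined for this QoP point") from None

        print(qos_for("SD", 30))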

  8. Promoting resilience among parents and caregivers of children with cancer.

    Science.gov (United States)

    Rosenberg, Abby R; Baker, K Scott; Syrjala, Karen L; Back, Anthony L; Wolfe, Joanne

    2013-06-01

    Promoting resilience is an aspect of psychosocial care that affects patient and whole-family well-being. There is little consensus about how to define or promote resilience during and after pediatric cancer. The aims of this study were (1) to review the resilience literature in pediatric cancer settings; (2) to qualitatively ascertain caregiver-reported perceptions of resilience; and (3) to develop an integrative model of fixed and mutable factors of resilience among family members of children with cancer, with the goal of enabling better study and promotion of resilience among pediatric cancer families. The study entailed qualitative analysis of small group interviews with eighteen bereaved parents and family members of children with cancer treated at Seattle Children's Hospital. Small-group interviews were conducted with members of each bereaved family. Participant statements were coded for thematic analysis. An integrative, comprehensive framework was then developed. Caregivers' personal appraisals of the cancer experience and their child's legacy shape their definitions of resilience. Described factors of resilience include baseline characteristics (i.e., inherent traits, prior expectations of cancer), processes that evolve over time (i.e., coping strategies, social support, provider interactions), and psychosocial outcomes (i.e., post-traumatic growth and lack of psychological distress). These elements were used to develop a testable model of resilience among family members of children with cancer. Resilience is a complex construct that may be modifiable. Once validated, the proposed framework will not only serve as a model for clinicians, but may also facilitate the development of interventions aimed at promoting resilience in family members of children with cancer.

  9. Teacher Resilience: Theorizing Resilience and Poverty

    Science.gov (United States)

    Ebersöhn, Liesel

    2014-01-01

    In this article, I hope to provide some novel insights into teacher resilience and poverty on the basis of ten-year long-term ethnographic participatory reflection and action data obtained from teachers (n = 87) in rural (n = 6) and urban (n = 8) schools (n = 14, high schools = 4, primary schools = 10) in three South African provinces. In…

  10. Foundations of resilience thinking.

    Science.gov (United States)

    Curtin, Charles G; Parker, Jessica P

    2014-08-01

    Through 3 broad and interconnected streams of thought, resilience thinking has influenced the science of ecology and natural resource management by generating new multidisciplinary approaches to environmental problem solving. Resilience science, adaptive management (AM), and ecological policy design (EPD) contributed to an internationally unified paradigm built around the realization that change is inevitable and that science and management must approach the world with this assumption, rather than one of stability. Resilience thinking treats actions as experiments to be learned from, rather than intellectual propositions to be defended or mistakes to be ignored. It asks what is novel and innovative and strives to capture the overall behavior of a system, rather than seeking static, precise outcomes from discrete action steps. Understanding the foundations of resilience thinking is an important building block for developing more holistic and adaptive approaches to conservation. We conducted a comprehensive review of the history of resilience thinking because resilience thinking provides a working context upon which more effective, synergistic, and systems-based conservation action can be taken in light of rapid and unpredictable change. Together, resilience science, AM, and EPD bridge the gaps between systems analysis, ecology, and resource management to provide an interdisciplinary approach to solving wicked problems. © 2014 Society for Conservation Biology.

  11. Resilience: Theory and Application.

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.L.; Haffenden, R.A.; Bassett, G.W.; Buehring, W.A.; Collins, M.J., III; Folga, S.M.; Petit, F.D.; Phillips, J.A.; Verner, D.R.; Whitfield, R.G. (Decision and Information Sciences)

    2012-02-03

    There is strong agreement among policymakers, practitioners, and academic researchers that the concept of resilience must play a major role in assessing the extent to which various entities - critical infrastructure owners and operators, communities, regions, and the Nation - are prepared to respond to and recover from the full range of threats they face. Despite this agreement, consensus regarding important issues, such as how resilience should be defined, assessed, and measured, is lacking. The analysis presented here is part of a broader research effort to develop and implement assessments of resilience at the asset/facility and community/regional levels. The literature contains various definitions of resilience. Some studies have defined resilience as the ability of an entity to recover, or 'bounce back,' from the adverse effects of a natural or manmade threat. Such a definition assumes that actions taken prior to the occurrence of an adverse event - actions typically associated with resistance and anticipation - are not properly included as determinants of resilience. Other analyses, in contrast, include one or more of these actions in their definitions. To accommodate these different definitions, we recognize a subset of resistance- and anticipation-related actions that are taken based on the assumption that an adverse event is going to occur. Such actions are in the domain of resilience because they reduce both the immediate and longer-term adverse consequences that result from an adverse event. Recognizing resistance- and anticipation-related actions that take the adverse event as a given accommodates the set of resilience-related actions in a clear-cut manner. With these considerations in mind, resilience can be defined as: 'the ability of an entity - e.g., asset, organization, community, region - to anticipate, resist, absorb, respond to, adapt to, and recover from a disturbance.' Because critical infrastructure resilience is important

  12. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera...
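
    The beam-forming component mentioned in this record can be illustrated with a toy delay-and-sum beamformer for a linear microphone array; the array geometry, source angle and signals below are synthetic and do not reflect the actual system.

        import numpy as np

        FS = 48_000          # sample rate [Hz]
        C = 343.0            # speed of sound [m/s]
        mic_x = np.array([0.00, 0.05, 0.10, 0.15])   # 4-mic linear array, 5 cm spacing

        def steer(signals, angle_deg):
            """Delay-and-sum the microphone signals toward a given direction of arrival."""
            delays = mic_x * np.sin(np.radians(angle_deg)) / C       # seconds
            shifts = np.round(delays * FS).astype(int)
            aligned = [np.roll(sig, -s) for sig, s in zip(signals, shifts)]
            return np.mean(aligned, axis=0)

        # Simulate a 1 kHz source arriving from 30 degrees (far field, no noise).
        t = np.arange(0, 0.05, 1 / FS)
        true_delays = mic_x * np.sin(np.radians(30)) / C
        signals = [np.sin(2 * np.pi * 1000 * (t - d)) for d in true_delays]

        # Scan candidate angles; the steered output power should peak near 30 degrees.
        powers = {a: float(np.mean(steer(signals, a) ** 2)) for a in range(-90, 91, 10)}
        print(max(powers, key=powers.get))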

  13. Resilience in Utility Technologies

    Science.gov (United States)

    Seaton, Roger

    The following sections are included: * Scope of paper * Preamble * Background to the case-study projects * Source projects * Resilience * Case study 1: Electricity generation * Context * Model * Case study 2: Water recycling * Context * Model * Case study 3: Ecotechnology and water treatment * Context * The problem of classification: Finding a classificatory solution * Application of the new taxonomy to water treatment * Concluding comments and questions * Conclusions * Questions and issues * Purposive or Purposeful? * Resilience: Flexibility and adaptivity? * Resilience: With respect of what? * Risk, uncertainty, surprise, emergence - What sort of shock, and who says so? * Co-evolutionary friction * References

  14. Resilience of the IMS system

    DEFF Research Database (Denmark)

    Kamyod, Chayapol; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2014-01-01

    The paper focuses on end-to-end resilience analysis of the IMS based network through the principal resilience parameters by using OPNET. The resilience behaviours of communication across multiple IMS domains are investigated at different communication scenarios and compared with previous state......-of-the-art. Moreover, the resilience effects when adding a redundancy of the S-CSCF unit are examined. The results disclose interesting resilience behaviours for long distance communications....

  15. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  16. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    This paper presents a novel approach to fast motion detection in H.264/MPEG-4 advanced video coding (AVC) compressed video streams for IP video surveillance systems. The goal is to develop algorithms which may be useful in a real-life industrial perspective by facilitating the processing of large...... on motion vectors embedded in the video stream without requiring a full decoding and reconstruction of video frames. To improve the robustness to noise, a confidence measure based on temporal and spatial clues is introduced to increase the probability of correct detection. The algorithm was tested on indoor...
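
    A crude stand-in for the described approach is to threshold motion-vector magnitudes and require spatial support before declaring motion; the thresholds and the synthetic vector field below are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def detect_motion(mv_field, mag_thresh=2.0, support_thresh=3):
            """mv_field: (H, W, 2) array of per-block motion vectors in pixels."""
            mag = np.linalg.norm(mv_field, axis=2)
            active = mag > mag_thresh
            # Count active 8-neighbours of each block to suppress isolated noise.
            padded = np.pad(active, 1)
            support = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1] - active
            confident = active & (support >= support_thresh)
            return confident.any(), confident

        rng = np.random.default_rng(1)
        field = rng.normal(0, 0.5, size=(18, 22, 2))   # mostly small, noisy vectors
        field[5:9, 5:9] += 6.0                         # a genuinely moving region
        print(detect_motion(field)[0])                 # True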

  17. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated...... code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code...... avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  18. Formal aspects of resilience

    Directory of Open Access Journals (Sweden)

    Diana-Maria Drigă

    2015-12-01

    Full Text Available The concept of resilience has represented during recent years a leading concern both in Romania, within the European Union and worldwide. Specialists in economics, management, finance, legal sciences, political sciences, sociology, and psychology take a particular interest in this concept. Multidisciplinary research on resilience has materialized over time in multiple conceptualizations and theorizations, but without a consensus among specialists in terms of content, specificity and scope. Through this paper we intend to clarify the concept of resilience by exploring the evolution of this concept in ecological, social and economic environments. At the same time, the paper presents aspects of feedback mechanisms and proposes a formalization of resilience using logic and mathematical analysis.

  19. OAS :: Videos

    Science.gov (United States)


  20. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
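
    For context, the noiseless-coding idea behind the hardware encoder can be illustrated with a minimal software Huffman code construction (this is not the AAC encoder itself, and the sample input is arbitrary):

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            """Return {symbol: bitstring} for an iterable of symbols."""
            freq = Counter(symbols)
            heap = [[count, i, [sym, ""]] for i, (sym, count) in enumerate(freq.items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                lo = heapq.heappop(heap)           # two least frequent subtrees
                hi = heapq.heappop(heap)
                for pair in lo[2:]:
                    pair[1] = "0" + pair[1]        # prefix codes in the low subtree with 0
                for pair in hi[2:]:
                    pair[1] = "1" + pair[1]        # and the high subtree with 1
                heapq.heappush(heap, [lo[0] + hi[0], next_id, *lo[2:], *hi[2:]])
                next_id += 1
            return {sym: code for _, _, *pairs in heap for sym, code in pairs}

        data = "aac huffman coding example"
        codes = huffman_code(data)
        encoded = "".join(codes[ch] for ch in data)
        print(len(encoded), "bits vs", 8 * len(data), "bits uncompressed")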

  1. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...

  2. Packet-aware transport for video distribution [Invited]

    Science.gov (United States)

    Aguirre-Torres, Luis; Rosenfeld, Gady; Bruckman, Leon; O'Connor, Mannix

    2006-05-01

    We describe a solution based on resilient packet rings (RPR) for the distribution of broadcast video and video-on-demand (VoD) content over a packet-aware transport network. The proposed solution is based on our experience in the design and deployment of nationwide Triple Play networks and relies on technologies such as RPR, multiprotocol label switching (MPLS), and virtual private LAN service (VPLS) to provide the most efficient solution in terms of utilization, scalability, and availability.

  3. MPEG2 video parameter and no reference PSNR estimation

    DEFF Research Database (Denmark)

    Li, Huiying; Forchhammer, Søren

    2009-01-01

    MPEG coded video may be processed for quality assessment or postprocessed to reduce coding artifacts or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access t...
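
    For context, the quantity estimated here without a reference is ordinary PSNR, defined with the original frame available as 10*log10(255^2/MSE); a quick full-reference computation on synthetic 8-bit frames (the frames and noise level are placeholders):

        import numpy as np

        def psnr(reference, decoded, peak=255.0):
            mse = np.mean((reference.astype(float) - decoded.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, size=(288, 352), dtype=np.uint8)          # CIF-sized frame
        dec = np.clip(ref + rng.normal(0, 3, ref.shape), 0, 255).astype(np.uint8)
        print(round(psnr(ref, dec), 2), "dB")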

  4. Resilience Indicator Summaries and Resilience Scores CNMI JPEG Maps

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Maps of relative classifications (low to high) for six resilience indicators and two anthropogenic stressors and a map of final relative resilience scores for 78...

  5. Resilience Indicator Summaries and Resilience Scores CNMI Excel database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Maps of relative classifications (low to high) for six resilience indicators and two anthropogenic stressors and a map of final relative resilience scores for 78...

  6. Multi-Dimensional Auction Mechanisms for Crowdsourced Mobile Video Streaming

    OpenAIRE

    Tang, Ming; Pang, Haitian; Wang, Shou; Gao, Lin; Huang, Jianwei; Sun, Lifeng

    2017-01-01

    Crowdsourced mobile video streaming enables nearby mobile video users to aggregate network resources to improve their video streaming performances. However, users are often selfish and may not be willing to cooperate without proper incentives. Designing an incentive mechanism for such a scenario is challenging due to the users' asynchronous downloading behaviors and their private valuations for multi-bitrate coded videos. In this work, we propose both single-object and multi-object multi-dime...

  7. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)]

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed conditions of pressure drop or flowrate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID) a one-channel, two-dimensional code. (authors)

  8. Family Resilience in the Military

    Science.gov (United States)

    Meadows, Sarah O.; Beckett, Megan K.; Bowling, Kirby; Golinelli, Daniela; Fisher, Michael P.; Martin, Laurie T.; Meredith, Lisa S.; Osilla, Karen Chan

    2016-01-01

    Military life presents a variety of challenges to military families, including frequent separations and relocations as well as the risks that service members face during deployment; however, many families successfully navigate these challenges. Despite a recent emphasis on family resilience, the U.S. Department of Defense (DoD) does not have a standard and universally accepted definition of family resilience. A standard definition is necessary for DoD to more effectively assess its efforts to sustain and improve family resilience. RAND authors reviewed the literature on family resilience and, in this study, recommend a definition that could be used DoD-wide. The authors also reviewed DoD policies related to family resilience, reviewed models that describe family resilience and identified key family resilience factors, and developed several recommendations for how family-resilience programs and policies could be managed across DoD. PMID:28083409

  9. Resilience in IMS

    DEFF Research Database (Denmark)

    Kamyod, Chayapol; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    ) and supporting always on services. Therefore, not only Quality of Service (QoS) but also resilience is required. In this paper, we attempt to evaluate and analyze end-to-end reliability of the IMS system using a model proposed as a combination of Reliability Block Diagram (RBD) and Markov Reward Models (MRMs......Reliability evaluation of systems has been widely researched for improving system resilience especially in designing processes of a complex system. The convergence of different access networks is possible via IP Multimedia Subsystem (IMS) for development toward Next Generation Networks (NGNs......). The resilience of the IMS architecture is studied by applying 1:1 redundancy at different communication scenarios between end users within and across communication domains. The model analysis provides useful reliability characteristics of the system and can be further applied for system design processes....
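
    The reliability-block-diagram part of such an analysis can be sketched with series and parallel availability formulas, where 1:1 redundancy turns a block into two parallel copies; the component availability values below are invented placeholders, not results from the study.

        def series(availabilities):
            """All blocks must work: multiply availabilities."""
            a = 1.0
            for x in availabilities:
                a *= x
            return a

        def parallel(a1, a2):
            """1:1 redundancy: the pair fails only if both blocks fail."""
            return 1.0 - (1.0 - a1) * (1.0 - a2)

        # Placeholder availabilities for IMS elements along one signalling path.
        p_cscf, s_cscf, i_cscf, hss = 0.999, 0.995, 0.999, 0.9995

        baseline = series([p_cscf, s_cscf, i_cscf, hss])
        with_redundant_scscf = series([p_cscf, parallel(s_cscf, s_cscf), i_cscf, hss])
        print(f"{baseline:.6f} -> {with_redundant_scscf:.6f}")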

  10. A Novel Mobile Video Community Discovery Scheme Using Ontology-Based Semantical Interest Capture

    Directory of Open Access Journals (Sweden)

    Ruiling Zhang

    2016-01-01

    Full Text Available Leveraging network virtualization technologies, community-based video systems rely on the measurement of common interests to define and stabilize relationships between community members, which promotes video sharing performance and improves the scalability of the community structure. In this paper, we propose a novel mobile Video Community discovery scheme using ontology-based semantical interest capture (VCOSI). An ontology-based semantical extension approach is proposed, which describes video content and measures video similarity according to video keyword selection methods. In order to reduce the calculation load of video similarity, VCOSI designs a prefix-filtering-based estimation algorithm to decrease the energy consumption of mobile nodes. VCOSI further proposes a member relationship estimation method to construct scalable and resilient node communities, which promotes the video sharing capacity of video systems with flexible and economic community maintenance. Extensive tests show how VCOSI obtains better performance results in comparison with other state-of-the-art solutions.
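
    The prefix-filtering idea can be sketched for keyword sets as follows: if two sets have Jaccard similarity at least t, their prefixes under a fixed global token order must share a token, so pairs with disjoint prefixes can be skipped without a full comparison. The keyword sets and threshold below are invented, and lexicographic order stands in for whatever global order the scheme actually uses.

        import math
        from itertools import combinations

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        def prefix(tokens, t):
            """First len - ceil(t*len) + 1 tokens under a fixed global (here lexicographic) order."""
            ordered = sorted(tokens)
            keep = len(ordered) - math.ceil(t * len(ordered)) + 1
            return set(ordered[:keep])

        videos = {
            "v1": {"karate", "sports", "training", "demo"},
            "v2": {"karate", "sports", "match", "demo"},
            "v3": {"lecture", "math", "calculus", "demo"},
        }
        t = 0.5
        for (n1, s1), (n2, s2) in combinations(videos.items(), 2):
            if prefix(s1, t) & prefix(s2, t):        # cheap candidate check
                sim = jaccard(s1, s2)                # full similarity only for candidates
                if sim >= t:
                    print(n1, n2, round(sim, 2))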

  11. Metrics for energy resilience

    International Nuclear Information System (INIS)

    Roege, Paul E.; Collier, Zachary A.; Mancillas, James; McDonagh, John A.; Linkov, Igor

    2014-01-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations

  12. Introduction 'Governance for Drought Resilience'

    NARCIS (Netherlands)

    Bressers, Nanny; Bressers, Johannes T.A.; Larrue, Corinne; Bressers, Hans; Bressers, Nanny; Larrue, Corinne

    2016-01-01

    This book is about governance for drought resilience. But that simple sentence alone might raise several questions. Because what do we mean by drought, and how does that relate to water scarcity? And what do we mean by resilience, and why is resilience needed for tackling drought? And how does

  13. Resilience and (in)security

    DEFF Research Database (Denmark)

    dunn cavelty, myriam; Kaufmann, Mareile; Kristensen, Kristian Søby

    2015-01-01

    , and redefine relations of security and insecurity. We show the increased attention – scholarly as well as political – given to resilience in recent times and provide a review of the state of critical security studies literature on resilience. We argue that to advance this discussion, resilience needs...

  14. New Orleans' Resilience Story

    Science.gov (United States)

    Hebert, J.

    2017-12-01

    New Orleans has had unique experience in dealing with and recovering from major urban emergencies. From Hurricanes Katrina and Isaac to the Deepwater Horizon Oil Spill to the city's frequent boil water advisories, New Orleans has learned important lessons about what it takes to become a vibrant, resilient city that serves all its residents — particularly its most vulnerable. The city of New Orleans released its Resilience Strategy on August 28, 2015. On September 12, 2016, the city released its One-Year Progress Update, sharing its key milestones.

  15. Resilience and Complexity

    DEFF Research Database (Denmark)

    Dahlberg, Rasmus

    2015-01-01

    This paper explores two key concepts: resilience and complexity. The first is understood as an emergent property of the latter, and their inter-relatedness is discussed using a three tier approach. First, by exploring the discourse of each concept, next, by analyzing underlying relationships and...... robust. Robustness is a property of simple or complicated systems characterized by predictable behavior, enabling the system to bounce back to its normal state following a perturbation. Resilience, however, is an emergent property of complex adaptive systems. It is suggested that this distinction...

  16. Resilience in Aging Mice.

    Science.gov (United States)

    Kirkland, James L; Stout, Michael B; Sierra, Felipe

    2016-11-01

    Recently discovered interventions that target fundamental aging mechanisms have been shown to increase life span in mice and other species, and in some cases, these same manipulations have been shown to enhance health span and alleviate multiple age-related diseases and conditions. Aging is generally associated with decreases in resilience, the capacity to respond to or recover from clinically relevant stresses such as surgery, infections, or vascular events. We hypothesize that the age-related increase in susceptibility to those diseases and conditions is driven by or associated with the decrease in resilience. Thus, a test for resilience at middle age or even earlier could represent a surrogate approach to test the hypothesis that an intervention delays the process of aging itself. For this, animal models to test resilience accurately and predictably are needed. In addition, interventions that increase resilience might lead to treatments aimed at enhancing recovery following acute illnesses, or preventing poor outcomes from medical interventions in older, prefrail subjects. At a meeting of basic researchers and clinicians engaged in research on mechanisms of aging and care of the elderly, the merits and drawbacks of investigating effects of interventions on resilience in mice were considered. Available and potential stressors for assessing physiological resilience as well as the notion of developing a limited battery of such stressors and how to rank them were discussed. Relevant ranking parameters included value in assessing general health (as opposed to focusing on a single physiological system), ease of use, cost, reproducibility, clinical relevance, and feasibility of being repeated in the same animal longitudinally. During the discussions it became clear that, while this is an important area, very little is known or established. Much more research is needed in the near future to develop appropriate tests of resilience in animal models within an aging context

  17. Cluster Decline and Resilience

    DEFF Research Database (Denmark)

    Østergaard, Christian Richter; Park, Eun Kyung

    Most studies on regional clusters focus on identifying factors and processes that make clusters grow. However, sometimes technologies and market conditions suddenly shift, and clusters decline. This paper analyses the process of decline of the wireless communication cluster in Denmark, 1963......-2011. Our longitudinal study reveals that technological lock-in and exit of key firms have contributed to impairment of the cluster’s resilience in adapting to disruptions. Entrepreneurship has a positive effect on cluster resilience, while multinational companies have contradicting effects by bringing...... in new resources to the cluster but being quick to withdraw in times of crisis....

  18. Multi-Sited Resilience

    DEFF Research Database (Denmark)

    Olwig, Mette Fog

    2012-01-01

    with natural disasters and climate change. In a globalized world, however, it is hard to discern what is “local” as global organizations play an increasingly visible and powerful role. This paper will argue that local understandings and practices of resilience cannot be disentangled from global understandings...... flooding in northern Ghana, this paper examines the mutual construction of “local” and “global” notions and practices of resilience through multi-sited processes. It is based on interviews and participant observation in multiple sites at the “local,” “regional” and “global” levels....

  19. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SV...

  20. Can Resilience Thinking Inform Resilience Investments? Learning from Resilience Principles for Disaster Risk Reduction

    Directory of Open Access Journals (Sweden)

    Margot Hill Clarvis

    2015-07-01

    Full Text Available As the human and financial costs of natural disasters rise and state finances continue to deplete, increasing attention is being placed on the role of the private sector to support disaster and climate resilience. However, not only is there a recognised lack of private finance to fill this gap, but international institutional and financing bodies tend to prioritise specific reactive response over preparedness and general resilience building. This paper utilises the central tenets of resilience thinking that have emerged from scholarship on social-ecological system resilience as a lens through which to assess investing in disaster risk reduction (DRR) for resilience. It draws on an established framework of resilience principles and examples of resilience investments to explore how resilience principles can actually inform decisions around DRR and resilience investing. It proposes some key lessons for diversifying sources of finance in order to, in turn, enhance “financial resilience”. In doing so, it suggests a series of questions to align investments with resilience building, and to better balance the achievement of the resilience principles with financial requirements such as financial diversification and replicability. It argues for a critical look to be taken at how resilience principles, which focus on longer-term systems perspectives, could complement the focus in DRR on critical and immediate stresses.

  1. Resilience versus "Resilient Individual": What Exactly Do We Study?

    Directory of Open Access Journals (Sweden)

    Jan Sebastian Novotný

    2014-07-01

    Full Text Available The nature and definition of resilience, despite the extensive 40 years of research, is still unclear. Currently, resilience is seen as a personality trait, a sum of traits/factors, a result of adaptation, or a process. The concept of resilience as a personality trait is usually tied to uni-dimensional or "simplex" theories of resistance such as Hardiness, Sense of Control, Ego-Resiliency, Self-efficacy, Sense of Coherence, or specific personality traits. Multidimensional concepts see resilience as a complex of personality and social (environmental) factors that work in interaction, complement or replace each other, and, in aggregate, create a comprehensive picture of resilience. The concept of resilience as the result of adaptation examines resilience in terms of the presence/absence of adverse/pathological manifestations, consequences and outcomes in relation to the earlier effect of stressful, risky or otherwise unfavorable situations. Finally, the concept of resilience as a process examines the individual's response to risk factors or wounds that are present in the environment. Resilience is thus a process consisting of interactions between individual characteristics and the environment. Most experts and a large part of resilience research are based on the first three concepts, which however explore how "resilient" the individual is rather than resilience itself, since they are based on "diagnosing" or at best dimensional, at worst dichotomous rating of the individual's resilience (within the personality trait approach), or on the evaluation of the presence/absence of factors/sources of resilience, thereby still holding to the "diagnostic" approach (within the multidimensional approach). Only the examination of processes, such as the ongoing interaction between these risk factors, resilience factors, outcomes (expressions of personality, behavior, presence of problems, etc.) and other variables allows us to understand resilience (the true nature of how

  2. Measuring resilience in integrated planning

    DEFF Research Database (Denmark)

    Apneseth, K.; Wahl, A. M.; Hollnagel, E.

    2013-01-01

    This chapter demonstrates how a Resilience Analysis Grid (RAG) can be used to profile the performance of a company in terms of the four abilities that characterize a resilient organization. It describes the development of a new, RAG-based tool founded on Resilience Engineering principles that can...... be used to assess an organization's resilience. The tool was tested in a case study involving a company in the offshore oil and gas industry. The company had decided to adopt an Integrated Operations (IO) approach to operations and maintenance planning and the tool was used to evaluate the impact...... of the Integrated Planning (IPL) process on its resilience....

  3. Experimenting for resilience

    DEFF Research Database (Denmark)

    Hagedorn-Rasmussen, Peter; Dupret, Katia

    Focusing on how an experimental approach to organizing may pave the way for organizational resilience, we explore opportunities and barriers of experimental organizing by following a concrete social experiment in civil society and discuss its adaptability in traditional organizations. The social ...... through balancing a strategic and anticipatory strategy with experimental setups inspired by civil society organizing initiatives....

  4. State Energy Resilience Framework

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Finster, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Pillon, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Petit, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Trail, J. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-12-01

    The energy sector infrastructure’s high degree of interconnectedness with other critical infrastructure systems can lead to cascading and escalating failures that can strongly affect both economic and social activities. The operational goal is to maintain energy availability for customers and consumers. For this body of work, a State Energy Resilience Framework in five steps is proposed.

  5. Wellbeing And Resilience

    DEFF Research Database (Denmark)

    Harder, Susanne; Davidsen, Kirstine Agnete; MacBeth, Angus

    2015-01-01

    , 16 and 52 weeks in terms of evolution of very early indicators of developmental risk and resilience focusing on three possible environmental transmission mechanisms: stress, maternal caregiver representation, and caregiver-infant interaction. DISCUSSION: The study will provide data on very early risk...

  6. Resilience through adaptation.

    Directory of Open Access Journals (Sweden)

    Guus A Ten Broeke

    Full Text Available Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.

  7. Resilience from coastal protection.

    Science.gov (United States)

    Ewing, Lesley C

    2015-10-28

    Coastal areas are important residential, commercial and industrial areas; but coastal hazards can pose significant threats to these areas. Shoreline/coastal protection elements, both built structures such as breakwaters, seawalls and revetments, as well as natural features such as beaches, reefs and wetlands, are regular features of a coastal community and are important for community safety and development. These protection structures provide a range of resilience to coastal communities. During and after disasters, they help to minimize damages and support recovery; during non-disaster times, the values from shoreline elements shift from the narrow focus on protection. Most coastal communities have limited land and resources and few can dedicate scarce resources solely for protection. Values from shore protection can and should expand to include environmental, economic and social/cultural values. This paper discusses the key aspects of shoreline protection that influence effective community resilience and protection from disasters. This paper also presents ways that the economic, environmental and social/cultural values of shore protection can be evaluated and quantified. It presents the Coastal Community Hazard Protection Resilience (CCHPR) Index for evaluating the resilience capacity provided to coastal communities by various protection schemes and demonstrates the use of this Index for an urban beach in San Francisco, CA, USA. © 2015 The Author(s).

  8. Resilience through adaptation

    NARCIS (Netherlands)

    Broeke, ten Guus; Voorn, van George A.K.; Ligtenberg, Arend; Molenaar, Jaap

    2017-01-01

    Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations

  9. Resilience through adaptation.

    Science.gov (United States)

    Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap

    2017-01-01

    Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.
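
    The distributional comparison at the heart of this method can be sketched in a few lines. The snippet below, which assumes hypothetical arrays of model output collected over time from runs with and without agent adaptation, uses the one-dimensional earth-mover's (Wasserstein) distance from SciPy; it only illustrates the comparison step, not the authors' full sensitivity-analysis methodology.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_over_time(runs_with, runs_without):
    """Earth-mover's distance between output distributions at each time step.

    runs_with / runs_without: arrays of shape (n_runs, n_steps) holding a
    hypothetical ABM output (e.g., resource level) with and without adaptation.
    """
    n_steps = runs_with.shape[1]
    return np.array([
        wasserstein_distance(runs_with[:, t], runs_without[:, t])
        for t in range(n_steps)
    ])

# toy example with synthetic data standing in for ABM runs
rng = np.random.default_rng(0)
with_adapt = rng.normal(loc=1.0, scale=0.2, size=(200, 50))
without_adapt = rng.normal(loc=0.7, scale=0.4, size=(200, 50))
print(emd_over_time(with_adapt, without_adapt)[:5])
```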

  10. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    We investigate lossless coding of video using predictive coding and motion compensation. The methods incorporate state-of-the-art lossless techniques such as context-based prediction and bias cancellation, Golomb coding, high resolution motion field estimation, 3-dimensional predictors, prediction...... using one or multiple previous images, predictor dependent error modelling, and selection of motion field by code length. For slow pan or slow zoom sequences, coding methods that use multiple previous images are up to 20% better than motion compensation using a single previous image and up to 40% better...

  11. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
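
    As a rough illustration of how audio bits might be hidden in the wavelet domain of a video frame, the sketch below applies quantization-index modulation to one detail subband of a 2-D Haar transform (via the PyWavelets package). The subband choice, step size and QIM scheme are assumptions made for illustration only; they do not reproduce the embedded bit-plane/index/Huffman coding pipeline described in the record.

```python
import numpy as np
import pywt

def embed_bits(frame, bits, step=8.0):
    """Hide one bit per HL detail coefficient via quantization-index modulation."""
    ll, (hl, lh, hh) = pywt.dwt2(frame.astype(float), "haar")
    flat = hl.flatten()
    n = min(len(bits), flat.size)
    idx = np.round(flat[:n] / step).astype(int)
    idx += np.asarray(bits[:n]) - (idx & 1)      # force coefficient parity = bit
    flat[:n] = idx * step
    return pywt.idwt2((ll, (flat.reshape(hl.shape), lh, hh)), "haar")

def extract_bits(frame, n, step=8.0):
    """Recover the embedded bits from the parity of the quantized coefficients."""
    _, (hl, _, _) = pywt.dwt2(frame.astype(float), "haar")
    return np.round(hl.flatten()[:n] / step).astype(int) & 1
```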

  12. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity, while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the image peak signal to noise ratios (PSNRs) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.
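
    Side-information generation by block matching, the building block that COMPETE refines, can be sketched as follows. This toy interpolator assumes two grayscale key frames as NumPy arrays and linear motion between them; it implements a plain full search for illustration, not the categorized, fidelity-weighted prediction proposed in the paper.

```python
import numpy as np

def side_information(prev, nxt, block=8, search=4):
    """Toy motion-compensated interpolation between two key frames."""
    h, w = prev.shape
    si = np.zeros((h, w))
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = nxt[y:y + block, x:x + block].astype(float)
            best_cost, best_vec = np.inf, (0, 0)
            for dy in range(-search, search + 1):        # full search in the previous key frame
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy + block, xx:xx + block].astype(float)
                        cost = np.abs(cand - ref).sum()
                        if cost < best_cost:
                            best_cost, best_vec = cost, (dy, dx)
            dy, dx = best_vec
            # assume linear motion: the Wyner-Ziv frame lies halfway between the key frames
            mid = prev[y + dy // 2:y + dy // 2 + block,
                       x + dx // 2:x + dx // 2 + block].astype(float)
            si[y:y + block, x:x + block] = 0.5 * (mid + ref)
    return si
```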

  13. Resilience: Building immunity in psychiatry

    Science.gov (United States)

    Shastri, Priyvadan Chandrakant

    2013-01-01

    The challenges in our personal, professional, financial, and emotional world are on the rise, more so in developing countries, and people will be longing for mental wellness to achieve complete health in their lives. Resilience stands for one's capacity to recover from extremes of trauma and stress. Resilience in a person reflects a dynamic union of factors that encourages positive adaptation despite exposure to adverse life experiences. One needs to have a three-dimensional construct for understanding resilience as a state (what is it and how does one identify it?), a condition (what can be done about it?), and a practice (how does one get there?). Evaluating the level of resilience requires the measurement of internal (personal) and external (environmental) factors, taking into account that family and social environment variables play very important roles in an individual's resilience. Protection factors seem to be more important in the development of resilience than risk factors. Resilience is a process that lasts a lifetime, with periods of acquisition and maintenance, and reduction and loss for assessment. Overall, currently available data on resilience suggest the presence of a neurobiological substrate, based largely on genetics, which correlates with personality traits, some of which are configured via social learning. The major questions about resilience revolve around properly defining the concept, identifying the factors involved in its development and recognizing whether it is actually possible to immunize mental health against adversities. In the clinical field, it may be possible to identify predisposing factors or risk factors for psychopathologies and to develop new intervention strategies, both preventive and therapeutic, based on the concept of resilience. The preferred environments for application of resilience are health, education, and social policy and the right approach in integrating; it can be developed only with more research

  14. Census Videos

    Science.gov (United States)


  15. Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.

    Science.gov (United States)

    Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella

    2010-07-01

    Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method favorably compares with state-of-art MDC techniques.
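
    The blending principle can be illustrated with a deliberately simplified sketch: each block of coefficients is quantized at a fine and a coarse rate, and the two descriptions alternate which version they carry, so either description alone yields a usable reconstruction while both together recover the finer coding of every block. The block partitioning and quantization steps below are assumptions for illustration, not the JPEG 2000 / H.264 machinery used in the paper.

```python
import numpy as np

def make_descriptions(blocks, q_fine=4.0, q_coarse=16.0):
    """Two descriptions built by blending fine- and coarse-rate codings."""
    fine = [np.round(b / q_fine) * q_fine for b in blocks]
    coarse = [np.round(b / q_coarse) * q_coarse for b in blocks]
    d1 = [fine[i] if i % 2 == 0 else coarse[i] for i in range(len(blocks))]
    d2 = [coarse[i] if i % 2 == 0 else fine[i] for i in range(len(blocks))]
    return d1, d2

def central_decode(d1, d2):
    """When both descriptions arrive, keep the finer version of every block."""
    return [d1[i] if i % 2 == 0 else d2[i] for i in range(len(d1))]

# toy usage with synthetic coefficient blocks
blocks = [np.random.default_rng(i).normal(size=(8, 8)) * 50 for i in range(4)]
d1, d2 = make_descriptions(blocks)
central = central_decode(d1, d2)   # a side decoder would use d1 or d2 alone
```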

  16. Exploring resilience in rural GP registrars--implications for training.

    Science.gov (United States)

    Walters, Lucie; Laurence, Caroline O; Dollard, Joanne; Elliott, Taryn; Eley, Diann S

    2015-07-02

    Resilience can be defined as the ability to rebound from adversity and overcome difficult circumstances. General Practice (GP) registrars face many challenges in transitioning into general practice, and additional stressors and pressures apply for those choosing a career in rural practice. At this time of international rural generalist medical workforce shortages, it is important to focus on the needs of rural GP registrars and how to support them to become resilient health care providers. This study sought to explore GP registrars' perceptions of their resilience and strategies they used to maintain resilience in rural general practice. In this qualitative interpretive research, semi-structured interviews were recorded, transcribed and analysed using an inductive approach. Initial coding resulted in a coding framework which was refined using constant comparison and negative case analysis. Authors developed consensus around the final conceptual model. Eighteen GP registrars from: Australian College of Rural and Remote Medicine Independent Pathway, and three GP regional training programs with rural training posts. Six main themes emerged from the data. Firstly, rural GP registrars described four dichotomous tensions they faced: clinical caution versus clinical courage; flexibility versus persistence; reflective practice versus task-focused practice; and personal connections versus professional commitment. Further themes included: personal skills for balance which facilitated resilience including optimistic attitude, self-reflection and metacognition; and finally GP registrars recognised the role of their supervisors in supporting and stretching them to enhance their clinical resilience. Resilience is maintained as on a wobble board by balancing professional tensions within acceptable limits. These limits are unique to each individual, and may be expanded through personal growth and professional development as part of rural general practice training.

  17. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  18. Digital video technologies and their network requirements

    Energy Technology Data Exchange (ETDEWEB)

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements are the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  19. Airborne Video Surveillance

    National Research Council Canada - National Science Library

    Blask, Steven

    2002-01-01

    The DARPA Airborne Video Surveillance (AVS) program was established to develop and promote technologies to make airborne video more useful, providing capabilities that achieve a UAV force multiplier...

  20. Quantifying resilience for resilience engineering of socio technical systems

    OpenAIRE

    Häring, Ivo; Ebenhöch, Stefan; Stolz, Alexander

    2016-01-01

    Resilience engineering can be defined to comprise originally technical, engineering and natural science approaches to improve the resilience and sustainability of socio technical cyber-physical systems of various complexities with respect to disruptive events. It is argued how this emerging interdisciplinary technical and societal science approach may contribute to civil and societal security research. In this context, the article lists expected benefits of quantifying resilience. Along the r...

  1. From resilience thinking to Resilience Planning: Lessons from practice.

    Science.gov (United States)

    Sellberg, M M; Ryan, P; Borgström, S T; Norström, A V; Peterson, G D

    2018-07-01

    Resilience thinking has frequently been proposed as an alternative to conventional natural resource management, but there are few studies of its applications in real-world settings. To address this gap, we synthesized experiences from practitioners that have applied a resilience thinking approach to strategic planning, called Resilience Planning, in regional natural resource management organizations in Australia. This case represents one of the most extensive and long-term applications of resilience thinking in the world today. We conducted semi-structured interviews with Resilience Planning practitioners from nine organizations and reviewed strategic planning documents to investigate: 1) the key contributions of the approach to their existing strategic planning, and 2) what enabled and hindered the practitioners in applying and embedding the new approach in their organizations. Our results reveal that Resilience Planning contributed to developing a social-ecological systems perspective, more adaptive and collaborative approaches to planning, and that it clarified management goals of desirable resource conditions. Applying Resilience Planning required translating resilience thinking to practice in each unique circumstance, while simultaneously creating support among staff, and engaging external actors. Embedding Resilience Planning within organizations implied starting and maintaining longer-term change processes that required sustained multi-level organizational support. We conclude by identifying four lessons for successfully applying and embedding resilience practice in an organization: 1) to connect internal "entrepreneurs" to "interpreters" and "networkers" who work across organizations, 2) to assess the opportunity context for resilience practice, 3) to ensure that resilience practice is a learning process that engages internal and external actors, and 4) to develop reflective strategies for managing complexity and uncertainty. Copyright © 2018 The Authors

  2. Framing resilience: social uncertainty in designing urban climate resilience

    OpenAIRE

    Wardekker, J.A.

    2016-01-01

    Building urban resilience to climate change and other challenges will be essential for maintaining thriving cities into the future. Resilience has become very popular in both research on and practice of climate adaptation. However, people have different interpretations of what it means: what resilience-building contributes to, what the problems, causes and solutions are, and what trade-offs, side-effects and other normative choices are acceptable. These different ways of ‘framing’ climate res...

  3. Creating resilient SMEs

    DEFF Research Database (Denmark)

    Dahlberg, Rasmus; Guay, Fanny

    2015-01-01

    According to the EU, during the past five years, small and medium enterprises (SMEs) have created 85% of new jobs and two-thirds of private sector employment in the region. SMEs are considered the backbone of the economy in Europe and represent more than 95% of enterprises in USA and Australia....... They are considered more vulnerable to disasters because of their size. This paper argues, on the contrary, that SMEs also can be less vulnerable to sudden change than large corporations, drawing upon the ideas of Hayek and Taleb, and that networks of SMEs may contribute to the overall resilience of society...... if certain criteria are met. With this in mind, this paper will be examining how to create resilient SMEs. A well-known concept in the field is business continuity management. BCM is defined as “a holistic management process that identifies potential threats to an organization and the impacts to business...

  4. Video traffic characteristics of modern encoding standards: H.264/AVC with SVC and MVC extensions and H.265/HEVC.

    Science.gov (United States)

    Seeling, Patrick; Reisslein, Martin

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC.

  5. 4K Video Traffic Prediction using Seasonal Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    D. R. Marković

    2017-06-01

    Full Text Available From the perspective of the average viewer, high definition video streams such as HD (High Definition) and UHD (Ultra HD) are increasing their internet presence year over year. This is not surprising, having in mind the expansion of HD streaming services such as YouTube, Netflix, etc. Therefore, high definition video streams are starting to challenge network resource allocation with their bandwidth requirements and statistical characteristics. The need for analysis and modeling of this demanding video traffic is essential for better quality of service and experience support. In this paper we use an easy-to-apply statistical model for prediction of 4K video traffic. Namely, seasonal autoregressive modeling is applied to the prediction of 4K video traffic encoded with HEVC (High Efficiency Video Coding). Analysis and modeling were performed within the R programming environment using over 17,000 high definition video frames. It is shown that the proposed methodology provides good accuracy in high definition video traffic modeling.
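
    The paper fits its seasonal autoregressive model in R; the sketch below shows the same idea in Python with statsmodels, assuming a hypothetical text file of per-frame sizes and using the GOP length as the seasonal period. The model orders, GOP length and file name are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# hypothetical per-frame sizes (bytes) of an HEVC-coded 4K sequence
frame_sizes = np.loadtxt("frame_sizes.txt")      # assumed input file
train, test = frame_sizes[:-200], frame_sizes[-200:]

# seasonal AR(1) with the GOP length (here assumed to be 16) as the period
model = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 0, 0, 16))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=len(test))
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"out-of-sample MAPE: {mape:.1f}%")
```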

  6. Introduce subtitles to your video using Aegisub

    CERN Multimedia

    CERN. Geneva; Dawson, Kyle Richard

    2018-01-01

    This is a video explaining how to equip your video with subtitles using the tool Aegisub. You'll also need the site webvtt.org. Here is the standard filename scheme for subtitles in various languages: to be fully compatible with both CDS and Videos, please name the subtitle file in the standard format _.vtt, where the language part is a two-letter ISO code (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).   NB! You need to have the script written beforehand!

  7. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  8. Resilience Through Ecological Network

    Directory of Open Access Journals (Sweden)

    Grazia Brunetta

    2014-05-01

    Full Text Available The paper explores the strategic role that urban biodiversity and ecosystem services management, natural infrastructure and adaptive governance approaches can play in making our economies and societies more resilient and in linking human societies and the natural environment. Resilience – a concept that entered the debate on urban governance – means the ability of urban systems, considered as linear-systems, to react to external disturbances by returning to some socio-ecological equilibrium steady-state by overcoming a crisis period (Gunderson et al. 2010, Newman et al. 2009). In this view, green infrastructures can assume a strategic role in restoring and enhancing the ecological and environmental livability in urban areas. Starting from the International and European context, the paper discusses innovative programs and interdisciplinary projects and practices (some cases in the Turin Metropolitan Area) to demonstrate how green infrastructures can increase the adaptive capacity of urban systems in terms of resilience. They can contribute to increasing the ability of European cities to adapt to climate change and to reduce their ecological footprints, to enhance security and life quality.

  9. NEI You Tube Videos: Amblyopia

    Medline Plus


  10. Coding of Depth Images for 3DTV

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper a brief overview of the topic of coding and compression of depth images for multi-view image and video coding is provided. Depth images represent a convenient way to describe distances in the 3D scene, useful for 3D video processing purposes. Standard approaches...... for the compression of depth images are described and compared against some recent specialized algorithms able to achieve higher compression performances. Future research directions close the paper....

  11. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
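
    The sample-point comparison described above can be sketched in a few lines. The snippet below assumes grayscale frames as NumPy arrays and a shared seed standing in for the synchronized point selection performed by the microprocessor circuitry at the camera and recorder; the number of points and tolerances are illustrative values, not those of the original system.

```python
import numpy as np

def sample_points(shape, n, seed):
    """Pseudo-random pixel positions; camera and recorder share the same seed."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, shape[0], n), rng.integers(0, shape[1], n)

def authenticate(camera_frame, recorder_frame, seed, n=64, tol=8, max_bad=4):
    """Compare gray-scale values at shared sample points; authenticate if
    nearly all of them agree within a tolerance."""
    ys, xs = sample_points(camera_frame.shape, n, seed)
    diff = np.abs(camera_frame[ys, xs].astype(int) - recorder_frame[ys, xs].astype(int))
    return int((diff > tol).sum()) <= max_bad
```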

  12. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  13. Usage of QR code in tourism industry

    OpenAIRE

    Emek, Mehmet

    2012-01-01

    QR (Quick Response) code scanning allows the user to obtain in-depth information about the scanned item. Apps used for scanning QR codes can be found on nearly all smart phone devices. Travelers who have a smart phone equipped with the correct reader software can easily access QR coded information (text, photo, video, web page, etc.) when it is available. Travelers can scan QR coded galleries, places, vineyards or monuments when they are visiting and reach the detailed information without usi...

  14. Communal resilience: the Lebanese case

    Directory of Open Access Journals (Sweden)

    Eric BOUTIN

    2015-07-01

    Full Text Available In a turbulent and aggressive environment, organizations are subject to external events. They are sometimes destabilized and can disappear. This context explains the growing number of works studying the resilience of human organizations. Resilience is then defined as the ability of the organization studied to face an external shock. This paper proposes a state of the art of the resilience concept and considers the value of transposing the concept to the field of a territorial community. A case study will lead us to apply the concept of resilience to the Lebanese nation.

  15. Assessing Resilience in Stressed Watersheds

    Directory of Open Access Journals (Sweden)

    Kristine T. Nemec

    2014-03-01

    Full Text Available Although several frameworks for assessing the resilience of social-ecological systems (SESs have been developed, some practitioners may not have sufficient time and information to conduct extensive resilience assessments. We have presented a simplified approach to resilience assessment that reviews the scientific, historical, and social literature to rate the resilience of an SES with respect to nine resilience properties: ecological variability, diversity, modularity, acknowledgement of slow variables, tight feedbacks, social capital, innovation, overlap in governance, and ecosystem services. We evaluated the effects of two large-scale projects, the construction of a major dam and the implementation of an ecosystem recovery program, on the resilience of the central Platte River SES (Nebraska, United States. We used this case study to identify the strengths and weaknesses of applying a simplified approach to resilience assessment. Although social resilience has increased steadily since the predam period for the central Platte River SES, ecological resilience was greatly reduced in the postdam period as compared to the predam and ecosystem recovery program time periods.

  16. A quantitative framework for assessing ecological resilience

    Science.gov (United States)

    Quantitative approaches to measure and assess resilience are needed to bridge gaps between science, policy, and management. In this paper, we suggest a quantitative framework for assessing ecological resilience. Ecological resilience as an emergent ecosystem phenomenon can be de...

  17. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades the rapid evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which accounts for the bulk of data traffic in general. Because video occupies so much of the data carried on the World Wide Web, reducing the burden on the Internet and the bandwidth consumed by video makes it easier for users to access video data. For this purpose, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the question of which is the better technology in terms of rate-distortion performance and coding standard. This paper addresses the difficulty of achieving low delay in video compression and in video applications, e.g. ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques using subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental idea of dividing the video file into several segments for compression and putting them back together, to improve the efficiency of video compression on the web as well as in offline mode.
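
    As a minimal sketch of the segmentation idea only (the paper benchmarks HEVC and VP9, whereas this example just splits a file and applies a generic lossless compressor to each piece), the snippet below divides a video file into fixed-size segments that can be compressed, transmitted and reassembled independently.

```python
import zlib

def compress_in_segments(path, segment_bytes=4 * 1024 * 1024):
    """Split a file into fixed-size segments and compress each one independently."""
    segments = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(segment_bytes)
            if not chunk:
                break
            segments.append(zlib.compress(chunk, 6))
    return segments

def reassemble(segments, out_path):
    """Decompress the segments and put the original file back together."""
    with open(out_path, "wb") as f:
        for seg in segments:
            f.write(zlib.decompress(seg))
```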

  18. Rare Disease Video Portal

    OpenAIRE

    Sánchez Bocanegra, Carlos Luis

    2011-01-01

    Rare Disease Video Portal (RD Video) is a web portal that contains videos from YouTube, including all details, from 12 YouTube channels.

  19. Probabilistic Modelling of Robustness and Resilience of Power Grid Systems

    DEFF Research Database (Denmark)

    Qin, Jianjun; Sansavini, Giovanni; Nielsen, Michael Havbro Faber

    2017-01-01

    The present paper proposes a framework for the modeling and analysis of resilience of networked power grid systems. A probabilistic systems model is proposed based on the JCSS Probabilistic Model Code (JCSS, 2001) and deterministic engineering systems modeling techniques such as the DC flow model...... cascading failure event scenarios (Nan and Sansavini, 2017). The concept of direct and indirect consequences proposed by the Joint Committee on Structural Safety (JCSS, 2008) is utilized to model the associated consequences. To facilitate a holistic modeling of robustness and resilience, and to identify how...... these characteristics may be optimized, the power grid system is finally interlinked with its fundamental interdependent systems, i.e. a societal model, a regulatory system and control feedback loops. The proposed framework is exemplified with reference to optimal decision support for resilience...

  20. Framing resilience: social uncertainty in designing urban climate resilience

    NARCIS (Netherlands)

    Wardekker, J.A.

    2016-01-01

    Building urban resilience to climate change and other challenges will be essential for maintaining thriving cities into the future. Resilience has become very popular in both research on and practice of climate adaptation. However, people have different interpretations of what it means: what

  1. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates...... the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences is tested. Results show...... that the quality scores computed by the proposed method are highly correlated with the subjective assessment....
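
    The feature-to-quality mapping step can be sketched with scikit-learn's Elastic Net, assuming hypothetical arrays of per-sequence features (extracted from intra-coded frames) and corresponding subjective scores; the file names, feature set and cross-validation settings are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# hypothetical inputs: one feature row per HEVC-coded sequence, one MOS per row
features = np.load("hevc_features.npy")
mos = np.load("subjective_mos.npy")

# standardize the features, then fit an Elastic Net with cross-validated regularization
model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))
model.fit(features, mos)

predicted_quality = model.predict(features)   # NR quality estimates
```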

  2. Midwives' experiences of workplace resilience.

    Science.gov (United States)

    Hunter, Billie; Warren, Lucie

    2014-08-01

    Many UK midwives experience workplace adversity resulting from a national shortage of midwives, rise in birth rate and increased numbers of women entering pregnancy with complex care needs. Research evidence suggests that workplace pressures, and the emotional demands of the job, may increase midwives' experience of stress and contribute to low morale, sickness and attrition. Much less is known about midwives who demonstrate resilience in the face of adversity. Resilience has been investigated in studies of other health and social care workers, but there is a gap in knowledge regarding midwives' experiences. To explore clinical midwives' understanding and experience of professional resilience and to identify the personal, professional and contextual factors considered to contribute to or act as barriers to resilience. An exploratory qualitative descriptive study. In Stage One, a closed online professional discussion group was conducted over a one month period. Midwives discussed workplace adversity and their resilient responses to this. In Stage Two, the data were discussed with an Expert Panel with representatives from midwifery workforce and resilience research, in order to enhance data interpretation and refine the concept modelling. The online discussion group was hosted by the Royal College of Midwives, UK online professional networking hub: 'Communities'. 11 practising midwives with 15 or more years of 'hands on clinical experience', and who self-identified as being resilient, took part in the online discussion group. Thematic analysis of the data identified four themes: challenges to resilience, managing and coping, self-awareness and building resilience. The participants identified 'critical moments' in their careers when midwives were especially vulnerable to workplace adversity. Resilience was seen as a learned process which was facilitated by a range of coping strategies, including accessing support and developing self-awareness and protection of self

  3. Literature Review of Concepts: Psychological Resiliency

    National Research Council Canada - National Science Library

    Wald, Jaye; Taylor, Steven; Asmundson, Gordon J; Jang, Kerry L; Stapleton, Jennifer

    2006-01-01

    ...; and resiliency measures, their development and validation. Existing definitions implicate resiliency with the ability to adapt and successfully cope with adversity, life stressors, and traumatic events...

  4. Surgical navigation with QR codes

    Directory of Open Access Journals (Sweden)

    Katanacho Manuel

    2016-09-01

    Full Text Available The presented work is an alternative to established measurement systems in surgical navigation. The system is based on camera based tracking of QR code markers. The application uses a single video camera, integrated in a surgical lamp, that captures the QR markers attached to surgical instruments and to the patient.
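
    Camera-based detection of QR markers, the core of the tracking described above, can be sketched with OpenCV's built-in detector. The camera index and the use of a plain webcam are assumptions made for illustration; pose estimation from the marker corners and the surgical-lamp integration are outside the scope of this snippet.

```python
import cv2

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)            # any camera; index 0 is an assumption

while True:
    ok, frame = cap.read()
    if not ok:
        break
    data, points, _ = detector.detectAndDecode(frame)
    if points is not None and data:
        # the four detected corners locate the tagged instrument in the image
        print(data, points.reshape(-1, 2))
    if cv2.waitKey(1) == 27:         # Esc to stop
        break

cap.release()
```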

  5. Systemic aspects of conjugal resilience in couples with a child facing cancer and marrow transplantation.

    Science.gov (United States)

    Martin, Julie; Péloquin, Katherine; Vachon, Marie-France; Duval, Michel; Sultan, Serge

    The negative impact of paediatric cancer on parents is well known and is even greater when intensive treatments are used. This study aimed to describe how couples whose child has received a transplant for the treatment of leukaemia view conjugal resilience and to evaluate the role of we-ness as a precursor of conjugal adjustment. Four parental couples were interviewed. Interviews were analysed in two ways: inductive thematic analysis and rating of verbal content with the We-ness Coding Scale. Participants report that conjugal resilience involves the identification of the couple as a team and cohesion in the couple. Being a team generates certain collaborative interactions that lead to conjugal resilience. A sense of we-ness in parents is associated with fluctuation in the frequency of themes. Participants' vision of conjugal resilience introduced novel themes. The sense of we-ness facilitates cohesion and the process of conjugal resilience.

  6. Systemic aspects of conjugal resilience in couples with a child facing cancer and marrow transplantation

    Directory of Open Access Journals (Sweden)

    Julie Martin

    2016-09-01

    Full Text Available Introduction: The negative impact of paediatric cancer on parents is well known and is even greater when intensive treatments are used. This study aimed to describe how couples whose child has received a transplant for the treatment of leukaemia view conjugal resilience and to evaluate the role of we-ness as a precursor of conjugal adjustment. Methods: Four parental couples were interviewed. Interviews were analysed in two ways: inductive thematic analysis and rating of verbal content with the We-ness Coding Scale. Results: Participants report that conjugal resilience involves the identification of the couple as a team and cohesion in the couple. Being a team generates certain collaborative interactions that lead to conjugal resilience. A sense of we-ness in parents is associated with fluctuation in the frequency of themes. Discussion: Participants’ vision of conjugal resilience introduced novel themes. The sense of we-ness facilitates cohesion and the process of conjugal resilience.

  7. The quest for resilience.

    Science.gov (United States)

    Hamel, Gary; Välikangas, Liisa

    2003-09-01

    In less turbulent times, executives had the luxury of assuming that business models were more or less immortal. Companies always had to work to get better, but they seldom had to get different--not at their core, not in their essence. Today, getting different is the imperative. It's the challenge facing Coca-Cola as it struggles to raise its "share of throat" in noncarbonated beverages. It's the task that bedevils McDonald's as it tries to restart its growth in a burger-weary world. It's the hurdle for Sun Microsystems as it searches for ways to protect its high-margin server business from the Linux onslaught. Continued success no longer hinges on momentum. Rather, it rides on resilience-on the ability to dynamically reinvent business models and strategies as circumstances change. Strategic resilience is not about responding to a onetime crisis or rebounding from a setback. It's about continually anticipating and adjusting to deep, secular trends that can permanently impair the earning power of a core business. It's about having the capacity to change even before the case for change becomes obvious. To thrive in turbulent times, companies must become as efficient at renewal as they are at producing today's products and services. To achieve strategic resilience, companies will have to overcome the cognitive challenge of eliminating denial, nostalgia, and arrogance; the strategic challenge of learning how to create a wealth of small tactical experiments; the political challenge of reallocating financial and human resources to where they can earn the best returns; and the ideological challenge of learning that strategic renewal is as important as optimization.

  8. Post-Traumatic Growth and Resilience in Adolescent and Young Adult Cancer Patients: An Overview.

    Science.gov (United States)

    Greup, Suzanne R; Kaal, Suzanne E J; Jansen, Rosemarie; Manten-Horst, Eveliene; Thong, Melissa S Y; van der Graaf, Winette T A; Prins, Judith B; Husson, Olga

    2018-02-01

    The aim of this study was to provide an overview of the literature on post-traumatic growth (PTG) and resilience among adolescent and young adult (AYA) cancer patients. A literature search in Embase, PsychInfo, PubMed, Web of Science, Cochrane Library, and Cinahl was carried out. Thirteen articles met the pre-defined inclusion criteria. Qualitative interview studies showed that AYA cancer patients report PTG and resilience: PTG is described by AYA cancer patients in terms of benefit finding, including changing view of life and feeling stronger and more confident, whereas resilience is described as a balance of several factors, including stress and coping, goals, optimism, finding meaning, connection, and belonging. Quantitative studies showed that sociodemographic and clinical characteristics were not associated with PTG. Enduring stress was negatively, and social support positively, associated with PTG. Symptom distress and defensive coping were negatively and adaptive cognitive coping was positively associated with resilience. Both PTG and resilience were positively associated with satisfaction with life and health-related quality of life (HRQoL). Resilience was found to be a mediator in the relationship between symptom distress and HRQoL. Two interventions aiming to promote resilience, a stress management and a therapeutic music video-intervention, were not successful in significantly increasing overall resilience. Most AYA cancer patients report at least some PTG or resilience. Correlates of PTG and resilience, including symptom distress, stress, coping, social support, and physical activity, provide further insight to improve the effectiveness of interventions aimed at promoting these positive outcomes and potentially buffer negative outcomes.

  9. Integrating Usability Evaluation into Model-Driven Video Game Development

    OpenAIRE

    Fernandez , Adrian; Insfran , Emilio; Abrahão , Silvia; Carsí , José ,; Montero , Emanuel

    2012-01-01

    Part 3: Short Papers; International audience; The increasing complexity of video game development highlights the need for design and evaluation methods for enhancing quality and reducing time and cost. In this context, Model-Driven Development approaches seem to be very promising since a video game can be obtained by transforming platform-independent models into platform-specific models that can in turn be transformed into code. Although this approach is starting to be used for video game de...

  10. Resilience and reworking practices

    DEFF Research Database (Denmark)

    Hauge, Mads Martinus; Fold, Niels

    2016-01-01

    of this article is to shed light on the agency of individual workers involved in rapid industrialization processes. In this endeavor we draw inspiration from recent contributions that have integrated Cindi Katz's threefold categorization of agency as reworking, resilience and resistance. In combination...... the labor market. The empirical part of the article draws on interviews with local and migrant first-generation workers in two settlements located next to an industrial zone in Can Tho Province in the Mekong River Delta Region of Vietnam. It is suggested that the alternating practices of reworking...

  11. Stiffness, resilience, compressibility

    Energy Technology Data Exchange (ETDEWEB)

    Leu, Bogdan M. [Argonne National Laboratory, Advanced Photon Source (United States); Sage, J. Timothy, E-mail: jtsage@neu.edu [Northeastern University, Department of Physics and Center for Interdisciplinary Research on Complex Systems (United States)

    2016-12-15

    The flexibility of a protein is an important component of its functionality. We use nuclear resonance vibrational spectroscopy (NRVS) to quantify the flexibility of the heme iron environment in the electron-carrying protein cytochrome c by measuring the stiffness and the resilience. These quantities are sensitive to structural differences between the active sites of different proteins, as illustrated by a comparative analysis with myoglobin. The elasticity of the entire protein, on the other hand, can be probed quantitatively from NRVS and high energy-resolution inelastic X-ray scattering (IXS) measurements, an approach that we used to extract the bulk modulus of cytochrome c.

  12. Leakage resilient password systems

    CERN Document Server

    Li, Yingjiu; Deng, Robert H

    2015-01-01

    This book investigates tradeoff between security and usability in designing leakage resilient password systems (LRP) and introduces two practical LRP systems named Cover Pad and ShadowKey. It demonstrates that existing LRP systems are subject to both brute force attacks and statistical attacks and that these attacks cannot be effectively mitigated without sacrificing the usability of LRP systems. Quantitative analysis proves that a secure LRP system in practical settings imposes a considerable amount of cognitive workload unless certain secure channels are involved. The book introduces a secur

  13. Resilient mounting systems in buildings

    NARCIS (Netherlands)

    Breeuwer, R.; Tukker, J.C.

    1976-01-01

    The basic elements of resilient mounting systems are described and various measures for quantifying the effect of such systems defined. Using electrical analogue circuits, the calculation of these measures is illustrated. With special reference to resilient mounting systems in buildings, under

  14. Tiered Approach to Resilience Assessment.

    Science.gov (United States)

    Linkov, Igor; Fox-Lent, Cate; Read, Laura; Allen, Craig R; Arnott, James C; Bellini, Emanuele; Coaffee, Jon; Florin, Marie-Valentine; Hatfield, Kirk; Hyde, Iain; Hynes, William; Jovanovic, Aleksandar; Kasperson, Roger; Katzenberger, John; Keys, Patrick W; Lambert, James H; Moss, Richard; Murdoch, Peter S; Palma-Oliveira, Jose; Pulwarty, Roger S; Sands, Dale; Thomas, Edward A; Tye, Mari R; Woods, David

    2018-04-25

    Regulatory agencies have long adopted a three-tier framework for risk assessment. We build on this structure to propose a tiered approach for resilience assessment that can be integrated into the existing regulatory processes. Comprehensive approaches to assessing resilience at appropriate and operational scales, reconciling analytical complexity as needed with stakeholder needs and resources available, and ultimately creating actionable recommendations to enhance resilience are still lacking. Our proposed framework consists of tiers by which analysts can select resilience assessment and decision support tools to inform associated management actions relative to the scope and urgency of the risk and the capacity of resource managers to improve system resilience. The resilience management framework proposed is not intended to supplant either risk management or the many existing efforts of resilience quantification method development, but instead provide a guide to selecting tools that are appropriate for the given analytic need. The goal of this tiered approach is to intentionally parallel the tiered approach used in regulatory contexts so that resilience assessment might be more easily and quickly integrated into existing structures and with existing policies. Published 2018. This article is a U.S. government work and is in the public domain in the USA.

  15. Resiliency against stress among athletes

    Directory of Open Access Journals (Sweden)

    Kamila Litwic-Kaminska

    2015-10-01

    Full Text Available Background The aim of this paper is to describe the results of a study concerning the relationship between resiliency and appraisal of a stressful situation, anxiety reactions and undertaken methods of coping among sportsmen. Participants and procedure The research concerned 192 competitors who actively train in one of the Olympic disciplines – individual or team. We used the following instruments: Resiliency Assessment Scale (SPP-25; Stress Appraisal Questionnaire A/B; Reactions to Competition Questionnaire; Coping Inventory for Stressful Situations (CISS; Sport Stress Coping Strategies Questionnaire (SR3S, self-constructed. Results Athletes most frequently apply positive types of stress appraisal, and they cope with stress through a task-oriented style during competitions. There is a relationship between the level of resiliency and the analysed aspects of the process of stress. The higher the resiliency, the more positive is the appraisal of a stressful situation and the more task-oriented are the strategies applied. Similarly, in everyday situations resilient sportspeople positively appraise difficult situations and undertake mostly task-oriented strategies. Resiliency is connected with less frequently experiencing reactions in the form of anxiety. Conclusions The obtained results, similarly to previous research, suggest that resiliency is connected with experiencing positive emotions. It causes more frequent appraisal of stressful situations as a challenge. More resilient people also choose more effective and situation-appropriate coping strategies. Therefore they are more resistant to stress.

  16. Resilia cyber resilience best practices

    CERN Document Server

    , AXELOS

    2015-01-01

    RESILIA™ Cyber Resilience Best Practices offers a practical approach to cyber resilience, reflecting the need to detect and recover from incidents, and not rely on prevention alone. It uses the ITIL® framework, which provides a proven approach to the provision of services that align to business outcomes.

  17. Developing a workplace resilience instrument.

    Science.gov (United States)

    Mallak, Larry A; Yildiz, Mustafa

    2016-05-27

    Resilience benefits from the use of protective factors, as opposed to risk factors, which are associated with vulnerability. Considerable research and instrument development has been conducted in clinical settings for patients. The need existed for an instrument to be developed in a workplace setting to measure resilience of employees. This study developed and tested a resilience instrument for employees in the workplace. The research instrument was distributed to executives and nurses working in the United States in hospital settings. Five-hundred-forty completed and usable responses were obtained. The instrument contained an inventory of workplace resilience, a job stress questionnaire, and relevant demographics. The resilience items were written based on previous work by the lead author and inspired by Weick's [1] sense-making theory. A four-factor model yielded an instrument having psychometric properties showing good model fit. Twenty items were retained for the resulting Workplace Resilience Instrument (WRI). Parallel analysis was conducted with successive iterations of exploratory and confirmatory factor analyses. Respondents were classified based on their employment with either a rural or an urban hospital. Executives had significantly higher WRI scores than nurses, controlling for gender. WRI scores were positively and significantly correlated with years of experience and the Brief Job Stress Questionnaire. An instrument to measure individual resilience in the workplace (WRI) was developed. The WRI's four factors identify dimensions of workplace resilience for use in subsequent investigations: Active Problem-Solving, Team Efficacy, Confident Sense-Making, and Bricolage.

  18. The International Resilience Research Project.

    Science.gov (United States)

    Grotberg, Edith H.

    Resilience is defined as "the human capacity to face, overcome, and be strengthened by experiences of adversity." This study used an Eriksonian developmental model to examine parents', caregivers', and children's resilience-promotion in children up to 12 years of age. Age and gender differences and cultural/ethnic similarities and…

  19. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
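
    The core reduction can be sketched informally: if a total decider existed for membership in a set of resilient computations, it could be turned into a decider for the halting problem. The sketch below only illustrates the shape of that argument under assumed names (make_wrapped_computation and the resilience_decider argument are hypothetical); the paper's formal constructions use an abstract programming language and Turing machines.

```python
# Illustration only: the names and the wrapping construction are hypothetical,
# not the paper's formal reduction. The point is the shape of the argument:
# a total decider for "is this computation resilient?" would yield a decider
# for the halting problem, which cannot exist.
from typing import Callable


def make_wrapped_computation(program_src: str, program_input: str) -> Callable[[], str]:
    """Build a computation whose resilience hinges on whether the wrapped
    program halts: recovery from an injected fault requires reaching a
    checkpoint placed *after* the wrapped program terminates."""
    def computation() -> str:
        env = {"INPUT": program_input}
        exec(compile(program_src, "<wrapped>", "exec"), env)  # may never return
        return "checkpoint-reached"                           # recovery point
    return computation


def halting_decider_from(resilience_decider: Callable[[Callable[[], str]], bool]):
    """If a total resilience decider existed, halting would become decidable --
    a contradiction, so no such decider can exist."""
    def halts(program_src: str, program_input: str) -> bool:
        return resilience_decider(make_wrapped_computation(program_src, program_input))
    return halts
```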

  20. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
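
    As a rough illustration of this kind of per-frame control (not the paper's closed-form formula, which is not reproduced here), the sketch below picks the coarsest enhancement-layer quantization parameter whose estimated distortion still meets a fixed per-frame target. The distortion model and all parameter names are assumptions.

```python
def estimate_enh_distortion(residual_energy, qp):
    """Rough distortion model: the quantization step grows roughly as
    2^((QP-4)/6) in H.264/SVC, and quantization noise is taken as step^2/12,
    bounded by the residual energy actually left by the base layer.
    A generic stand-in, not the cited paper's closed-form formula."""
    qstep = 2 ** ((qp - 4) / 6.0)
    return min(residual_energy, (qstep ** 2) / 12.0)


def choose_enh_qp(base_qp, residual_energy, target_distortion, qp_min=10, qp_max=51):
    """Return the coarsest enhancement-layer QP whose estimated distortion
    still meets the per-frame target, so quality stays consistent across
    frames while spending as few bits as possible. residual_energy stands for
    the energy of the base layer's quantization error (the residual the
    enhancement layer must code)."""
    qp_max = min(qp_max, base_qp)  # no point coding coarser than the base layer
    chosen = qp_min
    for qp in range(qp_min, qp_max + 1):
        if estimate_enh_distortion(residual_energy, qp) <= target_distortion:
            chosen = qp  # distortion is monotone in QP, so keep the largest feasible QP
    return chosen
```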

  1. Resilient Grid Operational Strategies

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-01

    Extreme weather-related disturbances, such as hurricanes, are a leading cause of grid outages historically. Although physical asset hardening is perhaps the most common way to mitigate the impacts of severe weather, operational strategies may be deployed to limit the extent of societal and economic losses associated with weather-related physical damage. The purpose of this study is to examine bulk power-system operational strategies that can be deployed to mitigate the impact of severe weather disruptions caused by hurricanes, thereby increasing grid resilience to maintain continuity of critical infrastructure during extreme weather. To estimate the impacts of resilient grid operational strategies, Los Alamos National Laboratory (LANL) developed a framework for hurricane probabilistic risk analysis (PRA). The probabilistic nature of this framework allows us to estimate the probability distribution of likely impacts, as opposed to the worst-case impacts. The project scope does not include strategies that are not operations related, such as transmission system hardening (e.g., undergrounding, transmission tower reinforcement and substation flood protection) and solutions in the distribution network.

  2. Resilient leadership and the organizational culture of resilience: construct validation.

    Science.gov (United States)

    Everly, George S; Smith, Kenneth J; Lobo, Rachel

    2013-01-01

    Political, economic, and social unrest and uncertainty seem replete throughout the world. Within the United States, political vitriol and economic volatility have led to severe economic restrictions. Both government and private sector organizations are being asked to do more with less. The specter of dramatic changes in healthcare creates a condition of uncertainty affecting budget allocations and hiring practices. If ever there was a time when a "resilient culture" was needed, it is now. In this paper we shall discuss the application of "tipping point" theory (Gladwell, 2000) operationalized through a special form of leadership: "resilient leadership" (Everly, Strouse, Everly, 2010). Resilient leadership is consistent with Gladwell's "Law of the Few" and strives to create an organizational culture of resilience by implementing an initial change within no more than 20% of an organization's workforce. It is expected that such a minority, if chosen correctly, will "tip" the rest of the organization toward enhanced resilience, ideally creating a self-sustaining culture of resilience. This paper reports on the empirical foundations and construct validation of "resilient leadership".

  3. Adaptive live multicast video streaming of SVC with UEP FEC

    Science.gov (United States)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising on downloading speed. In this article, an improved video transmission system is presented which dynamically enhances the video quality based on a user's current network state and repairs errors from data lost in the video transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM) and an UnEqual Forward Error Correction (FEC) algorithm. The SVC provides an efficient method for providing different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission. A two-dimensional FEC was used. The FEC algorithm came from the Pro-MPEG Code of Practice #3 release 2. Several bit error scenarios (step function, cosine wave) were tested, with different bandwidth sizes and error values simulated. The suggested scheme, which combines SVC video encoding with 3 layers over IP multicast and the unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths and different bit error rates. The results indicate improvement of the video quality in terms of PSNR over previous transmission schemes.
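
    A minimal sketch of the layer-adaptation idea follows, assuming a simple PID controller driven by the gap between available bandwidth and the bitrate of the currently subscribed SVC layers. The gains, rate figures and the mapping from control signal to a layer count are illustrative assumptions, not the article's actual controller.

```python
class PIDLayerController:
    """Toy PID controller mapping a bandwidth error (available minus consumed
    bitrate) to the number of SVC layers to subscribe to (1..n_layers)."""

    def __init__(self, kp=0.8, ki=0.1, kd=0.2, n_layers=3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.n_layers = n_layers
        self.integral = 0.0
        self.prev_error = 0.0
        self.current = 1  # start with the base layer only

    def update(self, available_kbps, layer_rates_kbps):
        """layer_rates_kbps[i] is the bitrate of layer i (0 = base layer)."""
        consumed = sum(layer_rates_kbps[:self.current])
        error = available_kbps - consumed            # >0: headroom, <0: congestion
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        signal = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Add a layer only if the control signal exceeds the next layer's rate;
        # drop a layer when the signal goes negative.
        if signal > 0 and self.current < self.n_layers and \
                signal > layer_rates_kbps[self.current]:
            self.current += 1
        elif signal < 0 and self.current > 1:
            self.current -= 1
        return self.current


# Example: ctrl = PIDLayerController(); ctrl.update(1200.0, [400.0, 500.0, 800.0])
```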

  4. Feasibility of video codec algorithms for software-only playback

    Science.gov (United States)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames-per-second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease-of-decoding (i.e., playback performance), memory consumption, compression to decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
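
    The frame-differencing idea can be sketched in a few lines: compare the current frame with the previous one block by block and update only the blocks that changed noticeably, so most of the RAM-to-video-RAM transfer and color conversion work is skipped. The block size and threshold below are arbitrary illustrative values.

```python
def changed_blocks(prev, curr, block=8, threshold=12.0):
    """Return the top-left coordinates of blocks whose mean absolute difference
    from the previous frame exceeds a threshold; only these blocks need
    decoding, color conversion and re-blitting to video RAM.
    Frames are 2-D lists of 8-bit luma samples."""
    h, w = len(curr), len(curr[0])
    dirty = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            sad = 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    sad += abs(curr[y][x] - prev[y][x])
            n = min(block, h - by) * min(block, w - bx)
            if sad / n > threshold:
                dirty.append((bx, by))
    return dirty
```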

  5. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.

  6. OLIVE: Speech-Based Video Retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, Jean-Luc; den Hartog, Jurgen; den Hartog, Jeremy; Netter, Klaus

    1999-01-01

    This paper describes the Olive project which aims to support automated indexing of video material by use of human language technologies. Olive is making use of speech recognition to automatically derive transcriptions of the sound tracks, generating time-coded linguistic elements which serve as the

  7. Video watermarking for mobile phone applications

    Science.gov (United States)

    Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.

    2005-08-01

    Nowadays, alongside the traditional voice signal, music, video, and 3D characters tend to become common data to be run, stored and/or processed on mobile phones. Hence, protecting the related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are turned into practice when adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.
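
    For orientation, a generic additive spread-spectrum scheme (in the spirit of classic spread-spectrum watermarking, not the specific method evaluated in the paper) looks roughly like the following: a keyed pseudo-random ±1 sequence, scaled by a strength factor, is added to a set of transform coefficients, and detection correlates the received coefficients with the same keyed sequence.

```python
import random


def _prn_sequence(key, n):
    """Keyed pseudo-random +/-1 sequence shared by embedder and detector."""
    rng = random.Random(key)
    return [1.0 if rng.random() < 0.5 else -1.0 for _ in range(n)]


def embed(coeffs, key, bit, alpha=1.0):
    """Additive spread-spectrum embedding of one bit: c'_i = c_i + alpha*s*w_i,
    with s = +1/-1 for the bit. alpha trades imperceptibility against
    robustness and must be tuned to the coefficient magnitudes."""
    w = _prn_sequence(key, len(coeffs))
    s = 1.0 if bit else -1.0
    return [c + alpha * s * wi for c, wi in zip(coeffs, w)]


def detect(coeffs, key):
    """Blind detection: correlate the (possibly attacked) coefficients with the
    keyed sequence; for enough coefficients the watermark term alpha*s*n
    dominates the host interference, and the sign recovers the bit."""
    w = _prn_sequence(key, len(coeffs))
    corr = sum(c * wi for c, wi in zip(coeffs, w))
    return corr > 0.0
```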

  8. Privacy enabling technology for video surveillance

    Science.gov (United States)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We specifically address the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and negligible computational complexity increase. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
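
    The scrambling step itself is simple enough to sketch: a keyed pseudo-random sequence decides which region-of-interest coefficients have their sign flipped, and because the operation is its own inverse, authorized clients descramble by repeating it with the same key. The sketch below is generic and does not reproduce the Motion JPEG 2000 codestream details.

```python
import random


def scramble_roi_coeffs(coeffs, key, flip_prob=0.5):
    """Pseudo-randomly flip the sign of transform coefficients belonging to a
    region of interest. The operation is an involution: applying it again with
    the same key restores the original values, which is how authorized clients
    descramble. flip_prob controls the amount of distortion introduced."""
    rng = random.Random(key)
    return [-c if rng.random() < flip_prob else c for c in coeffs]


# Descrambling uses the identical call with the same key:
# original == scramble_roi_coeffs(scramble_roi_coeffs(original, key=42), key=42)
```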

  9. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  10. Identifying resilient and non-resilient middle-adolescents in a ...

    African Journals Online (AJOL)

    The aim in this study was to develop a way of identifying resilient and non- resilient middle adolescents in a formerly black-only urban residential (township) school, in order to ultimately support the development of learners' resilience under stressful circumstances. A Resilience Scale was developed to screen for resilient ...

  11. Hierarchical resilience with lightweight threads

    International Nuclear Information System (INIS)

    Wheeler, Kyle Bruce

    2011-01-01

    This paper proposes a methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One trend in high-performance computing codes is a move toward self-contained functions that mimic functional programming. Software designers are moving toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).
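
    A minimal sketch of the input/output failure "barrier" idea, in generic Python rather than any of the runtimes named above: a task is a side-effect-free function of explicitly copied inputs, so a failed attempt can simply be re-executed (or duplicated) from fresh copies without touching shared state. The names and retry policy are illustrative assumptions.

```python
import copy
from typing import Any, Callable


def run_resilient(task: Callable[..., Any], inputs: dict, retries: int = 3) -> Any:
    """Execute a side-effect-free task whose inputs and outputs form a failure
    barrier: inputs are deep-copied before each attempt, so a failed or faulty
    attempt cannot corrupt the data needed to restart it."""
    last_exc = None
    for _attempt in range(retries):
        try:
            return task(**copy.deepcopy(inputs))   # restartable: fresh inputs each time
        except Exception as exc:                   # a detected fault surfaces as an exception
            last_exc = exc
    raise RuntimeError(f"task failed after {retries} attempts") from last_exc


# Example task with well-defined inputs and outputs (no hidden shared state):
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]


# result = run_resilient(saxpy, {"a": 2.0, "x": [1, 2, 3], "y": [4, 5, 6]})
```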

  12. Application aware approach to compression and transmission of H.264 encoded video for automated and centralized transportation surveillance.

    Science.gov (United States)

    2012-10-01

    In this report we present a transportation video coding and wireless transmission system specically tailored to automated : vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channe...

  13. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
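
    The syndrome idea at the heart of such schemes can be illustrated with a toy example. Instead of LDPC syndromes, doping bits and sum-product decoding, the sketch below uses a (7,4) Hamming code: the encoder transmits only 3 syndrome bits, and the decoder recovers the 7-bit source exactly as long as the side information differs from it in at most one position.

```python
# Toy Slepian-Wolf style syndrome coding with a (7,4) Hamming code. This is
# only the simplest possible stand-in for the syndrome idea, not the adaptive
# LDPC-based scheme of the paper.

H = [  # parity-check matrix of the (7,4) Hamming code; column i encodes i+1 in binary
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]


def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]


def encode(x):
    """Encoder side: transmit only the syndrome of x (3 bits instead of 7)."""
    return syndrome(x)


def decode(s_x, y):
    """Decoder side: combine the received syndrome with side information y,
    which is assumed to differ from x in at most one bit."""
    s_e = [(a + b) % 2 for a, b in zip(s_x, syndrome(y))]  # syndrome of the error pattern
    pos = s_e[0] + 2 * s_e[1] + 4 * s_e[2]                 # 1-indexed error location, 0 = none
    x_hat = list(y)
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat


# x = [1, 0, 1, 1, 0, 0, 1]; y = x with one bit flipped  ->  decode(encode(x), y) == x
```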

  14. CLEAR: Cross-Layer Exploration for Architecting Resilience

    Science.gov (United States)

    2017-03-01


  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Videos, Podcasts and Livechats

    Medline Plus


  17. High data-rate video broadcasting over 3G wireless systems

    NARCIS (Netherlands)

    Atici, C.; Sunay, M.O.

    2007-01-01

    In cellular environments, video broadcasting is a challenging problem in which the number of users receiving the service and the average successfully decoded video data-rate have to be intelligently optimized. When video is broadcasted using the 3G packet data standard, 1xEV-DO, the code space may

  18. Acquisition, compression and rendering of depth and texture for multi-view video

    NARCIS (Netherlands)

    Morvan, Y.

    2009-01-01

    Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized

  19. Information Risk Management and Resilience

    Science.gov (United States)

    Dynes, Scott

    Are the levels of information risk management efforts within and between firms correlated with the resilience of the firms to information disruptions? This paper examines the question by considering the results of field studies of information risk management practices at organizations and in supply chains. The organizations investigated differ greatly in the degree of coupling from a general and information risk management standpoint, as well as in the levels of internal awareness and activity regarding information risk management. The comparison of the levels of information risk management in the firms and their actual or inferred resilience indicates that a formal information risk management approach is not necessary for resilience in certain sectors.

  20. Resilient health care

    DEFF Research Database (Denmark)

    Hollnagel, E.; Braithwaite, J.; Wears, R. L.

    Health care is everywhere under tremendous pressure with regard to efficiency, safety, and economic viability - to say nothing of having to meet various political agendas - and has responded by eagerly adopting techniques that have been useful in other industries, such as quality management, lean...... production, and high reliability. This has on the whole been met with limited success because health care as a non-trivial and multifaceted system differs significantly from most traditional industries. In order to allow health care systems to perform as expected and required, it is necessary to have...... engineering's unique approach emphasises the usefulness of performance variability, and that successes and failures have the same aetiology. This book contains contributions from acknowledged international experts in health care, organisational studies and patient safety, as well as resilience engineering...

  1. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  2. Remarkable resilience of teeth.

    Science.gov (United States)

    Chai, Herzl; Lee, James J-W; Constantino, Paul J; Lucas, Peter W; Lawn, Brian R

    2009-05-05

    Tooth enamel is inherently weak, with fracture toughness comparable with glass, yet it is remarkably resilient, surviving millions of functional contacts over a lifetime. We propose a microstructural mechanism of damage resistance, based on observations from ex situ loading of human and sea otter molars (teeth with strikingly similar structural features). Section views of the enamel implicate tufts, hypomineralized crack-like defects at the enamel-dentin junction, as primary fracture sources. We report a stabilization in the evolution of these defects, by "stress shielding" from neighbors, by inhibition of ensuing crack extension from prism interweaving (decussation), and by self-healing. These factors, coupled with the capacity of the tooth configuration to limit the generation of tensile stresses in largely compressive biting, explain how teeth may absorb considerable damage over time without catastrophic failure, an outcome with strong implications concerning the adaptation of animal species to diet.

  3. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...... in which 25 educators as part of a digital fabrication and design program were able to critically reflect on their teaching practice....

  4. Distributed multi-hypothesis coding of depth maps using texture motion information and optical flow

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Rakêt, Lars Lau

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm allowing a shift of complexity from the encoder to the decoder. Depth maps are images enabling the calculation of the distance of an object from the camera, which can be used in multiview coding in order to generate virtual views, but also...

  5. The Children's Video Marketplace.

    Science.gov (United States)

    Ducey, Richard V.

    This report examines a growing submarket, the children's video marketplace, which comprises broadcast, cable, and video programming for children 2 to 11 years old. A description of the tremendous growth in the availability and distribution of children's programming is presented, the economics of the children's video marketplace are briefly…

  6. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  7. Video transmission on ATM networks. Ph.D. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
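
    A dual leaky bucket policer of the kind proposed here can be sketched as two leak counters checked together: one polices the sustained rate with some burst tolerance, the other polices the peak rate. A cell is admitted only if it conforms to both. The rates, depths and class names below are placeholders, not the thesis's parameterization.

```python
class LeakyBucket:
    """Continuous-state leaky bucket: drains at `rate` cells per second and may
    fill up to `depth` cells. A cell conforms if adding it does not overflow."""

    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.level, self.last_t = 0.0, 0.0

    def _drained(self, t):
        return max(0.0, self.level - (t - self.last_t) * self.rate)

    def would_conform(self, t):
        return self._drained(t) + 1.0 <= self.depth

    def commit(self, t):
        self.level = self._drained(t) + 1.0
        self.last_t = t


class DualLeakyBucket:
    """Polices both the sustained rate (with burst tolerance) and the peak rate,
    in the spirit of the dual leaky bucket policing function described above."""

    def __init__(self, sustained_rate, burst_depth, peak_rate, peak_depth=1.5):
        self.buckets = [LeakyBucket(sustained_rate, burst_depth),
                        LeakyBucket(peak_rate, peak_depth)]

    def admit(self, t):
        """A cell arriving at time t is admitted only if it conforms to BOTH
        buckets; only then are the bucket levels updated."""
        if all(b.would_conform(t) for b in self.buckets):
            for b in self.buckets:
                b.commit(t)
            return True
        return False  # non-conforming cell: tag or drop
```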

  8. Sociotechnical Resilience: A Preliminary Concept.

    Science.gov (United States)

    Amir, Sulfikar; Kant, Vivek

    2018-01-01

    This article presents the concept of sociotechnical resilience by employing an interdisciplinary perspective derived from the fields of science and technology studies, human factors, safety science, organizational studies, and systems engineering. Highlighting the hybrid nature of sociotechnical systems, we identify three main constituents that characterize sociotechnical resilience: informational relations, sociomaterial structures, and anticipatory practices. Further, we frame sociotechnical resilience as undergirded by the notion of transformability with an emphasis on intentional activities, focusing on the ability of sociotechnical systems to shift from one form to another in the aftermath of shock and disturbance. We propose that the triad of relations, structures, and practices are fundamental aspects required to comprehend the resilience of sociotechnical systems during times of crisis. © 2017 Society for Risk Analysis.

  9. Assessment instruments of urban resilience

    Directory of Open Access Journals (Sweden)

    Giovanna Saporiti

    2012-07-01

    Full Text Available The objective of this work is to highlight the aspects related to the resilient capacity of a neo-ecosystem. By clarifying what it means to speak of a resilient neo-ecosystem and which specific characteristics make it capable of change and adaptation when facing an environmental, social or economic threat, it becomes possible to understand the efficacy of the model of urban development. From the identification of the factors that disturb this capacity, it is possible to build a panel of indicators of resilient capacity linked to three different domains that represent the three characteristic elements of natural ecosystems: the physical structure, the people, and the interaction processes between them, so that the specific characters of resilience can be made explicit and distinguished from those of sustainability and urban quality.

  10. Video encoder/decoder for encoding/decoding motion compensated images

    NARCIS (Netherlands)

    1996-01-01

    Video encoder and decoder, provided with a motion compensator for motion-compensated video coding or decoding in which a picture is coded or decoded in blocks in alternately horizontal and vertical steps. The motion compensator is provided with addressing means (160) and controlled multiplexers

  11. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.
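
    As a flavour of what such models are used for, the sketch below generates a synthetic VBR frame-size trace from a first-order autoregressive process on the log frame size plus a periodic GOP structure. It is a generic illustration, not one of the surveyed models, and all parameter values are arbitrary.

```python
import math
import random


def synthetic_vbr_trace(n_frames, gop=12, mean_kbit=90.0, i_ratio=3.0,
                        rho=0.9, sigma=0.25, seed=1):
    """Generate synthetic VBR frame sizes (kbit): an AR(1) process on the log
    size models short-term correlation, and every `gop`-th frame is an intra
    frame that is `i_ratio` times larger on average. Parameters are
    illustrative only."""
    rng = random.Random(seed)
    sizes, x = [], 0.0
    for k in range(n_frames):
        x = rho * x + rng.gauss(0.0, sigma)           # AR(1) fluctuation around the mean
        base = mean_kbit * (i_ratio if k % gop == 0 else 1.0)
        sizes.append(base * math.exp(x))
    return sizes


# trace = synthetic_vbr_trace(300)  # 300 frame sizes for feeding a queueing simulation
```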

  12. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  13. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
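
    The reconstruction step can be sketched as ordinary regularized linear inversion: after calibration yields a forward model b = A x mapping the multispectral scene x to the coded sensor measurements b, x is recovered by Tikhonov-regularized least squares. The calibration matrix below is a random stand-in, and the closed-form normal-equations solver may differ from the authors' regularization.

```python
import numpy as np


def calibrate_forward_model(n_pixels, n_bands, seed=0):
    """Stand-in for the measured calibration matrix A: each sensor pixel sees a
    pseudo-random mixture of spectral bands produced by the diffractive filter."""
    rng = np.random.default_rng(seed)
    return rng.random((n_pixels, n_bands))


def recover_spectrum(A, b, lam=1e-2):
    """Tikhonov-regularized inversion x = argmin ||A x - b||^2 + lam ||x||^2,
    solved in closed form via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)


# Example: 64 coded sensor pixels observing a 31-band spectrum
# A = calibrate_forward_model(64, 31); x_true = np.random.rand(31)
# b = A @ x_true; x_hat = recover_spectrum(A, b)
```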

  14. Constructing Resilience: The Wellington Studio

    Directory of Open Access Journals (Sweden)

    Penny Allan

    2014-08-01

    Full Text Available This paper describes the results of a design studio on climate change at Victoria University of Wellington (VUW), New Zealand, in 2007. It discusses the processes and outcomes of the studio and the subsequent testing of student work against a resilience model developed by Canadian ecologist CS Holling (1973, 1998; Walker et al., 2004) to create a framework for the design of resilient cities.

  15. Resilient retfærdighed?

    DEFF Research Database (Denmark)

    Jacobsen, Stefan Gaarsmand

    2016-01-01

    This article uses the idea of resilience as a point of departure for analysing some contemporary challenges to the climate justice movement posed by social-ecological sciences. Climate justice activists are increasingly rallying for a system-change, demanding fundamental changes to political bure...... is that the scientific framework behind resilience is not politically neutral and that this framework tends to weaken the activist’s demands for a just transition and place more emphasis on technical and bureaucratic processes....

  16. Resilience among old Sami women

    OpenAIRE

    Aléx, Lena

    2015-01-01

    An article exploring how older women narrate their experiences of health and lack of health. There is a lack of research on old indigenous women’s experiences. The aim of this study was to explore how old women narrate their experiences of wellbeing and lack of wellbeing using the salutogenetic concept of resilience. Interviews from nine old Sami women were analysed according to grounded theory with the following themes identified: contributing to resilience and wellbeing built up...

  17. Measuring resilience to energy shocks

    OpenAIRE

    Molyneaux, Lynette; Brown, Colin; Foster, John; Wagner, Liam

    2015-01-01

    Measuring energy security or resilience in energy is, in the main, confined to indicators which are used for comparative purposes or to show trends rather than provide empirical evidence of resilience to unpredicted crises. In this paper, the electricity systems of the individual states within the United States of America are analysed for their response to the 1973-1982 and the 2003-2012 oil price shocks. Empirical evidence is sought for elements which are present in systems that experience r...

  18. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    Science.gov (United States)

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
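
    The inter-rater reliability figures reported above appear to be agreement rates over double-coded items; a sketch of how such figures can be computed follows, giving both raw percent agreement and Cohen's kappa (the exact metric used by the study is not specified here).

```python
from collections import Counter


def percent_agreement(coder_a, coder_b):
    """Raw agreement between two coders over the same items."""
    assert len(coder_a) == len(coder_b) and coder_a, "need paired, non-empty ratings"
    agree = sum(a == b for a, b in zip(coder_a, coder_b))
    return agree / len(coder_a)


def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e), where p_e is
    the agreement expected if both coders rated independently at their own
    category frequencies."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0


# a = ["done", "done", "missed"]; b = ["done", "missed", "missed"]
# percent_agreement(a, b) -> 0.67, cohens_kappa(a, b) -> 0.4
```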

  19. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Raquel Perez Leal

    2007-12-01

    Full Text Available New convergent services are becoming possible, thanks to the expansion of IP networks based on the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive subject overview as several technologies are involved, such as medium access protocol in IEEE802.11, H.264 advanced video coding standards, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model of conversational video over wireless LAN. WLAN is addressed under the optimal network throughput and the perspective of video quality. The maximum number of simultaneous users resulting from throughput is limited by the collisions taking place in the shared medium with the statistical contention protocol. The video quality is conditioned by the packet loss in the contention protocol. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, to conclude that conversational video dimensioning based on network throughput is not enough to ensure a satisfactory user experience, and video quality has to be taken also into account. Finally, the proposed model has been applied to a real-office scenario.
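
    The paper's central point, that throughput-only dimensioning overestimates capacity, can be sketched with two caps: one derived from the usable WLAN throughput and one from the packet loss that conversational H.264 video can tolerate. The contention-loss model and every numeric value below are placeholder assumptions, not the paper's model.

```python
def max_users_by_throughput(usable_wlan_mbps, per_call_mbps):
    """Throughput-only dimensioning: how many bidirectional conversational
    video calls fit in the usable (post-MAC-overhead) WLAN capacity."""
    return int(usable_wlan_mbps // per_call_mbps)


def packet_loss_estimate(n_users, base_collision_prob=0.01, growth=1.35):
    """Toy stand-in for the contention model: collision (and hence loss)
    probability grows with the number of stations sharing the medium."""
    return min(1.0, base_collision_prob * (growth ** (n_users - 1)))


def max_users_by_quality(usable_wlan_mbps, per_call_mbps, max_loss=0.02):
    """Quality-aware dimensioning: stop adding users once the estimated packet
    loss exceeds what conversational video can conceal acceptably."""
    cap = max_users_by_throughput(usable_wlan_mbps, per_call_mbps)
    n = 0
    while n < cap and packet_loss_estimate(n + 1) <= max_loss:
        n += 1
    return n


# e.g. max_users_by_throughput(20, 0.768) -> 26, while
#      max_users_by_quality(20, 0.768)    -> a much smaller, loss-limited number
```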

  20. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Alonso, José I.

    2008-01-01

    Full Text Available Abstract New convergent services are becoming possible, thanks to the expansion of IP networks based on the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive subject overview as several technologies are involved, such as medium access protocol in IEEE802.11, H.264 advanced video coding standards, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model of conversational video over wireless LAN. WLAN is addressed under the optimal network throughput and the perspective of video quality. The maximum number of simultaneous users resulting from throughput is limited by the collisions taking place in the shared medium with the statistical contention protocol. The video quality is conditioned by the packet loss in the contention protocol. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, to conclude that conversational video dimensioning based on network throughput is not enough to ensure a satisfactory user experience, and video quality has to be taken also into account. Finally, the proposed model has been applied to a real-office scenario.

  1. Resilience | Science Inventory | US EPA

    Science.gov (United States)

    Resilience is an important framework for understanding and managing complex systems of people and nature that are subject to abrupt and nonlinear change. The idea of ecological resilience was slow to gain acceptance in the scientific community, taking thirty years to become widely accepted (Gunderson 2000, cited under Original Definition). Currently, the concept is commonplace in academics, management, and policy. Although the idea has quantitative roots in the ecological sciences and was proposed as a measurable quality of ecosystems, the broad use of resilience led to an expansion of definitions and applications. Holling’s original definition, presented in 1973 (Holling 1973, cited under Original Definition), was simply the amount of disturbance that a system can withstand before it shifts into an alternative stability domain. Ecological resilience, therefore, emphasizes that the dynamics of complex systems are nonlinear, meaning that these systems can transition, often abruptly, between dynamic states with substantially different structures, functions, and processes. The transition of ecological systems from one state to another frequently has important repercussions for humans. Recent definitions are more normative and qualitative, especially in the social sciences, and a competing definition, that of engineering resilience, is still often used. Resilience is an emergent phenomenon of complex systems, which means it cannot be deduced from the behavior of t

  2. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  3. Aligning Organizational Pathologies and Organizational Resilience Indicators

    Directory of Open Access Journals (Sweden)

    Manuel Morales Allende

    2017-07-01

    Full Text Available Developing resilient individuals, organizations and communities is a hot topic in the research agenda in Management, Ecology, Psychology or Engineering. Although the number of works focusing on resilience is increasing, there is no completely agreed definition of resilience, nor an entirely formal and accepted framework. The cause may be the spread of research among different fields. In this paper, we focus on the study of organizational resilience with the aim of improving the level of resilience in organizations. We review the relation between viable and resilient organizations and their common properties. Based on these common properties, we defend the application of the Viable System Model (VSM) to design resilient organizations. We also identify, through resilience indicators, the organizational pathologies defined by applying the VSM. We conclude that an organization with any organizational pathology is not likely to be resilient because it does not fulfill the requirements of viable organizations.

  4. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  5. 3D Video Compression and Transmission

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper we provide a brief introduction to 3D and multi-view video technologies - like three-dimensional television and free-viewpoint video - focusing on the aspects related to data compression and transmission. Geometric information represented by depth maps is introduced as well...... and a novel coding scheme for multi-view data able to exploit geometric information in order to improve compression performances is briefly described and compared against the classical solution based on multi-view motion estimation. Future research directions close the paper....

  6. Flood Resilient Systems and their Application for Flood Resilient Planning

    Science.gov (United States)

    Manojlovic, N.; Gabalda, V.; Antanaskovic, D.; Gershovich, I.; Pasche, E.

    2012-04-01

    Following the paradigm shift in flood management from traditional to more integrated approaches, and considering the uncertainties of future development due to drivers such as climate change, one of the main emerging tasks of flood managers becomes the development of (flood) resilient cities. It can be achieved by application of non-structural - flood resilience measures, summarised in the 4As: assistance, alleviation, awareness and avoidance (FIAC, 2007). As a part of this strategy, the key aspect of development of resilient cities - resilient built environment can be reached by efficient application of Flood Resilience Technology (FReT) and its meaningful combination into flood resilient systems (FRS). FRS are given as [an interconnecting network of FReT which facilitates resilience (including both restorative and adaptive capacity) to flooding, addressing physical and social systems and considering different flood typologies] (SMARTeST, http://www.floodresilience.eu/). Applying the system approach (e.g. Zevenbergen, 2008), FRS can be developed at different scales from the building to the city level. Still, a matter of research is a method to define and systematise different FRS crossing those scales. Further, the decision on which resilient system is to be applied for the given conditions and given scale is a complex task, calling for utilisation of decision support tools. This process of decision-making should follow the steps of flood risk assessment (1) and development of a flood resilience plan (2) (Manojlovic et al, 2009). The key problem in (2) is how to match the input parameters that describe physical&social system and flood typology to the appropriate flood resilient system. Additionally, an open issue is how to integrate the advances in FReT and findings on its efficiency into decision support tools. This paper presents a way to define, systematise and make decisions on FRS at different scales of an urban system developed within the 7th FP Project

  7. Resilience and precarious success

    International Nuclear Information System (INIS)

    Patterson, Mary D; Wears, Robert L

    2015-01-01

    This paper presents an empirical case study to illustrate, corroborate, and perhaps extend some key generalizations about resilient performance in complex adaptive systems. The setting is a pediatric hematology/oncology pharmacy, a complex system embedded in the larger complex of the hospital, which provides chemotherapy and other high risk medications to children with cancer, sickle cell disease and autoimmune disorders. Recently the demands placed on this system have dramatically intensified while the resources allocated to the system have remained static. We describe the adaptations of this system in response to this additional stress. In addition, we discuss the risks associated with miscalibration about the system's adaptive capacity, and the tradeoff between the need to invest in adaptive capacity (to sustain performance when the system is stressed) versus the need to invest in efficient production (to sustain performance under normal circumstances and economic pressures). - Highlights: • We describe a complex adaptive system: a pediatric hematology/oncology pharmacy. • Work in this system has changed and intensified, but resources have remained static. • Staff's adaptive behaviors demonstrate graceful extensibility and fluency. • The HO staff has demonstrated extraordinary adaptive behaviors. • Leadership miscalibrates the efforts required to perform the pharmacy's work

  8. Healthy ageing, resilience and wellbeing.

    Science.gov (United States)

    Cosco, T D; Howse, K; Brayne, C

    2017-12-01

    The extension of life does not appear to be slowing, representing a great achievement for mankind as well as a challenge for ageing populations. As we move towards an increasingly older population we will need to find novel ways for individuals to make the best of the challenges they face, as the likelihood of encountering some form of adversity increases with age. Resilience theories share a common idea that individuals who manage to navigate adversity and maintain high levels of functioning demonstrate resilience. Traditional models of healthy ageing suggest that having a high level of functioning across a number of domains is a requirement. The addition of adversity to the healthy ageing model via resilience makes this concept much more accessible and more amenable to the ageing population. Through asset-based approaches, such as the invoking of individual, social and environmental resources, it is hoped that greater resilience can be fostered at a population level. Interventions aimed at fostering greater resilience may take many forms; however, there is great potential to increase social and environmental resources through public policy interventions. The wellbeing of the individual must be the focus of these efforts; quality of life is an integral component to the enjoyment of additional years and should not be overlooked. Therefore, it will become increasingly important to use resilience as a public health concept and to intervene through policy to foster greater resilience by increasing resources available to older people. Fostering wellbeing in the face of increasing adversity has significant implications for ageing individuals and society as a whole.

  9. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  10. Resilience in nurses: an integrative review.

    Science.gov (United States)

    Hart, Patricia L; Brannan, Jane D; De Chesnay, Mary

    2014-09-01

    To describe nursing research that has been conducted to understand the phenomenon of resilience in nurses. Resilience is the ability to bounce back or cope successfully despite adverse circumstances. Nurses deal with modern-day problems that affect their abilities to remain resilient. Nursing administrators/managers need to look for solutions not only to recruit nurses, but to become knowledgeable about how to support and retain nurses. A comprehensive search was undertaken for nursing research conducted between 1990 and 2011. Key search terms were nurse, resilience, resiliency and resilient. Whittemore and Knafl's integrative approach was used to conduct the methodological review. Challenging workplaces, psychological emptiness, diminishing inner balance and a sense of dissonance are contributing factors for resilience. Examples of intrapersonal characteristics include hope, self-efficacy and coping. Cognitive reframing, toughening up, grounding connections, work-life balance and reconciliation are resilience building strategies. This review provides information about the concept of resilience. Becoming aware of contributing factors to the need for resilience and successful strategies to build resilience can help in recruiting and retaining nurses. Understanding the concept of resilience can assist in providing support and developing programmes to help nurses become and stay resilient. © 2012 John Wiley & Sons Ltd.

  11. Towards resilient cities. Comparing approaches/strategies

    Directory of Open Access Journals (Sweden)

    Angela Colucci

    2012-07-01

    Full Text Available The term “resilience” is used in many disciplines with different meanings. We will adopt the ecological concept of resilience, which epitomises the capacity of a system to adapt itself in response to the action of a force, achieving a state of equilibrium different from the original (White, 2011). Since the end of the last century, with a significant increase over the last few years, resilience has featured as a key concept in many technical and political papers and documents, and appears in much research. Of all this recent and varied range of literature, our focus is on those texts that combine resilience with strategies, processes and models for resilient cities, communities and regions. Starting from the resilience strategies developed in response to risk mitigation, the paper explores other approaches to and experiences of urban resilience: the aim is to compare them and identify innovation in the planning process towards risk mitigation. In this paper we present a summary of the initial survey stage of our research, with three main aims: understanding the approaches to resilience developed so far and identifying which aspects these approaches share (or not); understanding which strategies are being proposed for resilient regions, cities or social-ecological systems; and understanding whether proposed resilience strategies involve innovations in urban and regional development disciplines. The aim is to understand whether the proposed concept of resilience, or rather strategies, constitute progress and contribute to innovation in the areas of urban planning and design in relation to risk mitigation. Three main families of literature have been identified from the recent literature promoting resilience as a key strategy. The first aim of the research is to understand which particular concept and which aspects of resilience are used, which resilience strategies are proposed, how the term ‘city’ is defined and interpreted

  12. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
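
    The BPCS embedding step can be sketched directly: compute the complexity of each bit-plane block (the fraction of 0/1 transitions between neighbouring bits) and replace blocks that look noise-like with secret data. The threshold and block handling below are simplified; in particular, the conjugation step that real BPCS applies to low-complexity secret blocks is omitted.

```python
def complexity(block):
    """BPCS complexity of a square binary block: number of 0/1 transitions
    between horizontal and vertical neighbours, normalised by the maximum
    possible count 2*n*(n-1)."""
    n = len(block)
    changes = 0
    for y in range(n):
        for x in range(n):
            if x + 1 < n and block[y][x] != block[y][x + 1]:
                changes += 1
            if y + 1 < n and block[y][x] != block[y + 1][x]:
                changes += 1
    return changes / (2 * n * (n - 1))


def embed_block(block, secret_bits, threshold=0.3):
    """Replace a noise-like bit-plane block with secret data. Returns the new
    block and a flag telling whether embedding took place. Informative
    (low-complexity) regions are left untouched to preserve image quality."""
    if complexity(block) < threshold:
        return block, False
    n = len(block)
    if len(secret_bits) < n * n:
        raise ValueError("need n*n secret bits to fill one block")
    it = iter(secret_bits)
    new_block = [[next(it) for _ in range(n)] for _ in range(n)]
    return new_block, True
```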

  13. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  14. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming “the new black” in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well known, but the use of “academic video” for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic… In the video, I appear (along with other researchers) and two Danish film directors, and excerpts from their film. My challenges included how to edit the academic video and organize the collaborative effort. I consider video editing as a semiotic, transformative process of “reassembling” voices… In the discussion, I review academic video in terms of relevance and implications for research practice. The theoretical background is social constructivist, combining social semiotics (Kress, van Leeuwen, McCloud), visual anthropology (Banks, Pink) and dialogic theory (Bakhtin). The Bakhtinian notion of “voices…

  15. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    Full Text Available As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  16. Research on compression performance of ultrahigh-definition videos

    Science.gov (United States)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume. Storage and transmission problems cannot be properly solved only by expanding hard disk capacity and upgrading transmission devices. Making full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and of frame I. Then, building on this idea and the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, with super-resolution reconstruction technology, the reconstructed video quality is further improved. The experiments show that the performance of the proposed compression method for a single image (frame I) and for video sequences is superior to that of HEVC in a low-bit-rate environment.
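
    A hedged sketch of the general "downscale, compress, then super-resolve" pipeline that such schemes build on, with JPEG (via OpenCV) standing in for the HEVC encoder and bicubic interpolation standing in for the super-resolution reconstruction step; the resolutions, quality setting and printed numbers are illustrative only, not results from the paper.

```python
# Sketch only: encode a downscaled frame, then upscale the decoded frame again.
# JPEG stands in for HEVC and bicubic resize stands in for super-resolution.
import cv2
import numpy as np

frame = (np.add.outer(np.arange(270), np.arange(480)) % 256).astype(np.uint8)  # synthetic frame

# Downscale before encoding to cut the bit cost of the high-resolution frame.
small = cv2.resize(frame, (frame.shape[1] // 2, frame.shape[0] // 2), interpolation=cv2.INTER_AREA)
ok, bitstream = cv2.imencode(".jpg", small, [int(cv2.IMWRITE_JPEG_QUALITY), 40])

# Decode and restore the original resolution; a trained super-resolution model
# would replace this plain resize in the scheme described above.
decoded = cv2.imdecode(bitstream, cv2.IMREAD_GRAYSCALE)
restored = cv2.resize(decoded, (frame.shape[1], frame.shape[0]), interpolation=cv2.INTER_CUBIC)

mse = max(np.mean((restored.astype(np.float64) - frame) ** 2), 1e-12)
print(f"compressed size: {bitstream.size} bytes, PSNR: {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
```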

  17. Resilience and Higher Order Thinking

    Directory of Open Access Journals (Sweden)

    Ioan Fazey

    2010-09-01

    Full Text Available To appreciate, understand, and tackle chronic global social and environmental problems, greater appreciation of the importance of higher order thinking is required. Such thinking includes personal epistemological beliefs (PEBs, i.e., the beliefs people hold about the nature of knowledge and how something is known. These beliefs have profound implications for the way individuals relate to each other and the world, such as how people understand complex social-ecological systems. Resilience thinking is an approach to environmental stewardship that includes a number of interrelated concepts and has strong foundations in systemic ways of thinking. This paper (1 summarizes a review of educational psychology literature on PEBs, (2 explains why resilience thinking has potential to facilitate development of more sophisticated PEBs, (3 describes an example of a module designed to teach resilience thinking to undergraduate students in ways conducive to influencing PEBs, and (4 discusses a pilot study that evaluates the module's impact. Theoretical and preliminary evidence from the pilot evaluation suggests that resilience thinking which is underpinned by systems thinking has considerable potential to influence the development of more sophisticated PEBs. To be effective, however, careful consideration of how resilience thinking is taught is required. Finding ways to encourage students to take greater responsibility for their own learning and ensuring close alignment between assessment and desired learning outcomes are particularly important.

  18. Subjective evaluation of next-generation video compression algorithms: a case study

    Science.gov (United States)

    De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio

    2010-08-01

    This paper describes the details and the results of the subjective quality evaluation performed at EPFL, as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies has been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.
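
    As a small illustration of how such campaigns are usually summarised, the sketch below computes a mean opinion score (MOS) and a 95% confidence interval per coding condition; the ratings and condition names are made up, not data from the EPFL campaign.

```python
# Sketch only: MOS and 95% confidence intervals from hypothetical subjective scores.
import numpy as np
from scipy import stats

scores = {                      # made-up ratings from a handful of observers
    "anchor_H264_AVC": [6, 7, 6, 5, 7, 6, 6, 7],
    "candidate_codec": [8, 7, 9, 8, 8, 7, 9, 8],
}
for condition, s in scores.items():
    s = np.asarray(s, dtype=float)
    ci = stats.t.ppf(0.975, df=len(s) - 1) * s.std(ddof=1) / np.sqrt(len(s))
    print(f"{condition}: MOS = {s.mean():.2f} +/- {ci:.2f}")
```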

  19. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  20. Integrated Approach to a Resilient City: Associating Social, Environmental and Infrastructure Resilience in its Whole

    Directory of Open Access Journals (Sweden)

    Birutė PITRĖNAITĖ-ŽILĖNIENĖ

    2014-12-01

    Full Text Available Rising complexity, numbers and severity of natural and manmade disasters enhance the importance of reducing vulnerability, or, on the contrary, increasing resilience, of different kinds of systems, including those of a social, engineering (infrastructure) and environmental (ecological) nature. The goal of this research is to explore urban resilience as an integral system of social, environmental, and engineering resilience. This report analyses the concepts of each kind of resilience and identifies key factors influencing social, ecological, and infrastructure resilience, discussing how these factors relate within urban systems. The resilience of urban and regional systems is achieved through the interaction of different elements (social, psychological, physical, structural, environmental, etc.); therefore, a resilient city could be determined by the synergy of a resilient society, resilient infrastructure and a resilient environment in the given area. Based on literature analysis, the current research provides some insights into a conceptual framework for the assessment of complex urban systems in terms of resilience. To be able to evaluate resilience and define effective measures for prevention and risk mitigation, and thereby strengthen resilience, we propose to develop an e-platform joining risk parameters’ Monitoring Systems, which feed data into the Resiliency Index calculation domain. Both these elements result in a Multirisk Platform, which could serve for awareness and shared decision making for resilient people in a resilient city.

  1. Water Infrastructure and Resiliency Finance Center

    Science.gov (United States)

    The Water Infrastructure and Resiliency Finance Center serves as a resource to communities to improve their wastewater, drinking water and stormwater systems, particularly through innovative financing and increased resiliency to climate change.

  2. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    Science.gov (United States)

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.
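
    A generic sketch of patch-based dictionary learning and sparse coding with scikit-learn, to show the kind of machinery such joint-learning methods build on; it is not the authors' joint deinterlacing/denoising algorithm, and the patch size, dictionary size and noise level below are assumed values.

```python
# Sketch only: learn a patch dictionary from a clean frame, then sparse-code
# noisy patches over it and reconstruct. Parameters are illustrative.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                       # stand-in for a training frame
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

patch_size = (6, 6)
patches = extract_patches_2d(clean, patch_size, max_patches=2000, random_state=0)
X = patches.reshape(patches.shape[0], -1)
X = X - X.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0).fit(X)

noisy_patches = extract_patches_2d(noisy, patch_size)
Y = noisy_patches.reshape(noisy_patches.shape[0], -1)
means = Y.mean(axis=1, keepdims=True)
codes = dico.transform(Y - means)                  # sparse codes over the learned dictionary
recon = (codes @ dico.components_ + means).reshape(noisy_patches.shape)
denoised = reconstruct_from_patches_2d(recon, noisy.shape)
```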

  3. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...

  4. Codesigning a resilient food system

    Directory of Open Access Journals (Sweden)

    Sari J. Himanen

    2016-12-01

    Full Text Available Global changes, especially the progression of climate change, create a plethora of adaptation needs for social-ecological systems. With increasing uncertainty, more resilient food systems that are able to adapt and shape their operations in response to emerging challenges are required. Most of the research on this subject has been focused on developing countries; however, developed countries also face increasing environmental, economic, and social pressures. Because food systems are complex and involve multiple actors, using codesign might be the most productive way to develop desirable system characteristics. For this study, we engaged food system actors in a scenario-planning exercise to identify means of building more resilient food systems. In particular, the actors focused on determinants of adaptive capacity in developed countries, with Finland as a case study. The brainstorming session followed by a two-round Delphi study raised three main characteristics for effective food system resilience, namely, energy and nutrient sovereignty, transparency and dialogue in the food chain, and continuous innovativeness and evidence-based learning. In addition, policy interventions were found instrumental for supporting such food system resilience. The main actor-specific determinants of adaptive capacity identified included the farmers' utilization of agri-technology and expertise; energy and logistic efficiency of the input and processing industry; and for retail, communication to build consumer trust and environmental awareness, and effective use of information and communication technology. Of the food system actors, farmers and the processing industry were perceived to be the closest to reaching the limits of their adaptive capacities. The use of adaptive capacity as a proxy seemed to concretize food system resilience effectively. Our study suggests that the resilience approach generates new perspectives that can guide actors in developing food

  5. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  6. Teaching Resiliency Theory to Substance Abuse Counselors

    Science.gov (United States)

    Ward, Kelly

    2003-01-01

    Resiliency is the ability to cope in the face of adversity. One protective factor that promotes resiliency in substance-abusing dysfunctional families is family rituals and traditions. Social workers and substance abuse counselors can teach family members how to instill resiliency in their families and themselves through rituals and traditions. To…

  7. The Resiliency Scale for Young Adults

    Science.gov (United States)

    Prince-Embury, Sandra; Saklofske, Donald H.; Nordstokke, David W.

    2017-01-01

    The Resiliency Scale for Young Adults (RSYA) is presented as an upward extension of the Resiliency Scales for Children and Adolescents (RSCA). The RSYA is based on the "three-factor model of personal resiliency" including "mastery," "relatedness," and "emotional reactivity." Several stages of scale…

  8. A comprehensive approach to assess operational resilience

    NARCIS (Netherlands)

    Stolker, R.J.M.; Karydas, D.M.; Rouvroye, J.L.; Hollnagel, E.; Pieri, F.

    2008-01-01

    This paper presents a first attempt to apply Multi-Attribute Utility Theory (MAUT) to the concept of resilience. The focus of this paper is measuring the management performance of operational resilience in an organization. Operational resilience refers to the ability of an organization to prevent

  9. Risk Behavior and Personal Resiliency in Adolescents

    Science.gov (United States)

    Prince-Embury, Sandra

    2015-01-01

    This study explores the relationship between self-reported risk behaviors and personal resiliency in adolescents; specifically whether youth with higher personal resiliency report less frequent risk behaviors than those with lower personal resiliency. Self-reported risk behavior is surveyed by the "Adolescent Risk Behavior Inventory"…

  10. Depression and Resilience in Breast Cancer Patients

    Directory of Open Access Journals (Sweden)

    Gordana Ristevska-Dimitrоvska

    2015-11-01

    CONCLUSION: This study shows that patients who are less depressed have higher levels of resilience and that psychological resilience may independently contribute to lower levels of depression among breast cancer patients. The level of psychological resilience may be a protective factor for depression and psychological distress.

  11. A comparison of cigarette- and hookah-related videos on YouTube.

    Science.gov (United States)

    Carroll, Mary V; Shensa, Ariel; Primack, Brian A

    2013-09-01

    YouTube is now the second most visited site on the internet. The authors aimed to compare characteristics of and messages conveyed by cigarette- and hookah-related videos on YouTube. Systematic search procedures yielded 66 cigarette-related and 61 hookah-related videos. After three trained qualitative researchers used an iterative approach to develop and refine definitions for the coding of variables, two of them independently coded each video for content including positive and negative associations with smoking and major content type. Median view counts were 606,884 for cigarette-related videos and 102,307 for hookah-related videos. While user-generated videos related to cigarette smoking often acknowledge harmful consequences and provide explicit antismoking messages, hookah-related videos do not. It may be valuable for public health programmes to correct common misconceptions regarding hookah use.

  12. Business resiliency and stakeholder management.

    Science.gov (United States)

    Carey, Noel; Perry, Tony

    2014-01-01

    The authors facilitated separate round table discussions at the City and Financial Conference in London on 29th January, 2014. The theme of these discussions was business resiliency and stakeholder management. This topic attracted the largest group of all the breakout sessions, as the issue continues to generate much interest across the business resilience community. In this paper, the authors summarise the discussions held at the event and add their own insights into the subject of who are stakeholders, and the different means and messages to communicate to them.

  13. Fostering resilience through changing realities. Introduction to operational resilience capabilities

    NARCIS (Netherlands)

    Zuiderwijk, D.; Vorm, J. van der; Beek, F.A. van der; Veldhuis, G.J.

    2016-01-01

    The reality of operations does not always follow the book. Operational circumstances may develop into surprising situations that procedures have not accounted for. Still, we make things work. Resilient performance recognizes surprise early and acts upon it through adaptation, which is critical for

  14. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  15. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers the problems of creating screen images depending on the musical form and the text of a song, in connection with the relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  16. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  17. Videos - The National Guard

    Science.gov (United States)


  18. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos ... member of our patient care team. Topics include Managing Your Arthritis and Managing Chronic Pain and Depression ...

  19. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

    ... This series of five videos ... covers topics including Managing Your Arthritis, Managing Chronic Pain and Depression in Arthritis, Nutrition & Rheumatoid Arthritis, and Arthritis and Health-related Quality of Life ...

  20. NEI You Tube Videos: Amblyopia

    Medline Plus


  1. NEI You Tube Videos: Amblyopia

    Medline Plus


  2. Social video content delivery

    CERN Document Server

    Wang, Zhi; Zhu, Wenwu

    2016-01-01

    This brief presents new architecture and strategies for distribution of social video content. A primary framework for socially-aware video delivery and a thorough overview of the possible approaches is provided. The book identifies the unique characteristics of socially-aware video access and social content propagation, revealing the design and integration of individual modules that are aimed at enhancing user experience in the social network context. The change in video content generation, propagation, and consumption for online social networks, has significantly challenged the traditional video delivery paradigm. Given the massive amount of user-generated content shared in online social networks, users are now engaged as active participants in the social ecosystem rather than as passive receivers of media content. This revolution is being driven further by the deep penetration of 3G/4G wireless networks and smart mobile devices that are seamlessly integrated with online social networking and media-sharing s...

  3. Online video examination

    DEFF Research Database (Denmark)

    Qvist, Palle

    have large influence on their own teaching, learning and curriculum. The programme offers streamed videos in combination with other learning resources. It is a concept which offers video as pure presentation - video lectures - but also as an instructional tool which gives the students the possibility to construct their knowledge, collaboration and communication. In its first years the programme has used Skype video communication for collaboration and communication within and between groups, group members and their facilitators. Also exams have been mediated with the help of Skype and have, for all students, examiners and external examiners, been a challenge and an opportunity, and have brought new knowledge and experience. This paper brings results from a questionnaire focusing on how the students experience the video examination.

  4. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, termed Fast Block Coding with Optimized Truncation (FBCOT), achieving much higher encoding and decoding throughputs with only modest loss in coding efficiency.

  5. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    Science.gov (United States)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
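
    A small sketch of the idea in this patent record: split a frame into subbands first, then hand each subband to a Lempel-Ziv-style coder (zlib here); the one-level Haar split, the synthetic frame and zlib itself are stand-ins, not the patented coder.

```python
# Sketch only: one-level Haar subband split followed by zlib (LZ77-based) coding.
import zlib
import numpy as np

def haar_subbands(img: np.ndarray):
    """Simple (unnormalized) one-level Haar split into LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2].astype(np.int32)
    b = img[0::2, 1::2].astype(np.int32)
    c = img[1::2, 0::2].astype(np.int32)
    d = img[1::2, 1::2].astype(np.int32)
    return ((a + b + c + d) // 4,   # LL: low-pass approximation
            (a - b + c - d) // 4,   # LH: horizontal detail
            (a + b - c - d) // 4,   # HL: vertical detail
            (a - b - c + d) // 4)   # HH: diagonal detail

x = np.arange(128)
frame = (np.add.outer(x, x) % 256).astype(np.uint8)   # smooth synthetic frame

direct = len(zlib.compress(frame.tobytes(), 9))
subband = sum(len(zlib.compress(band.astype(np.int16).tobytes(), 9))
              for band in haar_subbands(frame))
print(f"direct zlib: {direct} bytes, subband-split zlib: {subband} bytes")
```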

  6. 78 FR 78486 - Notice of Funding Availability for Resilience Projects in Response to Hurricane Sandy

    Science.gov (United States)

    2013-12-26

    ... service. This resilience funding is intended to protect public transportation infrastructure that has been... infrastructure after a future major storm or natural disaster. Furthermore, the activities funded under this... billion to other agencies to fund programs authorized under titles 23 and 49, United States Code, in order...

  7. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available Multiview video, which is one of the main types of three-dimensional (3D) video signals, captured by a set of video cameras from various viewpoints, has attracted much interest recently. Data compression for multiview video has become a major issue. In this paper, a novel high-efficiency fractal multiview video codec is proposed. Firstly, an intraframe algorithm based on the H.264/AVC intraprediction modes and a combined fractal and motion compensation (CFMC) algorithm, in which range blocks are predicted by domain blocks in the previously decoded frame using translational motion with a gray-value transformation, are proposed for compressing the anchor viewpoint video. Then a temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are designed to compress the multiview video data. The proposed fractal multiview video codec can exploit temporal and spatial correlations adequately. Experimental results show that it can obtain about a 0.36 dB increase in decoding quality and a 36.21% decrease in encoding bitrate compared with JMVC 8.5, and the encoding time is reduced by 95.71%. The rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.
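
    A minimal sketch of the range/domain matching step at the heart of fractal prediction, where an 8x8 range block of the current frame is approximated by a domain block of the previously decoded frame through a gray-value transform R ≈ s·D + o; the block size, search window and exhaustive search are simplifying assumptions, not the CFMC codec itself.

```python
# Sketch only: least-squares gray-value transform plus an exhaustive translational search.
import numpy as np

BLOCK, SEARCH = 8, 4  # assumed block size and +/- search range in pixels

def fit_gray_transform(domain: np.ndarray, rng_blk: np.ndarray):
    """Least-squares scale s and offset o so that s*domain + o approximates the range block."""
    d, r = domain.astype(np.float64).ravel(), rng_blk.astype(np.float64).ravel()
    var = np.var(d)
    s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
    o = r.mean() - s * d.mean()
    return s, o, np.sum((s * d + o - r) ** 2)

def best_domain(prev: np.ndarray, cur: np.ndarray, y: int, x: int):
    """Find the best-matching domain block in the previous frame around (y, x)."""
    rng_blk, best = cur[y:y + BLOCK, x:x + BLOCK], None
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= prev.shape[0] - BLOCK and 0 <= xx <= prev.shape[1] - BLOCK:
                s, o, err = fit_gray_transform(prev[yy:yy + BLOCK, xx:xx + BLOCK], rng_blk)
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
    return best  # (matching error, motion vector, scale, offset)

# Toy usage on a shifted, slightly noisy frame pair.
rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (64, 64)).astype(np.float64)
cur = np.roll(prev, 2, axis=1) + rng.normal(0, 2, prev.shape)
err, dy, dx, s, o = best_domain(prev, cur, 16, 16)
print(f"motion=({dy},{dx}), scale={s:.2f}, offset={o:.2f}, SSE={err:.1f}")
```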

  8. Music, videos and the risk for CERN

    CERN Multimedia

    Computer Security Team

    2012-01-01

    Do you like listening to music while you work? What about watching videos during your leisure time? Sure this is fun. Having your colleagues participate in this is even more fun. However, this fun is usually not free. There are artists and the music and film companies who earn their living from music and videos.   Thus, if you want to listen to music or watch films at CERN, make sure that you own the proper rights to do so (and that you have the agreement of your supervisor to do this during working hours). Note that these rights are personal: you usually do not have the right to share music or videos with third parties without violating copyrights. Therefore, making copyrighted music and videos public, or sharing music and videos as well as other copyrighted material, is forbidden at CERN and outside CERN. It violates the CERN Computing Rules and it contradicts CERN's Code of Conduct, which expects each of us to behave ethically and honestly, and to credit others for their c...

  9. Music, videos and the risk for CERN

    CERN Multimedia

    IT Department

    2010-01-01

    Do you like listening to music while working? What about watching videos during leisure time? Sure this is fun. Having your colleagues participating in this is even more fun. However, this fun is usually not free. There are music and film companies who earn their living from music and videos. Thus, if you want to listen to music or watch films at CERN, make sure that you own the proper rights to do so (and you have the agreement of your supervisor to do this during working hours). Note that these rights are personal: You usually do not have the right to share this music or these videos with third parties without violating copyrights. Therefore, making copyrighted music and videos public, or sharing music and video files as well as other copyrighted material, is forbidden at CERN --- and also outside CERN. It violates the CERN Computing Rules (http://cern.ch/ComputingRules) and it contradicts CERN's Code of Conduct (https://cern.ch/hr-info/codeofconduct.asp) which expects each of us to behave ethically and be ...

  10. Notions of Video Game Addiction and Their Relation to Self-Reported Addiction among Players of World of Warcraft

    Science.gov (United States)

    Oggins, Jean; Sammis, Jeffrey

    2012-01-01

    In this study, 438 players of the online video game, World of Warcraft, completed a survey about video game addiction and answered an open-ended question about behaviors they considered characteristic of video game addiction. Responses were coded and correlated with players' self-reports of being addicted to games and scores on a modified video…

  11. Systems Measures of Water Distribution System Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murray, Regan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Walker, La Tonya Nicole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Resilience is a concept that is being used increasingly to refer to the capacity of infrastructure systems to be prepared for and able to respond effectively and rapidly to hazardous events. In Section 2 of this report, drinking water hazards, resilience literature, and available resilience tools are presented. Broader definitions, attributes and methods for measuring resilience are presented in Section 3. In Section 4, quantitative systems performance measures for water distribution systems are presented. Finally, in Section 5, the performance measures and their relevance to measuring the resilience of water systems to hazards is discussed along with needed improvements to water distribution system modeling tools.

  12. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  13. Performance evaluation of packet video transfer over local area networks

    OpenAIRE

    Lu, Jie

    1993-01-01

    This research investigates the implementation and performance of packet video transfer over local area networks. A network architecture is defined for packet video such that most of the processing is performed by the higher layers of the Open Systems Interconnection (OSI) reference model, while the lower layers provide real-time services. Implementation methods are discussed for coding schemes, including data compression, the network interface unit, and the underlying local area network.

  14. Seeding Stress Resilience through Inoculation

    Directory of Open Access Journals (Sweden)

    Archana Ashokan

    2016-01-01

    Full Text Available Stress is a generalized set of physiological and psychological responses observed when an organism is placed under challenging circumstances. The stress response allows organisms to reattain equilibrium in the face of perturbations. Unfortunately, chronic and/or traumatic exposure to stress frequently overwhelms an individual's coping ability. This is manifested as symptoms affecting emotions and cognition in stress-related mental disorders. Thus environmental interventions that promote resilience in the face of stress have much clinical relevance. The bulk of relevant neurobiological research at present remains focused on the negative health and psychological outcomes of stress exposure. Yet exposure to stress itself can promote resilience to subsequent stressful episodes later in life. This is especially true if the prior stress occurs early in life, is mild in magnitude, and is controllable by the individual. This articulation has been referred to as “stress inoculation,” reminiscent of the resilience to pathology generated by vaccination with an attenuated pathogen. Using experimental evidence from animal models, this review explores the relationship between the nature of the “inoculum” stress and subsequent psychological resilience.

  15. MSY from catch and resilience

    DEFF Research Database (Denmark)

    Jørgensen, Ole A; Chrysafi, Anna

    A simple Schaefer model was tested on the Greenland halibut stock offshore in NAFO SA 0 and 1. The minimum data required for this model is a catch time series and a measure of the resilience of the species. Other input parameters that had to be guessed were the carrying capacity, the biomass...
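
    For context, the sketch below projects biomass with the discrete-time Schaefer surplus-production model that such catch-and-resilience methods rely on, B[t+1] = B[t] + r·B[t]·(1 − B[t]/K) − C[t], with MSY = r·K/4; the growth rate r (the resilience proxy), carrying capacity K and catch series are made-up values, not estimates for Greenland halibut.

```python
# Sketch only: Schaefer surplus-production dynamics with assumed parameters.
import numpy as np

def schaefer(biomass0: float, r: float, K: float, catches) -> np.ndarray:
    """Project biomass under B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = [biomass0]
    for C in catches:
        B.append(max(B[-1] + r * B[-1] * (1.0 - B[-1] / K) - C, 0.0))
    return np.array(B)

r, K = 0.3, 200.0                    # assumed resilience-based growth rate and carrying capacity
catches = np.full(30, 12.0)          # hypothetical constant catch series
trajectory = schaefer(biomass0=K, r=r, K=K, catches=catches)
print(f"MSY = {r * K / 4:.1f}, final biomass = {trajectory[-1]:.1f}")
```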

  16. Strengthening community resilience: a toolkit

    NARCIS (Netherlands)

    Davis, Scott; Duijnhoven, Hanneke; Dinesen, Cecilie; Kerstholt, Johanna Helena

    2016-01-01

    While community resilience is said to have gained a lot of traction politically and given credence by disaster management professionals, this perception is not always shared by the individual members of communities. One solution to addressing the difficulty of individuals ‘conceptualising’ the

  17. Interprofessionals' definitions of moral resilience.

    Science.gov (United States)

    Holtz, Heidi; Heinze, Katherine; Rushton, Cynda

    2018-02-01

    To describe common characteristics and themes of the concept of moral resilience as reported by interprofessional clinicians in health care. Research has provided an abundance of data on moral distress with limited research to resolve and help negate the detrimental effects of moral distress. This reveals a critical need for research on how to mitigate the negative consequences of moral distress that plague nurses and other healthcare providers. One promising direction is to build resilience as an individual strategy concurrently with interventions to build a culture of ethical practice. Qualitative descriptive methods were used to analyse descriptive definitions provided by 184 interprofessional clinicians in health care attending educational programmes in various locations as well as a small group of 23 professionals with backgrounds such as chaplaincy and nonhealthcare providers. Three primary themes and three subthemes emerged from the data. The primary themes are integrity-personal and relational, and buoyancy. The subthemes are self-regulation, self-stewardship and moral efficacy. Individual healthcare providers and healthcare systems can use this research to help negate the detrimental effects of moral distress by finding ways to develop interventions to cultivate moral resilience. Moral resilience involves not only building and fostering the individual's capacity to navigate moral adversity but also developing systems that support a culture of ethical practice for healthcare providers. © 2017 John Wiley & Sons Ltd.

  18. Cyber Resilience in de Boardroom

    NARCIS (Netherlands)

    Klaver, M.H.A.; Zielstra, A.

    2012-01-01

    The Grand Conference - Building a Resilient Digital Society - took place in Amsterdam on October 16, 2012. The international conference was aimed at top decision-makers from industry, government and other organisations. Two hundred participants from twenty-two nations attended. Three Dutch

  19. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  20. What do you mean, 'resilient geomorphic systems'?

    Science.gov (United States)

    Thoms, M. C.; Piégay, H.; Parsons, M.

    2018-03-01

    Resilience thinking has many parallels in the study of geomorphology. Similarities and intersections exist between the scientific discipline of geomorphology and the scientific concept of resilience. Many of the core themes fundamental to geomorphology are closely related to the key themes of resilience. Applications of resilience thinking in the study of natural and human systems have expanded, based on the fundamental premise that ecosystems, economies, and societies must be managed as linked social-ecological systems. Despite geomorphology and resilience sharing core themes, appreciation is limited of the history and development of geomorphology as a field of scientific endeavor by many in the field of resilience, as well as a limited awareness of the foundations of the former in the more recent emergence of resilience. This potentially limits applications of resilience concepts to the study of geomorphology. In this manuscript we provide a collective examination of geomorphology and resilience as a means to conceptually advance both areas of study, as well as to further cement the relevance and importance of not only understanding the complexities of geomorphic systems in an emerging world of interdisciplinary challenges but also the importance of viewing humans as an intrinsic component of geomorphic systems rather than just an external driver. The application of the concepts of hierarchy and scale, fundamental tenets of the study of geomorphic systems, provide a means to overcome contemporary scale-limited approaches within resilience studies. Resilience offers a framework for geomorphology to expand its application into the broader social-ecological domain.

  1. Resilience definitions, theory, and challenges: interdisciplinary perspectives

    Science.gov (United States)

    Southwick, Steven M.; Bonanno, George A.; Masten, Ann S.; Panter-Brick, Catherine; Yehuda, Rachel

    2014-01-01

    In this paper, inspired by the plenary panel at the 2013 meeting of the International Society for Traumatic Stress Studies, Dr. Steven Southwick (chair) and multidisciplinary panelists Drs. George Bonanno, Ann Masten, Catherine Panter-Brick, and Rachel Yehuda tackle some of the most pressing current questions in the field of resilience research including: (1) how do we define resilience, (2) what are the most important determinants of resilience, (3) how are new technologies informing the science of resilience, and (4) what are the most effective ways to enhance resilience? These multidisciplinary experts provide insight into these difficult questions, and although each of the panelists had a slightly different definition of resilience, most of the proposed definitions included a concept of healthy, adaptive, or integrated positive functioning over the passage of time in the aftermath of adversity. The panelists agreed that resilience is a complex construct and it may be defined differently in the context of individuals, families, organizations, societies, and cultures. With regard to the determinants of resilience, there was a consensus that the empirical study of this construct needs to be approached from a multiple level of analysis perspective that includes genetic, epigenetic, developmental, demographic, cultural, economic, and social variables. The empirical study of determinants of resilience will inform efforts made at fostering resilience, with the recognition that resilience may be enhanced on numerous levels (e.g., individual, family, community, culture). PMID:25317257

  2. Resilience definitions, theory, and challenges: interdisciplinary perspectives.

    Science.gov (United States)

    Southwick, Steven M; Bonanno, George A; Masten, Ann S; Panter-Brick, Catherine; Yehuda, Rachel

    2014-01-01

    In this paper, inspired by the plenary panel at the 2013 meeting of the International Society for Traumatic Stress Studies, Dr. Steven Southwick (chair) and multidisciplinary panelists Drs. George Bonanno, Ann Masten, Catherine Panter-Brick, and Rachel Yehuda tackle some of the most pressing current questions in the field of resilience research including: (1) how do we define resilience, (2) what are the most important determinants of resilience, (3) how are new technologies informing the science of resilience, and (4) what are the most effective ways to enhance resilience? These multidisciplinary experts provide insight into these difficult questions, and although each of the panelists had a slightly different definition of resilience, most of the proposed definitions included a concept of healthy, adaptive, or integrated positive functioning over the passage of time in the aftermath of adversity. The panelists agreed that resilience is a complex construct and it may be defined differently in the context of individuals, families, organizations, societies, and cultures. With regard to the determinants of resilience, there was a consensus that the empirical study of this construct needs to be approached from a multiple level of analysis perspective that includes genetic, epigenetic, developmental, demographic, cultural, economic, and social variables. The empirical study of determinants of resilience will inform efforts made at fostering resilience, with the recognition that resilience may be enhanced on numerous levels (e.g., individual, family, community, culture).

  3. Multidimensional approach to complex system resilience analysis

    International Nuclear Information System (INIS)

    Gama Dessavre, Dante; Ramirez-Marquez, Jose E.; Barker, Kash

    2016-01-01

    Recent works have attempted to formally define a general metric for quantifying resilience for complex systems as a relationship between system performance and time. The technical content of the proposed work introduces a new model that allows, for the first time, comparison of resilience among systems (or among different modifications of a system), by introducing a new dimension to system resilience models, called stress, to mimic the definition of resilience in material science. The applicability and usefulness of the model are shown with a new heat map visualization proposed in this work, and it is applied to a simulated network resilience case to exemplify its potential benefits. - Highlights: • We analyze two of the main current metrics of resilience. • We create a new model that relates events with the effects they have. • We develop a novel heat map visualization to compare system resilience. • We show the usefulness of the model and visualization in a simulated case.

  4. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  5. Portrayal of smokeless tobacco in YouTube videos.

    Science.gov (United States)

    Bromberg, Julie E; Augustson, Erik M; Backinger, Cathy L

    2012-04-01

    Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. In August 2010, researchers identified the top 20 search results on YouTube by "relevance" and "view count" for the following search terms: "ST," "chewing tobacco," "snus," and "Skoal." After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or "sensationalized" use of the ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or "vlogs"), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people's knowledge, attitudes, and behaviors regarding ST use.

  6. Portrayal of Smokeless Tobacco in YouTube Videos

    Science.gov (United States)

    Augustson, Erik M.; Backinger, Cathy L.

    2012-01-01

    Objectives: Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. Methods: In August 2010, researchers identified the top 20 search results on YouTube by “relevance” and “view count” for the following search terms: “ST,” “chewing tobacco,” “snus,” and “Skoal.” After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Results: Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or “sensationalized” use of the ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or “vlogs”), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. Conclusions: These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people’s knowledge, attitudes, and behaviors regarding ST use. PMID:22080585

  7. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve coding performance of the TDWZ video codec up to 1.24 dB (by Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.
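
    A rough sketch of the clustering idea, assuming 4x4 DCT blocks of a residual frame, plain k-means on coefficient magnitudes, and a per-cluster Laplacian parameter estimated as alpha = sqrt(2/variance), the usual form in transform-domain Wyner-Ziv noise modelling; the block size, number of clusters and k-means itself are assumptions, not the exact algorithm of the paper.

```python
# Sketch only: cluster DCT blocks of a residual frame and estimate a Laplacian
# noise parameter per cluster and frequency band.
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

BLOCK, N_CLUSTERS = 4, 3
rng = np.random.default_rng(0)
residual = rng.laplace(scale=4.0, size=(64, 64))    # stand-in for the side-information residual

coeffs = np.array([
    dctn(residual[y:y + BLOCK, x:x + BLOCK], norm="ortho").ravel()
    for y in range(0, residual.shape[0], BLOCK)
    for x in range(0, residual.shape[1], BLOCK)
])
labels = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit_predict(np.abs(coeffs))

for k in range(N_CLUSTERS):
    var = np.var(coeffs[labels == k], axis=0)        # per-band variance within the cluster
    alpha = np.sqrt(2.0 / np.maximum(var, 1e-9))     # Laplacian parameter per frequency band
    print(f"cluster {k}: {np.sum(labels == k)} blocks, alpha(DC) = {alpha[0]:.3f}")
```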

  8. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable... in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...
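
    For reference, a small sketch of the standard JPEG-LS near-lossless quantization of prediction errors, where a bound NEAR on the per-sample reconstruction error is traded for rate; the visual and trellis quantization techniques studied in the record are refinements beyond this and are not shown.

```python
# Sketch only: uniform near-lossless quantization of prediction errors (JPEG-LS style).
def quantize_error(err: int, near: int) -> int:
    """Quantization index such that |reconstructed error - err| <= near."""
    if err > 0:
        return (near + err) // (2 * near + 1)
    return -((near - err) // (2 * near + 1))

def dequantize_error(q: int, near: int) -> int:
    return q * (2 * near + 1)

# With NEAR = 2, errors in -2..2 map to index 0 and reconstruct to 0.
for e in range(-5, 6):
    q = quantize_error(e, near=2)
    print(e, q, dequantize_error(q, near=2))
```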

  9. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  10. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

    "What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.   var flash_video_player=get_video_player_path(); insert_player_for_external('Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-0753-kbps-640x360-25-fps-audio-64-kbps-44-kHz-stereo', 'mms://mediastream.cern.ch/MediaArchive/Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-Multirate-200-to-753-kbps-640x360-25-fps.wmv', 'false', 480, 360, 'https://mediastream.cern.ch/MediaArchive/Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-posterframe-640x360-at-10-percent.jpg', '1383406', true, 'Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-0600-kbps-maxH-360-25-fps-...

  11. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  12. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
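
    The coarse-to-fine idea can be illustrated with a small sketch that repeatedly merges the most similar temporally adjacent key-frames into coarser groups. This is only a simplified stand-in for the pairwise K-means clustering with a temporal consecutiveness constraint described above; the function name and feature choice are assumptions.

```python
# Illustrative sketch (not the authors' exact algorithm): build a coarser
# key-frame level by repeatedly merging the most similar pair of *temporally
# adjacent* key-frame clusters.
import numpy as np


def coarsen_keyframes(features: np.ndarray, target: int) -> list:
    """features: (N, D) per-key-frame color descriptors in temporal order.
    Returns a list of index groups, one group per coarse key-frame."""
    groups = [[i] for i in range(len(features))]
    centroids = [f.astype(float) for f in features]
    while len(groups) > target:
        # distance between each pair of temporally adjacent clusters
        dists = [np.linalg.norm(centroids[i] - centroids[i + 1])
                 for i in range(len(groups) - 1)]
        j = int(np.argmin(dists))                  # merge the closest adjacent pair
        merged = groups[j] + groups[j + 1]
        centroids[j] = features[merged].mean(axis=0)
        groups[j] = merged
        del groups[j + 1], centroids[j + 1]
    return groups


hist = np.random.rand(16, 64)     # 16 fine-level key-frames, 64-bin color histograms
print(coarsen_keyframes(hist, target=4))
```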

  13. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing of video medical records. Two crucial issues to be decided on are a video compression format and a video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats being compared are DICOM as a format representative for medical applications, both MPEGs, and several new formats targeted for IP networking. The comparison includes transmission rates supported, compression rates, and options for controlling the compression process. The second part of the paper presents the ISDN technique as a solution for provisioning of tele-consultation services between medical parties that are accessing resources uploaded to a digital video library. There are several backbone techniques available (such as corporate LANs/WANs, leased lines or even radio/satellite links); however, the availability of network resources for hospitals was the prevailing choice criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes the possibilities of both wireless and cellular network data transmission services to be used as a medical video server transport layer. For the cellular network based solution, two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  14. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  15. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  16. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets well demonstrate the efficacy of our method over state-of-the-art methods.
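
    The capped l21-norm mentioned above can be written as the sum over rows of min(||z_i||_2, theta), so rows whose norm exceeds the cap (likely outliers) contribute only a constant. A minimal numpy sketch, with an assumed cap value, is shown below.

```python
# Minimal sketch of the capped l21-norm used as a row-sparsity /
# outlier-suppressing regularizer: rows with norm above the cap contribute
# only the cap value, so outliers cannot dominate the objective.
import numpy as np


def capped_l21(Z: np.ndarray, theta: float) -> float:
    row_norms = np.linalg.norm(Z, axis=1)        # l2 norm of each row
    return float(np.minimum(row_norms, theta).sum())


Z = np.random.randn(100, 20)
Z[0] *= 50.0                                     # an "outlier" row
print(capped_l21(Z, theta=5.0))
```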

  17. Resiliency experiences of the youth against substance abuse: A qualitative study

    Directory of Open Access Journals (Sweden)

    Salah Adin Karimi

    2015-09-01

    Full Text Available Background: Much research has been done on the addiction process, but few studies have examined the process of resiliency against addiction. This study aimed to identify the facilitating and inhibiting factors in young people's resiliency process by analyzing their lived experiences. Methodology: This qualitative study was based on grounded theory with Strauss and Corbin's approach. The data underwent open, axial and selective coding. The study focused on the Darvaze Ghar neighborhood of Tehran. Data were collected through open unstructured interviews and a focus group discussion. In total, 21 interviews were conducted with 12 respondents and a focus group discussion was held with 7 participants. Lincoln and Guba (1985) scales were used to ensure the trustworthiness of the study. Results: The obtained codes were classified into 19 categories, including personal characteristics, family support, culture, spirituality, spiritual beliefs, environment, and social interventions. According to their nature, these categories were facilitating or inhibiting the process of resiliency against substance abuse. Conclusion: The youth with more religious beliefs, awareness, self-confidence, optimism and hatred towards drugs are more resilient against substance abuse. Moreover, the families with a higher sense of responsibility trust and monitor their children's activities, talk to them about different issues and provide them with good training from early childhood; therefore their children will be more resilient against addiction.

  18. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides technical description regarding the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument provides capture of video images available in CCIR format. Two memory planes each with a capacity of 512 x 512 x 8 bit data enable storage of two video image frames. The stored image can be processed on-line and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC Add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  19. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  20. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  1. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... findings: 1) They are based on a collaborative approach. 2) The sketches act as a mean to externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and shows how the factors relate to steps, where...... the participants: shape, record, review and edit their work, leading the participants to new insights about their work....

  2. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio.

  3. Parallel iterative decoding of transform domain Wyner-Ziv video using cross bitplane correlation

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    In recent years, Transform Domain Wyner-Ziv (TDWZ) video coding has been proposed as an efficient Distributed Video Coding (DVC) solution, which fully or partly exploits the source statistics at the decoder to reduce the computational burden at the encoder. In this paper, a parallel iterative LDPC decoding scheme is proposed to improve the coding efficiency of TDWZ video codecs. The proposed parallel iterative LDPC decoding scheme is able to utilize cross bitplane correlation during decoding, by iteratively refining the soft-input, updating a modeled noise distribution and thereafter enhancing...

  4. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  5. Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video

    Science.gov (United States)

    Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David

    2017-09-01

    This paper compares the coding efficiency performance on 360 (spherical) videos of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360 video is especially challenging content, in that one codes at full resolution globally, but typically looks locally (in a viewport), which magnifies errors. The codecs are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source x265 HEVC codec. Objective and visual evidence is provided.
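
    WS-PSNR, one of the end-to-end metrics cited above, weights per-pixel squared errors by the spherical sampling density of the projection; for ERP frames the weight of a row is proportional to the cosine of its latitude. The sketch below follows this commonly used form and is not taken from the JVET reference software.

```python
# Hedged sketch of WS-PSNR for equirectangular (ERP) frames: squared errors
# are weighted by cos(latitude) so oversampled polar regions do not dominate.
import numpy as np


def ws_psnr_erp(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    h, w = ref.shape[:2]
    rows = np.arange(h)
    weights = np.cos((rows + 0.5 - h / 2.0) * np.pi / h)   # per-row latitude weights
    weights = np.repeat(weights[:, None], w, axis=1)
    if ref.ndim == 3:                                       # broadcast over channels
        weights = weights[..., None]
    err2 = (ref.astype(float) - rec.astype(float)) ** 2
    wmse = (weights * err2).sum() / weights.sum()
    return float(10.0 * np.log10(peak ** 2 / wmse))


ref = np.random.randint(0, 256, (180, 360), dtype=np.uint8)
rec = np.clip(ref + np.random.randint(-3, 4, ref.shape), 0, 255).astype(np.uint8)
print(ws_psnr_erp(ref, rec))
```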

  6. Layer-based buffer aware rate adaptation design for SHVC video streaming

    Science.gov (United States)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer based buffer aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer based coding structure allows fine granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and the performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
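
    A simple way to picture layer-based rate adaptation is a per-segment decision rule that requests as many dependent layers as the estimated bandwidth and the layer buffers safely allow. The sketch below is an illustrative heuristic only; the layer bitrates, margins and thresholds are assumptions rather than the scheduler proposed in the paper.

```python
# Illustrative heuristic: decide how many SHVC layers to request for the next
# DASH segment from estimated bandwidth and per-layer buffer fullness.
def select_layers(layer_bitrates_kbps, est_bandwidth_kbps,
                  buffer_levels_s, min_buffer_s=4.0):
    """layer_bitrates_kbps[i]: cumulative rate of BL..EL_i.
    buffer_levels_s[i]: seconds of layer i already buffered."""
    # If the base-layer buffer is running low, fall back to BL only to avoid stalls.
    if buffer_levels_s[0] < min_buffer_s:
        return 0
    chosen = 0
    for i, cum_rate in enumerate(layer_bitrates_kbps):
        # Request an enhancement layer only if bandwidth covers it with margin
        # and its dependent (lower) layers are sufficiently buffered.
        if cum_rate <= 0.9 * est_bandwidth_kbps and buffer_levels_s[i] >= min_buffer_s / 2:
            chosen = i
        else:
            break
    return chosen


# Example: BL = 1500, BL+EL1 = 4000, BL+EL1+EL2 = 9000 kbps
print(select_layers([1500, 4000, 9000], est_bandwidth_kbps=5000,
                    buffer_levels_s=[8.0, 5.0, 1.0]))   # -> 1 (BL + EL1)
```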

  7. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association...... Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol, and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design......, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg Universitet, Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period from November 2016 to May 2017...

  8. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  9. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables

  10. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  11. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  12. Using QR Codes to Differentiate Learning for Gifted and Talented Students

    Science.gov (United States)

    Siegle, Del

    2015-01-01

    QR codes are two-dimensional square patterns that are capable of coding information ranging from web addresses to links to YouTube videos. The codes save time typing and eliminate errors in entering addresses. These codes make learning with technology easier for students and motivationally engage them in new ways.

  13. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique used at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because it is used in various applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works are based on concealment of video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames and the error video frames are evaluated with both error concealment algorithms. According to simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% improved PSNR and 94% increased SSIM compared to the Block Matching algorithm.
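
    A generic boundary-matching form of block-matching temporal error concealment is sketched below: the lost block is replaced by the block in the previous frame whose surrounding ring of pixels best matches the correctly received pixels around the loss. This is a textbook variant for illustration, not the exact algorithm evaluated in the paper.

```python
# Minimal sketch of block-matching temporal error concealment.
# Assumes the lost block lies away from the frame borders.
import numpy as np


def conceal_block(prev, cur, x, y, bs=16, search=8):
    """prev/cur: 2-D luma frames; (x, y): top-left of the lost bs x bs block in cur."""
    pad = 2                                              # boundary ring width
    best, best_cost = (x, y), np.inf
    ring = cur[y - pad:y + bs + pad, x - pad:x + bs + pad].astype(float)
    mask = np.ones_like(ring, dtype=bool)
    mask[pad:pad + bs, pad:pad + bs] = False             # exclude the lost interior
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cx, cy = x + dx, y + dy
            if cx < pad or cy < pad or cx + bs + pad > prev.shape[1] or cy + bs + pad > prev.shape[0]:
                continue
            cand = prev[cy - pad:cy + bs + pad, cx - pad:cx + bs + pad].astype(float)
            cost = np.abs(cand - ring)[mask].sum()       # compare boundary rings only
            if cost < best_cost:
                best_cost, best = cost, (cx, cy)
    bx, by = best
    cur[y:y + bs, x:x + bs] = prev[by:by + bs, bx:bx + bs]
    return cur


# Toy usage: conceal a 16x16 block lost at (32, 32).
prev = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
cur = prev.copy()
cur[32:48, 32:48] = 0                                    # simulate the lost block
conceal_block(prev, cur, 32, 32)
```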

  14. A hybrid video compression based on zerotree wavelet structure

    International Nuclear Information System (INIS)

    Kilic, Ilker; Yilmaz, Reyat

    2009-01-01

    A video compression algorithm comparable to the standard techniques at low bit rates is presented in this paper. Overlapping block motion compensation (OBMC) is combined with the discrete wavelet transform, which is followed by Lloyd-Max quantization and a zerotree wavelet (ZTW) structure. The novel feature of this coding scheme is the combination of hierarchical finite state vector quantization (HFSVQ) with the ZTW to encode the quantized wavelet coefficients. It is seen that the proposed video encoder (ZTW-HFSVQ) performs better than MPEG-4 and Zerotree Entropy Coding (ZTE). (author)

  15. Parkinson's Disease Videos

    Medline Plus

    Full Text Available The Library is an extensive collection of books, fact sheets, videos, podcasts, and more, including Expert Briefings such as "Nonmotor Symptoms of Parkinson's Disease" and "Gait, Balance and Falls in Parkinson's Disease".

  16. Acoustic Neuroma Educational Video

    Medline Plus


  17. Acoustic Neuroma Educational Video

    Medline Plus


  18. Videos, Podcasts and Livechats

    Medline Plus


  19. Acoustic Neuroma Educational Video

    Medline Plus


  20. Acoustic Neuroma Educational Video

    Medline Plus


  1. Photos and Videos

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Observers are required to take photos and/or videos of all incidentally caught sea turtles, marine mammals, seabirds and unusual or rare fish. On the first 3...

  2. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Mission, Vision & Values Shop ANA Leadership & Staff Annual Reports Acoustic Neuroma Association 600 Peachtree Parkway Suite 108 ... About ANA Mission, Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English ...

  3. Videos, Podcasts and Livechats

    Medline Plus


  4. Acoustic Neuroma Educational Video

    Medline Plus


  5. Acoustic Neuroma Educational Video

    Medline Plus


  6. Videos, Podcasts and Livechats

    Medline Plus


  7. Acoustic Neuroma Educational Video

    Medline Plus


  8. Videos, Podcasts and Livechats

    Medline Plus


  9. Resilience in women with autoimmune rheumatic diseases.

    Science.gov (United States)

    Rojas, Manuel; Rodriguez, Yhojan; Pacheco, Yovana; Zapata, Elizabeth; Monsalve, Diana M; Mantilla, Rubén D; Rodríguez-Jimenez, Monica; Ramírez-Santana, Carolina; Molano-González, Nicolás; Anaya, Juan-Manuel

    2017-12-28

    To evaluate the relationship between resilience and clinical outcomes in patients with autoimmune rheumatic diseases. Focus groups, individual interviews, and chart reviews were done to collect data on 188 women with autoimmune rheumatic diseases, namely rheumatoid arthritis (n=51), systemic lupus erythematosus (n=70), systemic sclerosis (n=35), and Sjögren's syndrome (n=32). Demographic, clinical, and laboratory variables were assessed including disease activity by patient reported outcomes. Resilience was evaluated by using the Brief Resilience Scale. Bivariate, multiple linear regression, and classification and regression trees were used to analyse data. Resilience was influenced by age, duration of disease, and socioeconomic status. Lower resilience scores were observed in younger patients (under 50 years), whereas older patients (over 50 years) had higher resilience scores regardless of socioeconomic status. There was no influence of disease activity on resilience. A particular behaviour was observed in systemic sclerosis, in which patients with high socioeconomic status and regular physical activity had higher resilience scores. Resilience in patients with autoimmune rheumatic diseases is a continuum process influenced by age and socioeconomic status. The ways in which these variables along with exercise influence resilience deserve further investigation. Copyright © 2017 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  10. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users. T...

  11. Using WNTR to Model Water Distribution System Resilience ...

    Science.gov (United States)

    The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (i.e. pipes, tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worse case scenarios. As a Python package, WNTR takes advantage of many existing python capabilities, including parallel processing of scenarios and graphics capabilities. This presentation will outline the modeling components in WNTR, demonstrate their use, give the audience information on how to get started using the code, and invite others to participate in this open source project. This pres
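
    A minimal usage sketch of WNTR is shown below, assuming the package is installed and an EPANET INP file is available; the entry points follow the project's documented interface, but the file path is a placeholder.

```python
# Hedged usage sketch of WNTR: build a water network model from an EPANET INP
# file, run a hydraulic simulation, and inspect node pressures.
import wntr

wn = wntr.network.WaterNetworkModel('networks/Net3.inp')   # placeholder path
wn.options.time.duration = 24 * 3600                       # simulate one day

sim = wntr.sim.WNTRSimulator(wn)    # WNTR solver (supports pressure-driven demand)
results = sim.run_sim()

pressure = results.node['pressure']  # time-indexed table of node pressures
print(pressure.head())
```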

  12. Social impacts of corruption upon community resilience and poverty

    Directory of Open Access Journals (Sweden)

    James Lewis

    2017-05-01

    Full Text Available Corruption at all levels of all societies is a behavioural consequence of power and greed. With no rulebook, corruption is covert, opportunistic, repetitive and powerful, reliant upon dominance, fear and unspoken codes: a significant component of the ‘quiet violence’. Descriptions of financial corruption in China, Italy and Africa lead into a discussion of ‘grand’, ‘political’ and ‘petty’ corruption. Social consequences are given emphasis but elude analysis; those in Bangladesh and the Philippines are considered against prerequisites for resilience. People most dependent upon self-reliance are most prone to its erosion by exploitation, ubiquitous impediments to prerequisites of resilience – latent abilities to ‘accommodate and recover’ and to ‘change in order to survive’. Rarely spoken of to those it does not dominate, for long-term effectiveness, sustainability and reliability, eradication of corrupt practices should be prerequisite to initiatives for climate change, poverty reduction, disaster risk reduction and resilience.

  13. Deception Detection in Videos

    OpenAIRE

    Wu, Zhe; Singh, Bharat; Davis, Larry S.; Subrahmanian, V. S.

    2017-01-01

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely ...

  14. Industrial-Strength Streaming Video.

    Science.gov (United States)

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  15. NEI You Tube Videos: Amblyopia

    Medline Plus


  16. NEI You Tube Videos: Amblyopia

    Medline Plus


  17. A Framework for Video Modeling

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    In recent years, research in video databases has increased greatly, but relatively little work has been done in the area of semantic content-based retrieval. In this paper, we present a framework for video modelling with emphasis on semantic content of video data. The video data model presented

  18. NEI You Tube Videos: Amblyopia

    Medline Plus


  19. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  20. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful to adapt on the fly pre-encoded content to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance to the JPEG 2000 wavelet representation, a particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also spatial correlation among wavelet subbands coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.
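
    The parity-bit idea can be illustrated on a single wavelet-subband bit-plane: instead of transmitting the bit-plane itself, the encoder sends syndrome (parity) bits that allow the decoder to correct mismatches in its side information. The sketch below uses a random binary parity-check matrix purely for illustration; the actual codec relies on a properly designed channel code.

```python
# Toy illustration (not the paper's actual channel code): parity bits computed
# over a wavelet-subband bit-plane with a random binary parity-check matrix.
import numpy as np

rng = np.random.default_rng(1)
coeffs = rng.integers(0, 256, size=512)          # stand-in quantized subband coefficients
bitplane = (coeffs >> 7) & 1                     # most significant bit-plane

k = 64                                           # number of parity (syndrome) bits
H = rng.integers(0, 2, size=(k, bitplane.size))  # random parity-check matrix
parity = (H @ bitplane) % 2                      # transmitted instead of the bit-plane
print(parity[:16])
```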

  1. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  2. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  3. The resilience of paradigm mixes

    DEFF Research Database (Denmark)

    Daugbjerg, Carsten; Farsund, Arild Aurvåg; Langhelle, Oluf

    2017-01-01

    This paper argues that a policy regime based on a paradigm mix may be resilient when challenged by changing power balances and new agendas. Controversies between the actors can be contained within the paradigm mix as it enables them to legitimize different ideational positions. Rather than engaging...... context changed. The paradigm mix proved sufficiently flexible to accommodate food security concerns and at the same time continue to take steps toward further liberalization. Indeed, the main players have not challenged the paradigm mix....

  4. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  5. Focusing the Meaning(s) of Resilience: Resilience as a Descriptive Concept and a Boundary Object

    Directory of Open Access Journals (Sweden)

    Fridolin Simon. Brand

    2007-06-01

    Full Text Available This article reviews the variety of definitions proposed for "resilience" within sustainability science and suggests a typology according to the specific degree of normativity. There is a tension between the original descriptive concept of resilience first defined in ecological science and a more recent, vague, and malleable notion of resilience used as an approach or boundary object by different scientific disciplines. Even though increased conceptual vagueness can be valuable to foster communication across disciplines and between science and practice, both conceptual clarity and practical relevance of the concept of resilience are critically in danger. The fundamental question is what conceptual structure we want resilience to have. This article argues that a clearly specified, descriptive concept of resilience is critical in providing a counterbalance to the use of resilience as a vague boundary object. A clear descriptive concept provides the basis for operationalization and application of resilience within ecological science.

  6. Mass-storage management for distributed image/video archives

    Science.gov (United States)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue has been addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with the related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis and handles file migration between high capacity media and low access time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third level functions to fit delivery/visualization requirements and to reduce archiving costs.

  7. Deriving video content type from HEVC bitstream semantics

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; however, QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively. Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
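
    The kind of function described above can be approximated from parsed bitstream statistics alone. The sketch below computes an area-weighted average CU split depth as a spatial proxy and the area share of non-skip prediction decisions as a temporal proxy; the record format and any thresholds one would apply to these scores are assumptions, not the paper's exact definition.

```python
# Illustrative spatio-temporal content score from parsed HEVC syntax elements.
def spatio_temporal_score(cu_records, pu_records):
    """cu_records: iterable of (split_depth, area_in_pixels) per coding unit.
    pu_records: iterable of (mode, area_in_pixels), mode in {'SKIP', 'INTER', 'INTRA'}."""
    total_cu_area = sum(a for _, a in cu_records)
    spatial = sum(d * a for d, a in cu_records) / total_cu_area      # avg split depth

    total_pu_area = sum(a for _, a in pu_records)
    temporal = sum(a for m, a in pu_records if m != 'SKIP') / total_pu_area
    return spatial, temporal


# Toy example: mostly large, shallow CUs and many SKIP PUs -> "low activity" content.
cus = [(0, 64 * 64)] * 20 + [(2, 16 * 16)] * 10
pus = [('SKIP', 64 * 64)] * 18 + [('INTER', 64 * 64)] * 2
print(spatio_temporal_score(cus, pus))
```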

  8. Incorporating Resilience into Dynamic Social Models

    Science.gov (United States)

    2016-07-20

    The authors define resilience as a multi-level construct and study resilience at the individual, family and society levels.

  9. Evaluating multicast resilience in carrier ethernet

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Wessing, Henrik; Zhang, Jiang

    2010-01-01

    This paper gives an overview of the Carrier Ethernet technology with specific focus on resilience. In particular, we show how multicast traffic, which is essential for IPTV, can be protected. We detail the background for resilience mechanisms and their control and we present Carrier Ethernet resilience methods for linear and ring networks. By simulation we show that the availability of a multicast connection can be significantly increased by applying protection methods.

  10. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration
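
    The CACTI measurement model reduces, in its simplest discrete form, to summing the video frames after modulating each with a shifted binary aperture mask. The numpy sketch below uses random masks as stand-ins for the translated coded aperture.

```python
# Sketch of a temporally coded snapshot: one 2-D measurement is the sum of
# video frames, each modulated by a per-frame binary aperture mask.
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64
video = rng.random((T, H, W))                          # the (x, y, t) volume to recover
masks = (rng.random((T, H, W)) > 0.5).astype(float)    # per-frame coded apertures

snapshot = (masks * video).sum(axis=0)                 # y = sum_t C_t * x_t
print(snapshot.shape)                                  # one 2-D measurement encodes T frames
```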

  11. Novel memory architecture for video signal processor

    Science.gov (United States)

    Hung, Jen-Sheng; Lin, Chia-Hsing; Jen, Chein-Wei

    1993-11-01

    An on-chip memory architecture for video signal processors (VSP) is proposed. This memory structure is a two-level design for the different data localities in video applications. The upper level, Memory A, provides enough storage capacity to reduce the impact of the limited chip I/O bandwidth, and the lower level, Memory B, provides enough data parallelism and flexibility to meet the requirements of multiple reconfigurable pipeline function units in a single VSP chip. The needed memory size is decided by the memory usage analysis for video algorithms and the number of function units. Both levels of memory adopt a dual-port memory scheme to sustain simultaneous read and write operations. In particular, Memory B uses multiple one-read-one-write memory banks to emulate a real multiport memory. Therefore, one can change the configuration of Memory B to several sets of memories with variable read/write ports by adjusting the bus switches. Then the numbers of read ports and write ports in the proposed memory can meet the requirements of data flow patterns in different video coding algorithms. We have finished the design of a prototype memory using 1.2-micrometer SPDM SRAM technology and will fabricate it through TSMC, in Taiwan.

  12. Degrees of Resilience: Profiling Psychological Resilience and Prospective Academic Achievement in University Inductees

    Science.gov (United States)

    Allan, John F.; McKenna, Jim; Dominey, Susan

    2014-01-01

    University inductees may be increasingly vulnerable to stressors during transition into higher education (HE), requiring psychological resilience to achieve academic success. This study aimed to profile inductees' resilience and to investigate links to prospective end of year academic outcomes. Scores for resilience were based on a validated…

  13. Teaching Resilience: A Narrative Inquiry into the Importance of Teacher Resilience

    Science.gov (United States)

    Vance, Angela; Pendergast, Donna; Garvis, Susanne

    2015-01-01

    This study set out to explore how high school teachers perceive their resilience as they teach a scripted social and emotional learning program to students with the goal of promoting the resilience skills of the students in their pastoral care classes. In this emerging field of research on teacher resilience, there is a paucity of research…

  14. Reframing Resilience: Pilot Evaluation of a Program to Promote Resilience in Marginalized Older Adults

    Science.gov (United States)

    Fullen, Matthew C.; Gorby, Sean R.

    2016-01-01

    Resilience has been described as a paradigm for aging that is more inclusive than models that focus on physiological and functional abilities. We evaluated a novel program, Resilient Aging, designed to influence marginalized older adults' perceptions of their resilience, self-efficacy, and wellness. The multiweek group program incorporated an…

  15. Priority Queues Resilient to Memory Faults

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Moruz, Gabriel; Mølhave, Thomas

    2007-01-01

    In the faulty-memory RAM model, the content of memory cells can get corrupted at any time during the execution of an algorithm, and a constant number of uncorruptible registers are available. A resilient data structure in this model works correctly on the set of uncorrupted values. In this paper we...... introduce a resilient priority queue. The deletemin operation of a resilient priority queue returns either the minimum uncorrupted element or some corrupted element. Our resilient priority queue uses $O(n)$ space to store $n$ elements. Both insert and deletemin operations are performed in $O(\\log n...... queues storing only structural information in the uncorruptible registers between operations....
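
    A standard building block in the faulty-memory RAM model is to keep a small number of values reliable by replication and majority voting: with 2*delta + 1 copies, up to delta corruptions can be tolerated. The sketch below illustrates that trick only; it is not the paper's priority-queue construction.

```python
# Hedged illustration of majority-vote replication in the faulty-memory model.
from collections import Counter


class ResilientValue:
    def __init__(self, value, delta):
        self.copies = [value] * (2 * delta + 1)

    def read(self):
        # Majority of the stored copies; correct as long as at most `delta`
        # copies have been corrupted.
        return Counter(self.copies).most_common(1)[0][0]


v = ResilientValue(42, delta=2)
v.copies[1] = 999      # simulate a memory fault
v.copies[3] = -1       # another fault
print(v.read())        # still 42
```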

  16. Enhancing quantitative approaches for assessing community resilience

    Science.gov (United States)

    Chuang, W. C.; Garmestani, A.S.; Eason, T. N.; Spanbauer, T. L.; Fried-Peterson, H. B.; Roberts, C.P.; Sundstrom, Shana M.; Burnett, J.L.; Angeler, David G.; Chaffin, Brian C.; Gunderson, L.; Twidwell, Dirac; Allen, Craig R.

    2018-01-01

    Scholars from many different intellectual disciplines have attempted to measure, estimate, or quantify resilience. However, there is growing concern that lack of clarity on the operationalization of the concept will limit its application. In this paper, we discuss the theory, research development and quantitative approaches in ecological and community resilience. Upon noting the lack of methods that quantify the complexities of the linked human and natural aspects of community resilience, we identify several promising approaches within the ecological resilience tradition that may be useful in filling these gaps. Further, we discuss the challenges for consolidating these approaches into a more integrated perspective for managing social-ecological systems.

  17. Practical Leakage-Resilient Symmetric Cryptography

    DEFF Research Database (Denmark)

    Faust, Sebastian; Pietrzak, Krzysztof; Schipper, Joachim

    2012-01-01

    Leakage resilient cryptography attempts to incorporate side-channel leakage into the black-box security model and designs cryptographic schemes that are provably secure within it. Informally, a scheme is leakage-resilient if it remains secure even if an adversary learns a bounded amount of arbitrary...

  18. Resilience to Surprises through Communicative Planning

    Directory of Open Access Journals (Sweden)

    Bruce Evan. Goldstein

    2009-12-01

    Full Text Available Resilience thinkers share an interest in collaborative deliberation with communicative planners, who aim to accommodate different forms of knowledge and styles of reasoning to promote social learning and yield creative and equitable agreements. Members of both fields attended a symposium at Virginia Tech in late 2008, where communicative planners considered how social-ecological resilience informed new possibilities for planning practice beyond disaster mitigation and response. In turn, communicative planners offered resilience scholars ideas about how collaboration could accomplish more than enhance rational decision making of the commons. Through these exchanges, the symposium fostered ideas about collaborative governance and the critical role of expertise in fostering communicative resilience.

  19. Frequent video game players resist perceptual interference.

    Directory of Open Access Journals (Sweden)

    Aaron V Berard

    Full Text Available Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT), a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust overnight improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect will not occur if a one-hour break is allowed in between the changes. These results have suggested that after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers have faster stabilization of perceptual learning compared to non-gamers and examined the effect of daily video game playing on interference of training of TDT with one background orientation on perceptual learning of TDT with a different background orientation. As a result, we found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results with the interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.

  20. Frequent video game players resist perceptual interference.

    Science.gov (United States)

    Berard, Aaron V; Cain, Matthew S; Watanabe, Takeo; Sasaki, Yuka

    2015-01-01

    Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT), a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust overnight improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect will not occur if a one-hour break is allowed in between the changes. These results have suggested that after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers have faster stabilization of perceptual learning compared to non-gamers and examined the effect of daily video game playing on interference of training of TDT with one background orientation on perceptual learning of TDT with a different background orientation. As a result, we found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results with the interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.