WorldWideScience

Sample records for complex motion video

  1. Distributed Video Coding (DVC): Motion estimation and DCT quantization in low complexity video compression

    NARCIS (Netherlands)

    Borchert, S.

    2010-01-01

    The main focus of video encoding in the past twenty years has been on video broadcasting. A video is captured and encoded by professional equipment and then watched on varying consumer devices. Consequently, the focus was to increase the quality and to keep down the decoder complexity. In more rece

  2. A low-complexity region-based video coder using backward morphological motion field segmentation.

    Science.gov (United States)

    Yang, X; Ramchandran, K

    1999-01-01

    We introduce a novel region-based video compression framework based on morphology to efficiently capture motion correspondences between consecutive frames in an image sequence. Our coder is built on the observation that the motion field associated with typical image sequences can be segmented into component motion subfield "clusters" associated with distinct objects or regions in the scene, and further, that these clusters can be efficiently captured using morphological operators in a "backward" framework that avoids the need to send region shape information. Region segmentation is performed directly on the motion field by introducing a small "core" for each cluster that captures the essential features of the cluster and reliably represents its motion behavior. Cluster matching is used in lieu of the conventional block matching methods of standard video coders to define a cluster motion representation paradigm. Furthermore, a region-based pel-recursive approach is applied to find the refinement motion field for each cluster and the cluster motion prediction error image is coded by a novel adaptive scalar quantization method. Experimental results reveal a 10-20% reduction in prediction error energy and 1-3 dB gain in the final reconstructed peak signal-to-noise ratio (PSNR) over the standard MPEG-1 coder at typical bit rates of 500 Kb/s to 1 Mb/s on standard test sequences, while also requiring lower computational complexity.

  3. Video summarization using motion descriptors

    Science.gov (United States)

    Divakaran, Ajay; Peker, Kadir A.; Sun, Huifang

    2001-01-01

    We describe a technique for video summarization that uses motion descriptors computed in the compressed domain to speed up conventional color-based video summarization techniques. The basic hypothesis of the work is that the intensity of motion activity of a video segment is a direct indication of its 'summarizability.' We present experimental verification of this hypothesis. We are thus able to quickly identify easy-to-summarize segments of a video sequence, since they have a low intensity of motion activity. Moreover, the compressed-domain extraction of motion activity intensity is much simpler than the color-based calculations. We are able to easily summarize these segments by simply choosing a key frame at random from each low-activity segment. We can then apply conventional color-based summarization techniques to the remaining segments. We are thus able to speed up color-based summarization techniques by reducing the number of segments on which computationally more expensive color-based computation is needed.
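
    The segment-selection logic described above is simple enough to sketch. The following minimal illustration (not the authors' implementation) splits a sequence into fixed-length segments by motion-activity intensity and picks a random key frame from each low-activity segment; the segment length, threshold, and per-frame activity measure are assumptions.

```python
import numpy as np

def summarize_by_motion_activity(activity, seg_len=30, threshold=2.0, rng=None):
    """Pick key frames for low-activity segments; flag the rest for the
    color-based summarization pass. `activity` is a per-frame motion-activity
    measure, e.g. mean motion-vector magnitude from the compressed stream."""
    rng = rng or np.random.default_rng(0)
    key_frames, needs_color_pass = [], []
    for start in range(0, len(activity), seg_len):
        seg = slice(start, min(start + seg_len, len(activity)))
        if np.mean(activity[seg]) < threshold:           # easy to summarize
            key_frames.append(int(rng.integers(seg.start, seg.stop)))
        else:                                            # hand off to color-based step
            needs_color_pass.append((seg.start, seg.stop))
    return key_frames, needs_color_pass

# Example: 300 frames of synthetic activity values
activity = np.abs(np.random.default_rng(1).normal(1.5, 1.0, 300))
keys, remaining = summarize_by_motion_activity(activity)
print(keys, remaining)
```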

  4. Dense and sparse aggregations in complex motion: Video coupled with simulation modeling

    Science.gov (United States)

    In censuses of aggregations composed of highly mobile animals, the link between image processing technology and simulation modeling remains relatively unexplored despite demonstrated ecological needs for abundance and density assessments. We introduce a framework that connects video censusing with ...

  5. Video-Based Motion Analysis

    Science.gov (United States)

    French, Paul; Peterson, Joel; Arrighi, Julie

    2005-04-01

    Video-based motion analysis has recently become very popular in introductory physics classes. This paper outlines general recommendations regarding equipment and software; videography issues such as scaling, shutter speed, lighting, background, and camera distance; as well as other methodological aspects. Also described are the measurement and modeling of the gravitational, drag, and Magnus forces on 1) a spherical projectile undergoing one-dimensional motion and 2) a spinning spherical projectile undergoing motion within a plane. Measurement and correction methods are devised for four common, major sources of error: parallax, lens distortion, discretization, and improper scaling.

  6. GPU-based video motion magnification

    Science.gov (United States)

    DomŻał, Mariusz; Jedrasiak, Karol; Sobel, Dawid; Ryt, Artur; Nawrat, Aleksander

    2016-06-01

    Video motion magnification (VMM) allows people to see otherwise invisible subtle changes in the surrounding world. VMM is also capable of hiding them with a modified version of the algorithm. It is possible to magnify motion related to the breathing of patients in a hospital in order to observe it, or to extinguish it and extract other information, for example blood flow, from the stabilized image sequence. In both cases we would like to perform the calculations in real time. Unfortunately, the VMM algorithm requires a great amount of computing power. In this article we suggest that the VMM algorithm can be parallelized (each thread processes one pixel) and, to prove it, we implemented the algorithm on a GPU using CUDA technology. The CPU is used only to grab, write, and display frames and to schedule work for the GPU. Each GPU kernel performs spatial decomposition, reconstruction, and motion amplification. In this work we present an approach that achieves a significant speedup over existing methods and allows VMM to process video in real time. This solution can be used as preprocessing for other algorithms in more complex systems, or can find application wherever real-time motion magnification would be useful. It is worth mentioning that the implementation runs on most modern desktops and laptops compatible with CUDA technology.
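
    As a rough illustration of the per-pixel, embarrassingly parallel work that the paper maps onto the GPU, the sketch below performs Eulerian-style motion magnification on the CPU with NumPy: each pixel's intensity trace is temporally band-pass filtered, amplified, and added back. The band limits, amplification factor, and the simple FFT-based filter are assumptions for illustration, not the paper's CUDA pipeline.

```python
import numpy as np

def magnify_motion(frames, fps, low_hz, high_hz, alpha):
    """Amplify subtle temporal variations in a (T, H, W) grayscale stack.
    Each pixel is band-pass filtered along time and the result is scaled
    by `alpha` and added back -- one independent job per pixel, which is
    why the computation parallelizes well on a GPU."""
    t = frames.shape[0]
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum = np.fft.rfft(frames, axis=0)
    spectrum[~band] = 0.0                      # keep only the chosen temporal band
    filtered = np.fft.irfft(spectrum, n=t, axis=0)
    return np.clip(frames + alpha * filtered, 0.0, 1.0)

# Example: synthetic 64x64 sequence with a tiny 1 Hz oscillation
rng = np.random.default_rng(0)
base = rng.random((64, 64))
times = np.arange(120) / 30.0                  # 4 s at 30 fps
frames = base[None] + 0.002 * np.sin(2 * np.pi * 1.0 * times)[:, None, None]
out = magnify_motion(frames, fps=30, low_hz=0.5, high_hz=2.0, alpha=50)
```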

  7. Repurposing video recordings for structure motion estimations

    Science.gov (United States)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
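
    A minimal sketch of the per-pixel motion-tracking step, using OpenCV's dense Farneback optical flow as a stand-in for whichever optical-flow variant the authors used; the input file name and the averaging of horizontal flow into a sway signal are illustrative assumptions.

```python
import cv2
import numpy as np

# Placeholder input: a perspective-corrected video of the structure.
cap = cv2.VideoCapture("rectified_building.mp4")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the input video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

mean_dx = []                                   # average horizontal displacement per frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback flow: (prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_dx.append(float(np.mean(flow[..., 0])))   # crude per-frame sway estimate
    prev_gray = gray
cap.release()

# The spectrum of the sway signal exposes the dominant response frequencies.
if mean_dx:
    spectrum = np.abs(np.fft.rfft(np.array(mean_dx) - np.mean(mean_dx)))
```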

  8. Motion estimation techniques for digital video coding

    CERN Document Server

    Metkar, Shilpa

    2013-01-01

    The book deals with the development of a methodology to estimate the motion field between two frames for video coding applications. This book proposes an exhaustive study of the motion estimation process in the framework of a general video coder. The conceptual explanations are discussed in a simple language and with the use of suitable figures. The book will serve as a guide for new researchers working in the field of motion estimation techniques.

  9. Indexing Motion Detection Data for Surveillance Video

    DEFF Research Database (Denmark)

    Vind, Søren Juhl; Bille, Philip; Gørtz, Inge Li

    2014-01-01

    We show how to compactly index video data to support fast motion detection queries. A query specifies a time interval T, an area A in the video and two thresholds v and p. The answer to a query is a list of timestamps in T where ≥ p% of A has changed by ≥ v values. Our results show that by building...... a small index, we can support queries with a speedup of two to three orders of magnitude compared to motion detection without an index. For high resolution video, the index size is about 20% of the compressed video size....
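
    The query semantics can be stated compactly: for each timestamp in T, report it if at least p% of the pixels inside A differ from the previous frame by at least v. A brute-force, index-free reference check under those assumptions might look like the sketch below; the contribution of the paper is an index that answers the same query two to three orders of magnitude faster.

```python
import numpy as np

def motion_query(frames, t_range, area, v, p):
    """Brute-force reference for the query: frames is a (T, H, W) uint8 array,
    t_range = (t0, t1), area = (y0, y1, x0, x1), v = value threshold,
    p = required percentage of changed pixels (0..100)."""
    t0, t1 = t_range
    y0, y1, x0, x1 = area
    hits = []
    for t in range(max(t0, 1), t1):
        prev = frames[t - 1, y0:y1, x0:x1].astype(np.int16)
        cur = frames[t, y0:y1, x0:x1].astype(np.int16)
        changed = np.abs(cur - prev) >= v
        if 100.0 * changed.mean() >= p:
            hits.append(t)
    return hits

frames = np.random.default_rng(0).integers(0, 256, (100, 120, 160), dtype=np.uint8)
print(motion_query(frames, (10, 50), (20, 80, 30, 90), v=40, p=30))
```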

  10. Low Complexity for Scalable Video Coding Extension of H.264 based on the Complexity of Video

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2016-12-01

    Full Text Available Scalable Video Coding (SVC)/H.264 is one type of video compression technique. It provides an efficient video coding extension of H.264/AVC and ensures higher performance through a high compression ratio. SVC/H.264 is computationally complex, since finding the best macroblock mode and motion estimation through exhaustive search techniques takes considerable time. This work reduces the processing time by matching the complexity of the video to the method used for macroblock mode selection and motion estimation. The goal of this approach is to reduce the encoding time and improve the quality of the video stream; the efficiency of the proposed approach makes it suitable for many applications, such as video conferencing and security applications.

  11. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper;

    2009-01-01

    This paper presents a novel approach to fast motion detection in H.264/MPEG-4 advanced video coding (AVC) compressed video streams for IP video surveillance systems. The goal is to develop algorithms which may be useful in a real-life industrial perspective by facilitating the processing of large...... numbers of video streams on a single server. The focus of the work is on using the information in coded video streams to reduce the computational complexity and memory requirements, which translates into reduced hardware requirements and costs. The devised algorithm detects and segments activity based...... on motion vectors embedded in the video stream without requiring a full decoding and reconstruction of video frames. To improve the robustness to noise, a confidence measure based on temporal and spatial clues is introduced to increase the probability of correct detection. The algorithm was tested on indoor...

  12. Evaluation and Comparison of Motion Estimation Algorithms for Video Compression

    Directory of Open Access Journals (Sweden)

    Avinash Nayak

    2013-08-01

    Full Text Available Video compression has become an essential component of broadcast and entertainment media. Motion estimation and compensation techniques, which can effectively eliminate temporal redundancy between adjacent frames, have been widely applied in popular video compression coding standards such as MPEG-2 and MPEG-4. Traditional fast block matching algorithms are easily trapped in local minima, resulting in some degradation of video quality after decoding. In this paper various computing techniques are evaluated in video compression for achieving a globally optimal solution for motion estimation. Zero-motion prejudgment is implemented to find static macroblocks (MBs) which do not need to perform the remaining search, thus reducing the computational cost. The Adaptive Rood Pattern Search (ARPS) motion estimation algorithm is also adopted to reduce the motion vector overhead in frame prediction. The simulation results show that the ARPS algorithm is very effective in reducing the computation overhead and achieves very good peak signal-to-noise ratio (PSNR) values. This method significantly reduces the computational complexity involved in frame prediction and also yields the least prediction error on all video sequences. Thus the ARPS technique is more efficient than conventional search algorithms in video compression.
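
    As a small illustration of the zero-motion prejudgment idea (not the authors' exact code), the sketch below computes the SAD at zero displacement first and skips the search for a macroblock when that SAD falls below a threshold; a plain exhaustive search stands in for ARPS, and the threshold value is an assumption.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def estimate_block_mv(cur, ref, y, x, bs=16, search=7, zmp_thresh=512):
    """Return the motion vector for one macroblock. Zero-motion prejudgment:
    if the block barely changed, skip the search entirely."""
    block = cur[y:y + bs, x:x + bs]
    best = sad(block, ref[y:y + bs, x:x + bs])
    if best < zmp_thresh:                     # static macroblock -> (0, 0)
        return (0, 0), best
    best_mv = (0, 0)
    for dy in range(-search, search + 1):     # exhaustive search stands in for ARPS
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
                continue
            cost = sad(block, ref[yy:yy + bs, xx:xx + bs])
            if cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(estimate_block_mv(cur, ref, 16, 16))    # expect roughly (-2, -3)
```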

  13. Low complexity video encoding for UAV inspection

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Zhang, Ruo; Forchhammer, Søren

    2016-01-01

    In this work we present several methods for fast integer motion estimation of videos recorded aboard an Unmanned Aerial Vehicle (UAV). Different from related work, the field depth is not considered to be consistent. The novel methods designed for low complexity MV prediction in H.264/AVC...

  14. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    We investigate lossless coding of video using predictive coding and motion compensation. The methods incorporate state-of-the-art lossless techniques such as context based prediction and bias cancellation, Golomb coding, high resolution motion field estimation, 3-dimensional predictors, prediction...... using one or multiple previous images, predictor dependent error modelling, and selection of motion field by code length. For slow pan or slow zoom sequences, coding methods that use multiple previous images are up to 20% better than motion compensation using a single previous image and up to 40% better...... than coding that does not utilize motion compensation....

  15. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Summary form only given. We investigate lossless coding of video using predictive coding and motion compensation. The new coding methods combine state-of-the-art lossless techniques such as JPEG (context based prediction and bias cancellation, Golomb coding), with high resolution motion field estimation......-predictors and intra-frame predictors as well. As proposed by Ribas-Corbera (see PhD thesis, University of Michigan, 1996), we use bi-linear interpolation in order to achieve sub-pixel precision of the motion field. Using more reference images is another way of achieving higher accuracy of the match. The motion...

  16. H.264 MOTION ESTIMATION ALGORITHM BASED ON VIDEO SEQUENCES ACTIVITY

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Motion estimation is an important part of the H.264/AVC encoding process, with high computational complexity. Therefore, it is quite necessary to find a fast motion estimation algorithm for real-time applications. The algorithm proposed in this letter first judges the activity degree of the macroblocks, then classifies different video sequences and applies different search strategies according to the result. Experiments show that this method obtains almost the same video quality as the Full Search (FS) algorithm while reducing the computation cost by more than 95%.

  17. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    We investigate lossless coding of video using predictive coding and motion compensation. The methods incorporate state-of-the-art lossless techniques such as context based prediction and bias cancellation, Golomb coding, high resolution motion field estimation, 3-dimensional predictors, prediction...

  18. Quantitative assessment of human motion using video motion analysis

    Science.gov (United States)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  19. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. Therefore the design is aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction, the first being the full and sub-pixel motion search, the second being the mode selection process, and the third being repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments were conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar Rate Distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  20. Motion feature extraction scheme for content-based video retrieval

    Science.gov (United States)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes the extraction scheme of global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing temporal information of videos. And it is more objective and consistent compared to other features such as color, texture, etc. Efficient motion feature extraction is an important step for content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences. When dealing with the problem of object tracking, algorithms are always proposed on the basis of known object region in the frames. In this paper, a whole picture of the motion information in the video shot has been achieved through analyzing motion of background and foreground respectively and automatically. 6-parameter affine model is utilized as the motion model of background motion, and a fast and robust global motion estimation algorithm is developed to estimate the parameters of the motion model. The object region is obtained by means of global motion compensation between two consecutive frames. Then the center of object region is calculated and tracked to get the object motion trajectory in the video sequence. Global motion and object trajectory are described with MPEG-7 parametric motion and motion trajectory descriptors and valid similar measures are defined for the two descriptors. Experimental results indicate that our proposed scheme is reliable and efficient.
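
    The 6-parameter affine model mentioned above maps a background point (x, y) to (a1*x + a2*y + a3, a4*x + a5*y + a6). Given point correspondences, for example from block motion vectors, the parameters can be estimated by least squares as in this sketch; the robust re-weighting that a practical global-motion estimator would add is omitted.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 6-parameter affine model mapping src -> dst.
    src, dst: (N, 2) arrays of (x, y) correspondences. Returns (A, t) with
    dst ~= src @ A.T + t."""
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src        # rows for x' = a1*x + a2*y + a3
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src        # rows for y' = a4*x + a5*y + a6
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[params[0], params[1]], [params[3], params[4]]])
    t = np.array([params[2], params[5]])
    return A, t

# Synthetic check: a small rotation plus translation
rng = np.random.default_rng(0)
src = rng.random((50, 2)) * 100
theta = 0.02
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T + np.array([3.0, -1.5])
A, t = fit_affine(src, dst)   # A ~= R, t ~= [3.0, -1.5]
```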

  1. Estimating the Video Registration Using Image Motions

    Directory of Open Access Journals (Sweden)

    N.Kannaiya Raja

    2012-07-01

    Full Text Available In this research, we consider the problem of registering multiple video sequences of dynamic scenes that are not limited to rigid objects, such as fireworks, blasts, and fast-moving cars, taken from different vantage points. We propose a simple algorithm that creates distinct frames from the moving videos for matching such complex scenes. Our algorithm does not require the cameras to be synchronized, and it is not based on frame-by-frame or volume-by-volume registration. Instead, we model each video as the output of a linear dynamical system and transform the task of registering the video sequences to that of registering the parameters of the corresponding dynamical models. In this paper we use joint identification of frames to form distinct frames concurrently. The joint identification and the Jordan canonical form are not only applicable to the case of registering video sequences, but also to the entire genre of algorithms based on the dynamic texture model. We have also shown that, out of all the possible choices of identification method and canonical form, joint identification (JID) using the Jordan canonical form (JCF) performs the best.

  2. Reconstructed key frame and object motion based video retrieval

    Science.gov (United States)

    Hu, Shuangyan; Li, Junshan; Li, Kun; Wang, Rui; Yang, Weijun

    2007-11-01

    This paper proposes a video retrieval scheme which can retrieve desired video clips from video databases using color and object motion. The retrieval method consists of two steps. In the first step, the set of intra-picture frames (I-frames) is obtained from the query MPEG video and the key frame of the video is reconstructed from this set. The video retrieval then reduces to retrieval of the reconstructed key frame (R-key frame) and can easily be performed using content-based image retrieval methods. In the second step, the local object motion information, i.e., the local motion vector field, is extracted from the set of video clips resulting from the first step, and the final similarity of videos is measured based on the constructed directional histogram. Experimental results show that the proposed two-step retrieval method performs excellently for video retrieval.

  3. Global motion compensated visual attention-based video watermarking

    Science.gov (United States)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.

  4. Video shot boundary detection using motion activity descriptor

    CERN Document Server

    Amel, Abdelati Malek; Abdellatif, Mtibaa

    2010-01-01

    This paper focuses on the study of the motion activity descriptor for shot boundary detection in video sequences. We are interested in validating this descriptor with the aim of a real-time implementation with reasonably high performance in shot boundary detection. The motion activity information is extracted in the uncompressed domain based on the adaptive rood pattern search (ARPS) algorithm. In this context, the motion activity descriptor was applied to different video sequences.

  5. Video Background Subtraction in Complex Environments

    Directory of Open Access Journals (Sweden)

    Juana E. Santoyo-Morales

    2014-06-01

    Full Text Available Background subtraction models based on mixtures of Gaussians have been extensively used for detecting objects in motion in a wide variety of computer vision applications. However, background subtraction modeling is still an open problem, particularly in video scenes with drastic illumination changes and dynamic backgrounds (complex backgrounds). The purpose of the present work is to increase the robustness of background subtraction models to complex environments. For this, we propose the following enhancements: (a) redefine the model distribution parameters involved in the detection of moving objects (distribution weight, mean, and variance), (b) improve pixel classification (background/foreground) and the variable update mechanism by a new time-space dependent learning-rate parameter, and (c) replace the pixel-based modeling currently used in the literature by a new space-time region-based model that eliminates the noise effect caused by drastic changes in illumination. Our proposed scheme can be implemented on any state-of-the-art background subtraction scheme based on mixtures of Gaussians to improve its resilience to complex backgrounds. Experimental results show excellent noise removal and object motion detection properties under complex environments.
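
    For readers who want a baseline to compare against, the standard mixture-of-Gaussians background subtractor that this work builds on is available off the shelf in OpenCV; the snippet below shows that baseline only (not the proposed space-time region-based model), with a placeholder file name and illustrative parameters.

```python
import cv2

# Baseline MoG background subtraction (the model class the paper extends).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("scene.mp4")           # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)         # 0 = background, 255 = foreground, 127 = shadow
    # Light morphological clean-up of the noise that complex backgrounds tend to produce.
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```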

  6. Video Coding with Motion-Compensated Lifted Wavelet Transforms

    OpenAIRE

    Flierl, M.; Girod, B.

    2004-01-01

    This article explores the efficiency of motion-compensated three-dimensional transform coding, a compression scheme that employs a motion-compensated transform for a group of pictures. We investigate this coding scheme experimentally and theoretically. The practical coding scheme employs in temporal direction a wavelet decomposition with motion-compensated lifting steps. Further, we compare the experimental results to that of a predictive video codec with single-hypothesis motion compensation...

  7. Evaluation and Comparison of Motion Estimation Algorithms for Video Compression

    OpenAIRE

    Avinash Nayak; Bijayinee Biswal; S. K. Sabut

    2013-01-01

    Video compression has become an essential component of broadcast and entertainment media. Motion Estimation and compensation techniques, which can eliminate temporal redundancy between adjacent frames effectively, have been widely applied to popular video compression coding standards such as MPEG-2, MPEG-4. Traditional fast block matching algorithms are easily trapped into the local minima resulting in degradation on video quality to some extent after decoding. In this paper various computing...

  8. Video summarization using descriptors of motion activity: a motion activity based approach to key-frame extraction from video shots

    Science.gov (United States)

    Divakaran, Ajay; Radhakrishnan, Regunathan; Peker, Kadir A.

    2001-10-01

    We describe a video summarization technique that uses motion descriptors computed in the compressed domain. It can either speed up conventional color-based video summarization techniques, or rapidly generate a key-frame based summary by itself. The basic hypothesis of the work is that the intensity of motion activity of a video segment is a direct indication of its `summarizability,' which we experimentally verify using the MPEG-7 motion activity descriptor and the fidelity measure proposed in H. S. Chang, S. Sull, and S. U. Lee, `Efficient video indexing scheme for content-based retrieval,' IEEE Trans. Circuits Syst. Video Technol. 9(8), (1999). Note that the compressed domain extraction of motion activity intensity is much simpler than the color-based calculations. We are thus able to quickly identify easy to summarize segments of a video sequence since they have a low intensity of motion activity. We are able to easily summarize these segments by simply choosing their first frames. We can then apply conventional color-based summarization techniques to the remaining segments. We thus speed up color-based summarization by reducing the number of segments processed. Our results also motivate a simple and novel key-frame extraction technique that relies on a motion activity based nonuniform sampling of the frames. Our results indicate that it can either be used by itself or to speed up color-based techniques as explained earlier.

  9. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequence captured from fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing information from motion sensor platforms, like smart phones, carried on human bodies and extracted from camera video. More specifically, a sequence of motion features extracted from camera video are compared with each of those collected from accelerometers of smart phones. When strong correlation is detected, identity information transmitted from the corresponding smart phone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments are conducted which achieved impressive performance.
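
    The core matching step, comparing a motion trace extracted from video with each phone's accelerometer trace and assigning the identity with the strongest correlation, can be sketched as below under the assumption that both signals are already resampled to a common rate; the correlation threshold is illustrative.

```python
import numpy as np

def normalized_xcorr_peak(a, b):
    """Peak of the normalized cross-correlation between two 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full") / min(len(a), len(b))
    return float(corr.max())

def identify(video_motion, phone_traces, threshold=0.6):
    """phone_traces: dict of {person_id: accelerometer magnitude trace},
    resampled to the same rate as the video motion feature."""
    scores = {pid: normalized_xcorr_peak(video_motion, trace)
              for pid, trace in phone_traces.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores

# Toy example: person "alice" carries the phone whose trace matches the video.
rng = np.random.default_rng(0)
walk = np.sin(np.linspace(0, 20 * np.pi, 600))
video_motion = walk + 0.1 * rng.normal(size=600)
traces = {"alice": walk + 0.1 * rng.normal(size=600),
          "bob": rng.normal(size=600)}
print(identify(video_motion, traces))
```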

  10. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  11. Video coding using Karhunen-Loeve transform and motion compensation

    Science.gov (United States)

    Musatenko, Yurij S.; Soloveyko, Olexandr M.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.

    1999-07-01

    The paper presents a new method for video compression. The discussed technique considers video frames as a set of correlated images. A common approach to the problem of compressing correlated images is to use some orthogonal transform, for example a cosine or wavelet transform, in order to remove the correlation among images and then to compress the resulting coefficients using an already known compression technique such as JPEG or EZW. However, the most optimal representation for removing correlation among images is the Karhunen-Loeve (KL) transform. In the paper we apply the recently proposed Optimal Image Coding using KL transform (OICKL) method based on this approach. In order to take the nature of video into account, we use triangle motion compensation to improve the correlation among frames. The experimental part compares the performance of the plain OICKL codec with OICKL and motion compensation combined. Recommendations concerning the use of motion compensation with the OICKL technique are worked out.
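
    A minimal sketch of the decorrelation idea, under simplifying assumptions: treat each frame of a group as a vector, compute the Karhunen-Loeve (principal-component) basis of the group, and keep only the leading coefficients. This illustrates the KL step only; the OICKL coefficient coding and the triangle motion compensation described above are not reproduced.

```python
import numpy as np

def klt_compress_group(frames, k):
    """frames: (T, H, W) float array, one group of pictures.
    Returns the rank-k KLT approximation of the group and the coefficients."""
    t, h, w = frames.shape
    X = frames.reshape(t, -1)                    # one row per frame
    mean = X.mean(axis=0)
    Xc = X - mean
    # The SVD of the centered frame matrix gives the KL basis of the group.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    coeffs = U[:, :k] * S[:k]                    # k coefficients per frame
    approx = coeffs @ Vt[:k] + mean              # reconstruction from k components
    return approx.reshape(t, h, w), coeffs

rng = np.random.default_rng(0)
base = rng.random((32, 48))
group = np.stack([base + 0.01 * i + 0.01 * rng.normal(size=(32, 48)) for i in range(8)])
recon, coeffs = klt_compress_group(group, k=2)
print(np.abs(recon - group).mean())              # small residual with just 2 components
```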

  12. Design and Implementation of the Motion Compensation Module for HDTV Video Decoder

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper presents a new solution for the motion compensation module in the high definition television (HDTV) video decoder. The overall architecture and the design of the major functional units, such as the motion vector decoder, the predictor, and the mixer, are discussed. Based on the exploitation of the special characteristics inherent in the motion compensation algorithm, the motion compensation module and its functional units adopt various novel architectures in order to allow the module to meet real-time constraints. This solution resolves the problems of high hardware cost, low bus efficiency, and complex control schemes in conventional designs.

  13. Motion-compensated wavelet video coding using adaptive mode selection

    Science.gov (United States)

    Zhai, Fan; Pappas, Thrasyvoulos N.

    2004-01-01

    A motion-compensated wavelet video coder is presented that uses adaptive mode selection (AMS) for each macroblock (MB). The block-based motion estimation is performed in the spatial domain, and an embedded zerotree wavelet coder (EZW) is employed to encode the residue frame. In contrast to other motion-compensated wavelet video coders, where all the MBs are forced to be in INTER mode, we construct the residue frame by combining the prediction residual of the INTER MBs with the coding residual of the INTRA and INTER_ENCODE MBs. Different from INTER MBs that are not coded, the INTRA and INTER_ENCODE MBs are encoded separately by a DCT coder. By adaptively selecting the quantizers of the INTRA and INTER_ENCODE coded MBs, our goal is to equalize the characteristics of the residue frame in order to improve the overall coding efficiency of the wavelet coder. The mode selection is based on the variance of the MB, the variance of the prediction error, and the variance of the neighboring MBs' residual. Simulations show that the proposed motion-compensated wavelet video coder achieves a gain of around 0.7-0.8dB PSNR over MPEG-2 TM5, and a comparable PSNR to other 2D motion-compensated wavelet-based video codecs. It also provides potential visual quality improvement.

  14. Complexity scalable motion estimation for H.264/AVC

    Science.gov (United States)

    Kim, Changsung; Xin, Jun; Vetro, Anthony; Kuo, C.-C. Jay

    2006-01-01

    A new complexity-scalable framework for motion estimation is proposed to efficiently reduce the motion complexity of the encoding process, with focus on long term memory motion-compensated prediction of the H.264 video coding standard in this work. The objective is to provide a complexity-scalable scheme for the given motion estimation algorithm such that it reduces the encoding complexity to the desired level with insignificant penalty in rate-distortion performance. In principle, the proposed algorithm adaptively allocates the available motion-complexity budget to each macroblock based on its estimated impact on overall rate-distortion (RD) performance subject to the given encoding time limit. To estimate the macroblock-wise tradeoff between RD coding gain (J) and motion complexity (C), the correlation of the J-C curve between the current macroblock and the collocated macroblock in the previous frame is exploited to predict the initial motion-complexity budget of the current macroblock. The initial budget is adaptively assigned to each blocksize and block-partition successively, and the motion-complexity budget is updated at the end of every encoding unit for the remaining ones. Based on experiments, the proposed J-C slope based allocation is better than a uniform motion-complexity allocation scheme in terms of the RDC tradeoff. It is demonstrated by experimental results that the proposed algorithm can reduce the H.264 motion estimation complexity to the desired level with little degradation in rate-distortion performance.

  15. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.

  16. A Variational Framework for Simultaneous Motion Estimation and Restoration of Motion-Blurred Video (PREPRINT)

    Science.gov (United States)

    2007-08-01

    A Variational Framework for Simultaneous Motion Estimation and Restoration of Motion-Blurred Video. By Leah Bar, Benjamin Berkels, Martin Rumpf, and ... Institute for Numerical Simulation, University of Bonn, Germany (benjamin.berkels@ins.uni-bonn.de).

  17. Mode extraction on wind turbine blades via phase-based video motion estimation

    Science.gov (United States)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques are being applied more often for structural dynamics identification, characterization, and structural health monitoring. Although as a non-contact and full-field measurement method, image processing still has a long way to go to outperform other conventional sensing instruments (i.e. accelerometers, strain gauges, laser vibrometers, etc.,). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered as one of the most efficient methods regarding computation consumption and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. Phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The phase-based video motion estimation approach is demonstrated through processing data on a full-scale commercial structure (i.e. a wind turbine blade) with complex geometry and properties, and the results obtained have a good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.

  18. Video quality assessment based on correlation between spatiotemporal motion energies

    Science.gov (United States)

    Yan, Peng; Mou, Xuanqin

    2016-09-01

    Video quality assessment (VQA) has been a hot research topic because of the rapidly increasing demand for video communication. From the earliest PSNR metric to advanced models that are perceptually aware, researchers have made great progress in this field by introducing properties of the human visual system (HVS) into VQA model design. Among various algorithms that model how the HVS perceives motion, the spatiotemporal energy model has been validated to be highly consistent with psychophysical experiments. In this paper, we take the spatiotemporal energy model into VQA model design through the following steps. 1) According to the pristine spatiotemporal energy model proposed by Adelson et al., we apply linear filters, which are oriented in space-time and tuned in spatial frequency, to the reference and test videos respectively. The outputs of quadrature pairs of the above filters are then squared and summed to give two measures of motion energy, named the rightward and leftward energy responses, respectively. 2) Based on the pristine model, we calculate the summation of the rightward and leftward energy responses as spatiotemporal features to represent perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is calculated with statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparing with existing FR-VQA models. Experimental results show that STME performs with excellent prediction accuracy and stands among state-of-the-art VQA models.
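
    As a compact illustration of the energy computation in step 1 (a sketch, not the paper's filter bank), an oriented quadrature pair of space-time Gabor filters is applied to an x-t slice and the squared outputs are summed to give a phase-insensitive motion energy; the filter size and tuning are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_quadrature_xt(size=9, fx=0.2, ft=0.2, sigma=2.0):
    """Even/odd space-time Gabor pair; its x-t orientation sets the velocity tuning."""
    ax = np.arange(size) - size // 2
    t, x = np.meshgrid(ax, ax, indexing="ij")
    envelope = np.exp(-(x**2 + t**2) / (2 * sigma**2))
    phase = 2 * np.pi * (fx * x - ft * t)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(xt_slice, even, odd):
    """Sum of squared quadrature outputs: phase-insensitive motion energy."""
    return convolve(xt_slice, even)**2 + convolve(xt_slice, odd)**2

# Toy x-t slice: a bright bar drifting rightward at 1 pixel/frame.
T, X = 64, 64
xt = np.zeros((T, X))
for t in range(T):
    xt[t, (t + 10) % X] = 1.0
even_r, odd_r = gabor_quadrature_xt(fx=0.2, ft=0.2)       # rightward-tuned
even_l, odd_l = gabor_quadrature_xt(fx=0.2, ft=-0.2)      # leftward-tuned
right = motion_energy(xt, even_r, odd_r).sum()
left = motion_energy(xt, even_l, odd_l).sum()
print(right > left)                                       # True for rightward motion
```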

  19. Motion-based morphological segmentation of wildlife video

    Science.gov (United States)

    Thomas, Naveen M.; Canagarajah, Nishan

    2005-03-01

    Segmentation of objects in a video sequence is a key stage in most content-based retrieval systems. By further analysing the behaviour of these objects, it is possible to extract semantic information suitable for higher level content analysis. Since interesting content in a video is usually provided by moving objects, motion is a key feature to be used for pre content analysis segmentation. A motion based segmentation algorithm is presented in this paper that is both efficient and robust. The algorithm is also robust to the type of camera motion. The framework presented consists of three stages. These are the motion estimation stage, foreground detection stage and the refinement stage. An iteration of the first two stages, adaptively altering the motion estimation parameters each time, results in a joint segmentation and motion estimation approach that is extremely fast and accurate. Two dimensional histograms are used as a tool to carry out the foreground detection. The last stage uses morphological approaches as well as a prediction of foreground regions in future frames to further refine the segmentation. In this paper, results obtained from traditional approaches are compared with that of the proposed framework in the wildlife domain.

  20. Adaptive Motion Estimation Processor for Autonomous Video Devices

    Directory of Open Access Journals (Sweden)

    T. Dias

    2007-05-01

    Full Text Available Motion estimation is the most demanding operation of a video encoder, corresponding to at least 80% of the overall computational cost. As a consequence, with the proliferation of autonomous and portable handheld devices that support digital video coding, data-adaptive motion estimation algorithms have been required to dynamically configure the search pattern not only to avoid unnecessary computations and memory accesses but also to save energy. This paper proposes an application-specific instruction set processor (ASIP) to implement data-adaptive motion estimation algorithms that is characterized by a specialized datapath and a minimum and optimized instruction set. Due to its low-power nature, this architecture is highly suitable to develop motion estimators for portable, mobile, and battery-supplied devices. Based on the proposed architecture and the considered adaptive algorithms, several motion estimators were synthesized both for a Virtex-II Pro XC2VP30 FPGA from Xilinx, integrated within an ML310 development platform, and using a StdCell library based on a 0.18 μm CMOS process. Experimental results show that the proposed architecture is able to estimate motion vectors in real time for QCIF and CIF video sequences with very low power consumption. Moreover, it is also able to adapt the operation to the available energy level in runtime. By adjusting the search pattern and setting up a more convenient operating frequency, it can change the power consumption in the interval between 1.6 mW and 15 mW.

  1. Adaptive Motion Estimation Processor for Autonomous Video Devices

    Directory of Open Access Journals (Sweden)

    Dias T

    2007-01-01

    Full Text Available Motion estimation is the most demanding operation of a video encoder, corresponding to at least 80% of the overall computational cost. As a consequence, with the proliferation of autonomous and portable handheld devices that support digital video coding, data-adaptive motion estimation algorithms have been required to dynamically configure the search pattern not only to avoid unnecessary computations and memory accesses but also to save energy. This paper proposes an application-specific instruction set processor (ASIP) to implement data-adaptive motion estimation algorithms that is characterized by a specialized datapath and a minimum and optimized instruction set. Due to its low-power nature, this architecture is highly suitable to develop motion estimators for portable, mobile, and battery-supplied devices. Based on the proposed architecture and the considered adaptive algorithms, several motion estimators were synthesized both for a Virtex-II Pro XC2VP30 FPGA from Xilinx, integrated within an ML310 development platform, and using a StdCell library based on a 0.18 μm CMOS process. Experimental results show that the proposed architecture is able to estimate motion vectors in real time for QCIF and CIF video sequences with very low power consumption. Moreover, it is also able to adapt the operation to the available energy level in runtime. By adjusting the search pattern and setting up a more convenient operating frequency, it can change the power consumption in the interval between 1.6 mW and 15 mW.

  2. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.

  3. Video super-resolution using simultaneous motion and intensity calculations

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    for the joint estimation of a super-resolution sequence and its flow field. Via the calculus of variations, this leads to a coupled system of partial differential equations for image sequence and motion estimation. We solve a simplified form of this system and as a by-product we indeed provide a motion field...... for super-resolved sequences. Computing super-resolved flows has to our knowledge not been done before. Most advanced super-resolution (SR) methods found in literature cannot be applied to general video with arbitrary scene content and/or arbitrary optical flows, as it is possible with our simultaneous VSR...

  4. Hierarchical Search Motion Estimation Algorithms for Real-time Video Coding

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    Data fetching and memory management are two factors as important as computational complexity in Motion Estimation (ME) implementation. In this paper, a new Large-scale Sampling Hierarchical Search motion estimation algorithm (LSHS) is proposed. The LSHS is suitable for real-time video coding with low computational complexity, reduced data fetching, and simple memory access. The experimental results indicate that the average decoding PSNR with LSHS is only about 0.2 dB lower than that with the Full Search (FS) scheme.

  5. DEFINITION AND ANALYSIS OF MOTION ACTIVITY AFTER-STROKE PATIENT FROM THE VIDEO STREAM

    Directory of Open Access Journals (Sweden)

    M. Yu. Katayev

    2014-01-01

    Full Text Available This article describes an approach to the assessment of the motion activity of a person in the post-stroke period, allowing the doctor to obtain new information and give more informed recommendations on rehabilitation treatment than with traditional approaches. We describe a hardware-software complex for determining and analyzing the motion activity of a post-stroke patient from the video stream. The article provides a description of the complex, its algorithms, and the results of its work on an example of processing actual data. The algorithms and technology significantly accelerate gait analysis and improve the quality of diagnostics of post-stroke patients.

  6. Human Motion Video Analysis in Clinical Practice (Review)

    OpenAIRE

    V.V. Borzikov; N.N. Rukina; O.V. Vorobyova; A.N. Kuznetsov; A. N. Belova

    2015-01-01

    The development of new rehabilitation approaches to neurological and traumatological patients requires understanding of normal and pathological movement patterns. Biomechanical analysis of video images is the most accurate method of investigation and quantitative assessment of human normal and pathological locomotion. The review of currently available methods and systems of optical human motion analysis used in clinical practice is presented here. Short historical background is provi...

  7. Power-scalable video encoder for mobile devices based on collocated motion estimation

    Science.gov (United States)

    Jung, Joel; Bourge, Arnaud

    2004-01-01

    In this paper, a method for designing low-power video schemes is presented. Algorithms that imply a very low dissipation are required for new applications where the energy source is limited, e.g. mobile phones including a camera and video features. Whereas it can be observed that video standards are mainly designed around coding efficiency, we propose to take into account power consumption characteristics directly when designing the algorithm. More precisely, we give some guidelines for the design of low-power video codecs in the scope of modern hardware architectures and we introduce the notion of power scalability. We present an original encoder based on so-called 'Collocated Motion Estimation' designed using the proposed methodology. Experimental results show that we remain close to the coding efficiency of the reference H.264 baseline encoder while the power consumption is largely reduced in our solution. Moreover this encoder is scalable in memory transfer and computational complexity.

  8. Video Pedestrian Detection Based on Orthogonal Scene Motion Pattern

    Directory of Open Access Journals (Sweden)

    Jianming Qu

    2014-01-01

    Full Text Available In fixed video scenes, scene motion patterns can be very useful prior knowledge for pedestrian detection, which is still a challenge at present. A new approach to cascade pedestrian detection using an orthogonal scene motion pattern model in a general-density video is developed in this paper. To statistically model the pedestrian motion pattern, a probability grid overlaying the whole scene is set up to partition the scene into paths and holding areas. Features extracted from different pattern areas are classified by a group of specific strategies. Instead of using a unitary classifier, the employed classifier is composed of two directional subclassifiers trained, respectively, with different samples which are selected along two orthogonal directions. Considering that the negative images from the detection window scanning are much more numerous than the positive ones, the cascade AdaBoost technique is adopted by the subclassifiers to reduce the negative image computations. The proposed approach is proved effective by static classification experiments and surveillance video experiments.

  9. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

    Directory of Open Access Journals (Sweden)

    Guo Bao-long

    2004-09-01

    Full Text Available Motion estimation and compensation techniques are widely used for video coding applications but the real-time motion estimation is not easily achieved due to its enormous computations. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computation complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, the accurate search is achieved because the small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points less than the diamond search algorithm. Simulation results show that, compared with the previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, and also produces close performance in terms of motion compensation errors.

  10. Slow Motion and Zoom in HD Digital Videos Using Fractals

    Directory of Open Access Journals (Sweden)

    Maurizio Murroni

    2009-01-01

    Full Text Available Slow motion replay and spatial zooming are special effects used in digital video rendering. At present, most techniques to perform digital spatial zoom and slow motion are based on interpolation, both for enlarging the size of the original pictures and for generating additional intermediate frames. Mainly, interpolation is done either by linear or cubic spline functions or by motion estimation/compensation, which can both be applied pixel by pixel or by partitioning frames into blocks. The purpose of this paper is to present an alternative technique combining fractal theory and wavelet decomposition to achieve spatial zoom and slow motion replay of HD digital color video sequences. Fast scene change detection, active scene detection, wavelet subband analysis, and color fractal coding based on the Earth Mover's Distance (EMD) measure are used to reduce the computational load and to improve visual quality. Experiments show that the proposed scheme achieves better results in terms of overall visual quality compared to state-of-the-art techniques.

  11. Memory bandwidth-scalable motion estimation for mobile video coding

    Science.gov (United States)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  12. Complexity Control of Fast Motion Estimation in H.264/MPEG-4 AVC with Rate-Distortion-Complexity optimization

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren; Aghito, Shankar Manuel

    2007-01-01

    A complexity control algorithm for H.264 advanced video coding is proposed. The algorithm can control the complexity of integer inter motion estimation for a given target complexity. The Rate-Distortion-Complexity performance is improved by a complexity prediction model, simple analysis of the past...

  13. Visual Acuity and Contrast Sensitivity with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2009-01-01

    Video of Visual Acuity (VA) and Contrast Sensitivity (CS) test charts in a complex background was recorded using a CCD camera mounted on a computer-controlled tripod and fed into real-time MPEG2 compression/decompression equipment. The test charts were based on the Triangle Orientation

  14. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    Science.gov (United States)

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field rather than response speed.

  15. Scalable video compression using longer motion compensated temporal filters

    Science.gov (United States)

    Golwelkar, Abhijeet V.; Woods, John W.

    2003-06-01

    Three-dimensional (3-D) subband/wavelet coding using a motion compensated temporal filter (MCTF) is emerging as a very effective structure for highly scalable video coding. Most previous work has used two-tap Haar filters for the temporal analysis/synthesis. To make better use of the temporal redundancies, we propose an MCTF scheme based on longer biorthogonal filters. We show a lifting-based coder capable of subpixel-accurate motion compensation. If we retain the fixed-size GOP structure of the Haar filter MCTFs, we need to use symmetric extensions at both ends of the GOP. This gives rise to a loss of coding efficiency at the GOP boundaries, resulting in significant PSNR drops there. This performance can be considerably improved by using a 'sliding window' in place of the GOP block. We employ the 5/3 filter, and its non-orthogonality causes PSNR variation, which can be reduced by employing filter-based weighting coefficients. Overall, the longer filters have a higher coding gain than the Haar filters and show significant improvement in average PSNR at high bit rates. However, a doubling in the number of motion vectors to be transmitted translates to a drop in PSNR at lower video bit rates.
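
    For readers unfamiliar with the lifting formulation, the 5/3 biorthogonal analysis used above reduces, without motion compensation, to a predict step and an update step along the temporal axis. The sketch below is only a one-dimensional illustration of that structure under simple symmetric extension; it is not the authors' coder and omits motion compensation, the sliding window, and the weighting coefficients.

```python
import numpy as np

def lifting_53_analysis(x):
    """One level of the 5/3 wavelet transform along the temporal axis,
    implemented with lifting steps (predict + update), symmetric extension.
    x: 1-D array of pixel intensities over time (even length assumed)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    # Predict step: high-pass (detail) coefficients
    right = np.append(even[1:], even[-1])          # symmetric extension at the end
    d = odd - 0.5 * (even + right)
    # Update step: low-pass (approximation) coefficients
    left = np.insert(d[:-1], 0, d[0])              # symmetric extension at the start
    s = even + 0.25 * (left + d)
    return s, d

def lifting_53_synthesis(s, d):
    """Invert the lifting steps to recover the original samples exactly."""
    left = np.insert(d[:-1], 0, d[0])
    even = s - 0.25 * (left + d)
    right = np.append(even[1:], even[-1])
    odd = d + 0.5 * (even + right)
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    trace = np.array([10, 12, 13, 15, 20, 22, 21, 19], dtype=float)
    s, d = lifting_53_analysis(trace)
    print(np.allclose(lifting_53_synthesis(s, d), trace))   # True: perfect reconstruction
```

    Because each lifting step is trivially invertible, perfect reconstruction holds regardless of the filter coefficients, which is what makes motion-compensated lifting attractive for temporal transforms.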

  16. Video compression using conditional replenishment and motion prediction

    Science.gov (United States)

    Hein, D. N.; Ahmed, N.

    1984-01-01

    A study of a low-rate monochrome video compression system is presented in this paper. This system is a conditional-replenishment coder that uses two-dimensional Walsh-transform coding within each video frame. The conditional-replenishment algorithm works by transmitting only the portions of an image that are changing in time. This system is augmented with a motion-prediction algorithm that measures spatial displacement parameters from frame to frame and codes the data using these parameters. A comparison is made between the conditional-replenishment system with and without the motion-prediction algorithm. Subsampling in time is used to maintain the data rate at a fixed value. Average bit rates of 1 bit/picture element (pel) to 1/16 bit/pel are considered. The resultant performance of the compression simulations is presented in terms of the average frame rates produced.
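
    The conditional-replenishment principle itself, transmitting only the image regions that change over time, can be sketched in a few lines. The block size and threshold below are arbitrary assumptions for illustration; the Walsh-transform coding and motion-prediction stages of the paper are not reproduced.

```python
import numpy as np

def conditional_replenishment(prev_frame, curr_frame, block=16, threshold=4.0):
    """Return the list of (row, col) blocks whose content changed enough to be
    retransmitted, plus the frame reconstructed at the decoder side."""
    h, w = curr_frame.shape
    recon = prev_frame.copy()            # decoder starts from the previous frame
    sent_blocks = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            cur = curr_frame[r:r + block, c:c + block].astype(float)
            ref = prev_frame[r:r + block, c:c + block].astype(float)
            if np.mean(np.abs(cur - ref)) > threshold:    # block changed: replenish it
                recon[r:r + block, c:c + block] = cur
                sent_blocks.append((r, c))
    return sent_blocks, recon

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    f1 = f0.copy()
    f1[16:32, 16:32] = 255                # simulate a moving/changed region
    sent, recon = conditional_replenishment(f0, f1)
    print(len(sent), "of", (64 // 16) ** 2, "blocks replenished")
```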

  17. SAD PROCESSOR FOR MULTIPLE MACROBLOCK MATCHING IN FAST SEARCH VIDEO MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2015-02-01

    Full Text Available Motion estimation is a very important but computationally complex task in video coding. The process of determining motion vectors based on the temporal correlation of consecutive frames is used for video compression. In order to reduce the computational complexity of motion estimation and maintain the quality of encoding during motion compensation, different fast search techniques are available. These block-based motion estimation algorithms use the sum of absolute differences (SAD) between the corresponding macroblock in the current frame and all the candidate macroblocks in the reference frame to identify the best match. Existing implementations can compute the SAD between two blocks using a sequential or pipelined approach, but performing a multi-operand SAD in a single clock cycle with optimized resources is state of the art. In this paper, various parallel architectures for computation of the fixed-block-size SAD are evaluated and a fast parallel SAD architecture with optimized resources is proposed. Further, a SAD processor with 9 processing elements is described, which can be configured for any existing fast-search block matching algorithm. The proposed SAD processor consumes 7% fewer adders compared to the existing implementation for one processing element. Using nine PEs, it can process 84 HD frames per second in the worst case, which is a good outcome for real-time implementation; in the average case, the architecture processes 325 HD frames per second.
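
    For reference, the SAD cost that such a processor parallelises can be written down directly in software. The sketch below is a plain serial full-search version with an assumed 16x16 macroblock and a +/-7-pixel window; it only defines the quantity the hardware computes and does not model the nine-processing-element architecture.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.sum(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))))

def best_match_full_search(cur, ref, r0, c0, block=16, search=7):
    """Find the motion vector minimising SAD for the macroblock at (r0, c0)."""
    h, w = ref.shape
    target = cur[r0:r0 + block, c0:c0 + block]
    best = (None, float("inf"))
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r <= h - block and 0 <= c <= w - block:
                cost = sad(target, ref[r:r + block, c:c + block])
                if cost < best[1]:
                    best = ((dr, dc), cost)
    return best          # (motion vector, SAD value)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))    # simulate a global translation
    print(best_match_full_search(cur, ref, 16, 16))   # expect motion vector (-2, 3), SAD 0
```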

  18. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses the question of video codec enhancement for wireless video transmission of high-definition video data, taking into account constraints on memory and complexity. Starting from parameter adjustment for the JPEG2000 compression algorithm used for wireless transmission and achieving.......264/SVC for specific video content....

  19. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools' compression efficiency and computational complexity. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  20. A Web-Based Video Digitizing System for the Study of Projectile Motion.

    Science.gov (United States)

    Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.

    2000-01-01

    Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)

  1. COMPARISON AND IMPLEMENTATION OF FAST BLOCK MATCHING MOTION ESTIMATION ALGORITHMS FOR VIDEO COMPRESSION

    Directory of Open Access Journals (Sweden)

    D.V.MANJUNATHA

    2011-10-01

    Full Text Available In digital video communication it is not practical to store the full digital video without processing, because of the problems encountered in storage and transmission, so the processing technique called video compression is essential. In video compression, one of the computationally expensive and resource-hungry key elements is motion estimation. Motion estimation is a process which determines the motion between two or more frames of video. In this paper, four block matching motion estimation algorithms, namely Exhaustive Search (ES), Three Step Search (TSS), New Three Step Search (NTSS), and Diamond Search (DS), are compared and implemented for different distances between video frames by exploiting the temporal correlation between successive frames of the mristack and foreman slow-motion videos. Extensive simulation results and comparative analysis show that the Diamond Search (DS) algorithm is the best-matching motion estimation algorithm, achieving the best trade-off between search speed (number of computations) and reconstructed picture quality.
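
    As an illustration of one of the compared algorithms, the sketch below implements the classical Three Step Search: evaluate the centre and its eight neighbours at a step size of 4, move the centre to the best point, halve the step, and repeat. The block size, cost function and the smooth test image are assumptions for the example; the paper's simulation framework is not reproduced.

```python
import numpy as np

def sad(a, b):
    return np.sum(np.abs(a.astype(np.int32) - b.astype(np.int32)))

def three_step_search(cur, ref, r0, c0, block=16):
    """Classical TSS: step sizes 4, 2, 1 around the current best centre."""
    h, w = ref.shape
    target = cur[r0:r0 + block, c0:c0 + block]

    def cost(dr, dc):
        r, c = r0 + dr, c0 + dc
        if 0 <= r <= h - block and 0 <= c <= w - block:
            return sad(target, ref[r:r + block, c:c + block])
        return np.inf                                 # candidate outside the frame

    centre = (0, 0)
    best_cost = cost(*centre)
    for step in (4, 2, 1):
        best_mv = centre
        for dr in (-step, 0, step):                   # 9 candidates around the centre
            for dc in (-step, 0, step):
                mv = (centre[0] + dr, centre[1] + dc)
                c_ = cost(*mv)
                if c_ < best_cost:
                    best_mv, best_cost = mv, c_
        centre = best_mv                              # move centre, halve the step
    return centre, best_cost

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    ref = (128 + 60 * np.sin(x / 5.0) + 40 * np.cos(y / 7.0)).astype(np.uint8)
    cur = np.roll(ref, shift=(3, 1), axis=(0, 1))     # known global translation
    print(three_step_search(cur, ref, 16, 16))        # expect ((-3, -1), 0) on this smooth pattern
```

    Note that TSS only guarantees the true minimum when the error surface is roughly unimodal, which is why the demo uses a smooth test pattern rather than random noise.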

  2. Digital Watermarking Applied To MPEG-2 Coded Video Sequences Exploiting Motion Vector

    Institute of Scientific and Technical Information of China (English)

    DAI Yuan-jun; ZHANG Li-he; YANG Yi-xian

    2004-01-01

    This paper proposes a video watermarking technology to hide copyright information by a slight modification of the motion vectors in an MPEG-2 video bitstream. In this method, the watermark is embedded in the motion residual of large-value motion vectors, and the motion residual is then regularized into a modified bitstream, from which the watermark information can be retrieved easily and exactly. The experimental results show that this technique has little influence on MPEG decoding speed, degrades the perceptual quality only slightly, can embed a watermark in a short video sequence, and can be applied directly to both compressed and uncompressed video sequences.

  3. A Novel Real Time Motion Detection Algorithm For Videos

    Directory of Open Access Journals (Sweden)

    M. Nagaraju

    2013-11-01

    Full Text Available Real-time detection of moving objects is vital for video surveillance. Background subtraction serves as a basic method typically used to segment moving objects in image sequences taken from a camera. Some existing algorithms cannot adapt to changing circumstances and need manual calibration of parameters or rely on assumptions about the dynamically changing background. An adaptive motion segmentation and detection strategy is developed using motion variation and chromatic characteristics, which eliminates undesired corruption of the background model and does not depend on an adaptation coefficient. In this proposed work, a novel real-time motion detection algorithm is presented for dynamically changing backgrounds. The algorithm integrates temporal differencing with the optical flow method, a double background filtering method, and morphological processing techniques to achieve better detection performance. Temporal differencing is designed to detect initial motion areas for the optical-flow calculation to produce real-time and accurate object motion vectors. The double background filtering method obtains and maintains a reliable background image to handle variations in environmental conditions; it is designed to remove background interference and separate the moving objects from it. Morphological processing methods are adopted and combined with the double background filtering to obtain improved results. The most attractive benefit of this algorithm is that it does not require building the background model from hundreds of images and can handle quick image variations without prior knowledge of the object size and shape.
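
    Two of the building blocks mentioned above, temporal differencing and morphological post-processing, can be sketched with OpenCV as follows. This is a generic illustration only (the optical-flow and double-background-filtering stages are omitted), and the threshold and kernel size are arbitrary assumptions.

```python
import cv2
import numpy as np

def temporal_difference_mask(prev_gray, curr_gray, thresh=25, kernel_size=5):
    """Binary motion mask from two consecutive grayscale frames:
    absolute difference, thresholding, then morphological clean-up."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove small noise blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                # or a video file path
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no video source available")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow("motion", temporal_difference_mask(prev_gray, gray))
        prev_gray = gray
        if cv2.waitKey(1) & 0xFF == 27:      # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```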

  4. New FPSoC-based architecture for efficient FSBM motion estimation processing in video standards

    Science.gov (United States)

    Canals, J. A.; Martínez, M. A.; Ballester, F. J.; Mora, A.

    2007-05-01

    Due to the timing constraints in real-time video encoding, hardware accelerator cores are used for video compression. System-on-Chip (SoC) design tools offer complex microprocessor system design methodologies with easy Intellectual Property (IP) core integration. This paper presents a PowerPC-based SoC with a motion-estimation accelerator core attached to the system bus. Motion-estimation (ME) algorithms are the most critical part of video compression due to the huge amount of data transfers and processing time. The main goal of our proposed architecture is to minimize the number of memory accesses, thus exploiting the bandwidth of a direct memory connection. This architecture has been developed using Xilinx XPS, a SoC platform design tool. The results show that our system is able to process the integer-pixel full search block matching (FSBM) motion estimation and interframe mode decision of a QCIF frame (176*144 pixels), using a 48*48 pixel search window, with an embedded PPC in a Xilinx Virtex-4 FPGA running at 100 MHz, in 1.5 ms, i.e. 4.5% of the total processing time at 30 fps.

  5. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard, Olivier; Delannay, Fabrice; Ricordel, Vincent; Barba, Dominique

    2007-01-01

    4 pages; International audience; Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  6. Video Image Block-matching Motion Estimation Algorithm Based on Two-step Search

    Institute of Scientific and Technical Information of China (English)

    Wei-qi JIN; Yan CHEN; Ling-xue WANG; Bin LIU; Chong-liang LIU; Ya-zhong SHEN; Gui-qing ZHANG

    2010-01-01

    Certain existing block-matching algorithms, such as the full search, three-step search, and diamond search algorithms, usually cannot keep a good balance between high accuracy and low computational complexity; to address this shortcoming, a block-matching motion estimation algorithm based on two-step search is proposed in this paper. Based on the fact that the gray values of adjacent pixels do not vary quickly, the algorithm employs an interlaced search pattern in the search window to estimate the motion vector of the object block. Simulations and actual experiments demonstrate that the proposed algorithm greatly outperforms the well-known three-step search and diamond search algorithms, regardless of whether the motion vector is large or small. Compared with the full search algorithm, the proposed one achieves similar performance but requires much less computation; therefore, the algorithm is well qualified for real-time video image processing.

  7. Combination Restoration for Motion-blurred Color Videos under Limited Transmission Bandwidth

    Directory of Open Access Journals (Sweden)

    Shi Li

    2009-10-01

    Full Text Available Color video images degraded deterministically by motion blur can be restored in real time by the new algorithm, which combines color components to fit within the limited transmission bandwidth. The motion point spread function (PSF) of each plane of a YUV422 image can be obtained from the color space conversion model. The Y, U, and V planes are packed to construct a two-dimensional complex array. Through frequency-domain decomposition, the Y, U, and V spectra can each be obtained by performing a single Fourier transform on this complex array. The resulting spectra are filtered by a Wiener filter to generate the final restored images. The proposed algorithm can restore 1024x1024 24-bit motion-blurred color video images at 18 ms/frame on a GPU, and the PSNR of the restored frame is 31.45 dB. Experimental results show that the proposed algorithm runs 3x faster than the traditional algorithm and reduces the bandwidth of the video data stream by one third.
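
    The Wiener filtering step applied to each plane's spectrum can be illustrated for a single channel as below. The sketch assumes a known horizontal linear-motion PSF and a scalar noise-to-signal ratio K; it is a conventional frequency-domain Wiener deconvolution, not the authors' GPU implementation or their YUV422 packing scheme.

```python
import numpy as np

def linear_motion_psf(shape, length=9):
    """Horizontal linear-motion blur kernel, zero-padded to the image shape."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deconvolve(blurred, psf, K=0.01):
    """Frequency-domain Wiener filter: X = conj(H) / (|H|^2 + K) * Y."""
    H = np.fft.fft2(psf)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + K) * Y
    return np.real(np.fft.ifft2(X))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.random((256, 256))
    psf = linear_motion_psf(img.shape, length=9)
    # Simulate blur by circular convolution in the frequency domain
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
    restored = wiener_deconvolve(blurred, psf, K=1e-3)
    print("restoration error:", np.mean((restored - img) ** 2))
```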

  8. Automatic Video-based Analysis of Human Motion

    DEFF Research Database (Denmark)

    Fihl, Preben

    received great interest from both industry and research communities. The focus of this thesis is on video-based analysis of human motion and the thesis presents work within three overall topics, namely foreground segmentation, action recognition, and human pose estimation. Foreground segmentation is often...... foreground camouflage, shadows, and moving backgrounds. The method continuously updates the background model to maintain high quality segmentation over long periods of time. Within action recognition the thesis presents work on both recognition of arm gestures and gait types. A key-frame based approach...... range of gait which deals with an inherent ambiguity of gait types. Human pose estimation does not target a specific action but is considered as a good basis for the recognition of any action. The pose estimation work presented in this thesis is mainly concerned with the problems of interacting people...

  9. Video Waterscrambling: Towards a Video Protection Scheme Based on the Disturbance of Motion Vectors

    Directory of Open Access Journals (Sweden)

    Yann Bodo

    2004-10-01

    Full Text Available With the popularity of high-bandwidth modems and peer-to-peer networks, the contents of videos must be highly protected from piracy. Traditionally, the models utilized to protect this kind of content are scrambling and watermarking. While the former protects the content against eavesdropping (a priori protection, the latter aims at providing a protection against illegal mass distribution (a posteriori protection. Today, researchers agree that both models must be used conjointly to reach a sufficient level of security. However, scrambling works generally by encryption resulting in an unintelligible content for the end-user. At the moment, some applications (such as e-commerce may require a slight degradation of content so that the user has an idea of the content before buying it. In this paper, we propose a new video protection model, called waterscrambling, whose aim is to give such a quality degradation-based security model. This model works in the compressed domain and disturbs the motion vectors, degrading the video quality. It also allows embedding of a classical invisible watermark enabling protection against mass distribution. In fact, our model can be seen as an intermediary solution to scrambling and watermarking.

  10. A Motion Compensated Lifting Wavelet Codec for 3D Video Coding

    Institute of Scientific and Technical Information of China (English)

    LUO Lin(罗琳); LI Jin(李劲); LI ShiPeng(李世鹏); ZHUANG ZhenQuan(庄镇泉)

    2003-01-01

    A motion compensated lifting (MCLIFT) framework for 3D wavelet video coding is proposed in this paper. By using bi-directional motion compensation in each lifting step of the temporal direction, the video frames are effectively de-correlated. With proper entropy coding and bit-stream packaging schemes, the MCLIFT wavelet video coder is scalable at frame rate and quality level. Experimental results show that the MCLIFT video coder outperforms the 3D wavelet video coder without motion by an average of 0.9-1.3 dB, and outperforms the MPEG-4 coder by an average of 0.2-0.6 dB.

  11. Flexible three-band motion-compensated temporal filtering for scalable video coding

    Institute of Scientific and Technical Information of China (English)

    WANG Yong-yu; SUN Qu; YUAN Chao-wei

    2009-01-01

    A novel scheme for scalable video coding using a three-band lifting-based motion-compensated transform is presented in this article. A series of flexible three-band motion-compensated lifting steps is used to implement the temporal wavelet transform, which provides improved compression performance by selecting a specific motion model according to the real video sequence, and offers more flexible temporal scalability through the three-band lifting steps. Experimental results on standard video sequences, compared with the Moving Picture Experts Group (MPEG)-4 codec, demonstrate the effectiveness of the method.

  12. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the exigent tasks in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.

  13. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.

  14. The temporomandibular joint in video motion--noninvasive image techniques to present the functional anatomy.

    Science.gov (United States)

    Kordass, B

    1999-01-01

    Presenting the functional anatomy of the temporomandibular joint (TMJ) involves difficulties when dynamic aspects are of prime interest and should be demonstrated at the highest resolution. Usually, noninvasive techniques like MRI and sonography are available for presenting the functionality of the temporomandibular joint in video motion. Such images reflect the functional anatomy much better than single pictures or figures could. In combination with computer-aided records of the condyle movements, the video motion of MR and sonographic images represents a tool for better understanding the relationships between functional or dysfunctional patterns and the morphological or dysmorphological shape and structure of the temporomandibular joint. The possibilities of such tools are explained and discussed in detail, relating in addition to loading effects caused by occlusal pressure transmitted onto the joint compartments. If pressure occurs, the condyle slides mainly more or less retrocranially, whereas the articular disc takes up a more displaced position and a deformed shape. In a few extreme cases, the disc prolapses out of the joint space. These video pictures offer new aspects for the diagnosis of disc-condyle stability and can also be used for explicit educational programs on the complex dysfunction-dysmorphology relationship of temporomandibular diseases.

  15. Key Issues in Modeling of Complex 3D Structures from Video Sequences

    Directory of Open Access Journals (Sweden)

    Shengyong Chen

    2012-01-01

    Full Text Available Construction of three-dimensional structures from video sequences has wide applications for intelligent video analysis. This paper summarizes the key issues of the theory and surveys the recent advances in the state of the art. Reconstruction of a scene object from video sequences often takes the basic principle of structure from motion with an uncalibrated camera. This paper lists the typical strategies and summarizes the typical solutions or algorithms for modeling of complex three-dimensional structures. Open difficult problems are also suggested for further study.

  16. Video segmentation and classification for content-based storage and retrieval using motion vectors

    Science.gov (United States)

    Fernando, W. A. C.; Canagarajah, Cedric N.; Bull, David R.

    1998-12-01

    Video parsing is an important step in content-based indexing techniques, where the input video is decomposed into segments with uniform content. In video parsing, detection of scene changes is one of the approaches widely used for extracting key frames from the video sequence. In this paper, an algorithm based on motion vectors is proposed to detect sudden scene changes and gradual scene changes (camera movements such as panning, tilting and zooming). Unlike some of the existing schemes, the proposed scheme is capable of detecting both sudden and gradual changes in uncompressed as well as compressed domain video. It is shown that the resultant motion vector can be used to identify and classify gradual changes due to camera movements. Results show that the algorithm performed as well as histogram-based schemes with uncompressed video. The performance of the algorithm was also investigated with H.263 compressed video. The detection and classification of both sudden and gradual scene changes were successfully demonstrated.

  17. Complex motions and chaos in nonlinear systems

    CERN Document Server

    Machado, José; Zhang, Jiazhong

    2016-01-01

    This book brings together 10 chapters on a new stream of research examining complex phenomena in nonlinear systems—including engineering, physics, and social science. Complex Motions and Chaos in Nonlinear Systems provides readers with a particular vantage point on the nature of nonlinear phenomena in nonlinear dynamics, from which the corresponding mathematical theory can be developed and nonlinear design applied to practical engineering, as well as to the study of other complex phenomena, including those investigated within social science.

  18. An Adaptive Motion Segmentation for Automated Video Surveillance

    Directory of Open Access Journals (Sweden)

    Hossain MJulius

    2008-01-01

    Full Text Available This paper presents an adaptive motion segmentation algorithm utilizing spatiotemporal information from the three most recent frames. The algorithm initially extracts the moving edges by applying a novel flexible edge matching technique which makes use of a combined distance transformation image. Then a watershed-based iterative algorithm is employed to segment the moving object region from the extracted moving edges. The challenges for existing three-frame-based methods include slow movement, edge localization error, minor camera movement, and homogeneity of the background and foreground regions. The proposed method represents edges as segments and uses a flexible edge matching algorithm to deal with edge localization error and minor camera movement. The combined distance transformation image favors accumulating gradient information in the overlapping region, which effectively improves the sensitivity to slow movement. The segmentation algorithm uses the watershed, gradient information of the difference image, and the extracted moving edges. It helps to segment the moving object region with a more accurate boundary, even if some parts of the moving edges cannot be detected during the detection step due to region homogeneity or other reasons. Experimental results using different types of video sequences are presented to demonstrate the efficiency and accuracy of the proposed method.
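
    The way three consecutive frames are exploited can be illustrated with the standard three-frame differencing rule: a pixel is marked as moving only if it differs from both the previous and the next frame, which suppresses the ghosting left by a simple two-frame difference. The sketch below shows only that generic rule with an arbitrary threshold; the edge matching, distance transformation and watershed stages of the paper are not reproduced.

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, thresh=20):
    """Binary motion mask for the middle frame of three consecutive grayscale frames."""
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16)) > thresh
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16)) > thresh
    return np.logical_and(d1, d2)          # moving in both frame differences

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    background = rng.integers(0, 50, size=(120, 160)).astype(np.uint8)
    frames = []
    for x in (40, 60, 80):                 # bright object sliding to the right
        f = background.copy()
        f[50:70, x:x + 20] = 220
        frames.append(f)
    mask = three_frame_difference(*frames)
    print("moving pixels:", int(mask.sum()))
```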

  19. Human body motion capture from multi-image video sequences

    Science.gov (United States)

    D'Apuzzo, Nicola

    2003-01-01

    In this paper is presented a method to capture the motion of the human body from multi image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self calibration methods are applied to gain exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A full automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points

  20. Video coding with lifted wavelet transforms and complementary motion-compensated signals

    Science.gov (United States)

    Flierl, Markus H.; Vandergheynst, Pierre; Girod, Bernd

    2004-01-01

    This paper investigates video coding with wavelet transforms applied in the temporal direction of a video sequence. The wavelets are implemented with the lifting scheme in order to permit motion compensation between successive pictures. We improve motion compensation in the lifting steps and utilize complementary motion-compensated signals. Similar to superimposed predictive coding with complementary signals, this approach improves compression efficiency. We investigate experimentally and theoretically complementary motion-compensated signals for lifted wavelet transforms. Experimental results with the complementary motion-compensated Haar wavelet and frame-adaptive motion compensation show improvements in coding efficiency of up to 3 dB. The theoretical results demonstrate that the lifted Haar wavelet scheme with complementary motion-compensated signals is able to approach the bound for bit-rate savings of 2 bits per sample and motion-accuracy step when compared to optimum intra-frame coding of the input pictures.

  1. Very Low Bit-Rate Video Coding Using Motion Compensated 3-D Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new motion-compensated 3-D wavelet transform (MC-3DWT) video coding scheme is presented in this paper. The new coding scheme has a good performance in average PSNR, compression ratio and visual quality of reconstructions compared with the existing 3-D wavelet transform (3DWT) coding methods and the motion-compensated 2-D wavelet transform (MC-WT) coding method. The new MC-3DWT coding scheme is suitable for very low bit-rate video coding.

  2. The influence of motion quality on responses towards video playback stimuli

    Directory of Open Access Journals (Sweden)

    Emma Ware

    2015-07-01

    Full Text Available Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour.

  3. A NOVEL APPROACH TO VIDEO COMPRESSION TECHNIQUE USING VARIABLE BLOCK SIZES IN MOTION ESTIMATION PROCESS

    Directory of Open Access Journals (Sweden)

    Vinith Chauhan

    2012-06-01

    Full Text Available Compression basically means reducing image data. As mentioned previously, a digitized analog video sequence can comprise up to 165 Mbps of data. To reduce the media overheads of distributing these sequences, the following techniques are commonly employed to achieve desirable reductions in image data: reducing color nuances within the image, reducing the color resolution with respect to the prevailing light intensity, removing small invisible parts of the picture, and comparing adjacent images and removing details that are unchanged between two images. The first three are image-based compression techniques, where only one frame is evaluated and compressed at a time. The last one is a video compression technique in which different adjacent frames are compared as a way to further reduce the image data. All of these techniques are based on an accurate understanding of how the human brain and eyes work together to form a complex visual system. As a result of these subtle reductions, a significant reduction in the resultant file size for the image sequences is achievable with little or no adverse effect on their visual quality. The extent to which these image modifications are humanly visible typically depends on the degree to which the chosen compression technique is used. Often 50% to 90% compression can be achieved with no visible difference, and in some scenarios even beyond 95%. In this paper, variable block sizes are used in the motion estimation process for video compression.

  4. The 3D Human Motion Control Through Refined Video Gesture Annotation

    Science.gov (United States)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the beginning of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles are replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games, and the representative game is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of releasing players from the intractable game controller. Moreover, in order to communicate between humans and computers, video-based HCI is very crucial since it is intuitive, easy to get, and inexpensive. However, extracting semantic low-level features from video human motion data is still a major challenge. The level of accuracy is strongly dependent on each subject's characteristics and environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Bear-Wolf movie) and analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip. Here, a column corresponds to a human's sub-body part and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike low-level feature values of video human motion, the entries of the 3D human motion-capture data matrix are not pixel values but are closer to the human level of semantics.

  5. Overview of AVS-video: tools, performance and complexity

    Science.gov (United States)

    Yu, Lu; Yi, Feng; Dong, Jie; Zhang, Cixun

    2005-07-01

    Audio Video coding Standard (AVS) is established by the Working Group of China of the same name. AVS-video is an application-driven coding standard. AVS Part 2 targets high-definition digital video broadcasting and high-density storage media, and AVS Part 7 targets low-complexity, low-picture-resolution mobility applications. Integer transform, intra- and inter-picture prediction, in-loop deblocking filter and context-based two-dimensional variable length coding are the major compression tools in AVS-video, which are well tuned for the target applications. It achieves similar performance to H.264/AVC with lower cost.

  6. Motion Entropy Feature and Its Applications to Event-Based Segmentation of Sports Video

    Directory of Open Access Journals (Sweden)

    Chen-Yu Chen

    2008-08-01

    Full Text Available An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed.
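
    A minimal version of the motion-entropy idea is to histogram a frame's block motion vectors and take the Shannon entropy of that histogram: low entropy for static or uniformly panning content, higher entropy when several objects move in different directions. The sketch below assumes the motion vectors are already available (e.g. from a block matcher) and uses an arbitrary angular binning; the homoscedastic change-point detection stage of the paper is not shown.

```python
import numpy as np

def motion_entropy(motion_vectors, n_bins=16):
    """Shannon entropy (bits) of the orientation histogram of a frame's motion vectors.
    motion_vectors: array of shape (N, 2) with (dy, dx) per block."""
    mv = np.asarray(motion_vectors, dtype=float)
    moving = mv[np.hypot(mv[:, 0], mv[:, 1]) > 0]     # ignore zero vectors
    if moving.size == 0:
        return 0.0
    angles = np.arctan2(moving[:, 0], moving[:, 1])   # orientation in (-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    pan = np.tile([0.0, 3.0], (100, 1))               # uniform pan: low entropy
    chaotic = rng.normal(size=(100, 2)) * 3           # scattered motion: high entropy
    print(motion_entropy(pan), motion_entropy(chaotic))
```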

  7. Efficient reduction of complex noise in passive millimeter-wavelength video utilizing Bayesian surprise

    Science.gov (United States)

    Mundhenk, T. Nathan; Baron, Josh; Matic, Roy M.

    2011-06-01

    Passive millimeter wavelength (PMMW) video holds great promise given its ability to see targets and obstacles through fog, smoke and rain. However, current imagers produce undesirable complex noise. This can come as a mixture of fast shot (snow like) noise and a slower forming circular fixed pattern. Shot noise can be removed by a simple gain style filter. However, this can produce blurring of objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise is feature change in time which is abrupt, but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of Non-uniformity correction (NUC) and Eigen Image Wavelet Transformation. The combination allows for online removal of time varying fixed pattern noise even when background motion may be absent. It also allows for online adaptation to differing intensities of fixed pattern noise. The fixed pattern and shot noise filters are all efficient allowing for real time video processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees and utility poles at 20 frames per second.

  8. Methods for efficient correction of complex noise in outdoor video rate passive millimeter wavelength imagery

    Science.gov (United States)

    Mundhenk, T. Nathan; Baron, Joshua; Matic, Roy M.

    2012-09-01

    Passive millimeter wavelength (PMMW) video holds great promise, given its ability to see targets and obstacles through fog, smoke, and rain. However, current imagers produce undesirable complex noise. This can come as a mixture of fast shot (snowlike) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain style filter. However, this can produce blurring of objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise measures feature change in time that is abrupt but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of non-uniformity correction and mean image wavelet transformation. The combination allows for online removal of time-varying fixed pattern noise, even when background motion may be absent. It also allows for online adaptation to differing intensities of fixed pattern noise. We also discuss a method for sharpening frames using deconvolution. The fixed pattern and shot noise filters are all efficient, which allows real time video processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees, and utility poles at 20 frames per second.

  9. Predictive-based cross line for fast motion estimation in MPEG-4 videos

    Science.gov (United States)

    Fang, Hui; Jiang, Jianmin

    2004-05-01

    Block-based motion estimation is widely used in the field of video compression due to its high processing speed and competitive compression efficiency. In the chain of compression operations, however, motion estimation still remains the most time-consuming process. As a result, any improvement in fast motion estimation will make practical applications of MPEG techniques more efficient and more sustainable in terms of both processing speed and computing cost. To meet the requirements of real-time compression of videos and image sequences, such as video conferencing, remote video surveillance and video phones, we propose a new search algorithm and achieve fast motion estimation for MPEG compression standards based on existing algorithm developments. To evaluate the proposed algorithm, we adopted MPEG-4 and the prediction line search algorithm as benchmarks when designing the experiments. Their performances are measured by: (i) reconstructed video quality; (ii) processing time. The results reveal that the proposed algorithm provides a competitive alternative to the existing prediction line search algorithm. In comparison with MPEG-4, the proposed algorithm shows significant advantages in terms of processing speed and video quality.

  10. Correction of spatially varying image and video motion blur using a hybrid camera.

    Science.gov (United States)

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.

  11. VIDEO OBJECT SEGMENTATION BY 2-D MESH-BASED MOTION ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Video object extraction is a key technology in content-based video coding. A novel video object extraction algorithm based on two-Dimensional (2-D) mesh-based motion analysis is proposed in this paper. Firstly, a 2-D mesh fitting the original frame image is obtained via a feature detection algorithm. Then, higher-order statistics motion analysis is applied on the 2-D mesh representation to get an initial motion detection mask. After post-processing, the final segmentation mask is quickly obtained, and hence the video object is effectively extracted. Experimental results show that the proposed algorithm combines the merits of mesh-based and pixel-based segmentation algorithms, and thereby achieves satisfactory subjective and objective performance while dramatically increasing the segmentation speed.

  12. Slow motion replay detection of tennis video based on color auto-correlogram

    Science.gov (United States)

    Zhang, Xiaoli; Zhi, Min

    2012-04-01

    In this paper, an effective slow motion replay detection method for tennis videos which contain logo transitions is proposed. The method is based on the theory of the color auto-correlogram and proceeds in the following steps: First, detect the candidate logo transition areas from the video frame sequence. Second, generate a logo template. Then use the color auto-correlogram for similarity matching between video frames and the logo template in the candidate logo transition areas. Finally, select logo frames according to the matching results and locate the borders of the slow motion accurately by using the brightness change during the logo transition process. Experiments show that, unlike previous approaches, this method greatly improves the border-locating accuracy rate and can also be used for other sports videos that have logo transitions. In addition, as the algorithm only processes the contents in the central area of the video frames, its speed is greatly improved.
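
    The color auto-correlogram used for template matching can be computed, for a given distance d, as the probability that a pixel at distance d from a pixel of color c also has color c. The sketch below is a straightforward, unoptimised implementation on a coarsely quantised RGB image using the eight neighbours along the horizontal, vertical and diagonal directions; the quantisation, distance set and the matching step itself are assumptions for illustration.

```python
import numpy as np

def quantize(img, levels=4):
    """Uniformly quantise an 8-bit RGB image to levels^3 colour bins."""
    q = (img.astype(int) * levels) // 256
    return q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]

def auto_correlogram(img, distances=(1, 3, 5), levels=4):
    """Colour auto-correlogram: P(neighbour at distance d has the same colour c),
    estimated from the 8 neighbours along horizontal/vertical/diagonal directions."""
    colours = quantize(img, levels)
    h, w = colours.shape
    n_colours = levels ** 3
    result = np.zeros((len(distances), n_colours))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for di, d in enumerate(distances):
        same = np.zeros(n_colours)
        valid = np.zeros(n_colours)
        for oy, ox in offsets:
            dy, dx = oy * d, ox * d
            a = colours[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
            b = colours[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)]
            np.add.at(same, a[a == b], 1)        # pixel and neighbour share a colour
            np.add.at(valid, a.ravel(), 1)       # all in-bounds pairs for this colour
        result[di] = np.divide(same, valid, out=np.zeros_like(same), where=valid > 0)
    return result

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
    cg = auto_correlogram(img)
    print(cg.shape)          # (3, 64): one row per distance, one column per colour bin
```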

  13. Analytical and ethical complexities in video game research

    DEFF Research Database (Denmark)

    Andersen, Mads Lund; Chimiri, Niklas Alexander; Søndergaard, Dorte Marie

    Session: Sociomaterial complexities in digital-analog spaces Abstract: Analytical and ethical complexities in video game research A central issue that video game research seldom explicitly articulates is the ethical complexities involved in its empirical and analytical work. The presentation...... explores common research questions posed and analytical foci chosen by video game researchers subscribing to either the media effects tradition, represented by (ref.) or to interdisciplinary Game Studies. Both fields, which tend to depict themselves as polar-opposites, build on ethical assumptions...... of theoretical or analytical arrogance. The relevance of acknowledging and situating ethical complexity becomes pertinent when alternatively taking a sociomaterial perspective on doing empirical and analytical work on video gaming. From an agential realist point of view, for instance, a researcher...

  14. Flexible synthesis of video frames based on motion hints.

    Science.gov (United States)

    Naman, Aous Thabit; Taubman, David

    2014-09-01

    In this paper, we propose the use of "motion hints" to produce interframe predictions. A motion hint is a loose and global description of motion that can be communicated using metadata; it describes a continuous and invertible motion model over multiple frames, spatially overlapping other motion hints. A motion hint provides a reasonably accurate description of motion but only a loose description of where it is applicable; it is the task of the client to identify the exact locations where this motion model is applicable. The focus of this paper is a probabilistic multiscale approach to identifying these locations of applicability; the method is robust to noise, quantization, and contrast changes. The proposed approach employs the Laplacian pyramid; it generates motion hint probabilities from observations at each scale of the pyramid. These probabilities are then combined across the scales of the pyramid starting from the coarsest scale. The computational cost of the approach is reasonable, and only the neighborhood of a pixel is employed to determine a motion hint probability, which makes parallel implementation feasible. This paper also elaborates on how motion hint probabilities are exploited in generating interframe predictions. The scheme of this paper is applicable to closed-loop prediction, but it is more useful in open-loop prediction scenarios, such as using prediction in conjunction with remote browsing of surveillance footage, communicated by a JPEG2000 Interactive Protocol (JPIP) server. We show that the interframe predictions obtained using the proposed approach are good both visually and in terms of PSNR.
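
    The multiscale machinery underlying the approach is the standard Laplacian pyramid, which can be built with OpenCV as below. This shows only the generic pyramid construction (Gaussian downsampling plus band-pass residuals) and its inversion, not the probability generation and combination scheme of the paper.

```python
import cv2
import numpy as np

def laplacian_pyramid(gray, levels=4):
    """Build a Laplacian pyramid: band-pass residuals plus the coarsest Gaussian level."""
    gauss = [gray.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))          # Gaussian blur + 2x downsample
    pyramid = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1])
        up = cv2.resize(up, (gauss[i].shape[1], gauss[i].shape[0]))
        pyramid.append(gauss[i] - up)                 # band-pass detail at scale i
    pyramid.append(gauss[-1])                         # low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid by upsampling and adding the details back."""
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img)
        img = cv2.resize(img, (detail.shape[1], detail.shape[0]))
        img = img + detail
    return img

if __name__ == "__main__":
    frame = (np.random.default_rng(7).random((240, 320)) * 255).astype(np.uint8)
    pyr = laplacian_pyramid(frame)
    rec = reconstruct(pyr)
    print("max reconstruction error:", float(np.abs(rec - frame).max()))
```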

  15. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    CERN Document Server

    Dettmer, Simon L; Pagliara, Stefano

    2014-01-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local...
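
    The local mean squared displacement (MSD)-versus-lag-time curve at the heart of the analysis can be computed from a tracked trajectory as below. This is a generic time-averaged MSD sketch, not the authors' localisation or drift-correction procedure; positions are assumed to be sampled at a fixed frame interval, and the demo trajectory is simulated free diffusion.

```python
import numpy as np

def mean_squared_displacement(positions, max_lag):
    """Time-averaged MSD(tau) for a trajectory of shape (T, dims)."""
    positions = np.asarray(positions, dtype=float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = positions[lag:] - positions[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    # Simulated free Brownian motion: D = 0.5 um^2/s, dt = 0.01 s, 3-D steps
    D, dt, steps = 0.5, 0.01, 5000
    displacements = rng.normal(scale=np.sqrt(2 * D * dt), size=(steps, 3))
    trajectory = np.cumsum(displacements, axis=0)
    msd = mean_squared_displacement(trajectory, max_lag=20)
    lags = np.arange(1, 21) * dt
    # For free diffusion in 3-D, MSD(tau) ~ 6*D*tau; a linear fit recovers D
    D_est = np.polyfit(lags, msd, 1)[0] / 6.0
    print("estimated D:", round(float(D_est), 3))
```

    In the hindered-diffusion setting of the paper, the same curve is computed over a local spatial bin, so the fitted slope becomes a position-dependent diffusion coefficient.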

  16. Summarizing motion contents of the video clip using moving edge overlaid frame (MEOF)

    Science.gov (United States)

    Yu, Tianli; Zhang, Yujin

    2001-12-01

    How to quickly and effectively exchange video information with the user is a major task for a video search engine's user interface. In this paper, we propose to use the Moving Edge Overlaid Frame (MEOF) image to summarize both the local object motion and the global camera motion information of a video clip in a single image. MEOF supplements the motion information that is generally dropped by the key frame representation, and it enables faster perception for the user than viewing the actual video. The key technology of our MEOF generating algorithm is global motion estimation (GME). In order to extract a precise global motion model from general video, our GME module operates in two stages: match-based initial GME and gradient-based GME refinement. The GME module also maintains a sprite image that is aligned with the new input frame in the background after the global motion compensation transform. The difference between the aligned sprite and the new frame is used to extract the masks that help to pick out the moving objects' edges. The sprite is updated with each input frame and the moving edges are extracted at a constant interval. After all the frames are processed, the extracted moving edges are overlaid onto the sprite according to their global motion displacement relative to the sprite and their temporal distance from the last frame, thus creating our MEOF image. Experiments show that the MEOF representation of the video clip helps the user acquire the motion knowledge much faster while remaining compact enough to serve the needs of online applications.

  17. Video object's behavior analyzing based on motion history image and hidden markov model

    Institute of Scientific and Technical Information of China (English)

    Meng Fanfeng; Qu Zhenshen; Zeng Qingshuang; Li li

    2009-01-01

    A novel method is proposed that extracts a video object's track and analyzes its behavior. Firstly, the method tracks the video object based on the motion history image and obtains the coordinate-based and orientation-based track sequences of the video object. Then the proposed hidden Markov model (HMM) based algorithm is used to analyze the behavior of the video object with the track sequence as input. Experimental results on traffic objects show that this method can efficiently compile statistics on the behavior of a large number of traffic objects, acquire reasonable velocity behavior curves, and accurately recognize various traffic object behaviors. It provides a basis for further research on video object behavior.
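
    The motion history image itself can be maintained with a simple update rule: pixels where motion is detected are stamped with the current timestamp, and pixels whose stamp is older than a fixed duration are cleared, so recent motion appears bright and older motion fades. The sketch below is a generic NumPy version of that rule with an arbitrary threshold and duration; the track extraction and HMM-based behavior analysis of the paper are not shown.

```python
import numpy as np

def update_mhi(mhi, prev_gray, curr_gray, timestamp, duration=1.0, thresh=25):
    """Update a floating-point motion history image in place and return it."""
    motion = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16)) > thresh
    mhi[motion] = timestamp                             # stamp freshly moving pixels
    mhi[~motion & (mhi < timestamp - duration)] = 0     # forget motion older than 'duration'
    return mhi

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    h, w = 120, 160
    mhi = np.zeros((h, w), dtype=np.float32)
    background = rng.integers(0, 40, size=(h, w)).astype(np.uint8)
    prev = background.copy()
    for t in range(1, 11):
        curr = background.copy()
        curr[40:60, 10 * t:10 * t + 20] = 200           # object at a new position each frame
        mhi = update_mhi(mhi, prev, curr, timestamp=t * 0.1)
        prev = curr
    print("pixels with recent motion:", int((mhi > 0.8).sum()))
```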

  18. Video compression using lapped transforms for motion estimation/compensation and coding

    Science.gov (United States)

    Young, Robert W.; Kingsbury, Nick G.

    1992-11-01

    Many conventional video coding schemes, such as the CCITT H.261 recommendation, are based on the independent processing of non-overlapping image blocks. An important disadvantage with this approach is that blocking artifacts may be visible in the decoded frames. In this paper, we propose a coding scheme based entirely on the processing of overlapping, windowed data blocks, thus eliminating blocking effects. Motion estimation and compensation are both performed in the frequency domain using a complex lapped transform (CLT), which may be viewed as a complex extension of the lapped orthogonal transform (LOT). The motion compensation algorithm is equivalent to overlapped compensation in the spatial domain, but also allows image interpolation for sub-pel displacements and sophisticated loop filters to be conveniently applied in the frequency domain. For inter- and intra-frame coding, we define the modified fast lapped transform (MFLT). This is a modified form of the LOT, which entirely eliminates blocking artifacts in the reconstructed data. The transform is applied in a hierarchical structure, and performs better than the discrete cosine transform (DCT) for both coding modes. The proposed coder is compared with the H.261 scheme, and is found to have significantly improved performance.

  19. Video compression using lapped transforms for motion estimation compensation and coding

    Science.gov (United States)

    Young, Robert W.; Kingsbury, Nick G.

    1993-07-01

    Many conventional video coding schemes, such as the CCITT H.261 recommendation, are based on the independent processing of nonoverlapping image blocks. An important disadvantage with this approach is that blocking artifacts may be visible in the decoded frames. We propose a coding scheme based entirely on the processing of overlapping, windowed data blocks, thus eliminating blocking effects. Motion estimation and, in part, compensation are performed in the frequency domain using a complex lapped transform (CLT), which can be viewed as a complex extension of the lapped orthogonal transform (LOT). The motion compensation algorithm is equivalent to overlapped compensation in the spatial domain, but also allows image interpolation for subpixel displacements and sophisticated loop filters to be conveniently applied in the frequency domain. For inter- and intraframe coding, we define the modified fast lapped transform (MFLT). This is a modified form of the LOT that entirely eliminates blocking artifacts in the reconstructed data. The transform is applied in a hierarchical structure, and performs better than the discrete cosine transform (DCT) for both coding modes. The proposed coder is compared with the H.261 scheme and is found to have significantly improved performance.

  20. Adapting the Streaming Video Based on the Estimated Motion Position

    Directory of Open Access Journals (Sweden)

    Hussein Muzahim Aziz

    2012-01-01

    Full Text Available In real-time video streaming, frames must meet their timing constraints, typically specified as deadlines. Wireless networks may suffer from bandwidth limitations. To reduce data transmission over wireless networks, we propose an adaptation technique on the server side that extracts a part of each video frame considered as a Region Of Interest (ROI) and drops the part outside the ROI from the frames that lie between reference frames. The estimated position of the ROI is computed using the Sum of Squared Differences (SSD) between consecutive frames. The region outside the ROI is reconstructed on the mobile side by linear interpolation between reference frames. We evaluate the proposed approach using Mean Opinion Score (MOS) measurements. MOS is used to evaluate two scenarios with equivalent encoding size: in the first scenario, users observe the original videos at a low bit rate, while in the second scenario users observe our proposed approach at a high bit rate. The results show that our technique significantly reduces the amount of data streamed over wireless networks, while the reconstruction mechanism provides acceptable video quality.
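    A minimal sketch of the server/mobile split described above, assuming grayscale NumPy frames: a block-wise SSD map locates the most active region (the ROI), and the dropped region of intermediate frames is rebuilt by linear interpolation between reference frames. The block size and the single-block ROI are simplifying assumptions.

```python
import numpy as np

def ssd_map(prev, curr, block=16):
    """Block-wise sum of squared differences between consecutive frames."""
    h, w = prev.shape
    bh, bw = h // block, w // block
    d = (curr.astype(np.float32) - prev.astype(np.float32)) ** 2
    return d[:bh * block, :bw * block].reshape(bh, block, bw, block).sum(axis=(1, 3))

def pick_roi(prev, curr, block=16):
    """Return the (row, col) block index with the largest SSD, i.e. the most motion."""
    m = ssd_map(prev, curr, block)
    return np.unravel_index(np.argmax(m), m.shape)

def reconstruct_outside_roi(ref_a, ref_b, t, n):
    """Linear interpolation between two reference frames for the dropped region
    of intermediate frame t out of n (t = 1 .. n-1)."""
    alpha = t / float(n)
    blend = (1 - alpha) * ref_a.astype(np.float32) + alpha * ref_b.astype(np.float32)
    return blend.astype(np.uint8)
```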

  1. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Institute of Scientific and Technical Information of China (English)

    Ryan DECKER; Joseph DONINI; William GARDNER; Jobin JOHN; Walter KOENIG

    2016-01-01

    This paper describes an approach to identify epicyclic and tricyclic motion during projectile flight caused by mass asymmetries in spin-stabilized projectiles. Flight video was captured following the launch of several M110A2E1 155 mm artillery projectiles. These videos were then analyzed using the automated flight video analysis method to obtain their initial position and orientation histories. Examination of the pitch and yaw histories clearly indicates that, in addition to the nutation and precession oscillations of epicyclic motion, an even faster wobble oscillation is present during each spin revolution, even though some of the oscillation amplitudes are smaller than 0.02 degree. The results are compared to a sequence of shots in which little appreciable mass asymmetry was present, where only the nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product-of-inertia measurements of the asymmetric projectiles.
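    For readers who want to reproduce this kind of frequency analysis, a hedged sketch follows: treating the sampled yaw and pitch histories as one complex angle and taking its FFT separates nutation, precession, and any faster per-revolution wobble into distinct spectral peaks. This is generic signal processing, not the paper's automated analysis tool.

```python
import numpy as np

def angular_motion_spectrum(pitch_deg, yaw_deg, dt):
    """Spectrum of the angular motion history.  Combining pitch and yaw into a
    complex angle preserves the sense of rotation, so nutation, precession and
    any faster per-revolution wobble show up as distinct spectral peaks."""
    xi = np.asarray(yaw_deg, dtype=float) + 1j * np.asarray(pitch_deg, dtype=float)
    xi = xi - xi.mean()                     # remove the mean pointing direction
    window = np.hanning(len(xi))
    spectrum = np.fft.fft(xi * window)
    freqs = np.fft.fftfreq(len(xi), d=dt)   # Hz
    return freqs, np.abs(spectrum)
```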

  2. A video-based system for hand-driven stop-motion animation.

    Science.gov (United States)

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  3. Reliability and Validity of Quantitative Video Analysis of Baseball Pitching Motion.

    Science.gov (United States)

    Oyama, Sakiko; Sosa, Araceli; Campbell, Rebekah; Correa, Alexandra

    2017-02-01

    Video recordings are used to quantitatively analyze pitchers' techniques. However, the reliability and validity of such analysis are unknown. The purpose of the study was to investigate the reliability and validity of joint and segment angles identified during a pitching motion using video analysis. Thirty high school baseball pitchers participated. The pitching motion was captured using 2 high-speed video cameras and a motion capture system. Two raters reviewed the videos to digitize the body segments to calculate 2-dimensional angles. The corresponding 3-dimensional angles were calculated from the motion capture data. Intrarater reliability, interrater reliability, and validity of the 2-dimensional angles were determined. The intrarater and interrater reliability of the 2-dimensional angles were high for most variables. The trunk contralateral flexion at maximum external rotation was the only variable with high validity. Trunk contralateral flexion at ball release, trunk forward flexion at foot contact and ball release, shoulder elevation angle at foot contact, and maximum shoulder external rotation had moderate validity. Two-dimensional angles at the shoulder, elbow, and trunk could be measured with high reliability. However, the angles are not necessarily anatomically correct, and thus use of quantitative video analysis should be limited to angles that can be measured with good validity.

  4. A Memory Hierarchy Model Based on Data Reuse for Full-Search Motion Estimation on High-Definition Digital Videos

    Directory of Open Access Journals (Sweden)

    Alba Sandyra Bezerra Lopes

    2012-01-01

    Full Text Available Motion estimation is the most complex module in a video encoder, requiring high processing throughput and high memory bandwidth, especially when the focus is high-definition video. The throughput problem can be solved by increasing the parallelism of the internal operations. The external memory bandwidth may be reduced using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed memory hierarchy model is based on a data reuse scheme that considers the features of the full search algorithm. The proposed memory hierarchy significantly reduces the external memory bandwidth required for the motion estimation process, and it provides a very high data throughput for the ME core. This throughput is necessary to achieve real time when processing high-definition videos. In the worst bandwidth scenario, this memory hierarchy reduces the external memory bandwidth by a factor of 578. A case study for the proposed hierarchy, using a 32×32 search window and an 8×8 block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels) using nearly 299 Mbytes per second of external memory bandwidth.
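    The kind of bandwidth saving quoted above can be pictured with back-of-envelope arithmetic. The sketch below compares off-chip traffic for full-search ME with and without search-window data reuse; the block size, search range, and the assumption that reuse loads each reference pixel roughly once per frame are illustrative and do not reproduce the paper's 578× figure.

```python
def me_offchip_traffic(width, height, fps, block=8, search=16, reuse=True):
    """Rough off-chip traffic (bytes per second, 8-bit luma) for full-search
    motion estimation.  Without reuse every block reloads its entire search
    window; with search-window reuse each reference pixel is loaded roughly
    once per frame.  Purely illustrative arithmetic."""
    blocks = (width // block) * (height // block)
    window = (2 * search + block) ** 2
    bytes_per_frame = width * height if reuse else blocks * window
    return bytes_per_frame * fps

naive = me_offchip_traffic(1920, 1080, 30, reuse=False)
reused = me_offchip_traffic(1920, 1080, 30, reuse=True)
print(f"reduction factor ~ {naive / reused:.0f}x")
```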

  5. Simplified Block Matching Algorithm for Fast Motion Estimation in Video Compression

    Directory of Open Access Journals (Sweden)

    M. Ezhilarasan

    2008-01-01

    Full Text Available Block matching motion estimation is one of the most important modules in the design of any video encoder. It consumes more than 85% of video encoding time due to the search for a candidate block in the search window of the reference frame. To minimize the search time of block matching, a simplified and efficient block matching algorithm for fast motion estimation is proposed. It has two steps: prediction and refinement. The prediction step considers the temporal correlation among successive frames and the direction of the previously processed frame to predict the motion vector of the candidate block. Different combinations of search points are considered in the refinement step, which further minimizes the search time. Experiments were conducted on various SIF and CIF video sequences. The performance of the algorithm was compared with existing fast block matching motion estimation algorithms used in recent video coding standards. The experimental results show that the algorithm provides a faster search with minimal distortion compared to optimal fast block matching motion estimation algorithms.
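    The prediction-plus-refinement idea described above can be sketched as follows, assuming grayscale NumPy frames and a SAD cost: the co-located motion vector from the previously processed frame serves as the prediction, and a small set of search points refines it. The refinement pattern and block size are illustrative choices, not the authors'.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def refine_mv(ref, cur, x, y, mv, block=16,
              offsets=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """Refinement step: test a small set of search points around the predicted
    motion vector and keep the best one (pattern chosen for illustration)."""
    h, w = ref.shape
    blk = cur[y:y + block, x:x + block]
    best, best_cost = mv, float('inf')
    for dx, dy in offsets:
        rx, ry = x + mv[0] + dx, y + mv[1] + dy
        if 0 <= rx <= w - block and 0 <= ry <= h - block:
            cost = sad(blk, ref[ry:ry + block, rx:rx + block])
            if cost < best_cost:
                best, best_cost = (mv[0] + dx, mv[1] + dy), cost
    return best

def predict_and_refine(ref, cur, prev_mvs, block=16):
    """Prediction step: reuse the co-located motion vector from the previously
    processed frame as the starting point, then refine it locally."""
    h, w = cur.shape
    mvs = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            pred = prev_mvs.get((x, y), (0, 0))
            mvs[(x, y)] = refine_mv(ref, cur, x, y, pred, block)
    return mvs
```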

  6. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs.

    Science.gov (United States)

    Lane, D M; Hill, S A; Huntingford, J L; Lafuente, P; Wall, R; Jones, K A

    2015-01-01

    Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs. This is largely because the financial, temporal and spatial requirements for existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs, 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, demonstrating each dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness and which limb(s) they felt represented the source of the lameness. Spearman's rho, Cramer's V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters' SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA values. Further research is required to determine if slow motion video will benefit SGA in other ways.

  7. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Norkin Andrey

    2007-01-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented as a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by 3D-DCT or a hybrid LOT+DCT 3D transform. The coding scheme is targeted at mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.
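    A hedged sketch of the description-splitting idea (not the full coder) using SciPy's block-wise 3D DCT: low-frequency "shaper" coefficients are kept in both descriptions, while the detail coefficients are divided between them in a checkerboard pattern over blocks. Quantization, entropy coding and the LOT variant are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_descriptions(frames, b=8, shaper=2):
    """Block-wise 3D-DCT split into two balanced descriptions: the low-frequency
    'shaper' coefficients go into both descriptions, while the detail
    coefficients alternate between them over blocks."""
    t, h, w = frames.shape
    desc0 = np.zeros(frames.shape, dtype=np.float64)
    desc1 = np.zeros(frames.shape, dtype=np.float64)
    for ti in range(0, t - b + 1, b):
        for yi in range(0, h - b + 1, b):
            for xi in range(0, w - b + 1, b):
                cube = frames[ti:ti + b, yi:yi + b, xi:xi + b].astype(np.float64)
                c = dctn(cube, norm='ortho')
                coarse = np.zeros_like(c)
                coarse[:shaper, :shaper, :shaper] = c[:shaper, :shaper, :shaper]
                detail = c - coarse
                first = ((ti // b + yi // b + xi // b) % 2) == 0
                c0 = coarse + detail if first else coarse
                c1 = coarse if first else coarse + detail
                desc0[ti:ti + b, yi:yi + b, xi:xi + b] = idctn(c0, norm='ortho')
                desc1[ti:ti + b, yi:yi + b, xi:xi + b] = idctn(c1, norm='ortho')
    return desc0, desc1
```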

  9. Motion Estimation Technique for Real Time Compressed Video Transmission

    Directory of Open Access Journals (Sweden)

    Prof. D. S. Maind

    2014-11-01

    Full Text Available Motion estimation is one of the most critical modules in a typical digital video encoder, and many implementation tradeoffs should be considered while designing such a module. Motion estimation can be defined as part of the inter coding technique. Inter coding refers to a mechanism of finding the correlation between two frames (still images) that are close to each other in order of occurrence, where one frame is called the reference frame and the other the current frame, and then encoding information that is a function of this correlation instead of the frame itself. This paper focuses on block matching algorithms, which come under feature/region matching. Block motion estimation algorithms are widely adopted by video coding standards, mainly due to their simplicity and good distortion performance. In full search, every candidate point is evaluated, so more time is taken to predict suitable motion vectors; based on this drawback, an adaptive search pattern is proposed. Optimization is proposed at the algorithm/code level for both encoder and decoder to make real-time H.264/AVC video encoding/decoding feasible on mobile devices for mobile multimedia applications. For the encoder, an improved motion estimation algorithm based on hexagonal pattern search is proposed, exploiting the temporal redundancy of a video sequence. For the decoder, a memory access minimization scheme is proposed at the code level and a fast interpolation scheme at the algorithm level.

  10. Rigid body motion analysis system for off-line processing of time-coded video sequences

    Science.gov (United States)

    Snow, Walter L.; Shortis, Mark R.

    1995-09-01

    Photogrammetry affords the only noncontact means of providing unambiguous six-degree-of- freedom estimates for rigid body motion analysis. Video technology enables convenient off- the-shelf capability for obtaining and storing image data at frame (30 Hz) or field (60 Hz) rates. Videometry combines these technologies with frame capture capability accessible to PCs to allow unavailable measurements critical to the study of rigid body dynamics. To effectively utilize this capability, however, some means of editing, post processing, and sorting substantial amounts of time coded video data is required. This paper discusses a prototype motion analysis system built around PC and video disk technology, which is proving useful in exploring applications of these concepts to rigid body tracking and deformation analysis. Calibration issues and user interactive software development associated with this project will be discussed, as will examples of measurement projects and data reduction.

  11. Analytical and ethical complexities in video game research

    DEFF Research Database (Denmark)

    Andersen, Mads Lund; Chimirri, Niklas Alexander; Søndergaard, Dorte Marie

    2016-01-01

    A central issue that video game research seldom explicitly articulates is the ethical complexities involved in its empirical and analytical work. The presentation explores common research questions posed and analytical foci chosen by video game researchers subscribing to either the media effects tradition or to interdisciplinary Game Studies. Both fields, which tend to depict themselves as standing in opposition to one another, build on ethical assumptions that are deeply engrained in their respective research questions, analytical concepts and methodological tools. However, these ethical presumptions are little addressed in their respective discussions. The relevance of acknowledging and situating ethical complexity becomes pertinent when alternatively taking a sociomaterial perspective on doing empirical and analytical work on video gaming. From an agential realist point of view, for instance...

  12. Temporal segmentation of video objects for hierarchical object-based motion description.

    Science.gov (United States)

    Fu, Yue; Ekin, Ahmet; Tekalp, A Murat; Mehrotra, Rajiv

    2002-01-01

    This paper describes a hierarchical approach for object-based motion description of video in terms of object motions and object-to-object interactions. We present a temporal hierarchy for object motion description, which consists of low-level elementary motion units (EMU) and high-level action units (AU). Likewise, object-to-object interactions are decomposed into a hierarchy of low-level elementary reaction units (ERU) and high-level interaction units (IU). We then propose an algorithm for temporal segmentation of video objects into EMUs, whose dominant motion can be described by a single representative parametric model. The algorithm also computes a representative (dominant) affine model for each EMU. We also provide algorithms for identification of ERUs and for classification of the type of ERUs. Experimental results demonstrate that segmenting the life-span of video objects into EMUs and ERUs facilitates the generation of high-level visual summaries for fast browsing and navigation. At present, the formation of high-level action and interaction units is done interactively. We also provide a set of query-by-example results for low-level EMU retrieval from a database based on similarity of the representative dominant affine models.
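    The "representative dominant affine model" per EMU mentioned above can be illustrated with a simple least-squares fit of a 6-parameter affine motion model to dense optical flow inside the object mask; the residual of this fit is one plausible cue for temporal segmentation. This is a generic sketch, not the paper's segmentation algorithm.

```python
import numpy as np

def fit_affine_motion(flow, mask):
    """Least-squares fit of a 6-parameter affine model to dense optical flow
    inside an object mask:  u = a1 + a2*x + a3*y,  v = a4 + a5*x + a6*y."""
    ys, xs = np.nonzero(mask)
    A = np.column_stack([np.ones(len(xs)), xs, ys]).astype(float)
    u = flow[ys, xs, 0].astype(float)
    v = flow[ys, xs, 1].astype(float)
    ax, *_ = np.linalg.lstsq(A, u, rcond=None)
    ay, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.concatenate([ax, ay])  # (a1, a2, a3, a4, a5, a6)

def affine_fit_residual(flow, mask, params):
    """Mean residual of the fitted model; a sustained jump in this residual
    over time is one plausible cue for starting a new elementary motion unit."""
    ys, xs = np.nonzero(mask)
    A = np.column_stack([np.ones(len(xs)), xs, ys]).astype(float)
    res_u = flow[ys, xs, 0] - A @ params[:3]
    res_v = flow[ys, xs, 1] - A @ params[3:]
    return float(np.hypot(res_u, res_v).mean())
```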

  13. Enabling Error-Resilient Internet Broadcasting using Motion Compensated Spatial Partitioning and Packet FEC for the Dirac Video Codec

    Directory of Open Access Journals (Sweden)

    Myo Tun

    2008-06-01

    Full Text Available Video transmission over wireless or wired networks requires protection from channel errors, since compressed video bitstreams are very sensitive to transmission errors because of the use of predictive coding and variable length coding. In this paper, a simple, low-complexity and patent-free error-resilient coding scheme is proposed. It is based on the idea of applying spatial partitioning to the motion compensated residual frame without employing transform coefficient coding. The proposed scheme is intended for the open source Dirac video codec, in order to enable the codec to be used for Internet broadcasting. By partitioning the wavelet transform coefficients of the motion compensated residual frame into groups and independently processing each group using arithmetic coding and Forward Error Correction (FEC), robustness to transmission errors over a packet erasure wired network can be achieved. Using the Rate Compatible Punctured Code (RCPC) and Turbo Code (TC) as the FEC, the proposed technique provides gracefully decreasing perceptual quality over packet loss rates up to 30%. The PSNR performance is much better when compared with conventional data partitioning only methods. Simulation results show that the use of multiple partitioning of wavelet coefficients in Dirac can achieve up to 8 dB PSNR gain over the existing un-partitioned method.
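    RCPC and Turbo codes are beyond a short example, but the packet-erasure protection idea can be shown with a single XOR parity packet per group, which recovers any one lost packet. This is a deliberately simplified stand-in for the FEC used in the paper.

```python
import numpy as np

def add_parity_packet(packets):
    """Append one XOR parity packet to a group of equal-length uint8 packets;
    any single erased packet in the group can then be recovered."""
    parity = np.bitwise_xor.reduce(np.stack(packets), axis=0)
    return list(packets) + [parity]

def recover_lost_packet(received, lost_index):
    """Rebuild the single erased packet by XOR-ing all surviving packets
    (the parity packet included); the entry at lost_index is ignored."""
    survivors = [p for i, p in enumerate(received) if i != lost_index]
    return np.bitwise_xor.reduce(np.stack(survivors), axis=0)
```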

  14. Console video games, postural activity, and motion sickness during passive restraint.

    Science.gov (United States)

    Chang, Chih-Hui; Pan, Wu-Wen; Chen, Fu-Chen; Stoffregen, Thomas A

    2013-08-01

    We examined the influence of passive restraint on postural activity and motion sickness in individuals who actively controlled a potentially nauseogenic visual motion stimulus (a driving video game). Twenty-four adults (20.09 ± 1.56 years; 167.80 ± 7.94 cm; 59.02 ± 9.18 kg) were recruited as participants. Using elastic bands, standing participants were passively restrained at the head, shoulders, hips, and knees. During restraint, participants played (i.e., controlled) a driving video game (a motorcycle race), for 50 min. During game play, we recorded the movement of the head and torso, using a magnetic tracking system. Following game play, participants answered a forced choice, yes/no question about whether they were motion sick, and were assigned to sick and well groups on this basis. In addition, before and after game play, participants completed the Simulator Sickness Questionnaire, which provided numerical ratings of the severity of individual symptoms. Five of 24 participants (20.83 %) reported motion sickness. Participants moved despite being passively restrained. Both the magnitude and the temporal dynamics of movement differed between the sick and well groups. The results show that passive restraint of the body can reduce motion sickness when the nauseogenic visual stimulus is under participants' active control and confirm that motion sickness is preceded by distinct patterns of postural activity even during passive restraint.

  15. 36 CFR 1254.88 - What are the rules for the Motion Picture, Sound, and Video Research Room at the National...

    Science.gov (United States)

    2010-07-01

    ... Motion Picture, Sound, and Video Research Room at the National Archives at College Park? 1254.88 Section... to Using Copying Equipment § 1254.88 What are the rules for the Motion Picture, Sound, and Video.... (c) We provide you with a copy of the Motion Picture, Sound, and Video Research Room rules and...

  16. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  17. Integrating Illumination, Motion, and Shape Models for Robust Face Recognition in Video

    Directory of Open Access Journals (Sweden)

    Keyur Patel

    2008-05-01

    Full Text Available The use of video sequences for face recognition has been relatively less studied compared to image-based approaches. In this paper, we present an analysis-by-synthesis framework for face recognition from video sequences that is robust to large changes in facial pose and lighting conditions. This requires tracking the video sequence, as well as recognition algorithms that are able to integrate information over the entire video; we address both these problems. Our method is based on a recently obtained theoretical result that can integrate the effects of motion, lighting, and shape in generating an image using a perspective camera. This result can be used to estimate the pose and structure of the face and the illumination conditions for each frame in a video sequence in the presence of multiple point and extended light sources. We propose a new inverse compositional estimation approach for this purpose. We then synthesize images using the face model estimated from the training data corresponding to the conditions in the probe sequences. Similarity between the synthesized and the probe images is computed using suitable distance measurements. The method can handle situations where the pose and lighting conditions in the training and testing data are completely disjoint. We show detailed performance analysis results and recognition scores on a large video dataset.

  18. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    user supervision and calibration. First a multi-scale image processing method is applied on the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is physically connected and manipulated by a family of unsupervised learning models and techniques, respectively. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel number of the video frame) mode shapes from line of sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output (video measurements)-only identification and visualization of the weakly-excited mode is demonstrated and several issues with its implementation are discussed.
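    A very rough stand-in for the pipeline sketched above: build the full-field spatiotemporal matrix from pixel time series and factor it with an SVD to obtain candidate mode shapes and modal coordinates. The paper instead extracts local pixel phases with multi-scale complex filters and applies a family of unsupervised learning models; the version below only illustrates the "high-spatial, low-modal dimensional" decomposition step.

```python
import numpy as np

def full_field_modes(video, n_modes=3):
    """Treat each pixel's intensity time series as a motion signal, assemble
    the full-field spatiotemporal matrix, and use an SVD to separate candidate
    mode shapes (spatial singular vectors) from modal coordinates (temporal
    ones).  A simplified stand-in for the phase-based pipeline of the paper."""
    T = video.shape[0]
    X = video.reshape(T, -1).astype(np.float64)
    X -= X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modal_coords = U[:, :n_modes] * S[:n_modes]   # time histories
    mode_shapes = Vt[:n_modes]                    # full-field shapes
    return modal_coords, mode_shapes
```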

  19. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for rate, distortion and slope of the R-D curve for inter and intra frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  20. Variable structure multiple model for articulated human motion tracking from monocular video sequences

    Institute of Scientific and Technical Information of China (English)

    HAN Hong; TONG MingLei; CHEN ZhiChao; FAN YouJian

    2012-01-01

    A new model-based human body tracking framework with learning-based theory is introduced in this paper. We propose a variable structure multiple model (VSMM) framework to address challenging problems such as uncertainty of motion styles, imprecise detection of feature points, and ambiguity of joint locations. Key human joint points are detected automatically and the undetected points are estimated with Kalman filters. Multiple motion models are learned from motion capture data using a ridge regression method. The model set that covers the total motion set is designed on the basis of topological and compatibility relationships, while the VSMM algorithm is used to estimate quaternion vectors of joint rotation. Experiments using real image sequences and simulation videos demonstrate the high efficiency of our proposed human tracking framework.

  1. Joint redundant motion vector and intra macroblock refreshment for video transmission

    Directory of Open Access Journals (Sweden)

    Tillo Tammam

    2011-01-01

    Full Text Available This paper proposes a scheme for error-resilient transmission of video that jointly uses intra macroblock refreshment and redundant motion vectors. The selection between intra refreshment and a redundant motion vector is determined by a rate-distortion optimization procedure. The end-to-end distortion is used for the rate-distortion optimization and can be easily calculated with the recursive optimal per-pixel estimate (ROPE) method. Simulation results show that the proposed method significantly outperforms both the intra refreshment approach and the redundant motion vector approach when the two approaches are deployed separately. Specifically, for the Foreman sequence, the average PSNR of the proposed approach can be 1.12 dB higher than that of the intra refreshment approach and 5 dB higher than that of the redundant motion vector approach.
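    The joint selection can be pictured as a per-macroblock Lagrangian comparison, sketched below with the estimated end-to-end distortions (for example from ROPE) and bit costs assumed to be given; the function and its arguments are illustrative, not the paper's encoder interface.

```python
def choose_mb_mode(d_intra, r_intra, d_rmv, r_rmv, lam):
    """Per-macroblock rate-distortion decision between intra refreshment and a
    redundant motion vector.  d_* are estimated end-to-end distortions (e.g.
    from ROPE), r_* are bit costs, and lam is the Lagrange multiplier."""
    j_intra = d_intra + lam * r_intra
    j_rmv = d_rmv + lam * r_rmv
    return 'intra_refresh' if j_intra <= j_rmv else 'redundant_mv'
```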

  2. Multi-scale AM-FM motion analysis of ultrasound videos of carotid artery plaques

    Science.gov (United States)

    Murillo, Sergio; Murray, Victor; Loizou, C. P.; Pattichis, C. S.; Pattichis, Marios; Barriga, E. Simon

    2012-03-01

    An estimated 82 million American adults have one or more types of cardiovascular disease (CVD). CVD is the leading cause of death (1 of every 3 deaths) in the United States. When considered separately from other CVDs, stroke ranks third among all causes of death behind diseases of the heart and cancer. Stroke accounts for 1 out of every 18 deaths and is the leading cause of serious long-term disability in the United States. Motion estimation of ultrasound videos (US) of carotid artery (CA) plaques provides important information regarding plaque deformation that should be considered for distinguishing between symptomatic and asymptomatic plaques. In this paper, we present the development of verifiable methods for the estimation of plaque motion. Our methodology is tested on a set of 34 (5 symptomatic and 29 asymptomatic) ultrasound videos of carotid artery plaques. Plaque and wall motion analysis provides information about plaque instability and is used in an attempt to differentiate between symptomatic and asymptomatic cases. The final goal for motion estimation and analysis is to identify pathological conditions that can be detected from motion changes due to changes in tissue stiffness.

  3. A Hierarchical Framework Combining Motion and Feature Information for Infrared-Visible Video Registration

    Science.gov (United States)

    Sun, Xinglong; Xu, Tingfa; Zhang, Jizhou; Li, Xiangmin

    2017-01-01

    In this paper, we propose a novel hierarchical framework that combines motion and feature information to implement infrared-visible video registration on nearly planar scenes. In contrast to previous approaches, which involve the direct use of feature matching to find the global homography, the framework adds coarse registration based on the motion vectors of targets to estimate scale and rotation prior to matching. In precise registration based on keypoint matching, the scale and rotation are used in re-location to eliminate their impact on targets and keypoints. To strictly match the keypoints, first, we improve the quality of keypoint matching by using normalized location descriptors and descriptors generated by the histogram of edge orientation. Second, we remove most mismatches by counting the matching directions of correspondences. We tested our framework on a public dataset, where our proposed framework outperformed two recently-proposed state-of-the-art global registration methods in almost all tested videos. PMID:28212350
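    The precise-registration stage relies on strictly matched keypoints followed by a global homography. The sketch below shows only a generic matching-plus-RANSAC step with OpenCV; the paper's coarse motion-vector stage, re-location, normalized location descriptors and edge-orientation histograms are not reproduced.

```python
import cv2
import numpy as np

def match_and_estimate_homography(img_ir, img_vis, ratio=0.75):
    """Generic keypoint matching plus RANSAC homography estimation between an
    infrared and a visible frame (plain ORB stands in for the paper's custom
    descriptors)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_ir, None)
    kp2, des2 = orb.detectAndCompute(img_vis, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # At least 4 good matches are needed for a homography.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```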

  4. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thereby avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions including indoor and outdoor environments, varying illumination and presence of in-scene motion on varying computational platforms.

  5. Perception of complex motion in humans and pigeons (Columba livia).

    Science.gov (United States)

    Nankoo, Jean-François; Madan, Christopher R; Spetch, Marcia L; Wylie, Douglas R

    2014-06-01

    In the primate visual system, local motion signals are pooled to create a global motion percept. Like primates, many birds are highly dependent on vision for their survival, yet relatively little is known about motion perception in birds. We used random-dot stimuli to investigate pigeons' ability to detect complex motion (radial, rotation, and spiral) compared to humans. Our human participants had a significantly lower threshold for rotational and radial motion when compared to spiral motion. The data from the pigeons, however, showed that the pigeons were most sensitive to rotational motion and least sensitive to radial motion, while sensitivity for spiral motion was intermediate. We followed up the pigeon results with an investigation of the effect of display aperture shape for rotational motion and velocity gradient for radial motion. We found no effect of shape of the aperture on thresholds, but did observe that radial motion containing accelerating dots improved thresholds. However, this improvement did not reach the thresholds levels observed for rotational motion. In sum, our experiments demonstrate that the pooling mechanism in the pigeon motion system is most efficient for rotation.

  6. A Simple and High Performing Rate Control Initialization Method for H.264 AVC Coding Based on Motion Vector Map and Spatial Complexity at Low Bitrate

    Directory of Open Access Journals (Sweden)

    Yalin Wu

    2014-01-01

    Full Text Available The temporal complexity of video sequences can be characterized by a motion vector map consisting of the motion vectors of each macroblock (MB). In order to obtain the optimal initial QP (quantization parameter) for video sequences with different spatial and temporal complexities, this paper proposes a simple and high-performance method, based on the motion vector map and the temporal complexity, to decide an initial QP for a given target bit rate. The proposed algorithm produces reconstructed video sequences with outstanding and stable quality. For any video sequence, the initial QP can be easily determined from matrices indexed by target bit rate and mapped spatial complexity using the proposed mapping method. Experimental results show that the proposed algorithm achieves better objective and subjective performance than other conventional determining methods.
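    One way to picture the proposed mapping is sketched below: a temporal-complexity measure derived from the motion vector map and a spatial-complexity measure index a small QP table. The table values and bin edges are invented for illustration; the paper derives its own matrices per target bit rate.

```python
import numpy as np

def temporal_complexity(motion_vectors):
    """Mean motion-vector magnitude over all macroblocks as a simple measure of
    temporal complexity (the motion vector map of the abstract)."""
    mv = np.asarray(motion_vectors, dtype=float)
    return np.hypot(mv[:, 0], mv[:, 1]).mean()

# Purely illustrative lookup: rows index quantized temporal complexity,
# columns index quantized spatial complexity.  These numbers are made up.
QP_TABLE = np.array([[28, 30, 32],
                     [32, 34, 36],
                     [36, 38, 40]])

def initial_qp(t_complexity, s_complexity, t_edges=(2.0, 6.0), s_edges=(20.0, 60.0)):
    """Map the two complexity measures to an initial QP via the lookup table."""
    ti = int(np.digitize(t_complexity, t_edges))
    si = int(np.digitize(s_complexity, s_edges))
    return int(QP_TABLE[ti, si])
```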

  7. Stability of Synchronized Motion in Complex Networks

    CERN Document Server

    Pereira, Tiago

    2011-01-01

    We give a succinct and self-contained description of synchronized motion on networks of mutually coupled oscillators. Usually, the criterion for the stability of synchronized motion is obtained in terms of Lyapunov exponents. We consider the fully diffusive case, which is amenable to treatment in terms of uniform contractions. This approach provides a rigorous, yet clear and concise, way to the important results.

  8. Multi-modal gesture recognition using integrated model of motion, audio and video

    Science.gov (United States)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed by using dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. Recognition results of three models are integrated by using the proposed framework and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Model. Random Forest which is the video classifier is used to learn the video model. In the experiments to test the performances of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on dataset provided by the competition organizer of MMGRC, which is a workshop for Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement of recognition accuracy means that the complementary relationship among three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.

  9. Multi-modal Gesture Recognition using Integrated Model of Motion, Audio and Video

    Institute of Scientific and Technical Information of China (English)

    GOUTSU Yusuke; KOBAYASHI Takaki; OBARA Junya; KUSAJIMA Ikuo; TAKEICHI Kazunari; TAKANO Wataru; NAKAMURA Yoshihiko

    2015-01-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed by using dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. Recognition results of three models are integrated by using the proposed framework and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Model. Random Forest which is the video classifier is used to learn the video model. In the experiments to test the performances of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on dataset provided by the competition organizer of MMGRC, which is a workshop for Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement of recognition accuracy means that the complementary relationship among three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.

  10. An approach to detecting abnormal vehicle events in complex factors over highway surveillance video

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The detection of abnormal vehicle events is a research hotspot in the analysis of highway surveillance video. Because of complex factors, including different conditions of weather, illumination, and noise, vehicle feature extraction and abnormality detection become difficult. This paper proposes a Fast Constrained Delaunay Triangulation (FCDT) algorithm to replace complicated segmentation algorithms for multi-feature extraction. Based on the video frames segmented by FCDT, an improved algorithm is presented to estimate the background self-adaptively. After the estimation, a multi-feature eigenvector is generated by Principal Component Analysis (PCA) in accordance with the static and motion features extracted by locating and tracking each vehicle. For abnormality detection, adaptive detection modeling of vehicle events (ADMVE) is presented, for which a semi-supervised Mixture of Gaussian Hidden Markov Model (MGHMM) is trained with the multi-feature eigenvectors from each video segment. The normal model is developed in supervised mode with manual labeling and becomes more accurate via iterated adaptation. The abnormal models are trained through adapted Bayesian learning in unsupervised mode. The paper also presents experiments using real video sequences to verify the proposed method.

  11. Block-classified bidirectional motion compensation scheme for wavelet-decomposed digital video

    Energy Technology Data Exchange (ETDEWEB)

    Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Y.Q. [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States)

    1997-08-01

    In this paper the authors introduce a block-classified bidirectional motion compensation scheme for the previously developed wavelet-based video codec, where multiresolution motion estimation is performed in the wavelet domain. The frame classification structure described in this paper is similar to that used in the MPEG standard. Specifically, the I-frames are intraframe coded, the P-frames are interpolated from a previous I- or a P-frame, and the B-frames are bidirectional interpolated frames. They apply this frame classification structure to the wavelet domain with variable block sizes and multiresolution representation. They use a symmetric bidirectional scheme for the B-frames and classify the motion blocks as intraframe, compensated either from the preceding or the following frame, or bidirectional (i.e., compensated based on which type yields the minimum energy). They also introduce the concept of F-frames, which are analogous to P-frames but are predicted from the following frame only. This improves the overall quality of the reconstruction in a group of pictures (GOP) but at the expense of extra buffering. They also study the effect of quantization of the I-frames on the reconstruction of a GOP, and they provide intuitive explanation for the results. In addition, the authors study a variety of wavelet filter-banks to be used in a multiresolution motion-compensated hierarchical video codec.

  12. Complex software training: Harnessing and optimizing video instruction

    NARCIS (Netherlands)

    Brar, Jagvir; van der Meij, Hans

    2017-01-01

    This article investigates the design and effect of optimized video for statistics instruction. In addition, the use of video reviews to further optimize video instruction is examined. A Demonstration-Based Training (DBT) model was proposed and followed for the construction of the video. The videos w

  14. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm has been proposed based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) and hierarchical search MP (Hierarchical search or Mean pyramid). All motion estimation algorithms have been implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were speed, peak signal to noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal to noise ratio for different video sequences shows both better and worse results than those of known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it suitable for telecommunication systems for multimedia data storage, transmission and processing.

  15. Adaptive mode decision with residual motion compensation for distributed video coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren; Slowack, Jurgen;

    2015-01-01

    Distributed video coding (DVC) is a coding paradigm that entails low complexity encoding by exploiting the source statistics at the decoder. To improve the DVC coding efficiency, this paper proposes a novel adaptive technique for mode decision to control and take advantage of skip mode and intra...

  16. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2010-01-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation

  18. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme that can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.

  19. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
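    A hedged sketch of the switching idea (single frame, spatial part only): pixels flagged as impulse noise are replaced by a median, regions whose local variance suggests Gaussian noise are smoothed by averaging, and clean pixels are left untouched. The thresholds, window size, and the omission of the temporal filtering and of the Least Median of Squares motion estimation are simplifications.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def switching_denoise(frame, impulse_low=10, impulse_high=245, var_thresh=100.0):
    """Decision-based switching filter sketch: impulse-corrupted pixels
    (extreme values) get a median replacement, locally noisy regions get an
    averaging filter, and the rest of the frame is left unchanged."""
    f = frame.astype(np.float32)
    impulses = (frame <= impulse_low) | (frame >= impulse_high)
    med = median_filter(f, size=3)
    mean = uniform_filter(f, size=3)
    var = uniform_filter(f * f, size=3) - mean ** 2
    out = np.where(impulses, med, np.where(var > var_thresh, mean, f))
    return np.clip(out, 0, 255).astype(np.uint8)
```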

  20. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper...... re-estimation (MORE) are integrated in the SING TDWZ codec, which uses side information and noise learning. For Wyner-Ziv frames using GOP size 2, the MORE codec significantly improves the TDWZ coding efficiency with an average (Bjøntegaard) PSNR improvement of 2.5 dB and up to 6 dB improvement...

  1. Do Motion Controllers Make Action Video Games Less Sedentary? A Randomized Experiment

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Lyons

    2012-01-01

    Full Text Available Sports- and fitness-themed video games using motion controllers have been found to produce physical activity. It is possible that motion controllers may also enhance energy expenditure when applied to more sedentary games such as action games. Young adults (N = 100) were randomized to play three games using either motion-based or traditional controllers. No main effect was found for controller or game pair (P > .12). An interaction was found such that in one pair, motion control (mean [SD] 0.96 [0.20] kcal ⋅ kg-1 ⋅ hr-1) produced 0.10 kcal ⋅ kg-1 ⋅ hr-1 (95% confidence interval 0.03 to 0.17) greater energy expenditure than traditional control (0.86 [0.17] kcal ⋅ kg-1 ⋅ hr-1, P = .048). All games were sedentary. As currently implemented, motion control is unlikely to produce moderate intensity physical activity in action games. However, some games produce small but significant increases in energy expenditure, which may benefit health by decreasing sedentary behavior.

  2. Complex polarization analysis of particle motion

    OpenAIRE

    Vidale, John E.

    1986-01-01

    Knowledge of particle motion polarization aids in identifying phases on three-component seismograms. The scheme of Montalbetti and Kanasewich (1970) is extended to analytic three-component seismograms, where the imaginary part of the signal is the Hilbert transform of the real part. This scheme has only one free parameter, the length of the time window over which the polarization parameters are estimated, so it can be applied in a routine way to three-component data. The azimuth and dip of th...

  3. END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS

    Directory of Open Access Journals (Sweden)

    C. Pinard

    2017-08-01

    Full Text Available We propose a depth map inference system from monocular videos based on a novel dataset for navigation that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the lack of rotation implies an easier structure from motion problem which can be leveraged for different kinds of tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that although tied to camera inner parameters, the problem is locally solvable and leads to good quality depth prediction.

  4. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allow, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as about the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  5. Localized motion in random matrix decomposition of complex financial systems

    Science.gov (United States)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    With the random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.

  6. Aging affects postural tracking of complex visual motion cues.

    Science.gov (United States)

    Sotirakis, H; Kyvelidou, A; Mademli, L; Stergiou, N; Hatzitaki, V

    2016-09-01

    Postural tracking of visual motion cues improves perception-action coupling in aging, yet the nature of the visual cues to be tracked is critical for the efficacy of such a paradigm. We investigated how well healthy older (72.45 ± 4.72 years) and young (22.98 ± 2.9 years) adults can follow with their gaze and posture horizontally moving visual target cues of different degrees of complexity. Participants tracked continuously for 120 s the motion of a visual target (dot) that oscillated in three different patterns: a simple periodic pattern (simulated by a sine), a more complex pattern (simulated by the Lorenz attractor, which is deterministic but displays mathematical chaos) and an ultra-complex random pattern (simulated by surrogating the Lorenz attractor). The degree of coupling between performance (posture and gaze) and the target motion was quantified by the spectral coherence, gain, phase and cross-approximate entropy (cross-ApEn) between signals. Sway-target coherence decreased as a function of target complexity and was lower for the older compared to the young participants when tracking the chaotic target. On the other hand, gaze-target coherence was not affected by either target complexity or age. Yet, a lower cross-ApEn value when tracking the chaotic stimulus motion revealed a more synchronous gaze-target relationship for both age groups. Results suggest limitations in online visuo-motor processing of complex motion cues and a less efficient exploitation of body sway dynamics with age. Complex visual motion cues may provide a suitable training stimulus to improve visuo-motor integration and restore sway variability in older adults.
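
    For illustration, the spectral coupling measures used in this study (coherence, gain, phase) can be estimated with standard signal-processing tools. The sketch below is not the authors' analysis pipeline; it assumes two equally sampled one-dimensional signals (a sway trace and the target trajectory), and the sampling rate, window length and synthetic signals are illustrative.

      import numpy as np
      from scipy.signal import coherence, csd, welch

      def coupling_measures(sway, target, fs=100.0, nperseg=1024):
          """Spectral coherence, gain and phase between a sway signal and a target signal."""
          f, Cxy = coherence(target, sway, fs=fs, nperseg=nperseg)   # coupling strength, 0..1 per frequency
          _, Pxy = csd(target, sway, fs=fs, nperseg=nperseg)         # cross-spectral density
          _, Pxx = welch(target, fs=fs, nperseg=nperseg)             # target power spectrum
          gain = np.abs(Pxy) / Pxx                                   # response amplitude per unit stimulus
          phase = np.angle(Pxy)                                      # phase lag in radians
          return f, Cxy, gain, phase

      # Synthetic 120 s trial: a 0.2 Hz target and a noisy, delayed "sway" response
      fs = 100.0
      t = np.arange(0, 120, 1 / fs)
      target = np.sin(2 * np.pi * 0.2 * t)
      sway = 0.8 * np.sin(2 * np.pi * 0.2 * (t - 0.3)) + 0.2 * np.random.randn(t.size)
      f, Cxy, gain, phase = coupling_measures(sway, target, fs)
      print("coherence near 0.2 Hz:", Cxy[np.argmin(np.abs(f - 0.2))])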

  7. Toolkits Control Motion of Complex Robotics

    Science.gov (United States)

    2010-01-01

    That space is a hazardous environment for humans is common knowledge. Even beyond the obvious lack of air and gravity, the extreme temperatures and exposure to radiation make the human exploration of space a complicated and risky endeavor. The conditions of space and the space suits required to conduct extravehicular activities add layers of difficulty and danger even to tasks that would be simple on Earth (tightening a bolt, for example). For these reasons, the ability to scout distant celestial bodies and perform maintenance and construction in space without direct human involvement offers significant appeal. NASA has repeatedly turned to complex robotics for solutions to extend human presence deep into space at reduced risk and cost and to enhance space operations in low Earth orbit. At Johnson Space Center, engineers explore the potential applications of dexterous robots capable of performing tasks like those of an astronaut during extravehicular activities and even additional ones too delicate or dangerous for human participation. Johnson's Dexterous Robotics Laboratory experiments with a wide spectrum of robot manipulators, such as the Mitsubishi PA-10 and the Robotics Research K-1207i robotic arms. To simplify and enhance the use of these robotic systems, Johnson researchers sought generic control methods that could work effectively across every system.

  8. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Saponara Sergio

    2004-01-01

    Full Text Available The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor of 2 for the decoder and of more than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  9. Geometric estimation of intestinal contraction for motion tracking of video capsule endoscope

    Science.gov (United States)

    Mi, Liang; Bao, Guanqun; Pahlavan, Kaveh

    2014-03-01

    The wireless video capsule endoscope (VCE) provides a noninvasive method to examine the entire gastrointestinal (GI) tract, especially the small intestine, where other endoscopic instruments can barely reach. The VCE continuously provides clear pictures at short fixed intervals, and as such researchers have attempted to use image processing methods to track the video capsule in order to locate abnormalities inside the GI tract. To correctly estimate the speed of motion of the endoscope capsule, the radius of the intestinal track must be known a priori. Physiological factors such as intestinal contraction, however, dynamically change the radius of the small intestine, which can introduce large errors in speed estimation. In this paper, we aim to estimate the radius of the contracted intestinal track. First, a geometric model is presented for estimating the radius of the small intestine based on the black hole in endoscopic images. To validate the proposed model, a 3-dimensional virtual testbed that emulates intestinal contraction is then introduced in detail. After measuring the size of the black holes in the test images, we used our model to estimate the radius of the contracted intestinal track. Comparison between the analytical results and the emulation model parameters verified that the proposed method can precisely estimate the radius of the contracted small intestine based on endoscopic images.

  10. The motion analysis of fire video images based on moment features and flicker frequency

    Institute of Scientific and Technical Information of China (English)

    LI Jin; FONG N. K.; CHOW W. K.; WONG L.T.; LU Puyi; XU Dian-guo

    2004-01-01

    In this paper, motion analysis methods for early fire flames captured by an ordinary CCD video camera are proposed, based on moment features and flicker frequency features. To further characterize flame changes and distinguish them from non-flame disturbances, the average number of changing pixels in the first-order moments of consecutive flame frames is also defined within the moment analysis. The first-order moments of all flames used in our experiments flicker irregularly, and their average changing pixel numbers are greater than those of fire-like disturbances. The flicker frequency of the flame is extracted and calculated in the spatial domain and is therefore computationally simple and fast to obtain. This flicker-frequency extraction from video images is not affected by the type of combustion material or by distance. In the experiments, two kinds of flames were used, i.e., a fixed flame and a moving flame. Extensive comparison and disturbance experiments verified that these methods can be used as criteria for early fire detection.
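
    The two feature families described here, first-order moments and flicker frequency, are easy to prototype. The following is a minimal sketch, not the authors' implementation: it assumes grayscale frames as NumPy arrays, and the intensity threshold and frame rate are illustrative.

      import numpy as np

      def first_order_moments(frames, thresh=200):
          """Centroid (first-order moment) of the bright, flame-like region in each frame."""
          cents = []
          for f in frames:
              ys, xs = np.nonzero(f > thresh)              # candidate flame pixels
              cents.append((xs.mean(), ys.mean()) if xs.size else (np.nan, np.nan))
          return np.array(cents)

      def flicker_frequency(frames, fps=25.0, thresh=200):
          """Dominant flicker frequency of the thresholded flame-area signal."""
          area = np.array([(f > thresh).sum() for f in frames], dtype=float)
          area -= area.mean()                              # remove the DC component
          spec = np.abs(np.fft.rfft(area))
          freqs = np.fft.rfftfreq(area.size, d=1.0 / fps)
          return freqs[1:][np.argmax(spec[1:])]            # skip the zero-frequency bin

      # Synthetic example: a bright region whose area oscillates at about 10 Hz
      fps, frames = 25.0, []
      for k in range(250):
          img = np.zeros((120, 160), dtype=np.uint8)
          r = int(12 + 6 * np.sin(2 * np.pi * 10 * k / fps))
          img[60 - r:60 + r, 80 - r:80 + r] = 255
          frames.append(img)
      print("mean centroid:", np.nanmean(first_order_moments(frames), axis=0))
      print("estimated flicker frequency:", flicker_frequency(frames, fps), "Hz")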

  11. Increased ISR operator capability utilizing a centralized 360° full motion video display

    Science.gov (United States)

    Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.

    2012-06-01

    In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad of electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data describing his environment. In this paper we present a novel approach to contextualizing multi-sensor data onto a full motion video real-time 360 degree imaging display. The system described could function as a primary display system for command and control in security, military and observation posts. It has the ability to process and enable interactive control of multiple other sensor systems. It enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. It can also be used to interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).

  12. Bringing Javanesse Traditional Dance into Basic Physics Class: Exemplifying Projectile Motion through Video Analysis

    Science.gov (United States)

    Handayani, Langlang; Prasetya Aji, Mahardika; Susilo; Marwoto, Putut

    2016-08-01

    An alternative arts-based instructional approach for a Basic Physics class has been developed through video analysis of a Javanese traditional dance, Bambangan Cakil. A particular movement of the dance, the weapon throw, was analyzed with the LoggerPro software package to exemplify projectile motion. The results of the analysis indicated that the movement of the thrown weapon in the Bambangan Cakil dance provides helpful illustrations of several projectile-motion concepts (the object's path, velocity, and acceleration) in the form of pictures, graphs and tables. The weapon's path and velocity can be shown in a picture or graph, while the decrease of the velocity in the y direction (as the weapon moves upward and then downward) due to the acceleration g can be represented in a table. It was concluded that a Javanese traditional dance contains many physics concepts that can be explored. The study recommends bringing traditional dance into the science class, which will enable students to gain a better understanding of both physics concepts and Indonesian cultural heritage.

  13. Spatio-temporal databases complex motion pattern queries

    CERN Document Server

    Vieira, Marcos R

    2013-01-01

    This brief presents several new query processing techniques, called complex motion pattern queries, specifically designed for very large spatio-temporal databases of moving objects. The brief begins with the definition of flexible pattern queries, which are powerful because of the integration of variables and motion patterns. This is followed by a summary of the expressive power of patterns and the flexibility of pattern queries. The brief then presents the Spatio-Temporal Pattern System (STPS) and density-based pattern queries. STPS databases contain millions of records with information about mobi

  14. The real-time complex cruise scene motion detection system based on DSP

    Science.gov (United States)

    Wu, Zhi-guo; Wang, Ming-jia

    2014-11-01

    Dynamic target recognition is an important issue in image processing research and is widely used in photoelectric detection, target tracking and video surveillance. In a complex cruise scene, unlike with a static background, the target and the background objects are both in motion, which greatly increases the complexity of moving-target detection and recognition. Motivated by practical engineering applications and combining embedded systems with real-time image detection technology, this paper proposes a real-time motion detection method for an embedded system based on an FPGA + DSP architecture. The digital image processing system takes the high-speed digital signal processor TMS320C6416T as its main computing component, with a large-capacity FPGA as coprocessor, and a high-performance image processing card is designed and developed. The FPGA is responsible for data reception and dispatch, while the DSP is responsible for data processing. The FPGA collects image data and controls the SDRAM according to the digital image sequence; the SDRAM realizes a multiport image buffer; the DSP reads real-time images through the SDRAM and runs the scene motion detection algorithm, so that data reception and data processing are parallelized. The system thus realizes complex cruise scene motion detection for engineering applications. Image edge information is robust to illumination changes and interference. First, the previous frame and the current frame are processed by a convolution operation to extract edge images. Then the correlation strength and the motion offset are computed, from which the scene motion parameters are estimated to achieve accurate real-time motion detection. Real-time cruise experiments were conducted with images at a resolution of 768 * 576 and a 25 Hz frame rate. The results show that the proposed system achieves real-time, accurate motion detection.
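
    Setting the FPGA/DSP partitioning aside, the edge-plus-correlation stage can be illustrated in software. The sketch below is a rough stand-in for the algorithm described: it assumes two consecutive grayscale frames as NumPy arrays, uses a Sobel convolution for the edge images and FFT-based phase correlation to estimate the global scene offset.

      import numpy as np
      from scipy.ndimage import sobel

      def edge_image(frame):
          """Gradient-magnitude edge image, robust to global illumination changes."""
          gx = sobel(frame.astype(float), axis=1)
          gy = sobel(frame.astype(float), axis=0)
          return np.hypot(gx, gy)

      def global_offset(prev_frame, curr_frame):
          """Estimate the (dy, dx) shift of curr_frame relative to prev_frame by phase correlation of edge images."""
          a, b = edge_image(prev_frame), edge_image(curr_frame)
          cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
          corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real   # normalized cross-power spectrum
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          h, w = corr.shape
          if dy > h // 2:
              dy -= h                                                # wrap negative shifts
          if dx > w // 2:
              dx -= w
          return dy, dx

      # Example: a random texture shifted by (3, -5) pixels between frames
      rng = np.random.default_rng(0)
      prev = rng.random((240, 320))
      curr = np.roll(prev, shift=(3, -5), axis=(0, 1))
      print(global_offset(prev, curr))                               # expected: (3, -5)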

  15. Reinforcement learning agents providing advice in complex video games

    Science.gov (United States)

    Taylor, Matthew E.; Carboni, Nicholas; Fachantidis, Anestis; Vlahavas, Ioannis; Torrey, Lisa

    2014-01-01

    This article introduces a teacher-student framework for reinforcement learning, synthesising and extending material that appeared in conference proceedings [Torrey, L., & Taylor, M. E. (2013). Teaching on a budget: Agents advising agents in reinforcement learning. Proceedings of the International Conference on Autonomous Agents and Multiagent Systems] and in a non-archival workshop paper [Carboni, N., & Taylor, M. E. (2013, May). Preliminary results for 1 vs. 1 tactics in StarCraft. Proceedings of the Adaptive and Learning Agents Workshop (at AAMAS-13)]. In this framework, a teacher agent instructs a student agent by suggesting actions the student should take as it learns. However, the teacher may only give such advice a limited number of times. We present several novel algorithms that teachers can use to budget their advice effectively, and we evaluate them in two complex video games: StarCraft and Pac-Man. Our results show that the same amount of advice, given at different moments, can have different effects on student learning, and that teachers can significantly affect student learning even when students use different learning methods and state representations.
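
    The budgeted-advice idea can be made concrete with a small sketch. The snippet below is a simplified illustration rather than the authors' algorithms: the "importance" heuristic based on the teacher's Q-value gap is only one of the budgeting strategies studied in this line of work, and all names and values are illustrative.

      import random

      class BudgetedTeacher:
          """Teacher that spends a limited advice budget on the most important states."""
          def __init__(self, teacher_q, budget, threshold):
              self.q = teacher_q            # dict: state -> {action: value}
              self.budget = budget          # maximum number of advice messages
              self.threshold = threshold    # minimum Q-value gap that justifies advice

          def advise(self, state):
              if self.budget <= 0 or state not in self.q:
                  return None
              values = self.q[state]
              gap = max(values.values()) - min(values.values())   # state-importance heuristic
              if gap >= self.threshold:
                  self.budget -= 1
                  return max(values, key=values.get)              # suggest the teacher's best action
              return None

      def student_act(state, student_policy, teacher):
          """Student follows teacher advice when given, otherwise its own policy (here: random)."""
          advice = teacher.advise(state)
          return advice if advice is not None else student_policy(state)

      # Toy usage: advice is withheld where the action choice barely matters
      teacher = BudgetedTeacher({"s0": {"a": 1.0, "b": 0.1}, "s1": {"a": 0.5, "b": 0.49}},
                                budget=1, threshold=0.3)
      policy = lambda s: random.choice(["a", "b"])
      print(student_act("s1", policy, teacher))   # own policy: gap below threshold
      print(student_act("s0", policy, teacher))   # "a": important state, budget spent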

  16. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation of interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface (again, we compare gaze-based and traditional mouse-based interaction), we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and an automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.

  17. Comparing Simple and Advanced Video Tools as Supports for Complex Collaborative Design Processes

    Science.gov (United States)

    Zahn, Carmen; Pea, Roy; Hesse, Friedrich W.; Rosen, Joe

    2010-01-01

    Working with digital video technologies, particularly advanced video tools with editing capabilities, offers new prospects for meaningful learning through design. However, it is also possible that the additional complexity of such tools does "not" advance learning. We compared in an experiment the design processes and learning outcomes…

  19. Concerning video game concerns: A collective approach to conceptually inquiring into their empirical complexity

    DEFF Research Database (Denmark)

    Chimiri, Niklas Alexander; Andersen, Mads Lund; Jensen, Tine

    2017-01-01

    This paper suggests a collectively developed qualitative approach into inquiring ... and thereby shedding unexpected light on common concerns as expressed in psychological research on video gaming. For years, “The Video Game War” has been reproducing polarized debates on whether games are harmful or not. The search for universal knowledge and unequivocal answers to individual gaming behavior ... and conceptual development. The complexity of video game concerns, for instance in terms of their digital-analogue entanglements and how these co-enact the effects and meaning of violent video gaming, is neither conceptually debated nor of concern. In DGS, on the other hand, such specificities and entanglements ...

  20. The right frame of reference makes it simple: an example of introductory mechanics supported by video analysis of motion

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Fleischhauer, A.; Müller, A.

    2015-01-01

    The selection and application of coordinate systems is an important issue in physics. However, considering different frames of reference in a given problem sometimes seems unintuitive and is difficult for students. We present a concrete problem of projectile motion which vividly demonstrates the value of considering different frames of reference. We use this example to explore the effectiveness of video-based motion analysis (VBMA) as an instructional technique at university level in enhancing students’ understanding of the abstract concept of coordinate systems. A pilot study with 47 undergraduate students indicates that VBMA instruction improves conceptual understanding of this issue.

  1. Motion-based video monitoring for early detection of livestock diseases: The case of African swine fever.

    Science.gov (United States)

    Fernández-Carrión, Eduardo; Martínez-Avilés, Marta; Ivorra, Benjamin; Martínez-López, Beatriz; Ramos, Ángel Manuel; Sánchez-Vizcaíno, José Manuel

    2017-01-01

    Early detection of infectious diseases can substantially reduce the health and economic impacts on livestock production. Here we describe a system for monitoring animal activity based on video and data processing techniques, in order to detect slowdown and weakening due to infection with African swine fever (ASF), one of the most significant threats to the pig industry. The system classifies and quantifies motion-based animal behaviour and daily activity in video sequences, allowing automated and non-intrusive surveillance in real-time. The aim of this system is to evaluate significant changes in animals' motion after being experimentally infected with ASF virus. Indeed, pig mobility declined progressively and fell significantly below pre-infection levels starting at four days after infection at a confidence level of 95%. Furthermore, daily motion decreased in infected animals by approximately 10% before the detection of the disease by clinical signs. These results show the promise of video processing techniques for real-time early detection of livestock infectious diseases.
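
    A minimal sketch of the kind of motion index behind such a system is shown below. It is not the authors' statistical procedure: it assumes grayscale frames from a pen camera, and the frame-differencing index, baseline window and 10% drop threshold are illustrative.

      import numpy as np

      def activity_index(frames, diff_thresh=15):
          """Fraction of pixels that changed between consecutive frames (simple motion index)."""
          idx = []
          for prev, curr in zip(frames[:-1], frames[1:]):
              moved = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
              idx.append(moved.mean())
          return np.array(idx)

      def detect_slowdown(daily_activity, baseline_days=3, drop_fraction=0.10):
          """Flag the first day whose activity falls a given fraction below the pre-infection baseline."""
          baseline = np.mean(daily_activity[:baseline_days])
          for day, value in enumerate(daily_activity):
              if day >= baseline_days and value < (1.0 - drop_fraction) * baseline:
                  return day
          return None

      # Toy usage with pre-computed daily activity values: mobility drops from day 5 on
      daily = np.array([0.30, 0.31, 0.29, 0.30, 0.30, 0.26, 0.24, 0.22])
      print("alarm raised on day:", detect_slowdown(daily))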

  2. A complexity-scalable software-based MPEG-2 video encoder

    Institute of Scientific and Technical Information of China (English)

    陈国斌; 陆新宁; 王兴国; 刘济林

    2004-01-01

    With the development of general-purpose processors (GPPs) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgrade attract developers' interest in transferring video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules of the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup tables are adopted to reduce the computational complexity. Simulation results showed that these ideas not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.

  3. Learning-Based Tracking of Complex Non-Rigid Motion

    Institute of Scientific and Technical Information of China (English)

    Qiang Wang; Hai-Zhou Ai; Guang-You Xu

    2004-01-01

    This paper describes a novel method for tracking complex non-rigid motions by learning the intrinsic object structure. The approach builds on and extends studies on non-linear dimensionality reduction for object representation, object dynamics modeling and particle filter style tracking. First, the dimensionality reduction and density estimation algorithm is derived for unsupervised learning of the object's intrinsic representation, and the obtained non-rigid part of the object state reduces to as few as 2-3 dimensions. Secondly, the dynamical model is derived and trained based on this intrinsic representation. Thirdly, the learned intrinsic object structure is integrated into a particle filter style tracker. It is shown that this intrinsic object representation has some interesting properties, and based on it the newly derived dynamical model makes the particle filter style tracker more robust and reliable. Extensive experiments are done on the tracking of challenging non-rigid motions such as fish twisting with self-occlusion, large inter-frame lip motion and facial expressions with global head rotation. Quantitative results are given to compare the newly proposed tracker with the existing tracker. The proposed method also has the potential to solve other types of tracking problems.

  4. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  5. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Homer H. Chen

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  6. The Texas Production Manual. A Source Book for the Motion Picture and Video Industry. Fifth Edition.

    Science.gov (United States)

    Texas State Film Commission, Austin.

    This cross-reference directory to the resources of film and video personnel and services in Texas is divided into eight sections: who's who, pre-production, production, post-production, video, miscellaneous, major city information, and addenda. The first section contains alphabetical listings of companies and individuals engaged in some aspect of…

  7. Teaching complex social skills to children with autism; advances of video modeling

    OpenAIRE

    Nikopoulos, C K; Nikopoulou-Smyrni, P G

    2008-01-01

    Although there has been a corresponding explosion of literature regarding the treatment of the social deficits in autism, the establishment of more complex social behaviors still remains a challenge. Video modeling appears to be one approach with the potential to successfully address this challenge. Following an introduction to modeling, which constitutes the basis of this procedure, the current paper explores those video modeling studies that have targeted the promotion of compl...

  8. Robust Video Stabilization Based on Motion Vectors

    Institute of Scientific and Technical Information of China (English)

    SONG Li; ZHOU Yuan-hua; ZHOU Jun

    2005-01-01

    This paper proposes a new robust video stabilization algorithm to remove unwanted vibrations in video sequences. A complete theoretical analysis is first established for video stabilization, providing a basis for the new stabilization algorithm. Secondly, a new robust global motion estimation (GME) algorithm is proposed. Different from classic methods, the GME algorithm is based on spatio-temporally filtered motion vectors computed by block-matching methods. In addition, effective schemes are employed in the correction phase to prevent boundary artifacts and error accumulation. Experiments show that the proposed algorithm has satisfactory stabilization effects while maintaining a good trade-off between speed and precision.
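
    To make the motion-vector route to stabilization concrete, here is a rough sketch. It assumes a block-matching motion field is already available per frame; the median-based global motion estimate and moving-average path smoothing are simplifications of the spatial-temporal filtering and correction scheme described in the abstract.

      import numpy as np

      def global_motion_from_blocks(mv_field):
          """Robust global (dx, dy) from a field of block motion vectors of shape (H_blocks, W_blocks, 2)."""
          vx = np.median(mv_field[..., 0])   # the spatial median rejects outliers from local object motion
          vy = np.median(mv_field[..., 1])
          return np.array([vx, vy])

      def stabilizing_corrections(per_frame_gm, window=15):
          """Correction per frame = smoothed cumulative camera path minus the jittery measured path."""
          path = np.cumsum(per_frame_gm, axis=0)                     # accumulated camera trajectory
          kernel = np.ones(window) / window
          smooth = np.column_stack([np.convolve(path[:, k], kernel, mode="same") for k in range(2)])
          return smooth - path                                       # shift to apply to each frame

      # Toy usage: a slow pan in x corrupted by high-frequency jitter
      rng = np.random.default_rng(1)
      gm = np.column_stack([0.5 + 2.0 * rng.standard_normal(100), np.zeros(100)])
      corrections = stabilizing_corrections(gm)
      print("max |correction| in x:", np.abs(corrections[:, 0]).max())

    In a full stabilizer the corrections would additionally be clamped and the accumulated error reset, which is what the correction-phase schemes mentioned in the abstract address.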

  9. MPEG video shot boundary detection based on motion vectors

    Institute of Scientific and Technical Information of China (English)

    王成儒; 王微微

    2012-01-01

    First, the DC image of each I-frame is extracted for rough shot detection. Then the forward motion-compensation vectors of the P-frames are extracted, and Extended Vector Median (EVM) filtering is used to preprocess the motion vectors. Three motion features are then computed: the motion intensity value, the motion intensity difference, and the absolute difference of the motion-vector direction histograms. Finally, fuzzy inference is used to combine these three features and classify shots into abrupt-change, gradual-change and no-change ones, realizing shot detection. Because the method does not need to decompress the video fully and extracts information directly from the MPEG compressed bit stream, it has low computational complexity and high extraction speed, which is verified by the experimental results.
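
    The rough-detection step on I-frame DC images can be sketched as a histogram-difference test. This is a simplified stand-in for the full method, which additionally uses the P-frame motion-vector features and fuzzy inference; the bin count and threshold below are illustrative.

      import numpy as np

      def dc_histogram(dc_image, bins=32):
          """Normalized intensity histogram of an I-frame DC image."""
          hist, _ = np.histogram(dc_image, bins=bins, range=(0, 255))
          return hist / max(hist.sum(), 1)

      def rough_shot_boundaries(dc_images, thresh=0.4):
          """Indices of I-frames whose histogram differs strongly from the previous one (candidate cuts)."""
          cuts, prev = [], dc_histogram(dc_images[0])
          for i, img in enumerate(dc_images[1:], start=1):
              curr = dc_histogram(img)
              if 0.5 * np.abs(curr - prev).sum() > thresh:   # total-variation distance in [0, 1]
                  cuts.append(i)
              prev = curr
          return cuts

      # Toy usage: a dark scene switches to a bright scene at the 5th DC image
      dark = [np.full((18, 22), 40, dtype=np.uint8) for _ in range(5)]
      bright = [np.full((18, 22), 200, dtype=np.uint8) for _ in range(5)]
      print(rough_shot_boundaries(dark + bright))            # expected: [5]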

  10. MOTION ESTIMATION IN MPEG-4 VIDEO SEQUENCE USING BLOCK MATCHING ALGORITHM

    Directory of Open Access Journals (Sweden)

    KISHORE PINNINTI

    2011-12-01

    Full Text Available Nowadays, MPEG-4 is the most prominent multimedia standard, combining natural interactivity, synthetic digital video and computer graphics. It has a wide variety of applications such as video conferencing, computer games and mobile phones. All these applications need portable video communicators, so low-power VLSI implementations are required. In addition to the requirement for portable devices, care must be taken with bandwidth limitations. To transmit video sequences effectively over limited bandwidth, the input data must be compressed and coded to fit these limited resources. This paper aims at the realization of an efficient estimation of moving components in a video image sequence, in order to isolate moving objects from the static background. The architecture developed in this paper uses the LMS algorithm for estimating the noise components and a Block Matching Algorithm for the detection of moving components in the frame sequences. Further, a Huffman decoder is used for decoding the compressed data in the video codec. The proposed design is implemented in VHDL and the results are analyzed on a Xilinx Spartan-III.
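
    For reference, the block-matching step referred to above can be written compactly in software (this sketch is unrelated to the paper's VHDL implementation; the block size and search range are illustrative).

      import numpy as np

      def block_match(ref, cur, block=16, search=8):
          """Full-search block matching: one (dy, dx) motion vector per block, minimizing the SAD."""
          h, w = cur.shape
          mvs = np.zeros((h // block, w // block, 2), dtype=int)
          for by in range(0, h - block + 1, block):
              for bx in range(0, w - block + 1, block):
                  target = cur[by:by + block, bx:bx + block].astype(int)
                  best, best_mv = None, (0, 0)
                  for dy in range(-search, search + 1):
                      for dx in range(-search, search + 1):
                          y, x = by + dy, bx + dx
                          if y < 0 or x < 0 or y + block > h or x + block > w:
                              continue                        # candidate block outside the reference frame
                          cand = ref[y:y + block, x:x + block].astype(int)
                          sad = np.abs(target - cand).sum()   # sum of absolute differences
                          if best is None or sad < best:
                              best, best_mv = sad, (dy, dx)
                  mvs[by // block, bx // block] = best_mv
          return mvs

      # Toy usage: the current frame is the reference shifted down by 2 and right by 3 pixels
      rng = np.random.default_rng(2)
      ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
      cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
      print(block_match(ref, cur)[1, 1])                      # interior block: expected [-2 -3]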

  11. Method and system for efficient video compression with low-complexity encoder

    Science.gov (United States)

    Chen, Jun (Inventor); He, Dake (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor); Sheinin, Vadim (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  12. Video Watermarking Technique using Visual Sensibility and Motion Vectors

    Directory of Open Access Journals (Sweden)

    Antonio Cedillo

    2008-01-01

    Full Text Available A video watermarking algorithm is proposed in which the watermark embedding and detection processes are carried out during MPEG-2 coding. The watermark signal is embedded in the DCT coefficients of the blue channel of I-frames and P-frames. The embedding energy is computed adaptively using perceptual information from the I-frames and P-frames and the motion vector information of the P-frames. Computer simulation results show the watermark's imperceptibility, obtaining a PSNR greater than 45 dB, and its robustness to common signal distortions such as contamination by noise, cropping, frame dropping, frame swapping and frame averaging, among others. The proposed algorithm has very little influence on the MPEG decoding speed.
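
    A highly simplified sketch of this style of embedding is given below. It is illustrative only, not the paper's exact scheme: it operates on a single 8x8 block of the blue channel, the chosen mid-frequency coefficient and strengths are assumptions, and the motion-dependent scaling factor is supplied by the caller.

      import numpy as np
      from scipy.fft import dctn, idctn

      def embed_block(blue_block, wm_bit, motion_mag, base_alpha=2.0):
          """Embed one watermark bit into a mid-frequency DCT coefficient of an 8x8 blue-channel block."""
          coeffs = dctn(blue_block.astype(float), norm="ortho")
          alpha = base_alpha * (1.0 + motion_mag)              # stronger embedding where motion masks distortion
          coeffs[3, 4] += alpha if wm_bit else -alpha          # assumed mid-frequency position
          return idctn(coeffs, norm="ortho")

      def detect_block(marked_block, original_block):
          """Non-blind detection: compare the chosen coefficient against the original block."""
          c_marked = dctn(marked_block.astype(float), norm="ortho")[3, 4]
          c_orig = dctn(original_block.astype(float), norm="ortho")[3, 4]
          return int(c_marked > c_orig)

      # Toy usage on a random 8x8 blue-channel block
      rng = np.random.default_rng(3)
      block = rng.integers(0, 256, size=(8, 8))
      marked = embed_block(block, wm_bit=1, motion_mag=0.5)
      print("recovered bit:", detect_block(marked, block))
      print("block PSNR (dB):", 10 * np.log10(255.0 ** 2 / np.mean((marked - block) ** 2)))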

  13. Intermittent motion in desert locusts: behavioural complexity in simple environments.

    Directory of Open Access Journals (Sweden)

    Sepideh Bazazi

    Full Text Available Animals can exhibit complex movement patterns that may be the result of interactions with their environment or may be directly the mechanism by which their behaviour is governed. In order to understand the drivers of these patterns we examine the movement behaviour of individual desert locusts in a homogenous experimental arena with minimal external cues. Locust motion is intermittent and we reveal that as pauses become longer, the probability that a locust changes direction from its previous direction of travel increases. Long pauses (of greater than 100 s) can be considered reorientation bouts, while shorter pauses (of less than 6 s) appear to act as periods of resting between displacements. We observe power-law behaviour in the distribution of move and pause lengths of over 1.5 orders of magnitude. While Lévy features do exist, locusts' movement patterns are more fully described by considering moves, pauses and turns in combination. Further analysis reveals that these combinations give rise to two behavioural modes that are organized in time: local search behaviour (long exploratory pauses with short moves) and relocation behaviour (long displacement moves with shorter resting pauses). These findings offer a new perspective on how complex animal movement patterns emerge in nature.

  14. Spatio-temporal Rich Model Based Video Steganalysis on Cross Sections of Motion Vector Planes.

    Science.gov (United States)

    Tasdemir, Kasim; Kurugollu, Fatih; Sezer, Sakir

    2016-05-11

    A rich-model-based motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially superior detection accuracy to previous methods, even the targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency alongside spatial dependency is utilized for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of many filters which can capture aberrations introduced by various motion vector steganography methods is used. The variety and the number of filter kernels are substantially greater than in previous work. Besides that, filters up to fifth order are employed, whereas previous methods used at most second-order filters. As a result, the proposed system captures various decorrelations in a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best knowledge of the authors, the experiments section has the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% detection accuracy increase at low payloads and 5% at higher payloads.

  15. A novel methodology for complex part motion planning

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A motion planning strategy for the assembly task of inserting a dissymmetrical T-shaped part into a C-shaped slot is presented. The coarse motion planning strategy is expounded by geometric reasoning. A medial axis diagram reduces the unnecessary configuration states and optimizes the planning strategy. Due to the uncertainties, force sensing and force control are indispensable for motion planning. Combining the coarse motion planning strategy with the fine motion planning strategy, the task of assembling a dissymmetrical T-shaped part can be completed successfully.

  16. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In the last decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.

  17. Video Analysis of Projectile Motion Using Tablet Computers as Experimental Tools

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and "g" in order to explore the underlying laws of motion. This experiment…

  18. Autonomous Motion Segmentation of Multiple Objects in Low Resolution Video Using Variational Level Sets

    Energy Technology Data Exchange (ETDEWEB)

    Moelich, M

    2003-11-18

    This report documents research that was done during a ten week internship in the Sapphire research group at the Lawrence Livermore National Laboratory during the Summer of 2003. The goal of the study was to develop an algorithm that is capable of isolating (segmenting) moving objects in low resolution video sequences. This capability is currently being developed by the Sapphire research group as the first stage in a longer term video data mining project. This report gives a chronological account of what ideas were tried in developing the algorithm and what was learned from each attempt. The final version of the algorithm, which is described in detail, gives good results and is fast.

  19. Video thermography: complex regional pain syndrome in the picture

    NARCIS (Netherlands)

    S.P. Niehof (Sjoerd)

    2007-01-01

    In this thesis videothermography is developed and evaluated as a diagnostic and monitoring tool in Complex Regional Pain Syndrome type 1 (CRPS1). This work is conducted within four pre-set developmental phases: namely, the initial, potential, monitoring and diagnostic phases.

  1. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Directory of Open Access Journals (Sweden)

    Ryan Decker

    2016-04-01

    Examination of the pitch and yaw histories clearly indicates that, in addition to the epicyclic motion's nutation and precession oscillations, an even faster wobble oscillation is present during each spin revolution, even though some of the amplitudes of the oscillation are smaller than 0.02 degree. The results are compared to a sequence of shots where little appreciable mass asymmetry was present, and only nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product-of-inertia measurements of the asymmetric projectiles.

  2. The Computational Complexity of Portal and Other 3D Video Games

    OpenAIRE

    Erik D. Demaine; Lockhart, Joshua; Lynch, Jayson

    2016-01-01

    We classify the computational complexity of the popular video games Portal and Portal 2. We isolate individual mechanics of the game and prove NP-hardness, PSPACE-completeness, or (pseudo)polynomiality depending on the specific game mechanics allowed. One of our proofs generalizes to prove NP-hardness of many other video games such as Half-Life 2, Halo, Doom, Elder Scrolls, Fallout, Grand Theft Auto, Left 4 Dead, Mass Effect, Deus Ex, Metal Gear Solid, and Resident Evil. These results build o...

  3. Using video-reflexive ethnography to capture the complexity of leadership enactment in the healthcare workplace.

    Science.gov (United States)

    Gordon, Lisi; Rees, Charlotte; Ker, Jean; Cleland, Jennifer

    2016-12-30

    Current theoretical thinking asserts that leadership should be distributed across many levels of healthcare organisations to improve the patient experience and staff morale. However, much healthcare leadership education focusses on the training and competence of individuals and little attention is paid to the interprofessional workplace and how its inherent complexities might contribute to the emergence of leadership. Underpinned by complexity theory, this research aimed to explore how interprofessional healthcare teams enact leadership at a micro-level through influential acts of organising. A whole (interprofessional) team workplace-based study utilising video-reflexive ethnography occurred in two UK clinical sites. Thematic framework analyses of the video data (video-observation and video-reflexivity sessions) were undertaken, followed by in-depth analyses of human-human and human-material interactions. Data analysis revealed a complex interprofessional environment where leadership is a dynamic process, negotiated and renegotiated in various ways throughout interactions (both formal and informal). Being able to "see" themselves at work gave participants the opportunity to discuss and analyse their everyday leadership practices and challenge some of their sometimes deeply entrenched values, beliefs, practices and assumptions about healthcare leadership. These study findings therefore indicate a need to redefine the way that medical and healthcare educators facilitate leadership development and argue for new approaches to research which shifts the focus from leaders to leadership.

  4. Sensor Selection and Integration to Improve Video Segmentation in Complex Environments

    Directory of Open Access Journals (Sweden)

    Adam R. Reckley

    2014-01-01

    Full Text Available Background subtraction is often considered to be a required stage of any video surveillance system being used to detect objects in a single frame and/or track objects across multiple frames in a video sequence. Most current state-of-the-art techniques for object detection and tracking utilize some form of background subtraction that involves developing a model of the background at a pixel, region, or frame level and designating any elements that deviate from the background model as foreground. However, most existing approaches are capable of segmenting a number of distinct components but unable to distinguish between the desired object of interest and complex, dynamic background such as moving water and high reflections. In this paper, we propose a technique to integrate spatiotemporal signatures of an object of interest from different sensing modalities into a video segmentation method in order to improve object detection and tracking in dynamic, complex scenes. Our proposed algorithm utilizes the dynamic interaction information between the object of interest and background to differentiate between mistakenly segmented components and the desired component. Experimental results on two complex data sets demonstrate that our proposed technique significantly improves the accuracy and utility of state-of-the-art video segmentation technique.
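
    As a point of reference for the baseline stage being improved here, a standard pixel-level background subtraction pass with OpenCV looks roughly as follows. This generic sketch is not the sensor-integration method proposed in the paper, and the video path and parameter values are illustrative.

      import cv2

      def foreground_masks(video_path, history=500, var_threshold=16):
          """Yield (frame, foreground_mask) pairs using a Gaussian-mixture background model."""
          cap = cv2.VideoCapture(video_path)
          subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                          varThreshold=var_threshold,
                                                          detectShadows=True)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              mask = subtractor.apply(frame)                          # 255 = foreground, 127 = shadow
              mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress small dynamic-background blobs
              yield frame, mask
          cap.release()

      # Usage (the file name is illustrative):
      # for frame, mask in foreground_masks("harbor_scene.mp4"):
      #     cv2.imshow("foreground", mask); cv2.waitKey(1)

    Morphological cleanup of the mask is a common mitigation for dynamic backgrounds such as moving water, but, as the paper argues, pixel-level models alone are often insufficient there, which motivates the multi-sensor integration proposed above.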

  5. Key frame extraction of motion video based on prior knowledge

    Institute of Scientific and Technical Information of China (English)

    庞亚俊

    2016-01-01

    To enhance the motion-expression ability of extracted key frames, and taking key frame extraction from calisthenics video as an example, prior semantic knowledge is introduced into the video segmentation and key frame feature extraction processes, and a prior-based key frame extraction algorithm for motion video is proposed. Using prior knowledge such as rhythm features and the continuity of action beats (obtained via music beat detection and rhythm constraints), the calisthenics video is decomposed into action clips of different lengths, and a HOG-based human detector identifies the human bounding box in each frame. Each bounding box is then divided into 16 motion blocks by a body template, and optical flow is used to compute the dominant motion direction of each block. Finally, key frames are extracted by comparing the differences in the dominant motion directions of the blocks. Experiments show that the proposed method provides better motion summarization while maintaining key-frame-based video compression efficiency.
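
    The per-block dominant flow direction used above can be sketched as follows. This is a simplified illustration assuming consecutive grayscale frames of an already-cropped person region; the 4x4 grid mirrors the 16 motion blocks mentioned, while Farneback optical flow is a stand-in for whichever optical flow method the authors used.

      import numpy as np
      import cv2

      def block_directions(prev_gray, curr_gray, grid=(4, 4)):
          """Dominant optical-flow direction (radians) for each cell of a grid over the person region."""
          # Farneback dense flow: (pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
          flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
          h, w = prev_gray.shape
          gh, gw = grid
          dirs = np.zeros(grid)
          for i in range(gh):
              for j in range(gw):
                  cell = flow[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
                  dirs[i, j] = np.arctan2(cell[..., 1].mean(), cell[..., 0].mean())   # mean flow angle
          return dirs

      def direction_difference(dirs_a, dirs_b):
          """Motion dissimilarity between two frames as the mean absolute angular difference."""
          diff = np.angle(np.exp(1j * (dirs_a - dirs_b)))    # wrap differences to [-pi, pi]
          return float(np.abs(diff).mean())

      # Toy usage: a bright square moving to the right between two frames
      prev = np.zeros((64, 64), dtype=np.uint8); prev[20:40, 10:30] = 255
      curr = np.zeros((64, 64), dtype=np.uint8); curr[20:40, 14:34] = 255
      print("block directions (deg):", np.degrees(block_directions(prev, curr)).round(1))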

  6. A Novel Compression for Enormous Motion Data in Video Using Repeated Object Clips [ROC

    Directory of Open Access Journals (Sweden)

    S. Lavanya

    2012-12-01

    Full Text Available The entertainment industry revolves around audio, video, graphics and multimedia and deals with a large variety of data. As the volume of data increases, it becomes a troublesome problem, especially in large databases and in real-time and telepresence applications, where memory, bandwidth and storage are limited. In this study we therefore propose a novel compression method achieved by video content analysis using Repeated Object Clips [ROC]. To obtain ROCs, a key frame selection algorithm is used, in which key frames are extracted from the total set of frames. In the selection process, frame separation is carried out first, then object detection and object segmentation are performed by the level set method. With this technique, moving objects are tracked effectively and the compression ratio is increased significantly.

  7. Evaluation of H.264 and H.265 full motion video encoding for small UAS platforms

    Science.gov (United States)

    McGuinness, Christopher D.; Walker, David; Taylor, Clark; Hill, Kerry; Hoffman, Marc

    2016-05-01

    Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data. Applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is mostly the case when comparing the performance of H.264, designed for standard definition data, to HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study focuses on evaluating how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source traditional software solutions: x264 and x265. These software-only comparisons allow us to establish a baseline of how much improvement can generally be expected of HEVC over H.264. Then, specific solutions leveraging different types of hardware are selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of the implementation, HEVC is found to provide similar quality video as H.264 at 40% lower data rates for video resolutions greater than 1280x720, roughly 1 Megapixel (MPx). For resolutions less than 1MPx, H.264 is an adequate solution though a small (roughly 20%) compression boost is earned by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS systems.

  8. Mobile video-to-audio transducer and motion detection for sensory substitution

    Directory of Open Access Journals (Sweden)

    Maxime eAmbard

    2015-10-01

    Full Text Available Visuo-auditory sensory substitution systems are augmented reality devices that translate a video stream into an audio stream in order to help the blind in daily tasks requiring visuo-spatial information. In this work, we present both a new mobile device and a transcoding method specifically designed to sonify moving objects. Frame differencing is used to extract spatial features from the video stream and two-dimensional spatial information is converted into audio cues using pitch, interaural time difference and interaural level difference. Using numerical methods, we attempt to reconstruct visuo-spatial information based on audio signals generated from various video stimuli. We show that despite a contrasted visual background and a highly lossy encoding method, the information in the audio signal is sufficient to allow object localization, object trajectory evaluation, object approach detection, and spatial separation of multiple objects. We also show that this type of audio signal can be interpreted by human users by asking ten subjects to discriminate trajectories based on generated audio signals.
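
    A toy version of the pixel-to-audio mapping described above might look like the sketch below. It works under simplifying assumptions that are not the device's actual transcoder: one moving blob per frame, vertical position mapped to pitch and horizontal position to an interaural level difference only (the real system also uses interaural time differences), with an illustrative frequency range and sample rate.

      import numpy as np

      def sonify_position(x_norm, y_norm, duration=0.1, sr=44100, f_low=300.0, f_high=3000.0):
          """Stereo tone: height controls pitch, horizontal position controls left/right level balance."""
          t = np.arange(int(duration * sr)) / sr
          freq = f_low * (f_high / f_low) ** (1.0 - y_norm)   # top of the image maps to a high pitch
          tone = np.sin(2 * np.pi * freq * t)
          left, right = (1.0 - x_norm), x_norm                # interaural level difference
          return np.column_stack([left * tone, right * tone])

      def blob_position(prev_frame, curr_frame, thresh=20):
          """Normalized (x, y) centroid of the pixels that changed between two grayscale frames."""
          diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
          ys, xs = np.nonzero(diff)
          if xs.size == 0:
              return None
          return xs.mean() / curr_frame.shape[1], ys.mean() / curr_frame.shape[0]

      # Toy usage: an object appears in the upper-right quadrant of the image
      prev = np.zeros((100, 100), dtype=np.uint8)
      curr = prev.copy(); curr[10:20, 70:80] = 255
      pos = blob_position(prev, curr)
      audio = sonify_position(*pos)        # stereo buffer, e.g. for sounddevice.play(audio, 44100)
      print("position:", pos, "buffer shape:", audio.shape)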

  9. Pre-trained D-CNN models for detecting complex events in unconstrained videos

    Science.gov (United States)

    Robinson, Joseph P.; Fu, Yun

    2016-05-01

    Rapid event detection faces an emergent need to process large video collections; whether surveillance videos or unconstrained web videos, the ability to automatically recognize high-level, complex events is a challenging task. Motivated by pre-existing methods being complex, computationally demanding, and often non-replicable, we designed a simple system that is quick, effective and carries minimal overhead in terms of memory and storage. Our system is clearly described, modular in nature, replicable on any desktop, and demonstrated with extensive experiments, backed by insightful analysis of different Convolutional Neural Networks (CNNs), stand-alone and fused with others. With a large corpus of unconstrained, real-world video data, we examine the usefulness of different CNN models as feature extractors for modeling high-level events, i.e., pre-trained CNNs that differ in architecture, training data, and number of outputs. For each CNN, we use 1 fps samples from all training exemplars to train one-vs-rest SVMs for each event. To represent videos, frame-level features were fused using a variety of techniques, the best being to max-pool between predetermined shot boundaries and then average-pool to form the final video-level descriptor. Through extensive analysis, several insights were found on using pre-trained CNNs as off-the-shelf feature extractors for the task of event detection. Fusing SVMs of different CNNs revealed some interesting facts, with some combinations found to be complementary. It was concluded that no single CNN works best for all events, as some events are more object-driven while others are more scene-based. Our top performance resulted from learning event-dependent weights for different CNNs.
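
    The pooling-then-SVM stage of such a pipeline can be sketched independently of any particular CNN. In the snippet below the frame features are assumed to be precomputed arrays, and the shot boundaries, feature dimensionality and classifier settings are illustrative rather than those used in the paper.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.multiclass import OneVsRestClassifier

      def video_descriptor(frame_feats, shot_bounds):
          """Max-pool CNN frame features within each shot, then average-pool across shots."""
          shots, edges = [], [0] + list(shot_bounds) + [len(frame_feats)]
          for a, b in zip(edges[:-1], edges[1:]):
              if b > a:
                  shots.append(frame_feats[a:b].max(axis=0))   # shot-level max-pooling
          return np.mean(shots, axis=0)                        # video-level average-pooling

      # Toy usage: 6 videos, 30 frames each, 128-D frame features, 3 event classes
      rng = np.random.default_rng(4)
      X = np.array([video_descriptor(rng.random((30, 128)), shot_bounds=[10, 20]) for _ in range(6)])
      y = np.array([0, 1, 2, 0, 1, 2])
      clf = OneVsRestClassifier(LinearSVC(C=1.0)).fit(X, y)    # one-vs-rest linear SVMs per event
      print("predicted events:", clf.predict(X))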

  10. A new approach for overlay text detection and extraction from complex video scene.

    Science.gov (United States)

    Kim, Wonjun; Kim, Changick

    2009-02-01

    Overlay text brings important semantic clues in video content analysis such as video information retrieval and summarization, since the content of the scene or the editor's intention can be well represented by using inserted text. Most of the previous approaches to extracting overlay text from videos are based on low-level features, such as edge, color, and texture information. However, existing methods experience difficulties in handling texts with various contrasts or inserted in a complex background. In this paper, we propose a novel framework to detect and extract the overlay text from the video scene. Based on our observation that there exist transient colors between inserted text and its adjacent background, a transition map is first generated. Then candidate regions are extracted by a reshaping method and the overlay text regions are determined based on the occurrence of overlay text in each candidate. The detected overlay text regions are localized accurately using the projection of overlay text pixels in the transition map and the text extraction is finally conducted. The proposed method is robust to different character size, position, contrast, and color. It is also language independent. Overlay text region update between frames is also employed to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.

  11. Entropic Movement Complexity Reflects Subjective Creativity Rankings of Visualized Hand Motion Trajectories

    Science.gov (United States)

    Peng, Zhen; Braun, Daniel A.

    2015-01-01

    In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated to the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also test whether these complexity measures correlate with individual creativity in divergent thinking tasks, but do not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
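
    One of the simpler entropic measures alluded to above, the entropy of a symbolized trajectory, can be sketched as follows. This is an illustration only: the trajectory is discretized by the sign of the velocity in x and y and the Shannon entropy of the symbol distribution is reported, whereas the study works with richer symbolizations and with distributions over whole trajectories.

      import numpy as np

      def symbolize(traj):
          """Map a 2-D trajectory of shape (N, 2) to symbols 0..3 from the sign of the velocity in x and y."""
          v = np.diff(traj, axis=0)
          return (v[:, 0] > 0).astype(int) * 2 + (v[:, 1] > 0).astype(int)

      def shannon_entropy(symbols, n_symbols=4):
          """Shannon entropy (bits) of the empirical symbol distribution."""
          counts = np.bincount(symbols, minlength=n_symbols).astype(float)
          p = counts / counts.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      # Toy usage: a straight line is far less complex than a random walk
      t = np.linspace(0, 1, 200)
      line = np.column_stack([t, t])
      walk = np.cumsum(np.random.default_rng(5).standard_normal((200, 2)), axis=0)
      print("line entropy:", shannon_entropy(symbolize(line)))   # about 0 bits: one repeated symbol
      print("walk entropy:", shannon_entropy(symbolize(walk)))   # close to 2 bits: all four symbols occur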

  12. Fast and Simple Motion Tracking Unit with Motion Estimation

    Institute of Scientific and Technical Information of China (English)

    Hyeon-cheol YANG; Yoon-sup KIM; Seong-soo LEE; Sang-keun OH; Sung-hwa KIM; Doo-won CHOI

    2010-01-01

    A surveillance system using an active tracking camera has no distance limitation on its surveillance range, in contrast to supersonic or sound sensors. However, complex motion tracking algorithms require a huge amount of computation, which often calls for expensive DSPs or embedded processors. This paper proposes a novel motion tracking unit based on difference images for fast and simple motion tracking. It uses a configuration factor to avoid noise and inaccuracy. It reduces the required computation significantly, so that it can be implemented on a Field Programmable Gate Array (FPGA) instead of expensive Digital Signal Processors (DSPs). It also performs the calculations for motion estimation in video compression, so it can be easily combined with a surveillance system that has video recording functionality based on video compression. The proposed motion tracking system, implemented on a Xilinx Virtex-4 FPGA, can process 48 frames per second, and the operating frequency of the motion tracking unit is 100 MHz.

  13. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    Science.gov (United States)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
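    The context-tree weighting technique mentioned above builds on the Krichevsky-Trofimov (KT) estimator for binary symbols. The sketch below shows only that building block, in Python, as an illustration under our own assumptions; it is not the authors' integrated CABAC modification.

```python
class KTEstimator:
    """Krichevsky-Trofimov estimator for a binary source: the building
    block used at each node of a context-tree weighting model."""

    def __init__(self):
        self.counts = [0, 0]  # counts of observed 0s and 1s

    def prob(self, bit):
        """Sequential KT probability of the next bit."""
        total = self.counts[0] + self.counts[1]
        return (self.counts[bit] + 0.5) / (total + 1.0)

    def update(self, bit):
        self.counts[bit] += 1

# Feed a biased bit stream and watch the estimate approach the true bias.
import random
random.seed(0)
est = KTEstimator()
for _ in range(1000):
    bit = 1 if random.random() < 0.8 else 0
    est.update(bit)
print(round(est.prob(1), 3))  # close to 0.8
```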

  14. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information from video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the extended Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, the proposed framework performs better.

  15. A Common Framework for the Analysis of Complex Motion? Standstill and Capture Illusions

    Directory of Open Access Journals (Sweden)

    Max Reinhard Dürsteler

    2014-12-01

    Full Text Available A series of illusions was created by presenting stimuli which consisted of two overlapping surfaces, each defined by textures of independent visual features (i.e., modulation of luminance, color, depth, etc.). When presented concurrently with a stationary 2-D luminance texture, observers often fail to perceive the motion of an overlapping stereoscopically defined depth texture. This illusory motion standstill arises due to a failure to represent two independent surfaces (one for the luminance texture and one for the depth texture) and motion transparency (the ability to perceive the motion of both surfaces simultaneously). Instead, the stimulus is represented as a single non-transparent surface taking on the stationary nature of the luminance-defined texture. By contrast, if it is the 2-D luminance-defined texture that is in motion, observers often perceive the stationary depth texture as also moving. In this latter case, the failure to represent the motion transparency of the two textures gives rise to illusory motion capture. Our past work demonstrated that the illusions of motion standstill and motion capture can occur for depth textures that are rotating, expanding/contracting, or spiraling. Here I extend these findings to include stereo-shearing. More importantly, it is the motion (or lack thereof) of the luminance texture that determines how the motion of the depth texture will be perceived. This observation is strongly in favor of a single pathway for complex motion that operates on luminance-defined texture motion signals only. In addition, these complex motion illusions arise with chromatically defined textures with smooth transitions between their colors. This suggests that, with respect to color motion perception, the complex-motion pathway is only able to accurately process signals from isoluminant colored textures with sharp transitions between colors, and/or moving at high speeds, which is conceivable if it relies on inputs from a hypothetical dual

  16. Tactical 3D Model Generation using Structure-From-Motion on Video from Unmanned Systems

    Science.gov (United States)

    2015-04-01

    and easily able to consume is growing dramatically. In this paper, we have presented one such method known as structure-from-motion (SfM). SfM takes in...

  17. Adaptive mode decision with residual motion compensation for distributed video coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren; Slowack, Jurgen

    2013-01-01

    mode in DVC. The adaptive mode decision is based not only on the quality of the key frames but also on the rate of the Wyner-Ziv (WZ) frames. To improve the noise distribution estimation for a more accurate mode decision, a residual motion compensation is proposed to estimate the current noise residue based on a previously...

  18. Complexity constrained rate-distortion optimization of sign language video using an objective intelligibility metric

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2008-01-01

    Sign language users are eager for the freedom and convenience of video communication over cellular devices. Compression of sign language video in this setting offers unique challenges. The low bitrates available make encoding decisions extremely important, while the power constraints of the device limit the encoder complexity. The ultimate goal is to maximize the intelligibility of the conversation given the rate-constrained cellular channel and power-constrained encoding device. This paper uses an objective measure of intelligibility, based on subjective testing with members of the Deaf community, for rate-distortion optimization of sign language video within the H.264 framework. Performance bounds are established by using the intelligibility metric in a Lagrangian cost function along with a trellis search to make optimal mode and quantizer decisions for each macroblock. The optimal QP values are analyzed and the unique structure of sign language is exploited in order to reduce complexity by three orders of magnitude relative to the trellis search technique with no loss in rate-distortion performance. Further reductions in complexity are made by eliminating rarely occurring modes in the encoding process. The low-complexity SL optimization technique increases the measured intelligibility by up to 3.5 dB at fixed rates, and reduces rate by as much as 60% at fixed levels of intelligibility with respect to a rate control algorithm designed for aesthetic distortion as measured by MSE.

  19. MATRIX BASED INDEXING TECHNIQUE FOR VIDEO DATA

    OpenAIRE

    2013-01-01

    Due to the increasing usage of media, video plays a central role as it supports various applications. Video is a particular medium that contains a complex collection of objects such as audio, motion, text, color, and pictures. Due to the rapid growth of this information, a video indexing process is mandatory for fast and effective retrieval. Many current indexing techniques fail to extract the needed image from the stored data set based on the user's query. Urgent attention in the fi...

  20. Parent-Driven Campaign Videos: An Analysis of the Motivation and Affect of Videos Created by Parents of Children With Complex Healthcare Needs.

    Science.gov (United States)

    Carter, Bernie; Bray, Lucy; Keating, Paula; Wilkinson, Catherine

    2017-09-15

    Caring for a child with complex health care needs places additional stress and time demands on parents. Parents often turn to their peers to share their experiences, gain support, and lobby for change; increasingly this is done through social media. The WellChild #notanurse_but is a parent-driven campaign that states its aim is to "shine a light" on the care parents, who are not nurses, have to undertake for their child with complex health care needs and to raise decision-makers' awareness of the gaps in service provision and support. This article reports on a study that analyzed the #notanurse_but parent-driven campaign videos. The purpose of the study was to consider the videos in terms of the range, content, context, perspectivity (motivation), and affect (sense of being there) in order to inform the future direction of the campaign. Analysis involved repeated viewing of a subset of 30 purposively selected videos and documenting our analysis on a specifically designed data extraction sheet. Each video was analyzed by a minimum of 2 researchers. All but 2 of the 30 videos were filmed inside the home. A variety of filming techniques were used. Mothers were the main narrators in all but 1 set of videos. The sense of perspectivity was clearly linked to the campaign with the narration pressing home the reality, complexity, and need for vigilance in caring for a child with complex health care needs. Different clinical tasks and routines undertaken as part of the child's care were depicted. Videos also reported on a sense of feeling different than "normal families"; the affect varied among the researchers, ranging from strong to weaker emotional responses.

  1. Rate-prediction structure complexity analysis for multi-view video coding using hybrid genetic algorithms

    Science.gov (United States)

    Liu, Yebin; Dai, Qionghai; You, Zhixiang; Xu, Wenli

    2007-01-01

    Efficient exploitation of the temporal and inter-view correlation is critical to multi-view video coding (MVC), and the key lies in the design of the prediction chain structure according to the various patterns of correlation. In this paper, we propose a novel prediction structure model to design optimal MVC coding schemes, along with an in-depth tradeoff analysis between compression efficiency and prediction structure complexity for certain standard functionalities. Focusing on the representation of the entire set of possible chain structures rather than certain typical ones, the proposed model can give efficient MVC schemes that adaptively vary with the requirements of structure complexity and the video source characteristics (the number of views, the degrees of temporal and inter-view correlation). To handle the large-scale problem in model optimization, we deploy a hybrid genetic algorithm, which yields satisfactory results in the simulations.

  2. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.

  3. Lane Detection in Video-Based Intelligent Transportation Monitoring via Fast Extracting and Clustering of Vehicle Motion Trajectories

    Directory of Open Access Journals (Sweden)

    Jianqiang Ren

    2014-01-01

    Full Text Available Lane detection is a crucial process in video-based transportation monitoring systems. This paper proposes a novel method to detect the lane center via rapid extraction and high-accuracy clustering of vehicle motion trajectories. First, we use the activity map to automatically extract the road region, calibrate the dynamic camera, and set three virtual detecting lines. Secondly, the three virtual detecting lines and a local background model with traffic flow feedback are used to extract and group vehicle feature points in units of vehicles. Then, the feature point groups are described accurately by an edge-weighted dynamic graph and modified by a motion-similarity Kalman filter during the sparse feature point tracking. After obtaining the vehicle trajectories, a rough k-means incremental clustering with Hausdorff distance is designed to realize the rapid online extraction of the lane center with high accuracy. The use of rough sets effectively reduces the accuracy loss that results from trajectories that run irregularly. Experimental results prove that the proposed method can detect the lane center position efficiently, that the time required by subsequent tasks can be reduced noticeably, and that the safety of traffic surveillance systems can be enhanced significantly.
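    To illustrate the trajectory-clustering step in a hedged way, the sketch below (Python with SciPy; the greedy grouping and the threshold are our simplifications, not the paper's rough k-means incremental clustering) measures trajectory similarity with the symmetric Hausdorff distance and groups trajectories accordingly.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N, 2) trajectories."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def group_trajectories(trajectories, threshold):
    """Greedy grouping: assign each trajectory to the first group whose
    representative is within `threshold`, otherwise start a new group."""
    groups = []  # list of (representative, members)
    for traj in trajectories:
        for rep, members in groups:
            if hausdorff(traj, rep) < threshold:
                members.append(traj)
                break
        else:
            groups.append((traj, [traj]))
    return groups

# Two synthetic "lanes": straight trajectories offset in y.
lane1 = [np.c_[np.linspace(0, 100, 50), np.full(50, 10.0) + np.random.randn(50)] for _ in range(5)]
lane2 = [np.c_[np.linspace(0, 100, 50), np.full(50, 40.0) + np.random.randn(50)] for _ in range(5)]
groups = group_trajectories(lane1 + lane2, threshold=15.0)
print(len(groups))  # expected: 2 lane clusters
```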

  4. Adaptive multifoveation for low-complexity video compression with a stationary camera perspective

    Science.gov (United States)

    Sankaran, Sriram; Ansari, Rashid; Khokhar, Ashfaq A.

    2005-03-01

    In the human visual system, the spatial resolution of a scene under view decreases uniformly at points of increasing distance from the point of gaze, also called the foveation point. This phenomenon is referred to as foveation and has been exploited in foveated imaging to allocate bits in image and video coding according to the spatially varying perceived resolution. Several digital image processing techniques have been proposed in the past to realize foveated images and video. In most cases a single foveation point is assumed in a scene. Recently there has been significant interest in dynamic as well as multi-point foveation; however, the complexity involved in identifying foveation points is significantly high in the proposed approaches. In this paper, an adaptive multi-point foveation technique for video data based on the concept of regions of interest (ROIs) is proposed and its performance is investigated. The points of interest are assumed to be the centroids of moving objects and are dynamically determined by the proposed foveation algorithm. A fast algorithm for implementing region-based multi-foveation processing is proposed. The proposed adaptive multi-foveation fully integrates with existing video codec standards in both the spatial and DCT domains.

  5. Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space.

    Science.gov (United States)

    Dong, Xiao-Bin; Kim, Seung-Cheol; Kim, Eun-Soo

    2014-07-14

    A new three-directional motion compensation-based novel-look-up-table (3DMC-NLUT) based on its shift-invariance and thin-lens properties, is proposed for video hologram generation of three-dimensional (3-D) objects moving with large depth variations in space. The input 3-D video frames are grouped into a set of eight in sequence, where the first and remaining seven frames in each set become the reference frame (RF) and general frames (GFs), respectively. Hence, each 3-D video frame is segmented into a set of depth-sliced object images (DOIs). Then x, y, and z-directional motion vectors are estimated from blocks and DOIs between the RF and each of the GFs, respectively. With these motion vectors, object motions in space are compensated. Then, only the difference images between the 3-directionally motion-compensated RF and each of the GFs are applied to the NLUT for hologram calculation. Experimental results reveal that the average number of calculated object points and the average calculation time of the proposed method have been reduced compared to those of the conventional NLUT, TR-NLUT and MPEG-NLUT by 38.14%, 69.48%, and 67.41% and 35.30%, 66.39%, and 64.46%, respectively.

  6. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Nsiri Benayad

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two or three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixtures. Experiments performed on sequences of real images have given good results, with a PSNR gain reaching 3 dB.
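    A minimal sketch of the block-modeling idea, under our own simplifying assumptions: scikit-learn's GaussianMixture is fitted to the pixel intensities of each block, and a Mahalanobis-style distance between the dominant components of two blocks serves as an illustrative similarity (not the paper's exact extended Mahalanobis distance between cluster sets).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def block_gmm(block, n_components=2):
    """Fit a Gaussian mixture to the pixel intensities of an image block."""
    samples = block.reshape(-1, 1).astype(float)
    return GaussianMixture(n_components=n_components, random_state=0).fit(samples)

def mahalanobis_between_means(gmm_a, gmm_b):
    """Illustrative block similarity: Mahalanobis-style distance between the
    dominant components of two block mixtures (not the paper's exact metric)."""
    ia, ib = np.argmax(gmm_a.weights_), np.argmax(gmm_b.weights_)
    diff = gmm_a.means_[ia] - gmm_b.means_[ib]
    pooled_cov = 0.5 * (gmm_a.covariances_[ia] + gmm_b.covariances_[ib])
    return float(np.sqrt(diff @ np.linalg.inv(pooled_cov) @ diff))

# Compare a block with a slightly brightened copy of itself versus a random block.
rng = np.random.default_rng(0)
block = rng.integers(0, 255, size=(16, 16))
print(mahalanobis_between_means(block_gmm(block), block_gmm(block + 2)))
print(mahalanobis_between_means(block_gmm(block), block_gmm(rng.integers(0, 255, size=(16, 16)))))
```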

  7. Micro-motion Signature Extraction Method for Wideband Radar Based on Complex Image OMP Decomposition

    Directory of Open Access Journals (Sweden)

    Luo Ying

    2012-12-01

    Full Text Available In order to extract the micro-motion signatures under conditions of Migration Through Range Cells (MTRC) of micro-motional scatterers and azimuthal undersampling in wideband radar, a method based on the Orthogonal Matching Pursuit (OMP) decomposition of the complex image is proposed. By making use of the amplitude and phase information of the “range-slow-time” image, a set of micro-Doppler signal atoms is constructed in the complex image space. The OMP algorithm in vector space is then extended to the complex image space to obtain the micro-motion parameters. Simulations demonstrate that the proposed method can extract the micro-motion signatures when MTRC of micro-motional scatterers occurs, and can also work well when the sampling rate is lower than the Nyquist sampling rate.
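    For readers unfamiliar with OMP itself, the following toy example (Python with scikit-learn) shows the greedy sparse decomposition step on a random real-valued dictionary; the paper's atoms are complex-valued micro-Doppler signals in the range-slow-time image space, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Build a random dictionary and a signal that is a sparse combination of its atoms.
rng = np.random.default_rng(1)
n_atoms, signal_len, sparsity = 50, 128, 3
dictionary = rng.standard_normal((signal_len, n_atoms))
true_coefs = np.zeros(n_atoms)
true_coefs[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
signal = dictionary @ true_coefs

# OMP greedily selects the atoms that best explain the signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
omp.fit(dictionary, signal)
recovered = np.flatnonzero(omp.coef_)
print(sorted(recovered), sorted(np.flatnonzero(true_coefs)))  # the supports should match
```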

  8. Geometric reasoning about damped and forced harmonic motion in the complex plane

    Science.gov (United States)

    Close, Hunter G.

    2015-09-01

    Complex-valued functions are commonly used to solve differential equations for one-dimensional motion of a harmonic oscillator with linear damping, a sinusoidal driving force, or both. However, the usual approach treats complex functions as an algebraic shortcut, neglecting geometrical representations of those functions and discarding imaginary parts. This article emphasizes the benefit of using diagrams in the complex plane for such systems, in order to build intuition about harmonic motion and promote spatial reasoning and the use of varied representations. Examples include the analysis of exact time sequences of various kinematic events in damped harmonic motion, sense-making about the phase difference between a driving force and the resulting motion, and understanding the discrepancy between the resonant frequency and the natural undamped frequency for forced, damped harmonic motion. The approach is suitable for supporting instruction in undergraduate upper-division classical mechanics.
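    As a worked illustration of the complex-plane view discussed above, the standard underdamped solution can be written as the real part of a phasor spiraling toward the origin (a textbook result, not specific to the article):

```latex
% Underdamped oscillator: the physical displacement is the real part of a
% complex phasor spiraling toward the origin in the complex plane.
\[
  \ddot{x} + 2\beta\dot{x} + \omega_0^2 x = 0,
  \qquad
  z(t) = A\, e^{-\beta t}\, e^{\,i(\omega_d t + \phi)},
  \qquad
  x(t) = \operatorname{Re} z(t) = A e^{-\beta t}\cos(\omega_d t + \phi),
\]
\[
  \omega_d = \sqrt{\omega_0^2 - \beta^2}, \qquad \beta < \omega_0 .
\]
```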

  9. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...

  10. Representation of planar motion of complex joints by means of rolling pairs. Application to neck motion.

    Science.gov (United States)

    Page, Alvaro; de Rosario, Helios; Gálvez, José A; Mata, Vicente

    2011-02-24

    We propose to model planar movements between two human segments by means of rolling-without-slipping kinematic pairs. We compute the path traced by the instantaneous center of rotation (ICR) as seen from the proximal and distal segments, thus obtaining the fixed and moving centrodes, respectively. The joint motion is then represented by the rolling-without-slipping of one centrode on the other. The resulting joint kinematic model is based on the real movement and accounts for nonfixed axes of rotation; therefore it could improve current models based on revolute pairs in those cases where joint movement implies displacement of the ICR. Previous authors have used the ICR to characterize human joint motion, but they only considered the fixed centrode. Such an approach is not adequate for reproducing motion because the fixed centrode by itself does not convey information about body position. The combination of the fixed and moving centrodes gathers the kinematic information needed to reproduce the position and velocities of moving bodies. To illustrate our method, we applied it to the flexion-extension movement of the head relative to the thorax. The model provides a good estimation of motion both for position variables (mean R(pos)=0.995) and for velocities (mean R(vel)=0.958). This approach is more realistic than other models of neck motion based on revolute pairs, such as the dual-pivot model. The geometry of the centrodes can provide some information about the nature of the movement. For instance, the ascending and descending curves of the fixed centrode suggest a sequential movement of the cervical vertebrae. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. The Implementation of Mirror-Image Effect in MPEG-2 Compressed Video

    Institute of Scientific and Technical Information of China (English)

    NI Qiang; ZHOU Lei; ZHANG Wen-jun

    2005-01-01

    Straightforward techniques for spatial-domain digital video editing (DVE) of compressed video via decompression and recompression are computationally expensive. In this paper, a novel algorithm is proposed for mirror-image special effect editing in compressed video without full-frame decompression and motion estimation. The results show that, with the reduced computational complexity, the quality of video edited in the compressed domain remains close to the quality of video edited in the uncompressed domain at the same bit rate.

  12. Motion

    CERN Document Server

    Graybill, George

    2007-01-01

    Take the mystery out of motion. Our resource gives you everything you need to teach young scientists about motion. Students will learn about linear, accelerating, rotating and oscillating motion, and how these relate to everyday life - and even the solar system. Measuring and graphing motion is easy, and the concepts of speed, velocity and acceleration are clearly explained. Reading passages, comprehension questions, color mini posters and lots of hands-on activities all help teach and reinforce key concepts. Vocabulary and language are simplified in our resource to make them accessible to str

  13. Time motion and video analysis of classical ballet and contemporary dance performance.

    Science.gov (United States)

    Wyon, M A; Twitchett, E; Angioi, M; Clarke, F; Metsios, G; Koutedakis, Y

    2011-11-01

    Video analysis has become a useful tool in the preparation for sport performance, and its use has highlighted the different physiological demands of seemingly similar sports and playing positions. The aim of the current study was to examine the performance differences between classical ballet and contemporary dance. In total, 93 dance performances (48 ballet and 45 contemporary) were analysed for exercise intensity, changes in direction, and specific discrete skills (e.g., jumps, lifts). Results revealed significant differences between the two dance forms for exercise intensity, with contemporary dance featuring more continuous moderate exercise intensities (27 s x min(-1)). These differences have implications for the energy systems utilised during performance, with ballet potentially stressing the anaerobic system more than contemporary dance. The observed high rates of the discrete skills in ballet (5 jumps x min(-1); 2 lifts x min(-1)) can cause local muscular damage, particularly in relatively weaker individuals. In conclusion, classical ballet and contemporary dance performances are as significantly different in the underlying physical demands placed on their performers as in the artistic aspects of the choreography.

  14. Using video modeling to teach complex social sequences to children with autism.

    Science.gov (United States)

    Nikopoulos, Christos K; Keenan, Mickey

    2007-04-01

    This study, which comprised two experiments, was designed to teach complex social sequences to children with autism. Experimental control was achieved by collecting data using a within-system design methodology. Across a number of conditions, children were taken to a room to view one of four short videos of two people engaging in a simple sequence of activities. Then, each child's behavior was assessed in the same room. Results showed that this video modeling procedure enhanced the social initiation skills of all children. It also facilitated reciprocal play engagement and imitative responding of a sequence of behaviors in which social initiation was not included. These behavior changes generalized across peers and were maintained after 1- and 2-month follow-up periods.

  15. Improved spatio-SNR FGS video coding scheme using motion compensation on enhancement-layer

    Institute of Scientific and Technical Information of China (English)

    Jiang Tao; Zhang Zhaoyang; Ma Ran; Shi Xuli

    2006-01-01

    MPEG-4 fine-granularity-scalable (FGS) technology is an effective solution for coping with varying network bandwidth because FGS provides very fine-grained SNR scalability. However, this scalability is obtained at the expense of coding efficiency. A one-loop FGS structure based on motion compensation (MC + FGS) is presented to improve the coding efficiency of the base FGS. The paper then describes and discusses the hybrid spatial-SNR FGS (FGSS) structure that extends the SNR scalability of FGS to spatial scalability (spatio-SNR scalability). The FGSS structure inherits the low coding efficiency of the FGS structure. Combining the MC + FGS structure with the FGSS structure yields an MC + FGSS structure that gains the advantages of both structures while counteracting their defects. Experimental results prove that the MC + FGSS structure not only obtains fine-grained spatio-SNR scalability but also achieves high coding efficiency.

  16. Implementation An image processing technique for video motion analysis during the gait cycle canine

    Science.gov (United States)

    López, G.; Hernández, J. O.

    2017-01-01

    Nowadays, analyses of movement, and more specifically of gait, are no longer exclusive to our own species. Technological advances and engineering implementations have been combined to obtain data and information regarding the gait cycle in other animal species. The aim of this paper is to analyze the canine gait in order to obtain results that describe the behavior of the limbs during the gait cycle. The research was performed in four stages: 1. dog training, covering the steps of adaptation and trust; 2. filming of the gait cycle; 3. data acquisition, in order to obtain values that describe the canine motion cycle; and 4. results, obtaining the kinematic variables involved in the gait, which are essential for determining the behavior of the limbs as well as for the development of prostheses or orthotics. This project was carried out with conventional equipment and using easily accessible computational tools.

  17. Tidal Motion in a Complex Inlet and Bay System, Ponce de Leon Inlet, Florida

    Science.gov (United States)

    2000-01-01

    ...investigated in Ponce de Leon (Ponce) Inlet, Florida, and its bay channels through a 10-week data-collection campaign and two-dimensional numerical... (Adele Militello and Gary A. Zarillo, Summer 2000).

  18. A High-Throughput Hardware Architecture for the H.264/AVC Half-Pixel Motion Estimation Targeting High-Definition Videos

    Directory of Open Access Journals (Sweden)

    Marcel M. Corrêa

    2011-01-01

    Full Text Available This paper presents a high-performance hardware architecture for the H.264/AVC Half-Pixel Motion Estimation that targets high-definition videos. This design can process very high-definition videos like QHDTV (3840×2048) in real time (30 frames per second). It also presents an optimized arrangement of interpolated samples, which is the main key to achieving an efficient search. The interpolation process is interleaved with the SAD calculation and comparison, allowing the high throughput. The architecture was fully described in VHDL, synthesized for two different Xilinx FPGA devices, and it achieved very good results when compared to related works.
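    To make the matching criterion concrete, the sketch below (Python with NumPy) computes the sum of absolute differences (SAD) over integer-pel candidate displacements; the half-pel interpolation and the hardware pipeline described in the paper are omitted, and the search function is our own illustration.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum()

def full_search(ref_frame, block, top, left, search_range=4):
    """Integer-pel full search: return the displacement minimizing SAD
    (a half-pel refinement would interpolate around this result)."""
    h, w = block.shape
    best = (0, 0, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + h <= ref_frame.shape[0] and x + w <= ref_frame.shape[1]:
                cost = sad(block, ref_frame[y:y + h, x:x + w])
                if cost < best[2]:
                    best = (dy, dx, cost)
    return best

# Toy frames: the content moved by (2, -1) between reference and current frame.
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, size=(64, 64))
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))
print(full_search(ref, cur[16:32, 16:32], 16, 16))  # best displacement (-2, 1) with zero cost
```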

  19. Robust recovery of human motion from video using Kalman filters and virtual humans.

    Science.gov (United States)

    Cerveri, P; Pedotti, A; Ferrigno, G

    2003-08-01

    In sport science, as in clinical gait analysis, optoelectronic motion capture systems based on passive markers are widely used to recover human movement. By processing the corresponding image points, as recorded by multiple cameras, the human kinematics is resolved through multistage processing involving spatial reconstruction, trajectory tracking, joint angle determination, and derivative computation. Key problems with this approach are that marker data can be indistinct, occluded or missing from certain cameras, that phantom markers may be present, and that both 3D reconstruction and tracking may fail. In this paper, we present a novel technique, based on state space filters, that directly estimates the kinematical variables of a virtual mannequin (biomechanical model) from 2D measurements, that is, without requiring 3D reconstruction and tracking. Using Kalman filters, the configuration of the model in terms of joint angles, first and second order derivatives is automatically updated in order to minimize the distances, as measured on TV-cameras, between the 2D measured markers placed on the subject and the corresponding back-projected virtual markers located on the model. The Jacobian and Hessian matrices of the nonlinear observation function are computed through a multidimensional extension of Stirling's interpolation formula. Extensive experiments on simulated and real data confirmed the reliability of the developed system that is robust against false matching and severe marker occlusions. In addition, we show how the proposed technique can be extended to account for skin artifacts and model inaccuracy.
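    As a much-simplified, hypothetical illustration of the filtering idea (a constant-velocity model on a single 2-D point rather than the paper's full biomechanical model and 2-D back-projection), the following Python snippet runs the standard Kalman predict/update cycle on noisy marker positions.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Track a 2-D point with a constant-velocity Kalman filter.
    State: [x, y, vx, vy]; measurements: (N, 2) noisy positions."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt             # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0     # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)               # process / measurement noise
    x, P = np.zeros(4), np.eye(4)
    estimates = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (z - H @ x)                       # update with the innovation
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)

# Noisy observations of a marker moving diagonally at constant speed.
t = np.arange(50)
truth = np.c_[0.5 * t, 0.3 * t]
noisy = truth + np.random.randn(50, 2)
smoothed = kalman_track(noisy)
print(np.abs(smoothed[-10:] - truth[-10:]).mean())  # typically smaller than the raw noise level
```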

  20. [Preliminary efficacy of video-assisted anal fistula treatment for complex anal fistula].

    Science.gov (United States)

    Liu, Hailong; Xiao, Yihua; Zhang, Yong; Pan, Zhihui; Peng, Jian; Tang, Wenxian; Li, Ajian; Zhou, Lulu; Yin, Lu; Lin, Moubin

    2015-12-01

    To evaluate the preliminary efficacy of video-assisted anal fistula treatment (VAAFT) for complex anal fistula, clinical data of 11 consecutive patients with complex anal fistula undergoing VAAFT in our department from May to July 2015 were reviewed. VAAFT was performed to manage the fistula under endoscopy, without cutting or resection. VAAFT was successfully performed in all 11 patients. The internal ostium was closed using mattress sutures in 10 cases and an Endo-GIA stapler in 1 case. The mean operative time was (42.0±12.4) min, and the mean hospital stay was (4.1±1.5) d. Complications included bleeding in 1 case and perianal infection in 1 case. After 1 to 3.2 months of follow-up, the success rate was 72.7% (8/11), and no fecal incontinence was observed. Video-assisted anal fistula treatment is an effective, safe and minimally invasive surgical procedure for complex anal fistula with preservation of anal sphincter function.

  1. Complex motion tomography of the sacroiliac joint. An anatomical and roentgenological study

    Energy Technology Data Exchange (ETDEWEB)

    Dijkstra, P.F.; Vleeming, A.; Stoeckart, R.

    1989-06-01

    To find a better method for diagnosing sacroiliac (SI) joint disease, an anatomical approach was combined with conventional roentgenology, complex motion tomography and computed tomography. Complex motion tomography is suggested as the method of choice in the investigation of the SI-joint. Because of its complex (sinusoidal) form, the dorsal portion of the joint has to be tomographed in frontal projection and the middle and ventral portions in oblique projection. In 56 patients, referred for probable ankylosing spondylitis, 72 SI joints were investigated. Based on plain radiography six and on frontal tomography five SI joints were diagnosed as normal. However, based on oblique tomography 31 joints were diagnosed as normal. (orig.).

  2. Interactive multiview video system with non-complex navigation at the decoder

    CERN Document Server

    Maugey, Thomas

    2012-01-01

    Multiview video with interactive and smooth view switching at the receiver is a challenging application with several issues in terms of effective use of storage and bandwidth resources, reactivity of the system, quality of the viewing experience and system complexity. The classical decoding system for generating virtual views first projects a reference or encoded frame to a given viewpoint and then fills in the holes due to potential occlusions. This last step still constitutes a complex operation with specific software or hardware at the receiver and requires a certain quantity of information from the neighboring frames for ensuring consistency between the virtual images. In this work we propose a new approach that shifts most of the burden due to interactivity from the decoder to the encoder, by anticipating the navigation of the decoder and sending auxiliary information that guarantees temporal and inter-view consistency. This leads to an additional cost in terms of transmission rate and storage, which we m...

  3. Softening the Complexity of Entropic Motion on Curved Statistical Manifolds

    CERN Document Server

    Cafaro, Carlo; Lupo, Cosmo; Mancini, Stefano

    2011-01-01

    We study the information geometry and the entropic dynamics of a 3D Gaussian statistical model. We then compare our analysis to that of a 2D Gaussian statistical model obtained from the higher-dimensional model via introduction of an additional information constraint that resembles the quantum mechanical canonical minimum uncertainty relation. We show that the chaoticity (temporal complexity) of the 2D Gaussian statistical model, quantified by means of the Information Geometric Entropy (IGE) and the Jacobi vector field intensity, is softened with respect to the chaoticity of the 3D Gaussian statistical model.

  4. Topological complexity of motion planning in projective product spaces

    CERN Document Server

    Gonzalez, Jesus; Torres-Giese, Enrique; Xicotencatl, Miguel

    2012-01-01

    We study Farber's topological complexity (TC) of Davis' projective product spaces (PPS's). We show that, in many non-trivial instances, the TC of PPS's coming from at least two sphere factors is (much) lower than the dimension of the manifold. This is in high contrast with the known situation for (usual) real projective spaces for which, in fact, the Euclidean immersion dimension and TC are two facets of the same problem. Low TC-values have been observed for infinite families of non-simply connected spaces only for H-spaces, for finite complexes whose fundamental group has cohomological dimension not exceeding 2, and now in this work for infinite families of PPS's. We discuss general bounds for the TC (and the Lusternik-Schnirelmann category) of PPS's, and compute these invariants for specific families of such manifolds. Some of our methods involve the use of an equivariant version of TC. We also give a characterization of the Euclidean immersion dimension of PPS's through generalized concepts of axial maps a...

  5. Video classification for video quality prediction

    Institute of Scientific and Technical Information of China (English)

    LIU Yu-xin; KURCEREN Ragip; BUDHIA Udit

    2006-01-01

    In this paper we propose a novel method for video quality prediction using video classification. In essence, our approach can serve two goals: (1) to measure the video quality of compressed video sequences without referencing the original uncompressed videos, i.e., to realize No-Reference (NR) video quality evaluation; (2) to predict quality scores for uncompressed video sequences at various bitrates without actually encoding them. The use of our approach can help realize video streaming with ideal Quality of Service (QoS). Our approach is a low-complexity solution, which is especially suitable for mobile video streaming where the resources at the handsets are scarce.

  6. Improved motion contrast and processing efficiency in OCT angiography using complex-correlation algorithm

    Science.gov (United States)

    Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng

    2016-02-01

    The complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both the intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in massive computational load. To mitigate such a problem, in this work, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm is demonstrated using both flow phantom and live animal experiments.
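    A hedged sketch of the underlying contrast mechanism: the magnitude of the normalized inter-frame complex correlation is high for static tissue and drops where flow decorrelates the signal, so one minus the correlation can serve as a flow contrast. The formula below is a generic illustration in Python/NumPy, not the authors' exact algorithm.

```python
import numpy as np

def complex_correlation(frame_a, frame_b, eps=1e-12):
    """Magnitude of the normalized inter-frame complex correlation.
    Static tissue gives values near 1; decorrelated flow gives lower values."""
    num = np.abs(np.sum(frame_a * np.conj(frame_b)))
    den = np.sqrt(np.sum(np.abs(frame_a) ** 2) * np.sum(np.abs(frame_b) ** 2)) + eps
    return num / den

rng = np.random.default_rng(0)
static = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
# A repeated static frame (up to a global phase) correlates strongly; an independent frame does not.
print(complex_correlation(static, static * np.exp(1j * 0.1)))  # ~1.0
print(complex_correlation(static, rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))))  # ~0
```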

  7. Low-Complexity Error-Control Methods for Scalable Video Streaming

    Institute of Scientific and Technical Information of China (English)

    Zhijie Zhao; Jom Ostermann

    2012-01-01

    In this paper, low-complexity error-resilience and error-concealment methods for the scalable video coding (SVC) extension of H.264/AVC are described. At the encoder, multiple-description coding (MDC) is used as error-resilient coding. Balanced scalable multiple descriptions are generated by mixing the pre-encoded scalable bit streams. Each description is wholly decodable using a standard SVC decoder. A preprocessor can be placed before an SVC decoder to extract the packets from the highest-quality bit stream. At the decoder, error concealment involves using a lightweight decoder preprocessor to generate a valid bit stream from the available network abstraction layer (NAL) units when medium-grain scalability (MGS) layers are used. Modifications are made to the NAL unit header or slice header if some NAL units of MGS layers are lost. The number of additional packets that a decoder discards as a result of a packet loss is minimized. The proposed error-resilience and error-concealment methods require little computation, which makes them suitable for real-time video streaming. Experimental results show that the proposed methods significantly reduce quality degradation caused by packet loss.

  8. Adaptive GOP structure based on motion coherence

    Science.gov (United States)

    Ma, Yanzhuo; Wan, Shuai; Chang, Yilin; Yang, Fuzheng; Wang, Xiaoyu

    2009-08-01

    An adaptive Group of Pictures (GOP) structure is helpful for increasing the efficiency of video encoding by taking into account the characteristics of the video content. This paper proposes a method for adaptive GOP structure selection for video encoding based on motion coherence, which extracts key frames according to motion acceleration and assigns a coding type to each key and non-key frame correspondingly. Motion deviation is then used instead of motion magnitude in selecting the number of B frames. Experimental results show that the proposed method for adaptive GOP structure selection achieves a performance gain of 0.2-1 dB over a fixed GOP and has the advantage of better transmission resilience. Moreover, this method can be used in real-time video coding due to its low complexity.
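    A minimal sketch of the key-frame idea under our own assumptions (per-frame motion magnitudes are given, and a simple threshold on their second difference stands in for the paper's motion-coherence analysis):

```python
import numpy as np

def select_key_frames(motion_magnitude, accel_threshold):
    """Mark frames as key frames where the motion acceleration (second
    difference of the per-frame motion magnitude) exceeds a threshold.
    Frame 0 is always a key frame."""
    accel = np.abs(np.diff(motion_magnitude, n=2))
    keys = [0] + [i + 2 for i, a in enumerate(accel) if a > accel_threshold]
    return sorted(set(keys))

# A mostly smooth pan with a sudden motion change around frame 30.
motion = np.concatenate([np.linspace(1, 2, 30), np.linspace(8, 9, 30)])
print(select_key_frames(motion, accel_threshold=1.0))  # [0, 30, 31]: key frames around the change
```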

  9. Approximate reconstruction of continuous spatially complex domain motions by multialignment NMR residual dipolar couplings.

    Science.gov (United States)

    Fisher, Charles K; Al-Hashimi, Hashim M

    2009-05-07

    NMR spectroscopy is one of the most powerful techniques for studying the internal dynamics of biomolecules. Current formalisms approximate the dynamics using simple continuous motional models or models involving discrete jumps between a small number of states. However, no approach currently exists for interpreting NMR data in terms of continuous spatially complex motional paths that may feature more than one distinct maneuver. Here, we present an approach for approximately reconstructing spatially complex continuous motions of chiral domains using NMR anisotropic interactions. The key is to express Wigner matrix elements, which can be determined experimentally using residual dipolar couplings, as a line integral over a curve in configuration space containing an ensemble of conformations and to approximate the curve using a series of geodesic segments. Using this approach and five sets of synthetic residual dipolar couplings computed for five linearly independent alignment conditions, we show that it is theoretically possible to reconstruct salient features of a multisegment interhelical motional trajectory obtained from a 65 ns molecular dynamics simulation of a stem-loop RNA. Our study shows that the 3-D atomic reconstruction of complex motions in biomolecules is within experimental reach.

  10. Using video-based observation research methods in primary care health encounters to evaluate complex interactions.

    Science.gov (United States)

    Asan, Onur; Montague, Enid

    2014-01-01

    The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment, highlight important methodological considerations, and provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. We reviewed studies in the literature that used video methods in health care research, and we also drew on our own experience from the video studies we conducted in primary care settings. This paper highlights the benefits of using video techniques, such as multi-channel recording and video coding, and compares "unmanned" video recording with the traditional observation method in primary care research. We propose a list that can be followed step by step to conduct an effective video study in a primary care setting for a given problem. This paper also describes obstacles researchers should anticipate when using video recording methods in future studies. With new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilised as a data collection tool because of confidentiality and privacy issues. However, it has many benefits as opposed to traditional observations, and recent studies using video recording methods have introduced new research areas and approaches.

  11. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    Science.gov (United States)

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down spread of digital and tele-echocardiography. We validated echocardiographic video high-grade compression with the new Motion Pictures Expert Groups (MPEG)-4 algorithms with a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12 to 83 MB to 0.03 to 2.3 MB (1:1051-1:26 reduction ratios). Mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  12. Postural sway and gaze can track the complex motion of a visual target.

    Directory of Open Access Journals (Sweden)

    Vassilia Hatzitaki

    Full Text Available Variability is an inherent and important feature of human movement. This variability has a form exhibiting a chaotic structure. Visual feedback training using regular, predictive visual target motions does not take into account this essential characteristic of human movement, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the Anterior-Posterior (AP) and Medio-Lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown), and a periodic (sine) moving target while receiving online visual feedback about their performance. Postural sway, gaze and target motion were synchronously recorded and the degree of force-target and gaze-target coupling was quantified using spectral coherence and Cross-Approximate entropy. Analysis revealed that both force-target and gaze-target coupling were sensitive to the complexity of the visual stimulus motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than with the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than with the brown noise and sinusoidal stimulus motion. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking of a complex signal may provide a better stimulus to improve visuo-motor adaptation and learning in postural control.
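    For the coupling analysis, spectral coherence between a target motion and a tracking response can be computed as in the following illustrative Python snippet (scipy.signal.coherence; the signals and parameters are toy values, not the study's data).

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                      # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
target = np.sin(2 * np.pi * 0.4 * t)                                             # target motion
sway = 0.8 * np.sin(2 * np.pi * 0.4 * t - 0.5) + 0.3 * np.random.randn(t.size)   # delayed, noisy tracking

f, cxy = coherence(target, sway, fs=fs, nperseg=1024)
band = (f > 0.35) & (f < 0.45)
print(float(cxy[band].mean()))  # high coherence near the 0.4 Hz target frequency
```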

  13. Video-Assisted Anal Fistula Treatment (VAAFT) for Complex Anal Fistula: A Preliminary Evaluation in China.

    Science.gov (United States)

    Jiang, Hui-Hong; Liu, Hai-Long; Li, Zhen; Xiao, Yi-Hua; Li, A-Jian; Chang, Yi; Zhang, Yong; Lv, Liang; Lin, Mou-Bin

    2017-04-30

    BACKGROUND Although many attempts have been made to advance the treatment of complex anal fistula, it continues to be a difficult surgical problem. This study aimed to describe the novel technique of video-assisted anal fistula treatment (VAAFT) and our preliminary experiences using VAAFT with patients with complex anal fistula. MATERIAL AND METHODS From May 2015 to May 2016, 52 patients with complex anal fistula were treated with VAAFT at Yangpu Hospital of Tongji University School of Medicine, and the clinical data of these patients were reviewed. RESULTS VAAFT was performed successfully in all 52 patients. The median operation time was 55 minutes. Internal openings were identified in all cases; 50 were closed with sutures and 2 with staplers. Complications included perianal sepsis in 3 cases and bleeding in another 3 cases. Complete healing without recurrence was achieved in 44 patients (84.6%) after 9 months of follow-up. No fecal incontinence was observed. Furthermore, a significant improvement in Gastrointestinal Quality of Life Index (GIQLI) score was observed from preoperative baseline (mean, 85.5) to 3-month follow-up (mean, 105.4). CONCLUSIONS VAAFT appears to be a safe, minimally invasive technique for complex anal fistula with preservation of anal sphincter function.

  14. Rapid Motion Estimation Algorithm for Embedded Video Monitor System%嵌入式视频监控系统的快速运动估计算法

    Institute of Scientific and Technical Information of China (English)

    刘国繁; 曹少坤; 彭铁钢

    2009-01-01

    The time-consuming motion estimation computation creates great difficulties for real-time video encoding. To improve the real-time performance of surveillance video encoding, a rapid motion estimation algorithm for embedded video monitoring systems is proposed. Based on the characteristic that the monitored background is relatively fixed, the algorithm uses multi-level early-termination criteria, predicts the search starting point from the spatio-temporal correlation of motion vectors, and then searches with an improved rood search pattern. Experiments show that, compared with the adaptive rood search algorithm, the proposed algorithm achieves a higher search speed with only a slight decrease in average PSNR, which matches the real-time priority principle of embedded video monitoring systems.
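    A simplified sketch of the search strategy described above (Python; the cross-shaped pattern, the early-termination threshold, and the use of a predicted starting point are illustrative stand-ins for the algorithm's exact rules):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def rood_search(ref, block, top, left, start=(0, 0), max_range=7, stop_thresh=0):
    """Cross-shaped (rood) pattern search around a predicted starting point,
    with a simple early-termination threshold on the SAD cost."""
    h, w = block.shape

    def cost(dy, dx):
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
            return float("inf")
        return sad(block, ref[y:y + h, x:x + w])

    best_dy, best_dx = start
    best_cost = cost(best_dy, best_dx)
    while best_cost > stop_thresh:
        candidates = [(best_dy - 1, best_dx), (best_dy + 1, best_dx),
                      (best_dy, best_dx - 1), (best_dy, best_dx + 1)]
        improved = False
        for dy, dx in candidates:
            if max(abs(dy), abs(dx)) <= max_range:
                c = cost(dy, dx)
                if c < best_cost:
                    best_dy, best_dx, best_cost, improved = dy, dx, c, True
        if not improved:
            break
    return best_dy, best_dx, best_cost

# Toy frames: the content moved down by one row between reference and current frame.
rng = np.random.default_rng(2)
ref = rng.integers(0, 255, size=(64, 64))
cur = np.roll(ref, shift=1, axis=0)
print(rood_search(ref, cur[16:32, 16:32], 16, 16))  # (-1, 0, 0): one rood step reaches the match and terminates early
```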

  15. A retrospective study of diaphragmatic motion, pulmonary function, and quality-of-life following video-assisted thoracoscopic lobectomy in patients with nonsmall cell lung cancer

    Directory of Open Access Journals (Sweden)

    W Jiao

    2014-01-01

    Full Text Available Background: Diaphragmatic dysfunction and its negative physiologic consequences are less commonly reported in patients with lung cancer undergoing video-assisted thoracoscopic lobectomy. The aim of this study was to investigate the effects of this complication on pulmonary function and quality-of-life in patients following video-assisted thoracoscopic lobectomy. Objectives: The aim of this study was to investigate the potential benefits of normal diaphragmatic motion for pulmonary function and quality-of-life. Materials and Methods: A retrospective study was conducted in 64 patients with nonsmall cell lung cancer after video-assisted thoracoscopic lobectomy. The population was divided into group 1 (with diaphragmatic paralysis, n = 32) and group 2 (without diaphragmatic paralysis, n = 32) according to diaphragmatic motion at 6 months postoperatively. We then investigated the differences between the two groups in pulmonary function and quality-of-life. Results: (1) At 6 months after resection, the patients in group 1 had lost 25% of their preoperative forced expiratory volume in 1 s (FEV1) (P < 0.001), and the patients in group 2 had lost 15% of their preoperative FEV1 (P < 0.001). The other spirometric variables in group 1 were significantly worse than those of group 2 (P < 0.001). (2) The most frequently reported postoperative symptoms in both groups were fatigue, coughing, dyspnea, and thoracotomy pain. Of all the symptom scales, only the dyspnea scale showed a significant difference, with group 1 having a higher proportion and score than group 2. Conclusions: The present study indicates that unilateral diaphragmatic paralysis following video-assisted thoracoscopic lobectomy caused adverse effects on postoperative pulmonary function and quality-of-life.

  16. Comment on 'Finding viscosity of liquids from Brownian motion at students' laboratory' and 'Brownian motion using video capture'

    Energy Technology Data Exchange (ETDEWEB)

    Greczylo, Tomasz; Debowska, Ewa [Institute of Experimental Physics, Wroclaw University, pl. Maxa Borna 9, 50-204 Wroclaw (Poland)

    2007-09-15

    The authors make comments and remarks on the papers by Salmon et al (2002 Eur. J. Phys. 23 249-53) and their own (2005 Eur. J. Phys. 26 827-33) concerning Brownian motion in two-dimensional space. New, corrected results of calculations and measurements for students' experiments on finding the viscosity of liquids from Brownian motion are presented. (letters and comments)

  17. Using silent motion pictures to teach complex syntax to adult deaf readers.

    Science.gov (United States)

    Kelly, L

    1998-01-01

    This research tested whether silent motion pictures could be a source of contexts that fostered comprehension of relative clause and passive voice sentences during reading. These two syntactic structures are chronically difficult for some deaf readers. According to the instructional strategy, while subjects watched silent comedy stories, the video display intermittently focused attention on short segments of action and then called for a decision regarding which of two sentences printed in a workbook described the action segment. After this, a display on the video screen provided feedback on the accuracy of the decision. If successful here, this approach might be applied to other areas of competence in order to elevate the generally low level of reading performance of many deaf students. The study applied a single-subject design in order to measure sentence comprehension accuracy before and following use of the materials. The computerized testing procedure also measured sentence reading time, an index of attention use. Thus, these data allowed an examination of whether any increases in comprehension were associated with slower, more laborious rates of reading. The instructional approach was an indirect one sharing multiple aspects of whole language methodology, and the sample included deaf subjects at a variety of reading ability levels. This permitted examination of whether an indirect instructional approach could be successful with readers demonstrating relatively low reading ability. The central research question of the study was the following: 'Can this instructional method be effective with deaf readers?'

  18. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify...... consumption based on image coding standards. Scalability aspects were studied for distributed video coding as well. We have compared temporal scalability for distributed and scalable video coding and provided recommendations for the choice of one of these solutions based on the system requirements. Another...... of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on power-distortion trade-off. We proposed an approach...

  19. Planar Heliocentric Roto-Translatory Motion of a Spacecraft with a Solar Sail of Complex Shape

    Science.gov (United States)

    Kirpichnikov, S. N.; Kirpichnikova, E. S.; Polyakhova, E. N.; Shmyrov, A. S.

    1996-01-01

    A complete treatment of the general motion of rotation and translation of a solar-sail spacecraft is proposed for a non-flat sail of complex shape. The planar heliocentric roto-translatory motion is considered, and orbit-rotational coupling in the problem of attitude and orbital sail motion is investigated for a two-folding sail formed by two unequal reflective rectangular plates oriented at a right angle. The problem of orbit-rotational coupling is essentially a planar one: both sail plates are orthogonal to the orbital plane. The possibility of an uncontrolled interplanetary transfer with such a two-folding sail in its passive radiational orientation is established analytically from the point of view of orbit-rotational coupling. Optimal geometric proportions of this sail are found for minimum-time interplanetary transfers.

  20. Modeling on thermally induced coupled micro-motions of satellite with complex flexible appendages

    Directory of Open Access Journals (Sweden)

    Zhicheng Zhou

    2015-06-01

    Full Text Available To describe the characteristics of thermally induced coupled micro-motions more exactly, a numerical model is proposed for a satellite system consisting of a rigid body and complex flexible appendages. The coupled governing equations, including the effects of transient temperature differences, are formulated within the framework of the Lagrangian method based on finite element models of the flexible structures. The coupling between the attitude motion of the rigid body and the vibrations of the flexible attachments is addressed with explicit expressions. Thermally induced micro-motions are examined in detail for a simple satellite with a large solar panel under the thermal disturbance experienced when passing from the Earth's shadow into sunlight in Earth orbit. The results show that the thermal–mechanical performance of an on-orbit satellite can be well predicted by the proposed finite element model.

  1. Touching motion: rTMS on the human middle temporal complex interferes with tactile speed perception.

    Science.gov (United States)

    Basso, Demis; Pavan, Andrea; Ricciardi, Emiliano; Fagioli, Sabrina; Vecchi, Tomaso; Miniussi, Carlo; Pietrini, Pietro

    2012-10-01

    Brain functional and psychophysical studies have clearly demonstrated that visual motion perception relies on the activity of the middle temporal complex (hMT+). However, recent studies have shown that hMT+ also seems to be activated during tactile motion perception, suggesting that this visual extrastriate area is involved in the processing and integration of motion irrespective of the sensory modality. In the present study, we used repetitive transcranial magnetic stimulation (rTMS) to assess whether hMT+ plays a causal role in tactile motion processing. Blindfolded participants detected changes in the speed of a grid of tactile moving points with their finger (i.e. tactile modality). The experiment included three different conditions: a control condition with no TMS and two TMS conditions, i.e. hMT+-rTMS and posterior parietal cortex (PPC)-rTMS. Accuracies were significantly impaired during hMT+-rTMS but not in the other two conditions (No-rTMS or PPC-rTMS); moreover, thresholds for detecting speed changes were significantly higher in the hMT+-rTMS condition than in the control TMS conditions. These findings provide stronger evidence that the activity of the hMT+ area is involved in tactile speed processing, which may be consistent with the hypothesis of a supramodal role for that cortical region in motion processing.

  2. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    Science.gov (United States)

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
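    The following is a highly simplified sketch of the two-stage idea (a temporal recursive filter followed by a spatial Wiener stage). The paper's affine motion compensation, process-noise estimation, and correlation-model AWF are omitted, and scipy's generic Wiener filter stands in for the adaptive spatial stage; all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import wiener

def restore_sequence(frames, q=1e-4, r=1e-2):
    """Very simplified sketch of a temporal Kalman stage followed by a
    spatial Wiener stage (affine motion modeling and the correlation-based
    adaptive Wiener filter of the paper are not reproduced here)."""
    x = frames[0].astype(float)          # per-pixel state estimate
    p = np.full_like(x, 1.0)             # per-pixel estimate variance
    restored = []
    for z in frames:                     # z: noisy observed frame
        z = z.astype(float)
        p = p + q                        # predict (static background assumed)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with the new observation
        p = (1.0 - k) * p
        # spatial stage: Wiener filter reduces the residual noise
        restored.append(wiener(x, mysize=5))
    return restored

# usage sketch with synthetic noisy frames
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
frames = [clean + rng.normal(0, 0.1, clean.shape) for _ in range(10)]
out = restore_sequence(frames)
```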

  3. Video summarization and semantics editing tools

    Science.gov (United States)

    Xu, Li-Qun; Zhu, Jian; Stentiford, Fred

    2001-01-01

    This paper describes a video summarization and semantics editing tool that is suited for content-based video indexing and retrieval with appropriate human operator assistance. The whole system has been designed with a clear focus on the extraction and exploitation of motion information inherent in the dynamic video scene. The dominant motion information has been used explicitly for shot boundary detection, camera motion characterization, description of visual content variations, and key frame extraction. Various contributions have been made to ensure that the system works robustly with complex scenes and across different media types. A window-based graphical user interface has been designed to make interactive analysis and editing of semantic events and episodes very easy where appropriate.
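    As a loose illustration of automatic shot-boundary detection (the tool itself relies on dominant motion information rather than the simple intensity differences used here), a toy cut detector might look like the following sketch.

```python
import numpy as np

def detect_cuts(frames, z_thresh=3.0):
    """Toy shot-boundary detector: flag frames whose inter-frame change is a
    statistical outlier. Thresholds and the grayscale-difference measure are
    illustrative assumptions, not the paper's motion-based method."""
    diffs = np.array([np.mean(np.abs(b.astype(float) - a.astype(float)))
                      for a, b in zip(frames[:-1], frames[1:])])
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)   # standardize scores
    return [i + 1 for i, score in enumerate(z) if score > z_thresh]

# usage: detect_cuts(list_of_grayscale_frames) -> indices of likely cuts
```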

  4. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    Science.gov (United States)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancilliary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two- and three-dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The

  5. Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow.

    Science.gov (United States)

    Katsuyama, Narumi; Usui, Nobuo; Taira, Masato

    2016-01-01

    A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity.

  6. Amputation effects on the underlying complexity within transtibial amputee ankle motion

    Energy Technology Data Exchange (ETDEWEB)

    Wurdeman, Shane R., E-mail: shanewurdeman@gmail.com [Nebraska Biomechanics Core Facility, University of Nebraska at Omaha, Omaha, Nebraska 68182 (United States); Advanced Prosthetics Center, Omaha, Nebraska 68134 (United States); Myers, Sara A. [Nebraska Biomechanics Core Facility, University of Nebraska at Omaha, Omaha, Nebraska 68182 (United States); Stergiou, Nicholas [Nebraska Biomechanics Core Facility, University of Nebraska at Omaha, Omaha, Nebraska 68182 (United States); College of Public Health, University of Nebraska Medical Center, Omaha, Nebraska 68198 (United States)

    2014-03-15

    The presence of chaos in walking is considered to provide a stable, yet adaptable means for locomotion. This study examined whether lower limb amputation and subsequent prosthetic rehabilitation resulted in a loss of complexity in amputee gait. Twenty-eight individuals with transtibial amputation participated in a 6 week, randomized cross-over design study in which they underwent a 3 week adaptation period to two separate prostheses. One prosthesis was deemed “more appropriate” and the other “less appropriate” based on matching/mismatching activity levels of the person and the prosthesis. Subjects performed a treadmill walking trial at self-selected walking speed at multiple points of the adaptation period, while kinematics of the ankle were recorded. Bilateral sagittal plane ankle motion was analyzed for underlying complexity through the pseudoperiodic surrogation analysis technique. Results revealed the presence of underlying deterministic structure in both prostheses and both the prosthetic and sound leg ankle (discriminant measure largest Lyapunov exponent). Results also revealed that the prosthetic ankle may be more likely to suffer loss of complexity than the sound ankle, and a “more appropriate” prosthesis may be better suited to help restore a healthy complexity of movement within the prosthetic ankle motion compared to a “less appropriate” prosthesis (discriminant measure sample entropy). Results from sample entropy are less likely to be affected by the intracycle periodic dynamics as compared to the largest Lyapunov exponent. Adaptation does not seem to influence complexity in the system for experienced prosthesis users.
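    For readers unfamiliar with the discriminant measure mentioned above, the sketch below computes sample entropy for a one-dimensional time series such as a sagittal ankle-angle trace. The parameter choices (m = 2, r = 0.2 times the standard deviation) are common defaults and are assumptions, not the study's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Minimal sample-entropy sketch: count template matches of length m and
    m+1 within tolerance r (Chebyshev distance), excluding self-matches."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(length):
        # embed the series into overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
            count += np.sum(d <= r) - 1                           # drop self-match
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# example: entropy of a noisy periodic signal
t = np.linspace(0, 20 * np.pi, 2000)
noise = 0.1 * np.random.default_rng(2).normal(size=t.size)
print(sample_entropy(np.sin(t) + noise))
```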

  7. Amputation effects on the underlying complexity within transtibial amputee ankle motion

    Science.gov (United States)

    Wurdeman, Shane R.; Myers, Sara A.; Stergiou, Nicholas

    2014-03-01

    The presence of chaos in walking is considered to provide a stable, yet adaptable means for locomotion. This study examined whether lower limb amputation and subsequent prosthetic rehabilitation resulted in a loss of complexity in amputee gait. Twenty-eight individuals with transtibial amputation participated in a 6 week, randomized cross-over design study in which they underwent a 3 week adaptation period to two separate prostheses. One prosthesis was deemed "more appropriate" and the other "less appropriate" based on matching/mismatching activity levels of the person and the prosthesis. Subjects performed a treadmill walking trial at self-selected walking speed at multiple points of the adaptation period, while kinematics of the ankle were recorded. Bilateral sagittal plane ankle motion was analyzed for underlying complexity through the pseudoperiodic surrogation analysis technique. Results revealed the presence of underlying deterministic structure in both prostheses and both the prosthetic and sound leg ankle (discriminant measure largest Lyapunov exponent). Results also revealed that the prosthetic ankle may be more likely to suffer loss of complexity than the sound ankle, and a "more appropriate" prosthesis may be better suited to help restore a healthy complexity of movement within the prosthetic ankle motion compared to a "less appropriate" prosthesis (discriminant measure sample entropy). Results from sample entropy are less likely to be affected by the intracycle periodic dynamics as compared to the largest Lyapunov exponent. Adaptation does not seem to influence complexity in the system for experienced prosthesis users.

  8. Global affine motion estimation for aerial video registration

    Institute of Scientific and Technical Information of China (English)

    郭江; 申浩; 李书晓; 常红星

    2011-01-01

    This paper proposes a fast feature-based global affine motion estimation method for aerial video registration and motion detection. To establish the point correspondences, an improved Harris corner detector is used to extract and select stable corners from the image sequences, and a SURF descriptor is extracted to describe those corners. Before the random sample consensus (RANSAC) technique is applied to estimate the motion parameters, the matched pairs are treated as vectors and analyzed to filter out obvious mismatches and improve the inlier ratio. Experiments demonstrate that the presented method can robustly estimate the global affine motion parameters in real time with high accuracy, and can meet the requirements of motion detection on a mobile platform.
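    A rough sketch of this kind of pipeline using OpenCV is shown below. ORB features replace the paper's improved Harris corners and SURF descriptors (SURF may not be available in default OpenCV builds), and cv2.estimateAffine2D provides the RANSAC-based affine fit; the input frames are assumed to be grayscale images.

```python
import cv2
import numpy as np

def estimate_global_affine(prev_gray, curr_gray):
    """Sketch of feature-based global affine estimation for aerial video
    registration: detect corners, match descriptors, then fit a 2x3 affine
    transform with RANSAC to reject the remaining mismatches."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A, inliers

# registration: warp the current frame into the previous frame's coordinates
# with cv2.warpAffine once A has been estimated.
```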

  9. Boolean map saliency combined with motion feature used for dim and small target detection in infrared video sequences

    Science.gov (United States)

    Wang, Xiaoyang; Peng, Zhenming; Zhang, Ping

    2016-10-01

    Infrared dim and small target detection plays an important role in infrared search and tracking systems. In this paper, a novel infrared dim and small target detection method based on Boolean map saliency and motion features is proposed. Infrared targets are the most salient parts of the images, with high gray levels and continuous motion trajectories. Utilizing this property, we build a feature space containing a gray-level feature and a motion feature. The gray-level feature is the intensity of the input images, while the motion feature is obtained from motion charge in consecutive frames. In the second step, the Boolean map saliency approach is applied to the gray-level feature and the motion feature to obtain the gray saliency map and the motion saliency map. In the third step, the two saliency maps are combined to get the final result. Numerical experiments have verified the effectiveness of the proposed method, which not only achieves accurate detection but also produces fewer false alarms, making it suitable for practical use.
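    A toy version of the fusion step is sketched below; simple normalization and frame differencing stand in for the paper's Boolean map saliency and motion-charge computations, and the weights and threshold are arbitrary assumptions.

```python
import numpy as np

def detect_small_targets(curr, prev, w_gray=0.5, w_motion=0.5, thresh=0.7):
    """Toy fusion of a gray-level cue and a motion cue for dim/small target
    detection (not the paper's Boolean-map saliency pipeline)."""
    curr = curr.astype(float)
    prev = prev.astype(float)
    gray = (curr - curr.min()) / (np.ptp(curr) + 1e-9)          # gray-level cue
    motion = np.abs(curr - prev)
    motion = (motion - motion.min()) / (np.ptp(motion) + 1e-9)  # motion cue
    fused = w_gray * gray + w_motion * motion                   # combined map
    return fused > thresh                                        # candidate mask
```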

  10. Advances and challenges in deformable image registration: From image fusion to complex motion modelling.

    Science.gov (United States)

    Schnabel, Julia A; Heinrich, Mattias P; Papież, Bartłomiej W; Brady, Sir J Michael

    2016-10-01

    Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field.

  11. A RNN-based objective video quality measurement

    Institute of Scientific and Technical Information of China (English)

    Xuan Huang; Rong Zhang; Jianxin Pang

    2009-01-01

    Technology used to automatically assess video quality plays a significant role in video processing. Because of the complexity of video media, assessing video quality with only one factor is severely limited. We propose a new method using artificial random neural networks (RNNs) with motion evaluation as an estimate of perceived visual distortion. The results are obtained through a nonlinear fitting procedure and correlate well with human perception. Compared with other methods, the proposed method produces more adaptable and accurate predictions.

  12. Control grid motion estimation for efficient application of optical flow

    CERN Document Server

    Zwart, Christine M

    2012-01-01

    Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable compu
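    As a concrete illustration of the kind of dense motion field that optical-flow methods estimate (not the book's control-grid formulation), the sketch below runs OpenCV's Farneback estimator on a synthetically shifted frame; all parameter values are generic defaults, not recommendations from the book.

```python
import cv2
import numpy as np

# Build a pair of frames where the second is the first shifted by 2 pixels.
rng = np.random.default_rng(3)
prev = (rng.random((120, 160)) * 255).astype(np.uint8)
curr = np.roll(prev, 2, axis=1)            # simulate a 2-pixel horizontal shift

# Dense optical flow (Farneback): one (u, v) displacement vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5, levels=3,
                                    winsize=15, iterations=3, poly_n=5,
                                    poly_sigma=1.2, flags=0)
u, v = flow[..., 0], flow[..., 1]          # horizontal / vertical components
print("median horizontal displacement:", np.median(u))   # roughly 2 pixels
```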

  13. Video game telemetry as a critical tool in the study of complex skill learning.

    Directory of Open Access Journals (Sweden)

    Joseph J Thompson

    Full Text Available Cognitive science has long shown interest in expertise, in part because prediction and control of expert development would have immense practical value. Most studies in this area investigate expertise by comparing experts with novices. The reliance on contrastive samples in studies of human expertise only yields deep insight into development where differences are important throughout skill acquisition. This reliance may be pernicious where the predictive importance of variables is not constant across levels of expertise. Before the development of sophisticated machine learning tools for data mining larger samples, and indeed, before such samples were available, it was difficult to test the implicit assumption of static variable importance in expertise development. To investigate if this reliance may have imposed critical restrictions on the understanding of complex skill development, we adopted an alternative method, the online acquisition of telemetry data from a common daily activity for many: video gaming. Using measures of cognitive-motor, attentional, and perceptual processing extracted from game data from 3360 Real-Time Strategy players at 7 different levels of expertise, we identified 12 variables relevant to expertise. We show that the static variable importance assumption is false--the predictive importance of these variables shifted as the levels of expertise increased--and, at least in our dataset, that a contrastive approach would have been misleading. The finding that variable importance is not static across levels of expertise suggests that large, diverse datasets of sustained cognitive-motor performance are crucial for an understanding of expertise in real-world contexts. We also identify plausible cognitive markers of expertise.

  14. Video game telemetry as a critical tool in the study of complex skill learning.

    Science.gov (United States)

    Thompson, Joseph J; Blair, Mark R; Chen, Lihan; Henrey, Andrew J

    2013-01-01

    Cognitive science has long shown interest in expertise, in part because prediction and control of expert development would have immense practical value. Most studies in this area investigate expertise by comparing experts with novices. The reliance on contrastive samples in studies of human expertise only yields deep insight into development where differences are important throughout skill acquisition. This reliance may be pernicious where the predictive importance of variables is not constant across levels of expertise. Before the development of sophisticated machine learning tools for data mining larger samples, and indeed, before such samples were available, it was difficult to test the implicit assumption of static variable importance in expertise development. To investigate if this reliance may have imposed critical restrictions on the understanding of complex skill development, we adopted an alternative method, the online acquisition of telemetry data from a common daily activity for many: video gaming. Using measures of cognitive-motor, attentional, and perceptual processing extracted from game data from 3360 Real-Time Strategy players at 7 different levels of expertise, we identified 12 variables relevant to expertise. We show that the static variable importance assumption is false--the predictive importance of these variables shifted as the levels of expertise increased--and, at least in our dataset, that a contrastive approach would have been misleading. The finding that variable importance is not static across levels of expertise suggests that large, diverse datasets of sustained cognitive-motor performance are crucial for an understanding of expertise in real-world contexts. We also identify plausible cognitive markers of expertise.

  15. Hydrogen motion in the Cu-H complex in ZnO

    Energy Technology Data Exchange (ETDEWEB)

    Boerrnert, Felix; Lavrov, E.V.; Weber, J. [Technische Universitaet Dresden (Germany)

    2008-07-01

    The Cu-H complex in ZnO consists of Cu on a Zn site and a hydrogen atom bound to a nearby O atom, with the O-H bond oriented in the basal plane of the hexagonal lattice, perpendicular to the c axis. The motion of hydrogen in the Cu-H complex is studied by stress-induced dichroism. Stress applied at room temperature along [10 anti 10] results in an alignment of the Cu-H bond. The reorientation process was found to be thermally activated, with an activation energy of 0.52±0.04 eV. The connection of the hydrogen motion in the Cu-H complex with hydrogen diffusion in ZnO is discussed, and consequences for the existence of interstitial hydrogen in ZnO at room temperature are presented.

  16. Hand region extraction and gesture recognition from video stream with complex background through entropy analysis.

    Science.gov (United States)

    Lee, JongShill; Lee, YoungJoo; Lee, EungHyuk; Hong, SeungHong

    2004-01-01

    Hand gesture recognition utilizing image processing typically relies on recognition through markers or hand extraction by color, and is therefore heavily restricted by the colors of clothes or skin. We propose a method to recognize hand gestures extracted from images with a complex background for a more natural interface in HCI (human-computer interaction). The proposed method obtains a difference image by subtracting one image from the next in the sequence, measures its entropy, separates the hand region from the images, tracks the hand region, and recognizes hand gestures. Through the entropy measurement, regions with large values yield color information whose distribution is close to that of skin, from which the hand region is extracted from the input images. The hand region can be extracted adaptively under variable lighting and individual differences because entropy provides color information and motion information at the same time. The contour of the detected hand region is extracted using chain codes, and a slightly improved centroidal profile method is presented to recognize the hand gesture. In experiments on six kinds of hand gesture, the method shows a recognition rate of more than 95% across persons and 90-100% for each gesture at 5 frames/s.
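    The entropy step can be sketched as follows: the absolute difference of two consecutive frames is divided into blocks, and blocks with high histogram entropy are kept as candidate hand regions. The block size, bin count, and thresholding rule below are illustrative assumptions.

```python
import numpy as np

def entropy_map(diff, block=16, bins=32):
    """Sketch of the entropy step: split the inter-frame difference image into
    blocks and compute the intensity-histogram entropy of each block; blocks
    with high entropy are candidate (moving) hand regions."""
    h, w = diff.shape
    emap = np.zeros((h // block, w // block))
    for i in range(emap.shape[0]):
        for j in range(emap.shape[1]):
            patch = diff[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            emap[i, j] = -np.sum(p * np.log2(p))
    return emap

# usage: diff = np.abs(frame_t.astype(float) - frame_prev.astype(float))
#        mask = entropy_map(diff) > entropy_map(diff).mean()
```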

  17. Low bit rate video coding

    African Journals Online (AJOL)

    eobe

    Variable length bit rate (VLBR) broadly encompasses video coding which ... for motion estimation and compensation to reduce the prediction ... a special interest among the video coding community ...

  18. [Limits and possibilities of 2D video analysis in evaluating physiological and pathological foot rolling motion in runners].

    Science.gov (United States)

    Grau, S; Müller, O; Bäurle, W; Beck, M; Krauss, I; Maiwald, C; Baur, H; Mayer, F

    2000-09-01

    Three-dimensional movements of the lower extremities during the support phase are usually evaluated with the help of video analysis. This analysis is mainly done two-dimensionally in the frontal and sagittal planes. Usually, the angles of the Achilles tendon and rear foot are analysed in the frontal plane, and the knee and upper ankle joint angles in the sagittal plane, because their values are held responsible for different sports injuries. However, a correlation between different injuries and biomechanical parameters has so far not been proven. Often, small changes in 2D video data are discussed without considering the reliability of this method of measurement. The aim of this study was to evaluate those parameters in 2D video analysis (2D-VA) which characterize the support phases of the foot. A second goal was to find out whether a connection between these angles and chronic achillodynia can then be sensibly demonstrated. 32 male subjects, consisting of a control group (KO, n = 14) without injuries and a group with chronic achillodynia (AD, n = 18), were examined with the test/retest method at weekly intervals. The biomechanical running analysis was done with 2D-VA in the frontal and sagittal planes on a treadmill at a speed of 80% of the individual anaerobic threshold, in different shoes. The test/retest variability was unsatisfactory for all measuring points. Both groups showed large mean variations in both shoes and minimal differences in the measured angles. Because of the poor reproducibility of 2D-VA for angles in the frontal plane, this measuring method can only be used with restrictions for evaluating the support phase.

  19. Topology dictionary for 3D video understanding.

    Science.gov (United States)

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary.

  20. Source complexity of the 1987 Whittier Narrows, California, earthquake from the inversion of strong motion records

    Science.gov (United States)

    Hartzell, S.; Iida, M.

    1990-01-01

    Strong motion records for the Whittier Narrows earthquake are inverted to obtain the history of slip. Both constant rupture velocity models and variable rupture velocity models are considered. The results show a complex rupture process within a relatively small source volume, with at least four separate concentrations of slip. Two sources are associated with the hypocenter, the larger having a slip of 55-90 cm, depending on the rupture model. These sources have a radius of approximately 2-3 km and are ringed by a region of reduced slip. The aftershocks fall within this low-slip annulus. Other sources with slips from 40 to 70 cm each ring the central source region and the aftershock pattern. All the sources are predominantly thrust, although some minor right-lateral strike-slip motion is seen. The overall dimensions of the Whittier earthquake from the strong motion inversions are 10 km long (along the strike) and 6 km wide (down the dip). The preferred dip is 30° and the preferred average rupture velocity is 2.5 km/s. Moment estimates range from 7.4 to 10.0 × 10^24 dyn cm, depending on the rupture model. -Authors
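    As a quick plausibility check, the reported moment range can be converted to moment magnitude with the standard Hanks-Kanamori relation Mw = (2/3) log10(M0) - 10.7 (M0 in dyn cm), which gives roughly Mw 5.9-6.0. The conversion below is added for context and is not part of the original abstract.

```python
import math

# Hanks-Kanamori moment magnitude: Mw = (2/3) * log10(M0[dyn*cm]) - 10.7
for m0 in (7.4e24, 1.0e25):                      # reported moment range, dyn*cm
    mw = (2.0 / 3.0) * math.log10(m0) - 10.7
    print(f"M0 = {m0:.1e} dyn*cm  ->  Mw = {mw:.2f}")
# prints approximately Mw 5.88 and Mw 5.97 for the two ends of the range
```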

  1. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  2. Pectoralis Muscle Flap Repair Reduces Paradoxical Motion of the Chest Wall in Complex Sternal Wound Dehiscence

    Science.gov (United States)

    Zeitani, Jacob; Russo, Marco; Pompeo, Eugenio; Sergiacomi, Gian Luigi; Chiariello, Luigi

    2016-01-01

    Background The aim of the study was to test the hypothesis that in patients with chronic complex sternum dehiscence, the use of muscle flap repair minimizes the occurrence of paradoxical motion of the chest wall (CWPM) when compared to sternal rewiring, eventually leading to better respiratory function and clinical outcomes during follow-up. Methods In a propensity score matching analysis, out of 94 patients who underwent sternal reconstruction, 20 patients were selected: 10 patients underwent sternal reconstruction with bilateral pectoralis muscle flaps (group 1) and 10 underwent sternal rewiring (group 2). Eligibility criteria included the presence of hemisternum diastases associated with multiple (≥3) bone fractures and radiologic evidence of synchronous chest wall motion (CWSM). We compared radiologically assessed (volumetric computed tomography) ventilatory mechanic indices such as single lung and global vital capacity (VC), diaphragm excursion, synchronous and paradoxical chest wall motion. Results Follow-up was 100% complete (mean 85±24 months). CWPM was inversely correlated with single lung VC (Spearman R=−0.72, p=0.0003), global VC (R=−0.51, p=0.02) and diaphragm excursion (R=−0.80, p=0.0003), whereas it proved directly correlated with dyspnea grade (Spearman R=0.51, p=0.02) and pain (R=0.59, p=0.005). Mean CWPM and single lung VC were both better in group 1, whereas there was no difference in CWSM, diaphragm excursion and global VC. Conclusion Our study suggests that in patients with complex chronic sternal dehiscence, pectoralis muscle flap reconstruction guarantees lower CWPM and greater single-lung VC when compared with sternal rewiring and it is associated with better clinical outcomes with less pain and dyspnea. PMID:27733997

  3. Can Rehabilitation Influence the Efficiency of Control Signals in Complex Motion Strategies?

    Directory of Open Access Journals (Sweden)

    Joanna Cholewa

    2017-01-01

    Full Text Available The factor determining quality of life in Parkinson's disease (PD) is the worsening of a patient's walking ability. The use of external stimuli can improve gait when performing complex motor patterns. The aim of this study was to evaluate the effect of rehabilitation on the effectiveness of control signals in people with PD. The study was performed on 42 people with idiopathic PD in the third stage of the disease. The control group consisted of 19 patients who did not participate in rehabilitation activities. The experimental group systematically participated in rehabilitation activities twice a week (60 minutes) for 9 months. Gait speed, mean step length, and step frequency were calculated on the basis of the obtained results. These parameters were compared between the groups by single-factor analyses of variance. The best results were obtained using rhythmic external auditory signals. The group of patients actively participating in rehabilitation showed statistically significant improvements in gait speed (12.35%), mean step length (18.00%), and step frequency (2.40%) compared to the control group. The presented research showed the positive effect of rehabilitation based on the performance of complex motion patterns, using external control signals to improve their effectiveness in new motion tasks.

  4. The Case for Constructing Video Cases: Promoting Complex, Specific, Learner-Centered Analysis of Discussion

    Science.gov (United States)

    Rosaen, Cheryl; Lundeberg, Mary; Terpstra, Marjorie

    2010-01-01

    The use of reflection and analysis in preparation of elementary and secondary preservice teachers has become a standard practice aimed at helping them develop the capacity to engage in intentional and systematic investigation of their practice. Editing video may be a more powerful tool than writing reflections based on memory to help preservice…

  5. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  6. A First Step Towards Synthesising Rubrics and Video for the Formative Assessment of Complex Skills

    NARCIS (Netherlands)

    Ackermans, Kevin; Rusman, Ellen; Brand-Gruwel, Saskia; Specht, Marcus

    2016-01-01

    Abstract. The performance objectives used for the formative assessment of complex skills are generally set through text-based analytic rubrics [1]. Moreover, video modeling examples are a widely applied method of observational learning, providing students with context-rich modeling examples of comp

  7. The Case for Constructing Video Cases: Promoting Complex, Specific, Learner-Centered Analysis of Discussion

    Science.gov (United States)

    Rosaen, Cheryl; Lundeberg, Mary; Terpstra, Marjorie

    2010-01-01

    The use of reflection and analysis in preparation of elementary and secondary preservice teachers has become a standard practice aimed at helping them develop the capacity to engage in intentional and systematic investigation of their practice. Editing video may be a more powerful tool than writing reflections based on memory to help preservice…

  8. Quantifying the Consistency of Wearable Knee Acoustical Emission Measurements During Complex Motions.

    Science.gov (United States)

    Toreyin, Hakan; Jeong, Hyeon Ki; Hersek, Sinan; Teague, Caitlin N; Inan, Omer T

    2016-09-01

    Knee-joint sounds could potentially be used to noninvasively probe the physical and/or physiological changes in the knee associated with rehabilitation following acute injury. In this paper, a system and methods for investigating the consistency of knee-joint sounds during complex motions in silent and loud background settings are presented. The wearable hardware component of the system consists of a microelectromechanical systems microphone and inertial rate sensors interfaced with a field programmable gate array-based real-time processor to capture knee-joint sound and angle information during three types of motion: flexion-extension (FE), sit-to-stand (SS), and walking (W) tasks. The data were post-processed to extract high-frequency and short-duration joint sounds (clicks) with particular waveform signatures. Such clicks were extracted in the presence of three different sources of interference: background, stepping, and rubbing noise. A histogram vector V_n was generated from the clicks in a motion cycle n, where the bin width was 10°. The Euclidean distance between a vector and the arithmetic mean V_avg of all vectors in a recording, normalized by V_avg, is used as a consistency metric d_n. Measurements from eight healthy subjects performing FE, SS, and W show that the mean (of means) consistency metric for all subjects during SS (μ[μ(d_n)] = 0.72 in silent, 0.85 in loud) is smaller compared with the FE (μ[μ(d_n)] = 1.02 in silent, 0.95 in loud) and W (μ[μ(d_n)] = 0.94 in silent, 0.97 in loud) exercises, thereby implying more consistent click generation during SS than during FE and W. Knee-joint sounds from one subject performing FE on five consecutive workdays (μ[μ(d_n)] = 0.72) and at five different times of a day (μ[μ(d_n)] = 0.73) suggest high consistency of the clicks across days and throughout a day. This work represents the first time, to the best of our knowledge, that joint sound consistency has been
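    The consistency metric can be sketched as follows; the bin edges, cycle segmentation, and synthetic click angles are assumptions used only to illustrate d_n = ||V_n - V_avg|| / ||V_avg||.

```python
import numpy as np

def consistency_metrics(click_angles, bin_width=10, max_angle=120):
    """Sketch of the per-cycle consistency metric d_n described above: click
    angles from each motion cycle are binned into a histogram vector V_n and
    compared with the mean vector V_avg of the recording."""
    edges = np.arange(0, max_angle + bin_width, bin_width)
    V = np.array([np.histogram(c, bins=edges)[0] for c in click_angles])
    V_avg = V.mean(axis=0)
    return np.linalg.norm(V - V_avg, axis=1) / (np.linalg.norm(V_avg) + 1e-9)

# click_angles: one array per flexion-extension cycle holding the joint angles
# at which clicks were detected (synthetic example below)
rng = np.random.default_rng(4)
cycles = [rng.uniform(0, 120, size=rng.integers(3, 8)) for _ in range(10)]
print(consistency_metrics(cycles))
```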

  9. ACCURATE DETECTION OF HIGH-SPEED MULTI-TARGET VIDEO SEQUENCES MOTION REGIONS BASED ON RECONSTRUCTED BACKGROUND DIFFERENCE

    Institute of Scientific and Technical Information of China (English)

    Zhang Wentao; Li Xiaofeng; Li Zaiming

    2001-01-01

    The paper first discusses the shortcomings of classical adjacent-frame difference. Secondly, based on image energy and high-order statistics (HOS) theory, background reconstruction constraints are set up. With the help of block-processing technology, the background is reconstructed quickly. Finally, background difference is used instead of adjacent-frame difference to detect motion regions. Tests on a DSP-based platform indicate that the background can be recovered losslessly in about one second and that the detected moving regions are not influenced by the moving targets' speeds. The algorithm is important both in theory and in applications.
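    A toy background-difference detector in the spirit of the abstract is sketched below; a per-pixel temporal median stands in for the paper's energy/HOS-based background reconstruction, and the threshold is an arbitrary assumption.

```python
import numpy as np

def detect_motion(frames, thresh=25):
    """Toy background-difference detection: estimate a static background with a
    per-pixel median over the frame window, then threshold the difference of
    each frame against that background to obtain motion-region masks."""
    stack = np.stack([f.astype(float) for f in frames])
    background = np.median(stack, axis=0)
    masks = [np.abs(f.astype(float) - background) > thresh for f in frames]
    return background, masks
```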

  10. A Passionate Journey Info-Motion Graphics Video Exploring Why and How to Find Work You Love

    Science.gov (United States)

    Zhou, Jia

    Benjamin Disraeli, a 19th century British Prime Minister, once said, "Man is not a rational animal. He is only truly good or great when he acts from passion." Passion is the fuel that can power you toward the realization of your dreams. To live a truly satisfying and purposeful life, it is important for every individual to know what their passions are so life will be fulfilling. Teenagers choose their career after they graduate from high school. They begin to wonder what they will become and what contributions they can make in the future. It is important for people to choose something interesting that leads them to work hard in their career. This thesis is a motion graphics piece called "A Passionate Journey." It presents an idea to students to help them contemplate the things in which they have always been interested, discover their true passions, and choose jobs based on their enthusiasm. When people pursue something of which they are fond, they ultimately become successful. Information is received by people's minds, silencing their inner critics and offering them the courage and self-confidence to pursue whatever they love. Therefore, the goal of this motion graphic design is to help people start observing themselves, think about what always makes them passionate, and choose something about which they are passionate in their future careers.

  11. Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes.

    Science.gov (United States)

    Kratz, Louis; Nishino, Ko

    2012-05-01

    Tracking pedestrians is a vital component of many computer vision applications, including surveillance, scene understanding, and behavior analysis. Videos of crowded scenes present significant challenges to tracking due to the large number of pedestrians and the frequent partial occlusions that they produce. The movement of each pedestrian, however, contributes to the overall crowd motion (i.e., the collective motions of the scene's constituents over the entire video) that exhibits an underlying spatially and temporally varying structured pattern. In this paper, we present a novel Bayesian framework for tracking pedestrians in videos of crowded scenes using a space-time model of the crowd motion. We represent the crowd motion with a collection of hidden Markov models trained on local spatio-temporal motion patterns, i.e., the motion patterns exhibited by pedestrians as they move through local space-time regions of the video. Using this unique representation, we predict the next local spatio-temporal motion pattern a tracked pedestrian will exhibit based on the observed frames of the video. We then use this prediction as a prior for tracking the movement of an individual in videos of extremely crowded scenes. We show that our approach of leveraging the crowd motion enables tracking in videos of complex scenes that present unique difficulty to other approaches.
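    A much-simplified stand-in for the prediction step is sketched below: a first-order Markov chain over quantized motion-pattern labels predicts the most likely next pattern. The paper itself uses hidden Markov models over local spatio-temporal regions; this sketch only conveys the flavor of predicting the next pattern from observed ones.

```python
import numpy as np

def predict_next_pattern(pattern_sequence, n_patterns):
    """Learn a first-order transition matrix over quantized motion-pattern
    labels and predict the most likely next label (a simplified Markov-chain
    stand-in for the paper's HMM-based prediction)."""
    T = np.ones((n_patterns, n_patterns))          # Laplace-smoothed counts
    for a, b in zip(pattern_sequence[:-1], pattern_sequence[1:]):
        T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)              # row-normalize to probabilities
    return int(np.argmax(T[pattern_sequence[-1]]))

print(predict_next_pattern([0, 1, 2, 0, 1, 2, 0, 1], n_patterns=3))  # -> 2
```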

  12. The human translational vestibulo-ocular reflex in response to complex motion.

    Science.gov (United States)

    Walker, Mark; Liao, Ke

    2011-09-01

    We studied the translational vestibulo-ocular reflex (tVOR) in four healthy human subjects during complex, unpredictable sum-of-sines head motion (combination of 0.73, 1.33, 1.93, and 2.93 Hz), while subjects viewed a target 15 cm away. Ideal eye velocity was calculated from recorded head motion; actual eye velocity was measured with scleral coils. The gain and phase for each frequency component were determined by least-squares optimization. Gain averaged approximately 40% and did not change with frequency; phase lag increased with frequency to a maximum of 66°. Fitting actual to ideal eye velocity predicted a tVOR latency of 48 ms for vertical and 38 ms for horizontal translation. These findings provide further evidence that the normal tVOR is considerably undercompensatory, even at low frequencies if the stimulus is not predictable. The similarity of this behavior to that of pursuit suggests that these two eye movements may share some aspects of neural processing.
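    The per-frequency gain and phase-lag analysis can be sketched with ordinary least squares on sine/cosine regressors at the known stimulus frequencies; the synthetic data below (gain 0.4, 40 ms delay) are illustrative and are not the study's recordings.

```python
import numpy as np

def gain_phase(head_vel, eye_vel, t, freqs):
    """Fit sin/cos regressors at each known frequency to the head and eye
    velocity traces and compare the resulting complex amplitudes; returns
    {frequency: (gain, phase lag in degrees, positive = lag)}."""
    results = {}
    for f in freqs:
        X = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
        h = np.linalg.lstsq(X, head_vel, rcond=None)[0]
        e = np.linalg.lstsq(X, eye_vel, rcond=None)[0]
        H = h[1] + 1j*h[0]                 # complex amplitude (cos + j*sin)
        E = e[1] + 1j*e[0]
        results[f] = (abs(E / H), np.degrees(np.angle(E / H)))
    return results

# synthetic check: eye velocity = -0.4 * head velocity delayed by 40 ms
t = np.arange(0, 20, 0.002)
freqs = [0.73, 1.33, 1.93, 2.93]
head = sum(np.sin(2*np.pi*f*t) for f in freqs)
eye = -0.4 * sum(np.sin(2*np.pi*f*(t - 0.04)) for f in freqs)
print(gain_phase(head, -eye, t, freqs))    # gains near 0.4, lag grows with f
```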

  13. Effect Of A Video-Based Laboratory On The High School Pupils’ Understanding Of Constant Speed Motion

    Directory of Open Access Journals (Sweden)

    Louis Trudel, Abdeljalil Métioui

    2012-05-01

    Full Text Available Among the physical phenomena studied in high school, the kinematical concepts are important because they constitute a precondition for the study of subsequent concepts of mechanics. Our research aims at studying the effect of a computer-assisted scientific investigation on high school pupils’ understanding of the constant speed motion. Experimentation took place in a high school physics classroom. A repeated measures analysis of variance shows that, during the implementation of this strategy, the pupils’ understanding of kinematical concepts increased in a significant way. In conclusion, we specify advantages and limits of the study and give future research directions concerning the design of a computer-assisted laboratory in high school physics.

  14. MATRIX BASED INDEXING TECHNIQUE FOR VIDEO DATA

    Directory of Open Access Journals (Sweden)

    Devarj Saravanan

    2013-01-01

    Full Text Available Due to the increasing usage of media, video plays a central role as it supports various applications. Video is a particular medium that contains a complex collection of objects such as audio, motion, text, color, and pictures. Due to the rapid growth of this information, a video indexing process is mandatory for fast and effective retrieval. Many current indexing techniques fail to extract the needed image from the stored data set based on the user's query, so urgent attention to the field of video indexing and image retrieval is needed. Here a new matrix-based indexing technique for image retrieval is proposed. Experimental results show that the proposed method provides better results.

  15. The effects of autonomous difficulty selection on engagement, motivation, and learning in a motion-controlled video game task.

    Science.gov (United States)

    Leiker, Amber M; Bruzi, Alessandro T; Miller, Matthew W; Nelson, Monica; Wegman, Rebecca; Lohse, Keith R

    2016-10-01

    This experiment investigated the relationship between motivation, engagement, and learning in a video game task. Previous studies have shown increased autonomy during practice leads to superior retention of motor skills, but it is not clear why this benefit occurs. Some studies suggest this benefit arises from increased motivation during practice; others suggest the benefit arises from better information processing. Sixty novice participants were randomly assigned to a self-controlled group, who chose the progression of difficulty during practice, or to a yoked group, who experienced the same difficulty progression but did not have choice. At the end of practice, participants completed surveys measuring intrinsic motivation and engagement. One week later, participants returned for a series of retention tests at three different difficulty levels. RM-ANCOVA (controlling for pre-test) showed that the self-controlled group had improved retention compared to the yoked group, on average, β=46.78, 95% CI=[2.68, 90.87], p=0.04, but this difference was only statistically significant on the moderate difficulty post-test (p=0.004). The self-controlled group also showed greater intrinsic motivation during practice, t(58)=2.61, p=0.01. However, there was no evidence that individual differences in engagement (p=0.20) or motivation (p=0.87) were associated with learning, which was the relationship this experiment was powered to detect. These data are inconsistent with strictly motivational accounts of how autonomy benefits learning, instead suggesting the benefits of autonomy may be mediated through other mechanisms. For instance, within the information processing framework, the learning benefits may emerge from learners appropriately adjusting difficulty to maintain an appropriate level of challenge (i.e., maintaining the relationship between task demands and cognitive resources). Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Comparison of ankle and subtalar joint complex range of motion during barefoot walking and walking in Masai Barefoot Technology sandals

    Directory of Open Access Journals (Sweden)

    Birch Ivan

    2011-01-01

    Full Text Available Abstract Background Masai Barefoot Technology (MBT, Switzerland) produce footwear which they claim simulates walking barefoot on soft, undulating ground. This paper reports an investigation into the effect of MBT sandals on the motion of the ankle and subtalar joint complex during walking. Methods Range of motion data was collected in the sagittal, frontal, and transverse planes from the ankle and subtalar joint complex of 32 asymptomatic subjects using the CODA MPX30 motion analysis system during both barefoot walking and walking in the MBT sandal. Shod and unshod data were compared using the Wilcoxon signed ranks test. Results A significantly greater range of motion in the frontal and sagittal planes was recorded when walking in the MBT sandal (p = 0.031 and p = 0.015, respectively). In the transverse plane, no significant difference was found (p = 0.470). Conclusions MBT sandals increase the range of motion of the ankle and subtalar joint complex in the frontal and sagittal planes. MBT footwear could therefore have a role to play in the management of musculoskeletal disorders where an increase in frontal and sagittal plane range of motion is desirable.

  17. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratio (PSNR) of the decoded video is improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.

  18. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    Full Text Available The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage, and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to less complex frames. A frame-layer bit allocation scheme is proposed based on a perceptual quality metric as an indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed based on buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is nearly uniform throughout the full video sequence. Thus the experimental results show that the proposed model effectively handles scene changes and scenes with high motion, giving better visual quality.
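    A minimal sketch of the frame-level idea appears below. The paper's exact mapping from QIr to bits is not reproduced; this sketch simply assumes the budget scales inversely with QIr so that frames predicted to be harder receive more bits, capped by buffer availability, and all names and numbers are illustrative.

```python
def allocate_frame_bits(prev_quality, predicted_quality, buffer_bits, base_bits):
    """Illustrative frame-layer allocation: scale a buffer-derived budget using
    the Quality Index ratio QIr (predicted quality of the current frame over
    the average quality of previously coded frames in the GOP)."""
    avg_prev = sum(prev_quality) / len(prev_quality)
    qir = predicted_quality / avg_prev
    target = base_bits / max(qir, 1e-3)           # harder frame -> more bits
    return min(target, buffer_bits)               # respect buffer availability

# a scene change predicted to drop quality gets a larger share of the budget
print(allocate_frame_bits([0.80, 0.82, 0.78], predicted_quality=0.60,
                          buffer_bits=90_000, base_bits=60_000))
```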

  19. Isotropic Brownian motions over complex fields as a solvable model for May-Wigner stability analysis

    Science.gov (United States)

    Ipsen, J. R.; Schomerus, H.

    2016-09-01

    We consider matrix-valued stochastic processes known as isotropic Brownian motions, and show that these can be solved exactly over complex fields. While these processes appear in a variety of questions in mathematical physics, our main motivation is their relation to a May-Wigner-like stability analysis, for which we obtain a stability phase diagram. The exact results establish the full joint probability distribution of the finite-time Lyapunov exponents, and may be used as a starting point for a more detailed analysis of the stability-instability phase transition. Our derivations rest on an explicit formulation of a Fokker-Planck equation for the Lyapunov exponents. This formulation happens to coincide with an exactly solvable class of models of the Calogero-Sutherland type, originally encountered for a model of phase-coherent transport. The exact solution over complex fields describes a determinantal point process of biorthogonal type similar to recent results for products of random matrices, and is also closely related to Hermitian matrix models with an external source.

  20. The Steiner Formula and the Polar Moment of Inertia for the Closed Planar Homothetic Motions in Complex Plane

    Directory of Open Access Journals (Sweden)

    Ayhan Tutar

    2015-01-01

    The Steiner area formula and the polar moment of inertia were expressed for one-parameter closed planar homothetic motions in the complex plane. The Steiner point or Steiner normal concepts were described according to whether the rotation number was different from zero or equal to zero, respectively. The moving pole point was given with its components, and its relation to the Steiner point or Steiner normal was specified. The sagittal motion of a winch was considered as an example. This motion was described by a double hinge consisting of the fixed control panel of the winch and the moving arm of the winch. The results obtained in the second section of this study were applied to this motion.

  1. Crystallographic analysis of the thermal motion of the inclusion complex of cyclomaltoheptaose (beta-cyclodextrin) with hexamethylenetetramine.

    Science.gov (United States)

    Harata, Kazuaki

    2003-02-07

    The crystal structure of the inclusion complex of cyclomaltoheptaose (beta-cyclodextrin) with hexamethylenetetramine was determined at temperatures of 123, 173, 223, and 293 K. The rigid-body motion of the host and guest molecules was evaluated by means of the TLS method, which represents molecular motion in terms of translation, libration, and screw motion. On increasing the temperature from 123 to 293 K, the amplitude of the rigid-body vibration of the host molecule increased from 1.0 to 1.3 degrees in the rotational motion and from 0.16 to 0.17 A in the translational motion. The cyclomaltoheptaose molecule has flexibility in its seven alpha-(1-->4) linkages, and each glucose unit was in rotational vibration around an axis through its two glycosidic oxygen atoms. As a result, the rigid-body parameters of cyclomaltoheptaose were considered to be overestimated because they include the contribution from the local motion of the glucose units. In contrast, for the guest molecule, which has no structural flexibility, the TLS analysis demonstrated that the atomic thermal vibration was mostly derived from rigid-body motion. The rotational amplitude of hexamethylenetetramine changed from 5.2 to 6.6 degrees on increasing the temperature from 123 to 293 K, while the translational amplitude changed from 0.20 to 0.23 A. The translational motion of the guest molecule was hindered by the inside wall of the host cavity. The molecular motion was characterized by rotational vibration around the axis through the two nitrogen atoms involved in hydrogen-bond formation.

  2. THE METHOD OF CORRECTION OF THE PERSON MOTION PATTERN ON THE BASIS OF APPLICATION OF COMPLEX ELECTROSTIMULATION AND MECHANICAL MASSAGE

    Directory of Open Access Journals (Sweden)

    N. S. Davydova

    2010-01-01

    The article is devoted to the development of an algorithm for correcting a person's motion pattern by modifying the distribution of the efforts of the involved muscles over the phases of movement, by means of combined electro- and mechanotherapy, with control of the results based on construction of the electromyographic pattern of the movement.

  3. Using Video Game Telemetry Data to Research Motor Chunking, Action Latencies, and Complex Cognitive-Motor Skill Learning.

    Science.gov (United States)

    Thompson, Joseph J; McColeman, C M; Stepanova, Ekaterina R; Blair, Mark R

    2017-04-01

    Many theories of complex cognitive-motor skill learning are built on the notion that basic cognitive processes group actions into easy-to-perform sequences. The present work examines predictions derived from laboratory-based studies of motor chunking and motor preparation using data collected from the real-time strategy video game StarCraft 2. We examined 996,163 action sequences in the telemetry data of 3,317 players across seven levels of skill. As predicted, the latency to the first action (thought to be the beginning of a chunked sequence) is delayed relative to the other actions in the group. Other predictions, inspired by the memory drum theory of Henry and Rogers, received only weak support. Copyright © 2017 Cognitive Science Society, Inc.
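
    A minimal sketch of the kind of latency analysis described above, assuming the telemetry has been reduced to per-player action timestamps and that chunk boundaries are already marked (both assumptions; the study's own chunk-identification procedure is not given in this record):

        import statistics

        # actions: list of (timestamp_seconds, chunk_id); chunk boundaries assumed known.
        actions = [(0.00, 0), (0.85, 0), (0.91, 0), (0.97, 0),
                   (2.10, 1), (2.90, 1), (2.96, 1)]

        first_latencies, within_latencies = [], []
        prev_t, prev_chunk = None, None
        for t, chunk in actions:
            if prev_t is not None:
                latency = t - prev_t
                # Latency to the first action of a chunk vs. latencies inside a chunk.
                (first_latencies if chunk != prev_chunk else within_latencies).append(latency)
            prev_t, prev_chunk = t, chunk

        print("mean first-action latency :", statistics.mean(first_latencies))
        print("mean within-chunk latency :", statistics.mean(within_latencies))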

  4. A computational model of the integration of landmarks and motion in the insect central complex

    Science.gov (United States)

    Sabo, Chelsea; Vasilaki, Eleni; Barron, Andrew B.; Marshall, James A. R.

    2017-01-01

    The insect central complex (CX) is an enigmatic structure whose computational function has evaded inquiry, but has been implicated in a wide range of behaviours. Recent experimental evidence from the fruit fly (Drosophila melanogaster) and the cockroach (Blaberus discoidalis) has demonstrated the existence of neural activity corresponding to the animal’s orientation within a virtual arena (a neural ‘compass’), and this provides an insight into one component of the CX structure. There are two key features of the compass activity: an offset between the angle represented by the compass and the true angular position of visual features in the arena, and the remapping of the 270° visual arena onto an entire circle of neurons in the compass. Here we present a computational model which can reproduce this experimental evidence in detail, and predicts the computational mechanisms that underlie the data. We predict that both the offset and remapping of the fly’s orientation onto the neural compass can be explained by plasticity in the synaptic weights between segments of the visual field and the neurons representing orientation. Furthermore, we predict that this learning is reliant on the existence of neural pathways that detect rotational motion across the whole visual field and uses this rotation signal to drive the rotation of activity in a neural ring attractor. Our model also reproduces the ‘transitioning’ between visual landmarks seen when rotationally symmetric landmarks are presented. This model can provide the basis for further investigation into the role of the central complex, which promises to be a key structure for understanding insect behaviour, as well as suggesting approaches towards creating fully autonomous robotic agents. PMID:28241061
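
    The ring-attractor-plus-rotation-signal idea can be sketched compactly. The toy model below (all parameters and the update rule are invented for illustration, not taken from the paper) keeps a bump of activity on a ring of heading neurons and shifts it in proportion to a rotational-velocity input, which is the core mechanism the authors propose for the fly compass:

        import numpy as np

        N = 16                                   # heading neurons arranged on a ring
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        activity = np.exp(np.cos(theta) * 3.0)   # initial bump of activity at heading 0
        activity /= activity.sum()

        def step(activity, rotation_signal, dt=0.05):
            """Shift the activity bump around the ring at a rate set by the rotation signal."""
            shift = rotation_signal * dt * N / (2 * np.pi)   # fraction of one neuron per step
            neighbour = np.roll(activity, 1) if shift > 0 else np.roll(activity, -1)
            new = (1 - abs(shift)) * activity + abs(shift) * neighbour
            return new / new.sum()

        for _ in range(100):                     # constant rotation of 1 rad/s for 5 s
            activity = step(activity, rotation_signal=1.0)
        print("bump now centred near heading (rad):", theta[np.argmax(activity)])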

  5. Motion Attention Fusion Model Based Video Target Detection and Extraction

    Institute of Scientific and Technical Information of China (English)

    刘龙; 元向辉

    2013-01-01

    Aiming at the limitation of target detection and extraction algorithms under global motion scene, a target detection algorithm based on motion attention fusion model is proposed according to the motion attention mechanism. Firstly, the preprocess, such as accumulation and median filtering, is applied on the motion vector field. Then, according to the temporal and spatial characteristics of the motion vector, the motion attention fusion model is defined to detect moving target. Finally, the edge of the video moving target is extracted accurately by the morphologic operation and the edge tracking algorithm. The experimental results of different global motion video sequences show the proposed algorithm has better veracity and speedup than other algorithms.%针对全局运动场景下目标检测与提取方法的局限性,文中根据运动注意力形成机理,构建一种运动注意力时-空融合模型用于运动目标的检测与提取。该算法首先对运动矢量场进行叠加和滤波等预处理。然后根据运动矢量在时间和空间上的变化特点定义运动注意力融合模型,并采用该模型检测运动目标区域。最后利用形态学和边界跟踪方法对目标区域进行精确化提取。根据多个不同全局运动视频场景的测试结果,显示该算法比其它算法具有更好的准确性和实时性。

  6. Role of dynamic slow motion video endoscopy in etiological correlation between eustachian dysfunction and chronic otitis media: A case-control study

    Directory of Open Access Journals (Sweden)

    Minal Gupta

    2015-01-01

    Objective: To assess the role of dynamic slow motion video endoscopy (DSVE) for diagnosing eustachian tube (ET) dysfunction in cases of middle ear disorders and to classify eustachian dysfunction into mechanical and functional types for the purpose of systematic management of middle ear disorders. Materials and Methods: A prospective case-control study was carried out on a total of 84 patients (168 ears), of whom 64 patients with ear complaints (95 ears with middle ear disease) were taken as cases. The remaining 20 patients without any ear or nasal complaints (40 ears) and the normal ears in the case group (33 ears) were taken as controls (73 ears in total). DSVE was performed in cases and controls to compare the incidence of eustachian dysfunction in the two groups. Tubal movements were graded depending on: (1) appearance of the tubal mucosa, (2) movements of the medial and lateral cartilaginous laminae, (3) lateral excursion and dilatory waves of the lateral pharyngeal wall, (4) whether the tubal lumen opened well or not, and (5) presence of patulous tubes (concavity in the superior third of the tube). Results: On correlating the DSVE findings of the ET in the case and control groups, a four times higher incidence of ET dysfunction was obtained in cases of middle ear disorders compared to controls (P = 0.001, odds ratio 4.0852). We found that 29 tubes had a mechanical type of dysfunction (Grades 2A and 3A), whereas 30 tubes had a functional type of dysfunction (Grades 2B, 3B and patulous). Conclusion: There is a positive etiological correlation between eustachian dysfunction and chronic otitis media as shown by DSVE. It provides valuable information regarding the structural and functional status of the pharyngeal end of the ET and helps classify the type of eustachian dysfunction as mechanical or functional, which has management implications.

  7. Inverse synthetic aperture radar imaging of targets with complex motion based on the local polynomial ambiguity function

    Science.gov (United States)

    Lv, Qian; Su, Tao; Zheng, Jibin

    2016-01-01

    In inverse synthetic aperture radar (ISAR) imaging of targets with complex motion, the azimuth echoes have to be modeled as multicomponent cubic phase signals (CPSs) after motion compensation. For the CPS model, the chirp rate and the quadratic chirp rate deteriorate the ISAR image quality due to the Doppler frequency shift; thus, an effective parameter estimation algorithm is required. This paper focuses on a parameter estimation algorithm for multicomponent CPSs based on the local polynomial ambiguity function (LPAF), which is simple and can be easily implemented via the complex multiplication and fast Fourier transform. Compared with the existing parameter estimation algorithm for CPS, the proposed algorithm can achieve a better compromise between performance and computational complexity. Then, the high-quality ISAR image can be obtained by the proposed LPAF-based ISAR imaging algorithm. The results of the simulated data demonstrate the effectiveness of the proposed algorithm.

  8. Viewbrics: Formative Assessment of Complex Skills with Video-Enhanced Rubrics (VER) in Dutch Secondary Education

    NARCIS (Netherlands)

    Rusman, Ellen; Nadolski, Rob; Boon, Jo; Ackermans, Kevin

    2016-01-01

    To learn complex skills, like collaboration, learners need to acquire a concrete and consistent mental model of what it means to master this skill. If learners know their current mastery level and know their targeted mastery level, they can better determine their subsequent learning activities.

  9. Viewbrics: Formative Evaluation of Complex Skills Using 'Video Rubrics' in Dutch Secondary Education

    NARCIS (Netherlands)

    Rusman, Ellen; Ackermans, Kevin

    2016-01-01

    Presentation on the objective and approach of the Viewbrics project (www.viewbrics.nl), which examines the effect of the combined use of textual rubrics and video examples on the quality of feedback, on mental models, and on the learning of a number of complex (21st-century) skills.

  10. Video-assisted anal fistula treatment (VAAFT): a novel sphincter-saving procedure for treating complex anal fistulas.

    Science.gov (United States)

    Meinero, P; Mori, L

    2011-12-01

    Video-assisted anal fistula treatment (VAAFT) is a novel minimally invasive and sphincter-saving technique for treating complex fistulas. The aim of this report is to describe the procedural steps and preliminary results of VAAFT. Karl Storz Video Equipment is used. Key steps are visualization of the fistula tract using the fistuloscope, correct localization of the internal fistula opening under direct vision, endoscopic treatment of the fistula and closure of the internal opening using a stapler or cutaneous-mucosal flap. Diagnostic fistuloscopy under irrigation is followed by an operative phase of fulguration of the fistula tract, closure of the internal opening and suture reinforcement with cyanoacrylate. From May 2006 to May 2011, we operated on 136 patients using VAAFT. Ninety-eight patients were followed up for a minimum of 6 months. No major complications occurred. In most cases, both short-term and long-term postoperative pain was acceptable. Primary healing was achieved in 72 patients (73.5%) within 2-3 months of the operation. Sixty-two patients were followed up for more than 1 year. The percentage of the patients healed after 1 year was 87.1%. The main feature of the VAAFT technique is that the procedure is performed entirely under direct endoluminal vision. With this approach, the internal opening can be found in 82.6% of cases. Moreover, fistuloscopy helps to identify any possible secondary tracts or chronic abscesses. The VAAFT technique is sphincter-saving, and the surgical wounds are extremely small. Our preliminary results are very promising.

  11. Motion artifacts in MRI: A complex problem with many partial solutions.

    Science.gov (United States)

    Zaitsev, Maxim; Maclaren, Julian; Herbst, Michael

    2015-10-01

    Subject motion during magnetic resonance imaging (MRI) has been problematic since its introduction as a clinical imaging modality. While sensitivity to particle motion or blood flow can be used to provide useful image contrast, bulk motion presents a considerable problem in the majority of clinical applications. It is one of the most frequent sources of artifacts. Over 30 years of research have produced numerous methods to mitigate or correct for motion artifacts, but no single method can be applied in all imaging situations. Instead, a "toolbox" of methods exists, where each tool is suitable for some tasks, but not for others. This article reviews the origins of motion artifacts and presents current mitigation and correction methods. In some imaging situations, the currently available motion correction tools are highly effective; in other cases, appropriate tools still need to be developed. It seems likely that this multifaceted approach will be what eventually solves the motion sensitivity problem in MRI, rather than a single solution that is effective in all situations. This review places a strong emphasis on explaining the physics behind the occurrence of such artifacts, with the aim of aiding artifact detection and mitigation in particular clinical situations.

  12. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    Directory of Open Access Journals (Sweden)

    Riad I. Hammoud

    2014-10-01

    We describe two advanced video analysis techniques, including video indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets, affording an improvement in tracking over video data alone and leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  13. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as the reference and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that PSNR can be improved for mobile devices without sacrificing quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices; another contribution is the visual quality scoring system described in Section 6.
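
    The block-search step can be sketched independently of the wavelet transform. Below is a minimal rood-pattern block-matching routine in the spirit of ARPS, but simplified (a fixed rood arm length instead of the adaptive, prediction-based arm), so it illustrates only the search pattern, not the full algorithm:

        import numpy as np

        def sad(a, b):
            return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

        def rood_search(ref, cur, y, x, block=8, arm=4):
            """Find the motion vector of the block at (y, x) in `cur` against `ref`."""
            target = cur[y:y+block, x:x+block]
            best = (0, 0)
            best_cost = sad(ref[y:y+block, x:x+block], target)
            # Large rood pattern first, then a unit-step refinement around the best point.
            for step in (arm, 1):
                improved = True
                while improved:
                    improved = False
                    for dy, dx in ((step, 0), (-step, 0), (0, step), (0, -step)):
                        ny, nx = y + best[0] + dy, x + best[1] + dx
                        if 0 <= ny <= ref.shape[0]-block and 0 <= nx <= ref.shape[1]-block:
                            cost = sad(ref[ny:ny+block, nx:nx+block], target)
                            if cost < best_cost:
                                best_cost, best = cost, (best[0]+dy, best[1]+dx)
                                improved = True
            return best

        yy, xx = np.mgrid[0:64, 0:64]
        ref = (np.sin(yy / 5.0) * np.cos(xx / 7.0) * 127 + 128).astype(np.uint8)
        cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # content moves down 2, left 3
        # The matching offset back into the reference frame should be close to (-2, 3).
        print(rood_search(ref, cur, 24, 24))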

  14. Motion saliency detection using a temporal Fourier transform

    Science.gov (United States)

    Chen, Zhe; Wang, Xin; Sun, Zhen; Wang, Zhijian

    2016-06-01

    Motion saliency detection aims at detecting the dynamic semantic regions in a video sequence and is very important for many vision tasks. This paper proposes a new motion saliency detection method, the Temporal Fourier Transform, for fast motion saliency detection. Unlike conventional motion saliency detection methods that use complex mathematical models or features, variations in the phase spectrum of consecutive frames are identified and extracted as the key to obtaining the location of salient motion. As all the calculation is made on the temporal frequency spectrum, our model is independent of features, background models, or other forms of prior knowledge about the scene. The benefits of the proposed approach are evaluated on various videos in which the number of moving objects, the illumination, and the background all differ. Compared with some state-of-the-art methods, our method achieves both good accuracy and fast computation.
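
    A small sketch of the phase-spectrum idea, assuming grayscale frames as NumPy arrays. The exact formulation in the paper is not given in this record, so the code follows the generic "reconstruct from the phase difference of consecutive frames" recipe rather than the authors' definition:

        import numpy as np

        def motion_saliency(prev_frame, cur_frame):
            """Salient-motion map from the phase change between two consecutive frames."""
            f_prev, f_cur = np.fft.fft2(prev_frame), np.fft.fft2(cur_frame)
            phase_diff = np.angle(f_cur) - np.angle(f_prev)
            # Reconstruct using only the temporal phase variation (unit magnitude spectrum);
            # an optional Gaussian smoothing of the result is omitted here.
            recon = np.abs(np.fft.ifft2(np.exp(1j * phase_diff)))
            saliency = recon ** 2
            return saliency / saliency.max()

        prev = np.zeros((64, 64)); cur = np.zeros((64, 64))
        prev[20:28, 20:28] = 1.0; cur[20:28, 24:32] = 1.0      # a block that moved right
        sal = motion_saliency(prev, cur)
        print("most salient location:", np.unravel_index(sal.argmax(), sal.shape))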

  15. Video Tracking in Digital Compositing for Video Post-Production

    Directory of Open Access Journals (Sweden)

    Ardiyan Ardiyan

    2012-04-01

    Video tracking is one of the processes used in digital post-production of video and motion pictures. The video tracking method is helpful in production for realizing the visual concept, and it is an important consideration in the process of creating visual effects. This paper presents how the tracking process works and its benefits for visual needs, especially for video and motion picture production. Some of the issues involved in the tracking process, such as tracking failures, are also made clear in this discussion.

  17. The Effects of Grade Level, Type of Motion, Cueing Strategy, Pictorial Complexity, and Color on Children's Interpretation of Implied Motion in Pictures.

    Science.gov (United States)

    Downs, Elizabeth; Jenkins, Stephen J.

    2001-01-01

    Examined the ability of 64 kindergarten and third-grade children to interpret implied motion in pictures accurately. Third graders were more adept at identifying implied motion. Results also show that postural motion was more effective than a flow-line condition in conveying motion, and that cues and relevant pictorial background information…

  18. No-Reference Video Quality Assessment Model for Distortion Caused by Packet Loss in the Real-Time Mobile Video Services

    Directory of Open Access Journals (Sweden)

    Jiarun Song

    2014-01-01

    Packet loss causes severe errors due to the corruption of related video data. For most video streams, because predictive coding structures are employed, transmission errors in one frame not only cause decoding failure of that frame at the receiver side, but also propagate to its subsequent frames along the motion prediction path, which brings a significant degradation of end-to-end video quality. To quantify the effects of packet loss on video quality, a no-reference objective quality assessment model is presented in this paper. Considering the fact that the degradation of video quality relies significantly on the video content, the temporal complexity is estimated to reflect the varying characteristics of the video content, using the macroblocks with different motion activities in each frame. Then the quality of a frame affected by reference frame loss, by error propagation, or by both is evaluated, respectively. Utilizing a two-level temporal pooling scheme, the video quality is finally obtained. Extensive experimental results show that the video quality estimated by the proposed method matches well with the subjective quality.
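
    A hedged sketch of the temporal-complexity idea: summarize each frame's motion activity from its macroblock motion vectors, then use it to weight the impact of a lost reference frame. The degradation function below is invented purely for illustration; the paper's actual model and pooling scheme are not reproduced in this record.

        import numpy as np

        def temporal_complexity(mb_motion_vectors):
            """mb_motion_vectors: (num_macroblocks, 2) array for one frame."""
            return float(np.mean(np.linalg.norm(mb_motion_vectors, axis=1)))

        def frame_quality_after_loss(base_quality, complexity, propagation_depth,
                                     alpha=0.4, beta=0.1):
            """Toy degradation model: more motion and deeper propagation -> lower quality."""
            return max(1.0, base_quality
                            - alpha * complexity
                            - beta * propagation_depth * complexity)

        mvs = np.random.randn(396, 2) * 2.0      # e.g., CIF: 396 macroblocks per frame
        c = temporal_complexity(mvs)
        print([round(frame_quality_after_loss(4.5, c, d), 2) for d in range(5)])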

  19. Improved FFSBM Algorithm and Its VLSI Architecture for AVS Video Standard

    Institute of Scientific and Technical Information of China (English)

    Li Zhang; Don Xie; Di Wu

    2006-01-01

    The video part of AVS (Audio Video Coding Standard) has been finalized recently. It has adopted variable block size motion compensation to improve its coding efficiency, which brings a heavy computational burden when it is applied to compress HDTV (high-definition television) content. Based on the original FFSBM (fast full search block matching), this paper proposes an improved FFSBM algorithm that adaptively reduces the complexity of motion estimation according to the actual motion intensity. The main idea of the proposed algorithm is to use the statistical distribution of the MVD (motion vector difference). A VLSI (very large scale integration) architecture is also proposed to implement the improved motion estimation algorithm. Experimental results show that this algorithm-hardware co-design gives a better tradeoff between gate count and throughput than existing designs and is a proper solution for variable block size motion estimation in AVS.

  20. Complex uniportal video-assisted thoracoscopic sleeve lobectomy during live surgery broadcasting.

    Science.gov (United States)

    Yang, Yang; Guerrero, William Guido; Algitmi, Iskander; Gonzalez-Rivas, Diego

    2016-06-01

    The uniportal approach for major pulmonary resections began in 2010 with the first case being performed by González-Rivas and colleagues in La Coruña. Since then, a number of teams around the world have performed hundreds of cases, recently applying the approach to more advanced and complex cases. The technique has been reported to be feasible and reliable, with results similar to those obtained in early-stage lung cancer lobectomies. The case presented in this article is an example of an extreme condition: a very obese patient, strong adhesions, a lower lobe fused to the diaphragm and enlarged inflammatory adenopathies that made the procedure technically very challenging. In addition, the surgery was performed during a live surgery event and was broadcast to an auditorium. Nevertheless, the case was successfully completed through a uniportal VATS approach with no complications.

  1. The influence of large-amplitude librational motion on the hydrogen bond energy for alcohol–water complexes

    DEFF Research Database (Denmark)

    Andersen, Jonas; Heimdal, J.; Larsen, René Wugt

    2015-01-01

    The class of large-amplitude donor OH librational motion is shown to account for up to 5.1 kJ mol-1 of the destabilizing change of vibrational zero-point energy upon intermolecular OH...O hydrogen bond formation. The experimental findings are supported by complementary, unambiguous assignments of the intermolecular high-frequency out-of-plane and low-frequency in-plane donor OH librational modes for mixed alcohol–water complexes. The vibrational assignments confirm directly that water acts as the hydrogen bond donor in the most stable mixed complexes and that the tertiary alcohol is a superior hydrogen bond acceptor.

  2. A High-Throughput and Low-Complexity H.264/AVC Intra 16×16 Prediction Architecture for HD Video Sequences

    Directory of Open Access Journals (Sweden)

    M. Orlandić

    2014-11-01

    The H.264/AVC compression standard provides tools and solutions for efficient coding of video sequences of various resolutions. Spatial redundancy in a video frame is removed by use of an intra prediction algorithm. There are three block-wise types of intra prediction: 4×4, 8×8 and 16×16. This paper proposes an efficient, low-complexity architecture for intra 16×16 prediction that provides real-time processing of HD video sequences. All four prediction modes (V, H, DC, Plane) are supported in the implementation. The high-complexity plane mode computes a number of intermediate parameters required for creating the prediction pixels. Local memory buffers are used for storing intermediate reconstructed data used as reference pixels in the intra prediction process. High throughput is achieved by 16-pixel parallelism, and the proposed prediction process takes 48 cycles to process one macroblock. The proposed architecture is synthesized and implemented on a Kintex-7 KC705 (XC7K325T) board and requires a 94 MHz clock to encode a video sequence of HD 4k×2k (3840×2160) resolution at 60 fps in real time. This represents a significant improvement compared to the state of the art.
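
    For reference, the three simpler intra 16×16 modes can be written down directly from the reconstructed neighbouring pixels. The sketch below covers the vertical, horizontal and DC modes only; the plane mode, whose parameter computation the paper identifies as the high-complexity part, is omitted, and simple rounding stands in for the standard's exact integer arithmetic.

        import numpy as np

        def intra16_predict(top, left, mode):
            """top, left: 16 reconstructed neighbour pixels each; returns a 16x16 prediction."""
            if mode == "V":                      # vertical: copy the row above downwards
                return np.tile(top, (16, 1))
            if mode == "H":                      # horizontal: copy the left column rightwards
                return np.tile(left.reshape(16, 1), (1, 16))
            if mode == "DC":                     # DC: mean of all available neighbours
                dc = int(round((top.sum() + left.sum()) / 32.0))
                return np.full((16, 16), dc, dtype=top.dtype)
            raise ValueError("plane mode not implemented in this sketch")

        top = np.arange(16, dtype=np.int32)
        left = np.full(16, 100, dtype=np.int32)
        print(intra16_predict(top, left, "DC")[0, 0])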

  3. Temporal Error Concealment Technique for MPEG-4 Video Streams

    Institute of Scientific and Technical Information of China (English)

    DING Xuewen; YANG Zhaoxuan; GUO Yingchun

    2006-01-01

    Concerning the inter4v mode widely employed in MPEG-4 video, a new temporal error concealment scheme for MPEG-4 video sequences is proposed, which can selectively interpolate one or four motion vectors (MVs) for a missing macroblock (MB) according to the estimated MB coding mode. The performance of the proposed scheme is compared with existing schemes on multiple test sequences at different bit error rates. Experimental results show that the proposed algorithm can mask the impairments caused by transmission errors more effectively than the zero-MV and average-MV methods, at the cost of more computation time, for different bit error rates. It gives an acceptable image quality close to that obtained by the selective motion vector matching (SMVM) algorithm, while taking less than half the operation cycles. The proposed concealment scheme is suitable for low-complexity real-time video implementations.
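
    A minimal sketch of the motion-vector interpolation step: recover either one MV or four MVs for a lost macroblock from its received neighbours, depending on an estimated coding mode. The mode-estimation and boundary-match criteria of the paper are not reproduced here, and the neighbour-to-block assignment is only illustrative.

        import numpy as np

        def conceal_mv(neighbor_mvs, assume_inter4v):
            """neighbor_mvs: dict with 'top', 'bottom', 'left', 'right' MVs (2-vectors) or None."""
            available = [np.asarray(v, float) for v in neighbor_mvs.values() if v is not None]
            if not available:
                return [np.zeros(2)] * (4 if assume_inter4v else 1)   # fall back to the zero MV
            if not assume_inter4v:
                return [np.median(available, axis=0)]                 # one MV for the whole MB
            # inter4v: one MV per 8x8 block, each taken from a different neighbour if available.
            order = ["top", "left", "right", "bottom"]
            fallback = np.mean(available, axis=0)
            return [np.asarray(neighbor_mvs[k], float) if neighbor_mvs.get(k) is not None
                    else fallback for k in order]

        print(conceal_mv({"top": (1, 0), "bottom": (3, 0), "left": None, "right": (2, 2)},
                         assume_inter4v=False))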

  4. Wyner-Ziv to Baseline H.264 Video Transcoder

    Science.gov (United States)

    Corrales-García, Alberto; Martínez, Jose Luis; Fernández-Escribano, Gerardo; Villalon, Jose Miguel; Kalva, Hari; Cuenca, Pedro

    2012-12-01

    Mobile-to-mobile video communication is one of the most requested services that operator networks can offer. However, in a framework where one mobile device sends video information to another, both transmitter and receiver should employ video encoders and decoders with low complexity. On the one hand, traditional video codecs, such as H.264, are based on architectures whose encoders have higher complexity than their decoders. On the other hand, Wyner-Ziv (WZ) video coding (a particular case of distributed video coding) is an innovative paradigm which provides encoders with less complexity than decoders. Taking advantage of both paradigms, in terms of low-complexity algorithms, a suitable solution consists in transcoding from WZ to H.264. Nevertheless, the transcoding process should be carried out efficiently so as to avoid major delays in communication; in other words, the transcoding process should perform the conversion without requiring the complete process of decoding and re-encoding. Based on all the algorithms and techniques we have proposed before, a low-complexity WZ to H.264 transcoder for the Baseline Profile is proposed in this article. Firstly, the proposed transcoder can efficiently turn every WZ group of pictures into the common H.264 I11P pattern and, secondly, the proposed transcoder is based on the hypothesis that macroblock coding mode decisions in H.264 video have a high correlation with the distribution of the side information residual in WZ video. The proposed algorithm selects a sub-set of the several coding modes in H.264. Moreover, a dynamic motion estimation technique is proposed in this article for use in combination with the above algorithm. Simulation results show that the proposed transcoder reduces the inter prediction complexity in H.264 by up to 93%, while maintaining coding efficiency.

  5. Block-based embedded color image and video coding

    Science.gov (United States)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of its reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture-based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK are high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  6. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. For video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
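
    The patch-entropy idea behind the HIP curve can be sketched as follows. The exact HIP definition is not given in this record, so the code simply measures, per frame, the entropy of a histogram over coarsely quantized patch signatures, which is an assumption rather than the authors' formula:

        import numpy as np

        def patch_heterogeneity(frame, patch=8, levels=4):
            """Entropy of quantized-patch signatures for one grayscale frame."""
            h, w = frame.shape
            signatures = []
            for y in range(0, h - patch + 1, patch):
                for x in range(0, w - patch + 1, patch):
                    p = frame[y:y+patch, x:x+patch]
                    q = np.floor(p / (256 / levels)).astype(int)   # coarse quantization
                    signatures.append(hash(q.tobytes()))
            _, counts = np.unique(signatures, return_counts=True)
            probs = counts / counts.sum()
            return float(-(probs * np.log2(probs)).sum())

        video = np.random.randint(0, 256, (10, 64, 64))            # 10 synthetic frames
        hip_curve = [patch_heterogeneity(f) for f in video]
        print(["%.2f" % v for v in hip_curve])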

  7. Method for endobronchial video parsing

    Science.gov (United States)

    Byrnes, Patrick D.; Higgins, William E.

    2016-03-01

    Endoscopic examination of the lungs during bronchoscopy produces a considerable amount of endobronchial video. A physician uses the video stream as a guide to navigate the airway tree for various purposes such as general airway examinations, collecting tissue samples, or administering disease treatment. Aside from its intraoperative utility, the recorded video provides high-resolution detail of the airway mucosal surfaces and a record of the endoscopic procedure. Unfortunately, due to a lack of robust automatic video-analysis methods to summarize this immense data source, it is essentially discarded after the procedure. To address this problem, we present a fully-automatic method for parsing endobronchial video for the purpose of summarization. Endoscopic-shot segmentation is first performed to parse the video sequence into structurally similar groups according to a geometric model. Bronchoscope-motion analysis then identifies motion sequences performed during bronchoscopy and extracts relevant information. Finally, representative key frames are selected based on the derived motion information to present a drastically reduced summary of the processed video. The potential of our method is demonstrated on four endobronchial video sequences from both phantom and human data. Preliminary tests show that, on average, our method reduces the number of frames required to represent an input video sequence by approximately 96% and consistently selects salient key frames appropriately distributed throughout the video sequence, enabling quick and accurate post-operative review of the endoscopic examination.

  8. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors

    NARCIS (Netherlands)

    Shoaib, Muhammad; Bosch, Stephan; Durmaz Incel, Ozlem; Scholten, Hans; Havinga, Paul J.M.

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk.

  9. Representation of head-centric flow in the human motion complex.

    NARCIS (Netherlands)

    Goossens, J.; Dukelow, S.P.; Menon, R.S.; Vilis, T.; Berg, A.V. van den

    2006-01-01

    Recent neuroimaging studies have identified putative homologs of macaque middle temporal area (area MT) and medial superior temporal area (area MST) in humans. Little is known about the integration of visual and nonvisual signals in human motion areas compared with monkeys.

  10. Communicating Pictures: a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  11. Video Compression Schemes Using Edge Feature on Wireless Video Sensor Networks

    Directory of Open Access Journals (Sweden)

    Phat Nguyen Huu

    2012-01-01

    This paper puts forward a low-complexity video compression algorithm that uses the edges of objects in the frames to estimate and compensate for motion. Based on the proposed algorithm, two schemes that balance energy consumption among nodes in a cluster of a wireless video sensor network (WVSN) are proposed. In these schemes, we divide the compression process into several small processing components, which are then distributed to multiple nodes along a path from a source node to a cluster head in a cluster. We conduct extensive computational simulations to validate our method and find that the proposed schemes not only balance the energy consumption of sensor nodes by sharing the processing tasks but also improve the quality of the decoded video by using the edges of objects in the frames.

  12. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors.

    Science.gov (United States)

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2016-03-24

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.

  13. Scalable motion vector coding

    Science.gov (United States)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
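
    Median-based motion vector prediction, as used in the proposed MVC, is typically the component-wise median of neighbouring blocks' vectors; the sketch below illustrates the idea (the codec's exact neighbour set and the embedded coding of the residuals are not shown):

        import numpy as np

        def median_predict(mv_left, mv_top, mv_topright):
            """Component-wise median of three neighbouring motion vectors."""
            stacked = np.array([mv_left, mv_top, mv_topright])
            return np.median(stacked, axis=0)

        def encode_mv(mv, mv_left, mv_top, mv_topright):
            """Transmit only the prediction residual; the decoder adds the same prediction back."""
            prediction = median_predict(mv_left, mv_top, mv_topright)
            return np.asarray(mv) - prediction

        # A well-predicted vector leaves a zero residual, which codes very cheaply.
        print(encode_mv((5, -2), mv_left=(4, -2), mv_top=(6, 0), mv_topright=(5, -3)))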

  14. Motion states extraction with optical flow for rat-robot automatic navigation.

    Science.gov (United States)

    Zhang, Xinlu; Sun, Chao; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    The real-time acquisition of precise motion states is significant and difficult for bio-robot automatic navigation. In this paper, we propose a real-time video-tracking algorithm that extracts the motion states of rat-robots in complex environments using optical flow. The rat-robot's motion states, including location, speed and motion trend, are acquired accurately in real time. Compared with traditional methods based on a single frame image, our algorithm, which uses consecutive frames, provides more exact and richer motion information for the automatic navigation of bio-robots. Video of manual navigation experiments on rat-robots in an eight-arm maze was used to test this algorithm. The average computation time is 25.76 ms, which is less than the image acquisition interval. The results show that our method can extract the motion states with good accuracy and low time consumption.
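
    A sketch of extracting location, speed and heading from dense optical flow with OpenCV, under the assumption that the tracked animal is the dominant moving region in the frame; the thresholds and frame rate below are illustrative, not taken from the paper.

        import cv2
        import numpy as np

        def motion_state(prev_gray, cur_gray, fps=25.0, mag_thresh=1.0):
            """Return (centroid_xy, speed_px_per_s, heading_rad) of the dominant moving region."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            moving = mag > mag_thresh
            if not moving.any():
                return None
            ys, xs = np.nonzero(moving)
            centroid = (float(xs.mean()), float(ys.mean()))
            speed = float(mag[moving].mean()) * fps            # pixels per second
            heading = float(np.arctan2(flow[..., 1][moving].mean(),
                                       flow[..., 0][moving].mean()))
            return centroid, speed, heading

        prev = np.zeros((120, 160), np.uint8); cur = prev.copy()
        prev[50:70, 40:60] = 255; cur[50:70, 44:64] = 255      # a patch moving right
        print(motion_state(prev, cur))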

  15. Slow-Motion Theory of Nuclear Spin Relaxation in Paramagnetic Low-Symmetry Complexes: A Generalization to High Electron Spin

    Science.gov (United States)

    Nilsson, T.; Kowalewski, J.

    2000-10-01

    The slow-motion theory of nuclear spin relaxation in paramagnetic low-symmetry complexes is generalized to comprise arbitrary values of S. We describe the effects of rhombic symmetry in the static zero-field splitting (ZFS) and allow the principal axis system of the static ZFS tensor to deviate from the molecule-fixed frame of the nuclear-electron dipole-dipole tensor. We show nuclear magnetic relaxation dispersion (NMRD) profiles for different illustrative cases, ranging from within the Redfield limit into the slow-motion regime with respect to the electron spin dynamics. We focus on S = 3/2 and compare the effects of symmetry-breaking properties on the paramagnetic relaxation enhancement (PRE) in this case with that of S = 1, which we have treated in a previous paper. We also discuss cases of S = 2, 5/2, 3, and 7/2. One of the main objectives of this investigation, together with the previous papers, is to provide a set of standard calculations using the general slow-motion theory, against which simplified models may be tested.

  16. Video Games and Digital Literacies

    Science.gov (United States)

    Steinkuehler, Constance

    2010-01-01

    Today's youth are situated in a complex information ecology that includes video games and print texts. At the basic level, video game play itself is a form of digital literacy practice. If we widen our focus from the "individual player + technology" to the online communities that play them, we find that video games also lie at the nexus of a…

  18. Robust spatio-temporal error concealment for packet-lossy H.264 video transmission

    Institute of Scientific and Technical Information of China (English)

    LIAO Ning; YAN Dan; QUAN Zi-yi; MEN Ai-dong

    2006-01-01

    In this article, a spatio-temporal post-processing error concealment algorithm designed for an H.264 video-streaming scheme over packet-lossy networks is presented. It aims at optimizing both the subjective quality of the restored video and the conventional objective metric, peak signal-to-noise ratio (PSNR), under the constraints of low delay and low computational complexity, which are critical for real-time applications and portable devices with limited resources. Specifically, it takes into consideration the physical properties of motion to achieve more meaningful perceptual video quality. Further, a content-adaptive bilinear spatial interpolation approach and a temporal error concealment approach are combined under a unified boundary match criterion based on texture and motion activity analysis. Extensive experiments have demonstrated that the proposal not only results in better reconstruction, objectively and subjectively, than the reference software model benchmark, but also shows better robustness across different video sequences.

  19. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  20. When Simple Harmonic Motion is not That Simple: Managing Epistemological Complexity by Using Computer-based Representations

    Science.gov (United States)

    Parnafes, Orit

    2010-12-01

    Many real-world phenomena, even "simple" physical phenomena such as natural harmonic motion, are complex in the sense that they require coordinating multiple subtle foci of attention to get the required information when experiencing them. Moreover, for students to develop sound understanding of a concept or a phenomenon, they need to learn to get the same type of information across different contexts and situations (diSessa and Sherin 1998; diSessa and Wagner 2005). Rather than simplifying complex situations, or creating a linear instructional sequence in which students move from one context to another, this paper demonstrates the use of computer-based representations to facilitate developing understanding of complex physical phenomena. The data is collected from 8 studies in which pairs of students are engaged in an exploratory activity, trying to understand the dynamic behavior of a simulation and, at the same time, to attribute meaning to it in terms of the physical phenomenon it represents. The analysis focuses on three episodes. The first two episodes demonstrate the epistemological complexity involved in attempting to make sense of natural harmonic oscillation. A third episode demonstrates the process by which students develop understanding in this complex perceptual and conceptual territory, through the mediation (Vygotsky 1978) of computer-based representations designed to facilitate understanding in this topic.

  1. Substrate recognition and motion mode analyses of PFV integrase in complex with viral DNA via coarse-grained models.

    Directory of Open Access Journals (Sweden)

    Jianping Hu

    HIV-1 integrase (IN) is an important target in the development of drugs against the AIDS virus. Drug design based on the structure of IN was markedly hampered by the lack of three-dimensional structure information for the HIV-1 IN-viral DNA complex. The prototype foamy virus (PFV) IN has high functional and structural homology with HIV-1 IN. Recently, the X-ray crystal structure of the complex of PFV IN with its cognate viral DNA has been obtained. In this study, both the Gaussian network model (GNM) and the anisotropic network model (ANM) have been applied to comparatively investigate the motion modes of PFV DNA-free and DNA-bound IN. The results show that the motion mode of PFV IN changes only slightly after binding with DNA. The motion of this enzyme favors association with DNA, and the binding ability is determined by its intrinsic structural topology. Molecular docking experiments were performed to obtain the binding modes of a series of diketo acid (DKA) inhibitors with the PFV IN conformations obtained from ANM, from which the reliability of PFV IN-DNA used in drug screening for strand transfer (ST) inhibitors was confirmed. It is also found that the functional groups keto-enol, bis-diketo, tetrazole and azido play a key role in aiding the recognition of viral DNA, and thus finally increase the inhibition capability of the corresponding DKA inhibitor. Our study provides some theoretical information and helps the design of anti-AIDS drugs based on the structure of IN.
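
    For readers unfamiliar with the elastic-network models used here, a minimal Gaussian network model can be written in a few lines: build the Kirchhoff (connectivity) matrix from Cα contacts within a cutoff and take the low-frequency eigenvectors as the dominant motion modes. The coordinates below are random placeholders, not the PFV intasome structure, and the cutoff is a conventional but assumed value.

        import numpy as np

        def gnm_modes(coords, cutoff=7.0, n_modes=3):
            """coords: (N, 3) C-alpha coordinates. Returns the lowest non-trivial GNM modes."""
            dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            kirchhoff = -(dist < cutoff).astype(float)
            np.fill_diagonal(kirchhoff, 0.0)
            np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # row sums -> zero
            eigvals, eigvecs = np.linalg.eigh(kirchhoff)
            return eigvals[1:1+n_modes], eigvecs[:, 1:1+n_modes]  # skip the zero mode

        coords = np.random.rand(200, 3) * 30.0                    # placeholder structure
        vals, vecs = gnm_modes(coords)
        print("lowest non-trivial eigenvalues:", np.round(vals, 3))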

  2. Spatially reduced image extraction from MPEG-2 video: fast algorithms and applications

    Science.gov (United States)

    Song, Junehwa; Yeo, Boon-Lock

    1997-12-01

    The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of discrete cosine transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. We also describe key video applications on the extracted reduced images.
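
    The core trick behind spatially reduced ("DC") images can be shown in a few lines: for intra-coded blocks, the DC coefficient of each 8×8 DCT block is proportional to the block average, so a reduced image falls out of the compressed data without a full decode. The sketch below works from pixel blocks for clarity; the paper's actual contribution, handling MPEG-2 interlaced modes and DCT-domain inverse motion compensation, is not reproduced.

        import numpy as np
        from scipy.fftpack import dct

        def dc_image(frame):
            """Build a 1/8-resolution image from the DC coefficients of 8x8 DCT blocks."""
            h, w = frame.shape
            reduced = np.zeros((h // 8, w // 8))
            for by in range(h // 8):
                for bx in range(w // 8):
                    block = frame[by*8:(by+1)*8, bx*8:(bx+1)*8].astype(float)
                    coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')  # 2-D DCT-II
                    reduced[by, bx] = coeffs[0, 0] / 8.0                      # equals the block mean
            return reduced

        frame = np.tile(np.linspace(0, 255, 64), (64, 1))
        print(np.round(dc_image(frame)[0, :3], 1))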

  3. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
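
    The compressed-domain tracking idea, moving the region-of-interest according to the motion vectors the encoder has already computed, can be sketched as follows. The resizing logic and the MPEG-4 FGS integration described in the paper are omitted, and the simple averaging rule below is only an illustration.

        import numpy as np

        def update_roi(roi, mv_field, block=16):
            """roi: (x, y, w, h) in pixels; mv_field: (H_blocks, W_blocks, 2) encoder motion vectors."""
            x, y, w, h = roi
            bx0, by0 = x // block, y // block
            bx1, by1 = (x + w) // block + 1, (y + h) // block + 1
            inside = mv_field[by0:by1, bx0:bx1].reshape(-1, 2)
            dx, dy = inside.mean(axis=0) if len(inside) else (0.0, 0.0)
            return (int(round(x + dx)), int(round(y + dy)), w, h)   # translate, keep the size

        mvs = np.zeros((18, 22, 2)); mvs[4:8, 5:9] = (6, -2)        # object blocks moving right/up
        print(update_roi((80, 64, 64, 64), mvs))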

  5. Interventional video tomography

    Science.gov (United States)

    Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

    1995-05-01

    Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery to visualize in real-time intraoperatively the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase accuracy and speed of the reconstruction the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure the cross sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes obsolete the use of 3D digitizing probes for the patient image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery a head-up display is used to overlay real-time reformatted cross sectional imaging data with the live video image.

  6. Tracking of Maneuvering Complex Extended Object with Coupled Motion Kinematics and Extension Dynamics Using Range Extent Measurements.

    Science.gov (United States)

    Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin

    2017-09-22

    The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects' extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches.

  7. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering.

    Science.gov (United States)

    Omodei, M M; McLennan, J

    1994-12-01

    Head-mounted video recording is described as a potentially powerful method for studying decision making in natural settings. Most alternative data-collection procedures are intrusive and disruptive of the decision-making processes involved while conventional video-recording procedures are either impractical or impossible. As a severe test of the robustness of the methodology we studied the decision making of 6 experienced orienteers who carried a head-mounted light-weight video camera as they navigated, running as fast as possible, around a set of control points in a forest. Use of the Wilcoxon matched-pairs signed-ranks test indicated that compared with free recall, video-assisted recall evoked (a) significantly greater experiential immersion in the recall, (b) significantly more specific recollections of navigation-related thoughts and feelings, (c) significantly more realizations of map and terrain features and aspects of running speed which were not noticed at the time of actual competition, and (d) significantly greater insight into specific navigational errors and the intrusion of distracting thoughts into the decision-making process. Potential applications of the technique in (a) the environments of emergency services, (b) therapeutic contexts, (c) education and training, and (d) sports psychology are discussed.

  8. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  9. Method and System for Temporal Filtering in Video Compression Systems

    Science.gov (United States)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame
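
    The adaptive non-linear motion model of the first innovation can be illustrated with a simple second-order extrapolation: given the displacement of a pixel from frame 1 to 2 and from 2 to 3, a quadratic (constant-acceleration) fit predicts where it lands in frame 4. The sketch below is only a schematic of that idea; the constant-acceleration assumption and the function names are mine, not the disclosed system's.

```python
def extrapolate_position(p1, p2, p3):
    """Predict the frame-4 position of a pixel observed at p1, p2, p3 in
    frames 1-3, using a quadratic (constant-acceleration) motion model
    rather than simple linear extrapolation."""
    v12 = (p2[0] - p1[0], p2[1] - p1[1])      # first motion vector
    v23 = (p3[0] - p2[0], p3[1] - p2[1])      # second motion vector
    acc = (v23[0] - v12[0], v23[1] - v12[1])  # change in motion (non-linear term)
    v34 = (v23[0] + acc[0], v23[1] + acc[1])  # extrapolated third motion vector
    return (p3[0] + v34[0], p3[1] + v34[1])

# a pixel accelerating to the right: it moved 2 then 4 pixels, so 6 is predicted next
print(extrapolate_position((10, 10), (12, 10), (16, 10)))   # -> (22, 10)
```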

  10. A fast motion estimation algorithm for mobile communications

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo-bin

    2006-01-01

    The limitation of processing power, battery life and memory capacity of portable terminals requires reducing encoding complexity in mobile communications. Motion estimation (ME) is the most computationally intensive module in a typical video codec, and it determines not only the encoder's performance but also the reconstructed video quality. In this paper, a fast ME algorithm for H.264/AVC baseline profile coding is proposed based on an analysis of the motion vector field and error surface, and of the statistical distributions of different macroblock (MB) types. Simulation results showed that, in comparison with MVFAST, the proposed algorithm can reduce the computational load by over 7.2% with no additional memory requirement while maintaining the same video quality as MVFAST. Furthermore, its simplicity makes it easy to implement in hardware.
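
    A generic sketch of the kind of predictor-plus-early-termination block search used by fast ME methods of this family is given below. It is not the paper's algorithm: the SAD cost, the fixed threshold and the small search pattern are assumptions made for illustration only.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs=16):
    """Sum of absolute differences between a block of the current frame
    and a displaced block of the reference frame."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf
    return np.abs(cur[by:by + bs, bx:bx + bs].astype(int)
                  - ref[y:y + bs, x:x + bs].astype(int)).sum()

def fast_me(cur, ref, bx, by, predictor=(0, 0), thresh=512, bs=16):
    """Start from a predicted motion vector; stop early if the match is
    already good, otherwise refine with a small plus-shaped pattern."""
    best = predictor
    best_cost = sad(cur, ref, bx, by, predictor[0], predictor[1], bs)
    if best_cost < thresh:                 # early termination
        return best, best_cost
    improved = True
    while improved:
        improved = False
        for ddx, ddy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (best[0] + ddx, best[1] + ddy)
            cost = sad(cur, ref, bx, by, cand[0], cand[1], bs)
            if cost < best_cost:
                best, best_cost, improved = cand, cost, True
    return best, best_cost
```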

  11. The effect of large amplitude motions on the vibrational intensities in hydrogen bonded complexes

    DEFF Research Database (Denmark)

    Mackeprang, Kasper; Hänninen, Vesa; Halonen, Lauri

    2015-01-01

    We have developed a model to calculate accurately the intensity of the hydrogen-bonded XH-stretching vibrational transition in hydrogen-bonded complexes. In the Local Mode Perturbation Theory (LMPT) model, the unperturbed system is described by a local mode (LM) model, which is perturbed by the intermolecular modes of the hydrogen-bonded system that couple with the intramolecular vibrations of the donor unit through the potential energy surface. We have applied the model to three complexes containing water as the donor unit and different acceptor units, providing a series of increasing binding energy: H2O⋯N2, H2O⋯H2O, and H2O⋯NH3. Results obtained by the LMPT model are presented and compared with calculated results obtained by other vibrational models and with previous results from gas-phase and helium-droplet experiments. We find that the LMPT model reduces the oscillator strengths...

  12. Tunnelling and barrier-less motions in the 2-fluoroethanol-water complex: a rotational spectroscopic and ab initio study.

    Science.gov (United States)

    Huang, Wenyuan; Thomas, Javix; Jäger, Wolfgang; Xu, Yunjie

    2017-05-17

    The pure rotational spectrum of the 2-fluoroethanol (2-FE)-water complex was measured using a chirped-pulse Fourier-transform microwave spectrometer and a cavity-based Fourier-transform microwave spectrometer. In the detected 2-FE-water conformer, 2-FE serves as a proton donor, in contrast to its role in the observed ethanol-water conformer, while water acts simultaneously as a hydrogen bond donor and acceptor, forming a hydrogen-bonded ring with an O-H⋯O and an O-H⋯F hydrogen bond. Comparison to the calculated dipole moment components suggests that the observed structure sits between the two most stable minima identified theoretically. This conclusion is supported by extensive deuterium isotopic data. Further analysis shows that these two minima are connected by a barrier-less wagging motion of the non-bonded hydrogen of the water subunit. The observed narrow splitting with a characteristic 3 : 1 intensity ratio is attributed to an exchange of the bonded and non-bonded hydrogen atoms of water. The tunneling barrier of a proposed tunneling path is calculated to be as low as 5.10 kJ mol⁻¹. A non-covalent interaction analysis indicates that the water rotation motion along the tunneling path has a surprisingly small effect on the interaction energy between water and 2-FE.

  13. Simple to complex modeling of breathing volume using a motion sensor

    Science.gov (United States)

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-01-01

    Purpose To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Methods Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (35.4 l/min) VEs were derived from activity intensity classifications (light 6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex (random forest technique) modeling technique in predicting VE from activity counts. Results Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted the high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). Actigraph™ cut-points for light, medium and high VEs were 3660 cpm. Conclusions There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose–response relationship between internal exposure to pollutants and disease. PMID:23542491

  14. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  15. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv's Distribution for Quadratic Frequency Modulation Signals.

    Science.gov (United States)

    Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu

    2017-06-21

    For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), is proposed for QFM signals. The 2D-PMLVD is simple and can be easily implemented using the fast Fourier transform (FFT) and complex multiplication. The method is analyzed in the paper, including its principle, the cross terms, anti-noise performance, and computational complexity. Compared to three other representative methods, the 2D-PMLVD achieves better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
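
    For concreteness, a QFM component of the kind modeled above can be written with instantaneous frequency f0 + CR·t + QCR·t²/2. The sketch below merely synthesizes such a multicomponent signal; the parameter values are arbitrary, and this is the signal model only, not the 2D-PMLVD estimator itself.

```python
import numpy as np

def qfm(t, amp, f0, cr, qcr):
    """One quadratic-FM component: instantaneous frequency f0 + cr*t + qcr*t**2/2,
    i.e. cr is the chirp rate and qcr the quadratic chirp rate."""
    phase = 2 * np.pi * (f0 * t + cr * t**2 / 2 + qcr * t**3 / 6)
    return amp * np.exp(1j * phase)

fs = 256.0
t = np.arange(0, 1, 1 / fs)
# two components plus noise: a multicomponent QFM signal as in the azimuth echo model
x = qfm(t, 1.0, 20, 30, 40) + qfm(t, 0.7, -10, -25, 60)
x += (np.random.randn(t.size) + 1j * np.random.randn(t.size)) * 0.1
```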

  16. An Image Pattern Tracking Algorithm for Time-resolved Measurement of Mini- and Micro-scale Motion of Complex Object

    Directory of Open Access Journals (Sweden)

    John M. Seiner

    2009-03-01

    Full Text Available An image pattern tracking algorithm is described in this paper for time-resolved measurements of mini- and micro-scale movements of complex objects. This algorithm works with a high-speed digital imaging system, which records thousands of successive image frames in a short time period. The image pattern of the observed object is tracked among successively recorded image frames with a correlation-based algorithm, so that the time histories of the position and displacement of the investigated object in the camera focus plane are determined with high accuracy. The speed, acceleration and harmonic content of the investigated motion are obtained by post processing the position and displacement time histories. The described image pattern tracking algorithm is tested with synthetic image patterns and verified with tests on live insects.
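
    A minimal stand-in for the correlation-based tracking step described above is normalized cross-correlation template matching, sketched below with OpenCV; the sub-pixel refinement and the post-processing to speed, acceleration and harmonic content used in the paper are not reproduced.

```python
import cv2
import numpy as np

def track_pattern(frames, template):
    """Locate a fixed image pattern in each frame by normalized
    cross-correlation and return the per-frame (x, y) positions."""
    positions = []
    for frame in frames:
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)
        positions.append(max_loc)            # top-left corner of the best match
    return np.array(positions, dtype=float)

def displacement_history(positions, dt):
    """Displacement and speed time histories from the tracked positions."""
    disp = positions - positions[0]
    vel = np.gradient(positions, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    return disp, speed
```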

  17. Large amplitude motion of the acetylene molecule within acetylene-neon complexes hosted in helium droplets.

    Science.gov (United States)

    Briant, M; Mengesha, E; de Pujo, P; Gaveau, M-A; Soep, B; Mestdagh, J-M; Poisson, L

    2016-06-28

    Superfluid helium droplets provide an ideal environment for spectroscopic studies with rotational resolution. Nevertheless, the molecular rotation is hindered because the embedded molecules are surrounded by a non-superfluid component. The present work explores the dynamical role of this component in the hindered rotation of C2H2 within the C2H2-Ne complex. A HENDI experiment was built and near-infrared spectroscopy of C2H2-Ne and C2H2 was performed in the spectral region overlapping the ν3/ν2 + ν4 + ν5 Fermi-type resonance of C2H2. The comparison between measured and simulated spectra helped to address the above issue.

  18. Ground motion in the presence of complex Topography II: Earthquake sources and 3D simulations

    Science.gov (United States)

    Hartzell, Stephen; Ramirez-Guzman, Leonardo; Meremonte, Mark; Leeds, Alena L.

    2017-01-01

    Eight seismic stations were placed in a linear array with a topographic relief of 222 m over Mission Peak in the east San Francisco Bay region for a period of one year to study topographic effects. Seventy‐two well‐recorded local earthquakes are used to calculate spectral amplitude ratios relative to a reference site. A well‐defined fundamental resonance peak is observed with individual station amplitudes following the theoretically predicted progression of larger amplitudes in the upslope direction. Favored directions of vibration are also seen that are related to the trapping of shear waves within the primary ridge dimensions. Spectral peaks above the fundamental one are also related to topographic effects but follow a more complex pattern. Theoretical predictions using a 3D velocity model and accurate topography reproduce many of the general frequency and time‐domain features of the data. Shifts in spectral frequencies and amplitude differences, however, are related to deficiencies of the model and point out the importance of contributing factors, including the shear‐wave velocity under the topographic feature, near‐surface velocity gradients, and source parameters.

  19. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    Full Text Available With the development of wireless network and the improvement of mobile device capability, video streaming is more and more widespread in such an environment. Under the condition of limited resource and inherent constraints, appropriate video adaptations have become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resource and improve visual perceptual quality. First, the attention model is derived from analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computation complexity. Then, through the integration of the attention model, capability of client device and correlational statistic model, attractive regions of video scenes are derived. The information object (IOB) weighted rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
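
    One ingredient of the attention model described above is motion in the compressed domain. The sketch below derives a crude block-level saliency map from motion-vector magnitudes, with a centre-distance bias standing in for the "location" feature; the normalization and the 0.7/0.3 weighting are assumptions, not the paper's model.

```python
import numpy as np

def motion_attention(mv_field):
    """Block-level attention map from a compressed-domain motion vector
    field of shape (rows, cols, 2): larger motion draws more attention,
    with a mild bias toward the frame centre (the 'location' feature)."""
    mag = np.linalg.norm(mv_field, axis=2)
    mag = mag / (mag.max() + 1e-9)                       # motion term in [0, 1]
    rows, cols = mag.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    dist = np.hypot((yy - cy) / rows, (xx - cx) / cols)  # distance from centre
    location = 1.0 - dist / dist.max()
    return 0.7 * mag + 0.3 * location                    # assumed weighting
```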

  20. Artificial Video for Video Analysis

    Science.gov (United States)

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  1. AN HMM BASED ANALYSIS FRAMEWORK FOR SEMANTIC VIDEO EVENTS

    Institute of Scientific and Technical Information of China (English)

    You Junyong; Liu Guizhong; Zhang Yaxin

    2007-01-01

    Semantic video analysis plays an important role in the field of machine intelligence and pattern recognition. In this paper, based on the Hidden Markov Model (HMM), a semantic recognition framework for compressed videos is proposed to analyze video events according to six low-level features. After a detailed analysis of video events, the pattern of global motion and five features of the foreground (the principal parts of videos) are employed as the observations of the Hidden Markov Model to classify events in videos. The application of the proposed framework to several video event detection tasks demonstrates its promising success in semantic video analysis.
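
    The decision rule in such a framework is typically to train one HMM per event class on low-level feature observations and to label a clip by the model with the highest likelihood. A minimal sketch of that rule using the hmmlearn package is given below; hmmlearn, the number of hidden states and the Gaussian observation model are stand-ins I have chosen, not details given by the paper, and the extraction of the six features is outside the sketch.

```python
import numpy as np
from hmmlearn import hmm

def train_event_models(training_data, n_states=4):
    """training_data: dict mapping event label -> list of observation
    sequences, each an array of shape (frames, n_features)."""
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_event(models, sequence):
    """Assign a clip to the event whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```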

  2. Earth modeling and estimation of the local seismic ground motion due to site geology in complex volcanoclastic areas

    Directory of Open Access Journals (Sweden)

    V. Di Fiore

    2002-06-01

    Full Text Available Volcanic areas often show complex behaviour as far as seismic wave propagation and seismic motion at the surface are concerned. In fact, the finite lateral extent of surface layers such as lava flows, blocks, and differential welding and/or zeolitization within pyroclastic deposits introduces into the propagation of seismic waves effects such as the generation of surface waves at the edge, resonance in the lateral direction, and diffraction and scattering of energy, which tend to modify the amplitude as well as the duration of the ground motion. The irregular topographic surface, typical of volcanic areas, also strongly influences the seismic site response. Despite this heterogeneity, it is unfortunately common geophysical and engineering practice, even in volcanic environments, to evaluate the subsurface velocity field with one-dimensional investigation methods (i.e., geognostic soundings, refraction surveys, down-hole tests, etc.) prior to the seismic site response computation, which in such cases is also made with 1D algorithms. This approach often leads to highly inaccurate results. In this paper we use a different approach, i.e., a fully 2D P-wave «turning ray» tomographic survey followed by 2D seismic site response modeling. We report here the results of this approach at three sites located a short distance from Mt. Vesuvius and Campi Flegrei and characterized by overburdens constituted by volcanoclastic deposits with large lateral and vertical variations of their elastic properties. Comparison between the 1D and 2D Dynamic Amplification Factors shows in all reported cases entirely different results, both in terms of peak period and spectral content, as expected from the clear bidimensionality of the geological sections. Therefore, these studies suggest evaluating carefully the subsoil geological structures in areas characterized by possible large lateral and vertical variations of the elastic properties in order to reach correct seismic site response

  3. Motion Tracking with Fast Adaptive Background Subtraction

    Institute of Scientific and Technical Information of China (English)

    Xiao De-Gui; Yu Sheng-sheng; et al.

    2003-01-01

    To extract and track moving objects is usually one of the most important tasks of intelligent video surveillance systems. This paper presents a fast and adaptive background subtraction algorithm and the motion tracking process built on it. The algorithm uses only the luminance components of sampled image sequence pixels and models every pixel statistically. It is characterized by its ability to detect sudden lighting changes in real time and to extract and track moving objects quickly. It is shown that our algorithm can be realized with lower time and space complexity and an adjustable object detection error rate in comparison with other background subtraction algorithms. Making use of the algorithm, an indoor monitoring system is also worked out and its motion tracking process is presented in this paper. Experimental results testify to the algorithm's good performance when used in an indoor monitoring system.
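
    A minimal luminance-only running-average background model with per-pixel thresholding, in the spirit of the adaptive scheme described above, is sketched below; the learning rate and threshold are placeholders, and the sudden-lighting-change detection of the paper is omitted.

```python
import numpy as np

class AdaptiveBackground:
    """Per-pixel running mean of luminance; pixels far from the mean are
    flagged as foreground, otherwise the background model is slowly updated."""

    def __init__(self, first_frame, alpha=0.02, thresh=25):
        self.mean = first_frame.astype(np.float32)
        self.alpha = alpha
        self.thresh = thresh

    def apply(self, frame):
        frame = frame.astype(np.float32)
        fg = np.abs(frame - self.mean) > self.thresh          # foreground mask
        # update the background only where the scene is judged static
        self.mean[~fg] = (1 - self.alpha) * self.mean[~fg] + self.alpha * frame[~fg]
        return fg
```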

  4. A novel dynamic frame rate control algorithm for H.264 low-bit-rate video coding

    Institute of Scientific and Technical Information of China (English)

    Yang Jing; Fang Xiangzhong

    2007-01-01

    The goal of this paper is to improve human visual perceptual quality as well as the coding efficiency of H.264 video at low bit rates by adaptively adjusting the number of skipped frames. The frames to encode are selected according to the motion activity of each frame and the motion accumulation of successive frames. The motion activity analysis is based on the statistics of motion vectors and takes the characteristics of the H.264 coding standard into account. A prediction model of motion accumulation is proposed to reduce the complex computation of motion estimation. The dynamic encoding frame rate control algorithm is applied at both the frame level and the GOB (Group of Macroblocks) level. Simulations compare the performance of JM76 with the proposed frame-level scheme and GOB-level scheme.
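
    The frame-selection idea, skipping frames until the motion accumulated since the last encoded frame crosses a threshold, can be sketched as below. The activity measure (mean motion-vector magnitude) and the threshold value are assumptions for illustration, not the paper's exact prediction model.

```python
import numpy as np

def select_frames(mv_fields, activity_thresh=1.5):
    """Decide which frames to encode: a frame is skipped while the motion
    accumulated since the last encoded frame stays below the threshold.
    mv_fields[i] is the motion vector field of frame i, shape (rows, cols, 2)."""
    encode = [0]                      # always encode the first frame
    accumulated = 0.0
    for i, mv in enumerate(mv_fields[1:], start=1):
        activity = np.linalg.norm(mv, axis=2).mean()   # per-frame motion activity
        accumulated += activity
        if accumulated >= activity_thresh:
            encode.append(i)
            accumulated = 0.0
    return encode
```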

  5. Residual Distributed Compressive Video Sensing Based on Double Side Information

    Institute of Scientific and Technical Information of China (English)

    CHEN Jian; SU Kai-Xiong; WANG Wei-Xing; LAN Cheng-Dong

    2014-01-01

    Compressed sensing (CS) is a novel technology to acquire and reconstruct sparse signals below the Nyquist rate. It has great potential in image and video acquisition and processing. To effectively improve the sparsity of the signal being measured and the reconstruction efficiency, an encoding and decoding model of residual distributed compressive video sensing based on double side information (RDCVS-DSI) is proposed in this paper. Exploiting the characteristics of the image itself in the frequency domain and the correlation between successive frames, the model regards the low-quality video frame as the first side information in the coding process, and generates the second side information for the non-key frames using motion estimation and compensation technology at the decoding end. Performance analysis and simulation experiments show that the RDCVS-DSI model can rebuild the video sequence with high fidelity at quite low complexity. About 1∼5 dB gain in the average peak signal-to-noise ratio of the reconstructed frames is observed, and the speed is close to that of the least complex DCVS, when compared with prior works on compressive video sensing.

  6. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  7. Further evidence of complex motor dysfunction in drug naive children with autism using automatic motion analysis of gait.

    Science.gov (United States)

    Nobile, Maria; Perego, Paolo; Piccinini, Luigi; Mani, Elisa; Rossi, Agnese; Bellina, Monica; Molteni, Massimo

    2011-05-01

    In order to increase the knowledge of locomotor disturbances in children with autism, and of the mechanism underlying them, the objective of this exploratory study was to reliably and quantitatively evaluate linear gait parameters (spatio-temporal and kinematic parameters), upper body kinematic parameters, walk orientation and smoothness using an automatic motion analyser (ELITE systems) in drug naïve children with Autistic Disorder (AD) and healthy controls. The children with AD showed a stiffer gait in which the usual fluidity of walking was lost, trunk postural abnormalities, highly significant difficulties to maintain a straight line and a marked loss of smoothness (increase of jerk index), compared to the healthy controls. As a whole, these data suggest a complex motor dysfunction involving both the cortical and the subcortical area or, maybe, a possible deficit in the integration of sensory-motor information within motor networks (i.e., anomalous connections within the fronto-cerebello-thalamo-frontal network). Although the underlying neural structures involved remain to be better defined, these data may contribute to highlighting the central role of motor impairment in autism and suggest the usefulness of taking into account motor difficulties when developing new diagnostic and rehabilitation programs.

  8. Practical video indexing and retrieval system

    Science.gov (United States)

    Liang, Yiqing; Wolf, Wayne H.; Liu, Bede; Huang, Jeffrey R.

    1998-03-01

    We integrated a practical digital video database system based on language and image analysis, with components from digital video processing, still image search, information retrieval, and closed captioning processing. The aim is to utilize the multiple modalities of information in video and implement data fusion among them: image information, speech/dialog information, closed captioning information, sound track information such as music, gunfire and explosions, caption information, motion information, and temporal information. Effort is made to allow access to video contents at different levels, including the video program level, scene level, shot level, and object level. Approaches of browsing, subject-based classification, and random retrieval are available to gain access to the contents.

  9. Three-dimensional representations of complex carbohydrates and polysaccharides--SweetUnityMol: a video game-based computer graphic software.

    Science.gov (United States)

    Pérez, Serge; Tubiana, Thibault; Imberty, Anne; Baaden, Marc

    2015-05-01

    A molecular visualization program tailored to deal with the range of 3D structures of complex carbohydrates and polysaccharides, either alone or in their interactions with other biomacromolecules, has been developed using advanced technologies elaborated by the video games industry. All the specific structural features displayed by the simplest to the most complex carbohydrate molecules have been considered and can be depicted. This concerns the monosaccharide identification and classification, conformations, location in single or multiple branched chains, depiction of secondary structural elements and the essential constituting elements in very complex structures. Particular attention was given to cope with the accepted nomenclature and pictorial representation used in glycoscience. This achievement provides a continuum between the most popular ways to depict the primary structures of complex carbohydrates to visualizing their 3D structures while giving the users many options to select the most appropriate modes of representations including new features such as those provided by the use of textures to depict some molecular properties. These developments are incorporated in a stand-alone viewer capable of displaying molecular structures, biomacromolecule surfaces and complex interactions of biomacromolecules, with powerful, artistic and illustrative rendering methods. They result in an open source software compatible with multiple platforms, i.e., Windows, MacOS and Linux operating systems, web pages, and producing publication-quality figures. The algorithms and visualization enhancements are demonstrated using a variety of carbohydrate molecules, from glycan determinants to glycoproteins and complex protein-carbohydrate interactions, as well as very complex mega-oligosaccharides and bacterial polysaccharides and multi-stranded polysaccharide architectures.

  10. A novel video recommendation system based on efficient retrieval of human actions

    Science.gov (United States)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has raised new issues such as helping users find what they need in an efficient way. Hence, Recommender Systems (RSs) are used to find the users' most favorite items. Finding these items relies on item or user similarities. However, many factors like sparsity and cold-start users impair the recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) as personalized recommendation. Differing views and incomplete or inaccurate tags can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of the videos most similar to the query. Since most videos relate to humans, we present a novel low-complexity, scalable method to recommend videos based on a model of the included action. This method draws on human action retrieval approaches. For modeling human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking them. The experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method can reach better results than the most used methods.

  11. Learning-Based Video Superresolution Reconstruction Using Spatiotemporal Nonlocal Similarity

    Directory of Open Access Journals (Sweden)

    Meiyu Liang

    2015-01-01

    Full Text Available Aiming at improving the video visual resolution quality and details clarity, a novel learning-based video superresolution reconstruction algorithm using spatiotemporal nonlocal similarity is proposed in this paper. Objective high-resolution (HR estimations of low-resolution (LR video frames can be obtained by learning LR-HR correlation mapping and fusing spatiotemporal nonlocal similarities between video frames. With the objective of improving algorithm efficiency while guaranteeing superresolution quality, a novel visual saliency-based LR-HR correlation mapping strategy between LR and HR patches is proposed based on semicoupled dictionary learning. Moreover, aiming at improving performance and efficiency of spatiotemporal similarity matching and fusion, an improved spatiotemporal nonlocal fuzzy registration scheme is established using the similarity weighting strategy based on pseudo-Zernike moment feature similarity and structural similarity, and the self-adaptive regional correlation evaluation strategy. The proposed spatiotemporal fuzzy registration scheme does not rely on accurate estimation of subpixel motion, and therefore it can be adapted to complex motion patterns and is robust to noise and rotation. Experimental results demonstrate that the proposed algorithm achieves competitive superresolution quality compared to other state-of-the-art algorithms in terms of both subjective and objective evaluations.

  12. Several methods of smoothing motion capture data

    Science.gov (United States)

    Qi, Jingjing; Miao, Zhenjiang; Wang, Zhifei; Zhang, Shujun

    2011-06-01

    Human motion capture and editing technologies are widely used in computer animation production. We can acquire original motion data with a human motion capture system and then process it with a motion editing system. However, noise embedded in the original motion data may be introduced by target extraction, the three-dimensional reconstruction process, the optimization algorithm, and the devices themselves in the human motion capture system. The motion data must be corrected before it is used to make videos; otherwise the animated figures will be jerky and their behavior unnatural. Therefore, motion smoothing is essential. In this paper, we compare and summarize three methods of smoothing original motion capture data.
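
    Three common ways of smoothing a noisy marker trajectory, the kind of comparison this paper performs, are a moving average, a Savitzky-Golay filter and a zero-phase Butterworth low-pass. The sketch below applies all three to one channel; the specific methods compared in the paper may differ, and the window lengths and cut-off frequency here are arbitrary.

```python
import numpy as np
from scipy.signal import savgol_filter, butter, filtfilt

def smooth_channel(x, fs=120.0):
    """Smooth one motion-capture channel (e.g. a joint coordinate sampled
    at fs Hz) with three standard low-pass techniques."""
    moving_avg = np.convolve(x, np.ones(5) / 5, mode="same")
    sav_gol = savgol_filter(x, window_length=11, polyorder=3)
    b, a = butter(4, 6.0 / (fs / 2))        # 4th-order Butterworth, 6 Hz cut-off
    butterworth = filtfilt(b, a, x)         # zero-phase filtering (no lag)
    return moving_avg, sav_gol, butterworth
```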

  13. Fractional Brownian motions via random walk in the complex plane and via fractional derivative. Comparison and further results on their Fokker-Planck equations

    Energy Technology Data Exchange (ETDEWEB)

    Jumarie, Guy E-mail: jumarie.guy@uqam.ca

    2004-11-01

    There are presently two different models of fractional Brownian motions available in the literature: the Riemann-Liouville fractional derivative of white noise on the one hand, and the complex-valued Brownian motion of order n defined by using a random walk in the complex plane, on the other hand. The paper provides a comparison between these two approaches, and in addition, takes this opportunity to contribute some complements. These two models are more or less equivalent on the theoretical standpoint for fractional order between 0 and 1/2, but their practical significances are quite different. Otherwise, for order larger than 1/2, the fractional derivative model has no counterpart in the complex plane. These differences are illustrated by an example drawn from mathematical finance. Taylor expansion of fractional order provides the expression of fractional difference in terms of finite difference, and this allows us to improve the derivation of Fokker-Planck equation and Kramers-Moyal expansion, and to get more insight in their relation with stochastic differential equations of fractional order. In the case of multi-fractal systems, the Fokker-Planck equation can be solved by using path integrals, and the fractional dynamic equations of the state moments of the stochastic system can be easily obtained. By combining fractional derivative and complex white noise of order n, one obtains a family of complex-valued fractional Brownian motions which exhibits long-range dependence. The conclusion outlines suggestions for further research, mainly regarding Lorentz transformation of fractional noises.
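
    For reference, one standard way of writing the fractional Brownian motion obtained from the Riemann-Liouville fractional integral of white noise (the first of the two models discussed above) is, with Hurst exponent $H$ and standard Brownian motion $B(s)$:

```latex
B_H(t) \;=\; \frac{1}{\Gamma\!\left(H+\tfrac{1}{2}\right)}
             \int_0^{t} (t-s)^{H-\frac{1}{2}}\, \mathrm{d}B(s),
\qquad 0 < H < 1 .
```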

  14. Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.

    Science.gov (United States)

    Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K

    2013-03-01

    Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.

  15. Single-Dimension Perturbation Glowworm Swarm Optimization Algorithm for Block Motion Estimation

    Directory of Open Access Journals (Sweden)

    Xiangpin Liu

    2013-01-01

    Full Text Available Classical fast motion estimation methods easily fall into local optima and suffer from high computational cost, while the convergence of motion estimation methods based on swarm intelligence algorithms is very slow. A new block motion estimation method based on a single-dimension perturbation glowworm swarm optimization algorithm is proposed. Single-dimension perturbation is a local search strategy that improves the ability of local optimization. The proposed method not only overcomes the tendency to fall into local optima, by combining the global search ability of the glowworm swarm optimization algorithm with the local optimization ability of single-dimension perturbation, but also reduces the computational complexity by using motion vector predictors and terminating strategies suited to the characteristics of video images. The experimental results show that the performance of the proposed method is better than that of other motion estimation methods for most video sequences, particularly those with violent motion, and the search precision is improved markedly. Although the computational complexity of the proposed method is slightly higher than that of the classical methods, it is still far lower than that of the full search method.
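
    The local search strategy, perturbing one dimension of a candidate motion vector at a time around a predictor and keeping a perturbation only when it lowers the matching cost, can be sketched as follows. The cost is plain SAD here, and the glowworm swarm part and the paper's termination thresholds are not reproduced.

```python
import numpy as np

def block_sad(cur, ref, bx, by, dx, dy, bs=16):
    """SAD between a block of the current frame and a displaced reference block."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf
    return np.abs(cur[by:by + bs, bx:bx + bs].astype(int)
                  - ref[y:y + bs, x:x + bs].astype(int)).sum()

def single_dimension_refine(cur, ref, bx, by, mv0=(0, 0), step=1, bs=16):
    """Refine a predicted motion vector by perturbing one dimension at a
    time, accepting a perturbation only if it lowers the matching cost."""
    best = list(mv0)
    best_cost = block_sad(cur, ref, bx, by, best[0], best[1], bs)
    improved = True
    while improved:
        improved = False
        for dim in (0, 1):                    # x dimension, then y dimension
            for delta in (step, -step):
                cand = best[:]
                cand[dim] += delta
                cost = block_sad(cur, ref, bx, by, cand[0], cand[1], bs)
                if cost < best_cost:
                    best, best_cost, improved = cand, cost, True
    return tuple(best), best_cost
```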

  16. Radiation Tolerant Software Defined Video Processor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — MaXentric is proposing a radiation-tolerant Software Defined Video Processor, codenamed SDVP, for the problem of advanced motion imaging in the space environment...

  17. Wiimote Experiments: Circular Motion

    Science.gov (United States)

    Kouh, Minjoon; Holz, Danielle; Kawam, Alae; Lamont, Mary

    2013-01-01

    The advent of new sensor technologies can provide new ways of exploring fundamental physics. In this paper, we show how a Wiimote, which is a handheld remote controller for the Nintendo Wii video game system with an accelerometer, can be used to study the dynamics of circular motion with a very simple setup such as an old record player or a…

  18. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Full Text Available Distributed video coding (DVC is a video coding paradigm allowing low complexity encoding for emerging applications such as wireless video surveillance. Side information (SI generation is a key function in the DVC decoder, and plays a key-role in determining the performance of the codec. This paper proposes an improved SI generation for DVC, which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ frames, based on initial SI by motion compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealments alone.
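
    The initial side information step mentioned above, motion-compensated temporal interpolation between two key frames, can be sketched as halving the forward motion vectors and averaging the two half-way compensated predictions; the function below is that bare skeleton only, while the paper's refinement, smoothing and mode selection are not reproduced.

```python
import numpy as np

def mcti_side_information(prev_key, next_key, mv_fwd, bs=16):
    """Build initial side information for a Wyner-Ziv frame lying halfway
    between two decoded key frames.  mv_fwd[r, c] holds the (dx, dy) motion
    of block (r, c) from prev_key to next_key; each block is compensated
    half-way along that vector from both key frames and averaged."""
    prev = prev_key.astype(np.float32)
    nxt = next_key.astype(np.float32)
    h, w = prev.shape
    si = np.zeros_like(prev)
    rows, cols = mv_fwd.shape[:2]
    for r in range(rows):
        for c in range(cols):
            y, x = r * bs, c * bs
            dx, dy = mv_fwd[r, c] / 2.0                    # half-way motion
            y0 = int(np.clip(y - dy, 0, h - bs))
            x0 = int(np.clip(x - dx, 0, w - bs))
            y1 = int(np.clip(y + dy, 0, h - bs))
            x1 = int(np.clip(x + dx, 0, w - bs))
            si[y:y + bs, x:x + bs] = 0.5 * (prev[y0:y0 + bs, x0:x0 + bs]
                                            + nxt[y1:y1 + bs, x1:x1 + bs])
    return si.astype(prev_key.dtype)
```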

  19. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
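
    The pulse-matching idea, correlating a rectangular reference pulse of fixed width with the user-activity time series around its local maxima, can be sketched as below. The window length, the zero-padded form of the reference pulse and the peak-picking rule are my placeholders rather than the paper's exact modeling choices.

```python
import numpy as np
from scipy.signal import argrelextrema

def interesting_segments(activity, pulse_width=10, min_corr=0.6):
    """activity: per-second count of user Replay interactions (1-D array).
    Detect local maxima and keep those whose neighbourhood correlates well
    with a rectangular reference pulse of fixed width."""
    win = 3 * pulse_width
    reference = np.zeros(win)
    reference[pulse_width:2 * pulse_width] = 1.0       # pulse of fixed width
    peaks = argrelextrema(activity, np.greater, order=pulse_width)[0]
    hits = []
    for p in peaks:
        start = p - win // 2
        if start < 0 or start + win > activity.size:
            continue
        window = activity[start:start + win]
        if np.std(window) > 0:
            corr = np.corrcoef(window, reference)[0, 1]
            if corr >= min_corr:
                hits.append(int(p))
    return hits
```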

  20. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination across video frames containing text, text present on complex backgrounds, and differing font sizes of the text. Using various image processing algorithms such as morphological operations, blob detection and the histogram of oriented gradients, the character recognition of video subtitles is implemented. Segmentation, feature extraction and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.
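
    After segmentation, each character crop can be described by a histogram of oriented gradients and fed to a classifier. A minimal HOG-plus-linear-SVM sketch using scikit-image and scikit-learn is shown below; the crop size, HOG parameters and choice of classifier are assumptions, since the paper does not fix them here.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(char_images):
    """char_images: list of equally sized grayscale character crops
    (e.g. 32x32) produced by the segmentation stage."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in char_images])

def train_character_classifier(char_images, labels):
    """Fit a linear SVM on HOG descriptors of labelled character crops."""
    clf = LinearSVC()
    clf.fit(hog_features(char_images), labels)
    return clf

def recognize(clf, char_images):
    """Predict the character label of each segmented crop."""
    return clf.predict(hog_features(char_images))
```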

  1. Content-adaptive robust error concealment for packet-lossy H.264 video streaming

    Institute of Scientific and Technical Information of China (English)

    LIAO Ning; YAN Dan; QUAN Zi-yi; MEN Ai-dong

    2006-01-01

    In this paper, we present a spatio-temporal post-processing error concealment (EC) algorithm designed initially for a H.264 video-streaming scheme over packet-lossy networks. It aims at optimizing the subjective quality of the restored video under the constraints of low delay and computational complexity, which are critical to real-time applications and portable devices having limited resources. Specifically, it takes into consideration the physical property of motion field in order to achieve more meaningful perceptual video quality, in addition to the improved objective PSNR. Further, a simple bilinear spatial interpolation approach is combined with the improved boundary-match (B-M) based temporal EC approach according to texture and motion activity analysis. Finally, we propose a low complexity temporal EC method based on motion vector interpolation as a replacement of the B-M based approach in the scheme under low-computation requirement, or as a complement to further improve the scheme's performance in applications having enough computation resources. Extensive experiments demonstrated that the proposal features not only better reconstruction, objectively and subjectively, than JM benchmark, but also robustness to different video sequences.
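
    The spatial part of the concealment, bilinear interpolation of a lost macroblock from the correctly received pixels bordering it, can be sketched as below (assuming the lost block is not on the frame border); the boundary-match temporal part and the texture/motion-activity switching described above are not reproduced.

```python
import numpy as np

def conceal_block_bilinear(frame, x, y, bs=16):
    """Fill a lost bs x bs block at (x, y) by bilinear interpolation between
    the received pixel rows/columns that surround it."""
    top = frame[y - 1, x:x + bs].astype(np.float32)
    bottom = frame[y + bs, x:x + bs].astype(np.float32)
    left = frame[y:y + bs, x - 1].astype(np.float32)
    right = frame[y:y + bs, x + bs].astype(np.float32)
    out = np.zeros((bs, bs), dtype=np.float32)
    for i in range(bs):                    # vertical weight
        wv = (i + 1) / (bs + 1)
        for j in range(bs):                # horizontal weight
            wh = (j + 1) / (bs + 1)
            vert = (1 - wv) * top[j] + wv * bottom[j]
            horz = (1 - wh) * left[i] + wh * right[i]
            out[i, j] = 0.5 * (vert + horz)
    frame[y:y + bs, x:x + bs] = out.astype(frame.dtype)
    return frame
```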

  2. Visual motion imagery neurofeedback based on the hMT+/V5 complex: evidence for a feedback-specific neural circuit involving neocortical and cerebellar regions

    Science.gov (United States)

    Banca, Paula; Sousa, Teresa; Catarina Duarte, Isabel; Castelo-Branco, Miguel

    2015-12-01

    Objective. Current approaches in neurofeedback/brain-computer interface research often focus on identifying, on a subject-by-subject basis, the neural regions that are best suited for self-driven modulation. It is known that the hMT+/V5 complex, an early visual cortical region, is recruited during explicit and implicit motion imagery, in addition to real motion perception. This study tests the feasibility of training healthy volunteers to regulate the level of activation in their hMT+/V5 complex using real-time fMRI neurofeedback and visual motion imagery strategies. Approach. We functionally localized the hMT+/V5 complex to further use as a target region for neurofeedback. An uniform strategy based on motion imagery was used to guide subjects to neuromodulate hMT+/V5. Main results. We found that 15/20 participants achieved successful neurofeedback. This modulation led to the recruitment of a specific network as further assessed by psychophysiological interaction analysis. This specific circuit, including hMT+/V5, putative V6 and medial cerebellum was activated for successful neurofeedback runs. The putamen and anterior insula were recruited for both successful and non-successful runs. Significance. Our findings indicate that hMT+/V5 is a region that can be modulated by focused imagery and that a specific cortico-cerebellar circuit is recruited during visual motion imagery leading to successful neurofeedback. These findings contribute to the debate on the relative potential of extrinsic (sensory) versus intrinsic (default-mode) brain regions in the clinical application of neurofeedback paradigms. This novel circuit might be a good target for future neurofeedback approaches that aim, for example, the training of focused attention in disorders such as ADHD.

  3. Encoder power consumption comparison of Distributed Video Codec and H.264/AVC in low-complexity mode

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Eugeniy; Forchhammer, Søren

    2010-01-01

    Motion estimation (ME) and the CABAC entropy coder consume much power, so we eliminate ME from the codec and use CAVLC instead of CABAC. Some investigations show that low-complexity DVC outperforms other algorithms in terms of encoder-side energy consumption. However, estimations of power consumption for H.264/AVC and DVC...

  4. Video Surveillance using Distance Maps

    NARCIS (Netherlands)

    Schouten, Theo E.; Kuppens, Harco C.; Broek, van den Egon L.; Kehtarnavaz, Nasser; Laplante, Phillip A.

    2006-01-01

    Human vigilance is limited; hence, automatic motion and distance detection is one of the central issues in video surveillance. Hereby, many aspects are of importance, this paper specially addresses: efficiency, achieving real-time performance, accuracy, and robustness against various noise factors.

  5. Video surveillance using distance maps

    NARCIS (Netherlands)

    Schouten, Theo E.; Kuppens, Harco C.; van den Broek, Egon; Kehtarnavaz, Nasser; Laplante, Phillip A,

    2006-01-01

    Human vigilance is limited; hence, automatic motion and distance detection is one of the central issues in video surveillance. Hereby, many aspects are of importance, this paper specially addresses: efficiency, achieving real-time performance, accuracy, and robustness against various noise factors.

  6. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after a careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatial and temporal coherence filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching, by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation of errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
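
    A minimal version of the feature stage described above, FAST-based keypoints with binary descriptors matched by Hamming distance using OpenCV's ORB, is sketched below. The spatio-temporal coherence filter that fuses UAV motion information, and the dynamic key-frame framework, are not reproduced, and the parameter values are placeholders.

```python
import cv2
import numpy as np

def match_frames(img1, img2, max_features=1000):
    """Detect ORB keypoints (FAST corners + binary descriptors) in two
    aerial frames and return Hamming-distance matches sorted by quality."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches

def estimate_homography(kp1, kp2, matches, keep=200):
    """Robustly estimate the frame-to-frame homography from the best matches."""
    src = np.float32([kp1[m.queryIdx].pt for m in matches[:keep]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:keep]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```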

  7. Students' dealing with complex issues in video gaming in school science (Elevers møte med komplekse utfordringer i digitale spill i naturfag)

    Directory of Open Access Journals (Sweden)

    Mette Nordby

    2014-10-01

    Full Text Available In this design-based study we have examined students' encounters with a computer game, Energispillet.no, in school. How do students deal with complex issues related to energy and environment in a digital simulation-based video game? How do the meetings between the gaming arena and the school arena unfold? The study was conducted in a vocational class (electricity), with two groups of 3 and 4 pupils respectively. We have analyzed spoken and written student texts with selected elements from Halliday's systemic functional grammar. In our material, we saw two different encounters between the gaming arena and the school arena: one group predominantly interpreted Energispillet within a gaming frame, and one group drew on working methods associated with both the gaming and the school arena. In the game the students encounter "texts" that do not convey facts or certain knowledge, but instead leave it to the players to make their own considerations. Based on their own values and attitudes, students must jointly make use of knowledge from different disciplines such as natural science, social studies, economics and ethics to make ongoing assessments, argue points of view, and make informed choices during gameplay. One group explored the game in an extreme and one-sided way and reflected on the complex issues in the game only once they had left the game world. The other group did more joint reflection, both during and after gaming.

  8. Fast Global Motion Estimation in Two Sampling Steps

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2011-12-01

    Full Text Available An important technique in image and video processing is global motion estimation (GME). Common GME methods can be classified into direct and indirect categories. Whereas direct global motion estimation techniques boast reasonable precision, they tend to suffer from high complexity. Indirect methods, though presenting lower complexity, mostly exhibit lower accuracy than their direct counterparts. In this paper, the authors introduce a robust algorithm for GME with nearly identical accuracy to, and almost 50 times the speed of, the MPEG-4 verification model (VM). This approach entails two stages: first, the motion vectors of sampled blocks are employed to obtain an initial GME; then the Levenberg-Marquardt algorithm is applied to subsampled pixels to optimize the initial GME values. As will be shown, the proposed solution exhibits remarkable accuracy and speed, with experimental results distinctively bearing them out.

  9. Brains on video games.

    Science.gov (United States)

    Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A

    2011-11-18

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward.

  10. Robust video hashing via multilinear subspace projections.

    Science.gov (United States)

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced-rank parallel factor analysis (PARAFAC), to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret-key-based) pseudo-randomly selected overlapping sub-cubes to protect against intentional guessing and forgery. A detection-theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretical error estimates closely mimic the empirically observed error probabilities for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
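    A highly simplified illustration of a multilinear-projection hash is given below. Instead of the reduced-rank PARAFAC used in the paper, it projects a key-selected sub-cube onto the leading singular vectors of its three mode unfoldings (an HOSVD-style stand-in) and quantizes the projections into bits; the sub-cube size and rank are arbitrary illustrative choices.

```python
# Simplified multilinear video hash: key-driven sub-cube selection, mode-unfolding
# SVD projections (a stand-in for reduced-rank PARAFAC), and coarse quantization.
import numpy as np

def video_hash(video, key, cube=(16, 32, 32), rank=4):
    """video: (frames, height, width) array; key: integer secret key."""
    rng = np.random.default_rng(key)
    f, h, w = video.shape
    t0 = rng.integers(0, max(1, f - cube[0]))
    y0 = rng.integers(0, max(1, h - cube[1]))
    x0 = rng.integers(0, max(1, w - cube[2]))
    sub = video[t0:t0 + cube[0], y0:y0 + cube[1], x0:x0 + cube[2]].astype(float)

    bits = []
    for mode in range(3):
        # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
        unfold = np.moveaxis(sub, mode, 0).reshape(sub.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfold, full_matrices=False)
        proj = u[:, :rank].T @ unfold              # project onto leading subspace
        bits.append((proj.mean(axis=1) > proj.mean()).astype(np.uint8))
    return np.concatenate(bits)                    # short binary fingerprint
```

    Hashes of an original and a candidate video would then be compared by Hamming distance.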

  11. Video stabilization with sub-image phase correlation

    Institute of Scientific and Technical Information of China (English)

    Juanjuan Zhu; Baolong Guo

    2006-01-01

    A fast video stabilization method is presented, which consists of sub-image phase-correlation-based global motion estimation, Kalman-filtering-based motion smoothing, and motion-modification-based compensation. Global motion is determined using phase correlation in four sub-images. The motion vectors are then accumulated and Kalman filtered for smoothing. The ordinal motion compensation is applied to each frame with a modification to prevent error propagation. Experimental results show that this stabilization system can remove unwanted translational jitter from video sequences and follow intentional scans at real-time speed.
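    A compact sketch of the sub-image phase-correlation idea follows; it is illustrative only. It uses OpenCV's phaseCorrelate on four quadrants, takes the median shift as the global translation, and, for brevity, replaces the Kalman filter with a simple exponential smoother.

```python
# Sketch: global translation from phase correlation of four sub-images, smoothed
# with an exponential filter (used here in place of the paper's Kalman filter).
import cv2
import numpy as np

def global_shift(prev_gray, curr_gray):
    h, w = prev_gray.shape
    quads = [(slice(0, h // 2), slice(0, w // 2)),
             (slice(0, h // 2), slice(w // 2, w)),
             (slice(h // 2, h), slice(0, w // 2)),
             (slice(h // 2, h), slice(w // 2, w))]
    shifts = []
    for qy, qx in quads:
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray[qy, qx]),
                                         np.float32(curr_gray[qy, qx]))
        shifts.append((dx, dy))
    return np.median(np.array(shifts), axis=0)       # robust to one bad quadrant

def stabilize(frames, alpha=0.9):
    accum, smoothed = np.zeros(2), np.zeros(2)
    out = [frames[0]]
    for prev, curr in zip(frames, frames[1:]):
        accum = accum + global_shift(prev, curr)     # accumulated camera path
        smoothed = alpha * smoothed + (1 - alpha) * accum
        corr = smoothed - accum                      # compensate the jitter only
        M = np.float32([[1, 0, corr[0]], [0, 1, corr[1]]])
        out.append(cv2.warpAffine(curr, M, (curr.shape[1], curr.shape[0])))
    return out
```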

  12. Fractionalization of the complex-valued Brownian motion of order n using Riemann-Liouville derivative. Applications to mathematical finance and stochastic mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Jumarie, Guy [Department of Mathematics, University of Quebec at Montreal, P.O. Box 8888, Downtown Station, Montreal, QC, H3C 3P8 (Canada)]. E-mail: jumarie.guy@uqam.ca

    2006-06-15

    The (complex-valued) Brownian motion of order n is defined as the limit of a random walk on the complex roots of unity. Real-valued fractional noises are obtained as fractional derivatives of the Gaussian white noise (of order two). Here one combines these two approaches and considers the new class of fractional noises obtained as fractional derivatives of the complex-valued Brownian motion of order n. The key to the approach is the relation between differential and fractional differential provided by the fractional Taylor series of analytic functions, f(z+h) = E_alpha(h^alpha D_z^alpha) f(z), where E_alpha is the Mittag-Leffler function, on the one hand, and the generalized Maruyama notation on the other. Some questions are revisited, such as the definition of fractional Brownian motion as an integral w.r.t. (dt)^alpha, and the exponential growth equation driven by fractional Brownian motion, to which a new solution is proposed. As a first illustrative application, in mathematical finance, one proposes a new approach to the optimal management of a stochastic portfolio of fractional order via the Lagrange variational technique applied to the state-moment dynamical equations. In the second example, one deals with non-random Lagrangian mechanics of fractional order. The last example proposes a new approach to fractional stochastic mechanics, and the solution so obtained raises the question of whether physical systems might have their own internal random times.
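    For reference, the Mittag-Leffler function and the fractional Taylor series invoked above are commonly written as follows (standard forms stated for convenience, not quoted from the paper):

```latex
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad
f(z+h) = \sum_{k=0}^{\infty} \frac{h^{\alpha k}}{\Gamma(\alpha k + 1)}\, D_z^{\alpha k} f(z)
       = E_\alpha\!\bigl(h^{\alpha} D_z^{\alpha}\bigr)\, f(z),
\qquad 0 < \alpha \le 1 .
```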

  13. Investigation on Inter-Limb Coordination and Motion Stability, Intensity and Complexity of Trunk and Limbs during Hands-Knees Crawling in Human Adults.

    Science.gov (United States)

    Ma, Shenglan; Chen, Xiang; Cao, Shuai; Yu, Yi; Zhang, Xu

    2017-03-28

    This study aimed to investigate the inter-limb coordination pattern and the stability, intensity, and complexity of trunk and limb motions in human crawling at different speeds. Thirty healthy human adults performed hands-knees crawling trials on a treadmill at six different speeds (from 1 km/h to 2.5 km/h). A home-made multi-channel acquisition system consisting of five 3-axis accelerometers (ACC) and four force sensors was used for data collection. Ipsilateral phase lag was used to represent the inter-limb coordination pattern during crawling, and the power, harmonic ratio, and sample entropy of the acceleration signals were adopted to depict the motion intensity, stability, and complexity of the trunk and limbs, respectively. Our results revealed some relationships between inter-limb coordination patterns and the stability and complexity of trunk movement. The trot-like crawling pattern was found to be the most stable and regular one at low speed from the viewpoint of trunk movement, and the no-limb-pairing pattern showed the lowest stability and the greatest complexity at high speed. These relationships could be used to explain why subjects tended to avoid the no-limb-pairing pattern when the speed exceeded 2 km/h, no matter which coordination type they used at low speeds. This also provided evidence that the central nervous system (CNS) chooses a stable inter-limb coordination pattern to keep the body safe and avoid tumbling. Although considerable progress has been made in the study of four-limb locomotion, much less is known about the reasons for the variety of inter-limb coordination. Exploring the choice of inter-limb coordination pattern during crawling from the standpoint of the motion stability, intensity, and complexity of the trunk and limbs sheds light on the underlying motor control strategy of the human CNS and has important significance for clinical diagnosis, rehabilitation engineering, and kinematics research.
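    Sample entropy, used above as the complexity measure of the acceleration signals, can be computed along the lines of the sketch below (a straightforward O(N^2) implementation; the embedding dimension m and tolerance r are conventional choices, not values taken from the study).

```python
# Sample entropy SampEn(m, r) of a 1-D signal: -ln(A / B), where B counts template
# matches of length m and A those of length m + 1 (self-matches excluded).
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        num = n - m                       # same number of templates for m and m + 1
        templates = np.array([x[i:i + length] for i in range(num)])
        count = 0
        for i in range(num - 1):
            # Chebyshev distance between template i and all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Example on a simulated (assumed) trunk-acceleration channel.
acc = np.sin(np.linspace(0, 20 * np.pi, 1200)) + 0.1 * np.random.randn(1200)
print(sample_entropy(acc))
```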

  14. Video segmentation based on the presence and/or absence of moving objects

    Science.gov (United States)

    Nitsuwat, Supot; Jin, Jesse S.; Hudson, M. B.

    1999-08-01

    Video clips are the dominant component of multimedia systems. However, video data are voluminous, so an effective and efficient visual data management system is highly desired. Recent technology in digital video processing has moved to 'content-based' storage and retrieval. To detect meaningful areas/regions, using only production- and camera-operation-based detection is not enough; the content of a video also has to be considered. The basic idea of this scheme is that if we can distinguish individual objects in the whole video sequence, we will be able to capture the changes in content throughout the sequence. Among many object features, motion content has been widely used as an important key in video storage and retrieval systems. Therefore, through motion-based representation, this paper investigates an algorithm for sub-shot extraction and key-frame selection. From a given video sequence, we first segment the sequence into shots using production- and camera-operation-based detection techniques. Then, from the beginning of each shot, we calculate optical flow vectors using a complex-wavelet phase-matching-based method on pairs of successive frames. Next, we segment each moving object based on these vectors, using clustering in a competitive agglomeration scheme, and represent the objects in a number of layers. After separating the moving objects from each other for every frame in the shot, we extract sub-shots and select key-frames using information about the presence and absence of moving objects in each layer. Finally, these key-frames and sub-shots are used to represent the whole video in a panoramic mosaic-based representation. Experimental results showing the significance of the proposed method are also provided.
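    The per-shot motion segmentation step can be illustrated, in simplified form, by the sketch below. It substitutes Farnebäck dense optical flow for the complex-wavelet phase matching and k-means for competitive agglomeration, so it should be read as an analogy to the pipeline rather than a reproduction of it.

```python
# Sketch: dense optical flow between two frames, then clustering of (position, flow)
# features into candidate moving-object layers. Farneback flow and k-means are
# stand-ins for the paper's wavelet phase matching and competitive agglomeration.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def motion_layers(prev_gray, curr_gray, n_layers=3, step=8):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    feats = np.stack([xs.ravel() / w, ys.ravel() / h,        # normalized position
                      flow[ys, xs, 0].ravel(),               # horizontal flow
                      flow[ys, xs, 1].ravel()], axis=1)      # vertical flow
    labels = KMeans(n_clusters=n_layers, n_init=10).fit_predict(feats)
    return xs.ravel(), ys.ravel(), labels                    # sparse layer map
```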

  15. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
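    As an illustration of the first of these algorithms (camera-motion detection from the motion-vector field), one common approach is to fit a low-order parametric model to the block motion vectors and read pan/zoom/rotation off the parameters. The sketch below fits such a four-parameter model by least squares; it is a generic illustration, not the authors' specific algorithm.

```python
# Fit a 4-parameter global (camera) motion model to a block motion-vector field:
# destination = [s*x - r*y + tx, r*x + s*y + ty], i.e. zoom s, rotation r, pan (tx, ty).
import numpy as np

def camera_motion_from_mvs(block_centers, motion_vectors):
    x, y = block_centers[:, 0], block_centers[:, 1]
    u, v = motion_vectors[:, 0], motion_vectors[:, 1]
    A = np.zeros((2 * len(x), 4))                 # unknowns: [s, r, tx, ty]
    b = np.zeros(2 * len(x))
    A[0::2] = np.stack([x, -y, np.ones_like(x), np.zeros_like(x)], axis=1)
    b[0::2] = u + x                               # u = s*x - r*y + tx - x
    A[1::2] = np.stack([y, x, np.zeros_like(x), np.ones_like(x)], axis=1)
    b[1::2] = v + y                               # v = r*x + s*y + ty - y
    s, r, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return {"zoom": s, "rotation": r, "pan": (tx, ty)}   # s ~ 1, r ~ 0: static camera
```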

  16. SPATIO-TEMPORAL SEGMENTATION AND REGIONS TRACKING OF HIGH DEFINITION VIDEO SEQUENCES USING A MARKOV RANDOM FIELD MODEL

    OpenAIRE

    Brouard, Olivier; Delannay, Fabrice; Ricordel, Vincent; Barba, Dominique

    2008-01-01

    International audience; In this paper, we propose a Markov random field sequence segmentation and region tracking model, which aims at combining color, texture, and motion features. First, a motion-based segmentation is performed: the global motion of the video sequence is estimated and compensated, and the motion segmentation is achieved from the remaining motion information. Then, we use a Markovian approach to update and track the video objects over time. By video object, we mean typically, a...

  17. Video Coding and Modeling with Applications to ATM Multiplexing

    Science.gov (United States)

    Nguyen, Hien

    A new vector quantization (VQ) coding method based on optimized concentric shell partitioning of the image space is proposed. The advantages of using the concentric shell partition vector quantizer (CSPVQ) are that it is very fast and that the image patterns found in each subspace can be coded more effectively by using a codebook best matched to that particular subspace. For intra-frame coding, the CSPVQ is shown to have the same performance, if not better, as the optimized gain-shape VQ in terms of encoded picture quality, while it clearly surpasses the gain-shape VQ in terms of computational complexity. A variable bit rate (VBR) video coder for moving video is then proposed, in which the idea of CSPVQ is coupled with regular quadtree decomposition to further reduce the bit rate of the encoded picture sequence. The usefulness of a quadtree coding technique comes from the fact that different homogeneous regions occurring within an image can be compactly represented by nodes of a quadtree. This image representation technique is found to be particularly useful in providing a low-bit-rate video encoder without compromising image quality when used in conjunction with the CSPVQ. The characteristics of the VBR coder's output as applied to ATM transmission are investigated. Three video models are used to study the performance of the ATM multiplexer: the autoregressive (AR) model, the autoregressive hidden Markov model (AR-HMM), and the fluid-flow uniform arrival and service (UAS) model. The AR model is allowed to have arbitrary order and is used to model a video source with a constant amount of motion, that is, a stationary video source. The AR-HMM is a more general video model, based on the autoregressive hidden Markov chain formulated by Baum, and is used to describe highly non-stationary sources. Hence, it is expected that the AR-HMM model may also be used to represent a video

  18. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to the TV audience of various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.

  19. Distributed source coding of video with non-stationary side-information

    NARCIS (Netherlands)

    Meyer, P.F.A.; Westerlaken, R.P.; Klein Gunnewiek, R.; Lagendijk, R.L.

    2005-01-01

    In distributed video coding, the complexity of the video encoder is reduced at the cost of a more complex video decoder. Using the principles of Slepian and Wolf, video compression is then carried out using channel coding principles, under the assumption that the video decoder can temporally predict

  20. Rapid Video Copy Detection Method Based on Video Fingerprint

    Institute of Scientific and Technical Information of China (English)

    唐玉元; 欧阳建权

    2011-01-01

    In order to meet the requirements of real-time video copy detection, a rapid video fingerprinting scheme is presented. The video fingerprint is generated by combining an improved ordinal-measure feature with an improved motion feature, both extracted from the DC image sequences. Query video streams can then be identified by matching against candidate videos. Experimental results show that the proposed video fingerprint offers higher discrimination and lower time complexity, enabling rapid copy detection while maintaining accuracy.
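    The ordinal-measure component of such a fingerprint is easy to sketch: each DC frame is divided into a small grid of blocks, and the rank order of the block mean intensities forms the per-frame signature. The grid size and distance below are illustrative assumptions, not the paper's exact parameters.

```python
# Ordinal-measure fingerprint over a sequence of DC images: per frame, the rank
# order of mean intensities on an n x n block grid; sequences are compared by the
# average absolute rank difference (smaller = more likely a copy).
import numpy as np

def ordinal_signature(frame, grid=3):
    h, w = frame.shape
    means = [frame[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))            # rank of each block

def fingerprint(dc_frames, grid=3):
    return np.array([ordinal_signature(f, grid) for f in dc_frames])

def distance(fp_query, fp_candidate):
    n = min(len(fp_query), len(fp_candidate))
    return np.mean(np.abs(fp_query[:n].astype(int) - fp_candidate[:n].astype(int)))
```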

  1. Collaborative Video Search Combining Video Retrieval with Human-Based Visual Inspection

    NARCIS (Netherlands)

    Hudelist, M.A.; Cobârzan, C.; Beecks, C.; van de Werken, Rob; Kletz, S.; Hürst, W.O.; Schoeffmann, K.

    2016-01-01

    We propose a novel video browsing approach that aims at optimally integrating traditional, machine-based retrieval methods with an interface design optimized for human browsing performance. Advanced video retrieval and filtering (e.g., via color and motion signatures, and visual concepts) on a deskt

  2. EI Videos

    CERN Document Server

    Courtney, Michael; Courtney, Amy

    2012-01-01

    The Quantitative Reasoning Center (QRC) at USAFA has the institution's primary responsibility for offering after hours extra instruction (EI) in core technical disciplines (mathematics, chemistry, physics, and engineering mechanics). Demand has been tremendous, totaling over 3600 evening EI sessions in the Fall of 2010. Meeting this demand with only four (now five) full time faculty has been challenging. EI Videos have been produced to help serve cadets in need of well-modeled solutions to homework-type problems. These videos have been warmly received, being viewed over 14,000 times in Fall 2010 and probably contributing to a significant increase in the first attempt success rate on the Algebra Fundamental Skills Exam in Calculus 1. EI Video production is being extended to better support Calculus 2, Calculus 3, and Physics 1.

  3. Video doorphone

    OpenAIRE

    Horyna, Miroslav

    2015-01-01

    This master's thesis deals with the design of a door video phone on the Raspberry Pi platform. It describes the Raspberry Pi platform, the Raspberry Pi Camera module and operating systems for Raspberry Pi, together with the installation and configuration of the software. It further describes the design of the programs created for the door video phone and the design of additional modules.

  4. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively in podcasts that included designed activities, and moreover – although to a lesser degree – that students engaged actively in podcasts that did not include additional activities, suggesting that learning via podcast does not always mean learning by passive listening.

  5. VLSI Neural Networks Help To Compress Video Signals

    Science.gov (United States)

    Fang, Wai-Chi; Sheu, Bing J.

    1996-01-01

    Advanced analog/digital electronic system for compression of video signals incorporates artificial neural networks. Performs motion-estimation and image-data-compression processing. Effectively eliminates temporal and spatial redundancies of sequences of video images; processes video image data, retaining only nonredundant parts to be transmitted, then transmits resulting data stream in form of efficient code. Reduces bandwidth and storage requirements for transmission and recording of video signal.

  6. Video Analysis and Modeling in Physics Education

    Science.gov (United States)

    Brown, Doug

    2008-03-01

    The Tracker video analysis program allows users to overlay simple dynamical models on a video clip. Video modeling offers advantages over both traditional video analysis and animation-only modeling. In traditional video analysis, for example, students measure "g" by tracking a dropped or tossed ball, constructing a position or velocity vs. time graph, and interpreting the graphs to obtain initial conditions and acceleration. In video modeling, by contrast, the students interactively construct theoretical force expressions and define initial conditions for a dynamical particle model that synchs with and draws itself on the video. The behavior of the model is thus compared directly with that of the real-world motion. Tracker uses the Open Source Physics code library, so sophisticated models are possible. I will demonstrate and compare video modeling with video analysis and I will discuss the advantages of video modeling over animation-only modeling. The Tracker video analysis program is available at: http://www.cabrillo.edu/~dbrown/tracker/.

  7. Source-Adaptation-Based Wireless Video Transport: A Cross-Layer Approach

    Directory of Open Access Journals (Sweden)

    Pei Yong

    2006-01-01

    Full Text Available Real-time packet video transmission over wireless networks is expected to experience bursty packet losses that can cause substantial degradation of the transmitted video quality. In wireless networks, channel state information is hard to obtain in a reliable and timely manner due to the rapid change of wireless environments. However, the source motion information is always available and can be obtained easily and accurately from the video sequences. Therefore, in this paper, we propose a novel cross-layer framework that exploits only the motion information inherent in video sequences and efficiently combines a packetization scheme, a cross-layer forward error correction (FEC)-based unequal error protection (UEP) scheme, an intracoding rate selection scheme, and a novel intraframe interleaving scheme. Our objective and subjective results demonstrate that the proposed approach is very effective in dealing with the bursty packet losses occurring on wireless networks without incurring any additional implementation complexity or delay. The simplicity of our proposed system thus has important implications for the implementation of practical real-time video transmission systems.

  8. Few-Example Video Event Retrieval Using Tag Propagation

    NARCIS (Netherlands)

    Mazloom, M.; Li, X.; Snoek, C.G.M.

    2014-01-01

    An emerging topic in multimedia retrieval is to detect a complex event in video using only a handful of video examples. Different from existing work which learns a ranker from positive video examples and hundreds of negative examples, we aim to query web video for events using zero or only a few vis

  9. Directionality based fast fractional pel motion estimation for H.264

    Institute of Scientific and Technical Information of China (English)

    Zhang Wei; Fan Fen; Wang Xiaoyang; Zhu Weile

    2009-01-01

    Motion estimation is an important and computationally intensive task in video coding applications. Since the complexity of the integer-pixel search has been greatly reduced by numerous fast ME algorithms, the computational overhead required by fractional-pixel ME has become relatively significant. To reduce the complexity of the fractional-pixel ME algorithm, a directionality-based fractional-pixel ME algorithm is proposed. The proposed algorithm efficiently explores the neighboring positions that are most likely to be the best match around the minimum and skips the other, unlikely ones. Thus, under appropriate conditions, the proposed algorithm can complete the search by examining only 3 points instead of the 17 search points used by the search algorithm of the reference software. Simulation results show that the proposed algorithm successfully optimizes the fractional-pixel motion search at both half- and quarter-pixel accuracy and improves processing speed with only a small PSNR penalty.

  10. Automatic Video-based Motion Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Understanding task performance and crew behavioral health is crucial to mission success and to the optimal design, development, and operation of next-generation...

  11. Monitoring Motion of Pigs in Thermal Videos

    DEFF Research Database (Denmark)

    Gronskyte, Ruta; Kulahci, Murat; Clemmensen, Line Katrine Harder

    2013-01-01

    and extract features which characterize a pig’s movement (direction and speed). Subsequently, a multiway principal component analysis is used to analyze the movement features and monitor their development over time. Results are presented in the form of quality control charts of the principal components...

  12. Automatic Video-based Motion Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Operations in confined, isolated, and resource-constrained environments can lead to suboptimal human performance. Understanding task performance and crew behavioral...

  13. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2016-01-01

    This chapter focuses on methodological and systemic issues concerning the researcher's positioning in relation to communicating on digital media, particularly with video. The systemic problems comprise a Janus-like duality: the researcher may well want to communicate on digital platforms, but is weighed down internally by responsibility, time pressure, reputation, quality demands, and the ephemerality of digital platforms. The methodological problems include the fact that video analysis draws on many traditions but is underdeveloped in relation to the digital context. The empirical material consists of examples of online communication in the form of “academic video”. The analysis applies narrative, multimodal analysis of video, primarily of two videos on the platform audiovisualthinking.org in which the researcher appears as narrator or “storyteller”. One of the videos was made by the author. The video analysis has been validated through collaboration with one of the founders of audiovisualthinking.org. ...

  14. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    Science.gov (United States)

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-04-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion under two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would otherwise be difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability.

  15. Method through motion

    DEFF Research Database (Denmark)

    Steijn, Arthur

    2016-01-01

    Contemporary scenography often consists of video-projected motion graphics. The field is lacking in academic methods and rigour: descriptions and models relevant for the creation as well as the analysis of existing works. In order to understand the phenomenon of motion graphics in a scenographic … construction as a support for working systematically in a practice-led research project. The design model is being developed through design laboratories and workshops with students and professionals who provide feedback that leads to incremental improvements. Working with this model construction-as-method reveals...

  16. An Improved Recurrent Neural Network for Complex-Valued Systems of Linear Equation and Its Application to Robotic Motion Tracking.

    Science.gov (United States)

    Ding, Lei; Xiao, Lin; Liao, Bolin; Lu, Rongbo; Peng, Hua

    2017-01-01

    To obtain the online solution of complex-valued systems of linear equations in the complex domain with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations is proposed and theoretically proved to converge within finite time. Then, illustrative results show that the new neural network model has higher precision and a higher convergence rate than the gradient neural network (GNN) model and the ZNN model. Finally, the application of the proposed method to controlling a robot via complex-valued systems of linear equations is realized, and the simulation results verify the effectiveness and superiority of the new neural network for complex-valued systems of linear equations.
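    For orientation, the classical gradient neural network (GNN) baseline mentioned above solves A x = b by integrating the dynamics dx/dt = -gamma * A^H (A x - b). The sketch below integrates these dynamics with forward-Euler steps for a small complex-valued system; it illustrates the baseline only, not the finite-time ZNN variant proposed in the paper.

```python
# Gradient neural network (GNN) baseline for a complex-valued linear system A x = b:
# integrate dx/dt = -gamma * A^H (A x - b) with forward-Euler steps.
import numpy as np

def gnn_solve(A, b, gamma=10.0, dt=1e-3, steps=20000):
    x = np.zeros(A.shape[1], dtype=complex)
    AH = A.conj().T
    for _ in range(steps):
        # Residual-driven descent; converges for suitably small gamma * dt.
        x = x - dt * gamma * (AH @ (A @ x - b))
    return x

# Example with an assumed small complex system.
A = np.array([[2 + 1j, 1 - 1j], [0 + 1j, 3 + 0j]])
b = np.array([1 + 0j, 2 - 1j])
print(np.allclose(A @ gnn_solve(A, b), b, atol=1e-6))    # True
```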

  17. Modeling Digital Video Database

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The main purpose of this model is to show how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity of the mapping between the models, the authors propose some suggestions for transforming the use-case diagrams into an object diagram, which is one of the main diagrams for the next development phases.

  18. Circular block matching based video stabilization

    Science.gov (United States)

    Xu, Lidong; Fu, Fangwen; Lin, Xinggang

    2005-07-01

    Video sequences captured by handheld digital cameras need to be stabilized to eliminate the tiresome effects caused by the camera's undesirable shake or jiggle. The key issue of video stabilization is to estimate the global motion parameters between two successive frames. In this paper, a novel circular block matching algorithm is proposed to estimate the global motion parameters. This algorithm can deal not only with translational motion but also with large rotational motion. For an appointed circular block in the current frame, a four-dimensional rotation-invariant feature vector is first extracted and used to judge whether it is an effective block. Then a rotation-invariant-feature-based circular block matching process is performed to find the best matching blocks in the reference frame for the effective blocks. With the matching results of any two effective blocks, a two-dimensional motion model is constructed to produce one group of frame motion parameters. A statistical method is proposed to calculate the estimated global motion parameters from all groups of motion parameters. Finally, using the estimated motion parameters as initial values, an iterative algorithm is introduced to obtain refined global motion parameters. The experimental results show that the proposed algorithm is excellent at stabilizing frames even with sudden, large global translational and rotational motions.

  19. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  20. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  1. Multi-Features Encoding and Selecting Based on Genetic Algorithm for Human Action Recognition from Video

    Directory of Open Access Journals (Sweden)

    Chenglong Yu

    2013-05-01

    Full Text Available In this study, we propose encoding multiple local features for recognizing human actions. The multiple local features are obtained from simple feature descriptions of human actions in video. The simple features are two important kinds of features, optical flow and edges, which represent human perception of the video behavior. As video information descriptors, optical flow and edges can be computed very quickly with low memory consumption, and they represent motion information and shape information, respectively. Furthermore, key local multi-features are extracted and encoded by a genetic algorithm (GA) in order to reduce the computational complexity of the algorithm. The Multi-SVM classifier is then applied to discriminate the human actions.

  2. A Fast Block-Matching Algorithm Using Smooth Motion Vector Field Adaptive Search Technique

    Institute of Scientific and Technical Information of China (English)

    LI Bo(李波); LI Wei(李炜); TU YaMing(涂亚明)

    2003-01-01

    In many video standards based on inter-frame compression, such as H.26x and MPEG, the block-matching algorithm has been widely adopted for motion estimation because of its simplicity and effectiveness. Nevertheless, motion estimation is computationally very complex, so fast algorithms for motion estimation have always been an important and attractive topic in video compression. From the viewpoint of making the motion vector field smoother, this paper proposes a new algorithm, SMVFAST. On the basis of motion correlation, it predicts the starting point from neighboring motion vectors according to their SADs. Adaptive search modes are used in the search process by simply classifying motion activity. After discovering the ubiquitous ratio between the SADs of collocated blocks in consecutive frames, the paper proposes an effective half-stop criterion that can quickly stop the search process with sufficiently good results. Experiments show that SMVFAST obtains almost the same results as the full search at very low computational cost, and outperforms MVFAST and PMVFAST, which are adopted by MPEG-4, in both speed and quality.
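    The early-termination idea behind such half-stop criteria can be sketched as follows: while accumulating the SAD of a candidate block, the partial sum is compared against the best SAD found so far and the candidate is abandoned as soon as it exceeds that bound. The policy below is a generic illustration, not SMVFAST's specific ratio-based criterion.

```python
# Block matching with early-terminated SAD: abandon a candidate as soon as its
# partial SAD exceeds the best SAD found so far.
import numpy as np

def sad_early_stop(block, candidate, best_so_far):
    total = 0
    for row_b, row_c in zip(block.astype(int), candidate.astype(int)):
        total += int(np.abs(row_b - row_c).sum())
        if total >= best_so_far:              # half-stop: no point continuing
            return None
    return total

def best_match(block, ref_frame, center, search=7):
    h, w = ref_frame.shape
    bs = block.shape[0]
    best_cost, best_mv = float('inf'), (0, 0)
    cy, cx = center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= h - bs and 0 <= x <= w - bs:
                cost = sad_early_stop(block, ref_frame[y:y + bs, x:x + bs], best_cost)
                if cost is not None and cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```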

  3. Extracting Text from Video

    Directory of Open Access Journals (Sweden)

    Jayshree Ghorpade

    2011-09-01

    Full Text Available The text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. However, variations of the text due to differences in style, font, size, orientation, and alignment, as well as low image contrast and complex backgrounds, make automatic text extraction an extremely difficult and challenging job. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to design algorithms for each phase of extracting text from a video using Java libraries and classes. First, we frame the input video into a stream of images using the Java Media Framework (JMF), with the input being real-time video or a video from the database. We then apply preprocessing algorithms to convert the images to grayscale and remove disturbances such as lines superimposed over the text, discontinuities, and dots. We continue with algorithms for localization, segmentation, and recognition, for which we use a neural-network pattern-matching technique. The performance of our approach is demonstrated by presenting experimental results for a set of static images.
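    Although the paper implements these stages with the Java Media Framework, the framing and preprocessing steps can equally be sketched with OpenCV; the snippet below is such a sketch (grayscale conversion plus Otsu binarization and a morphological opening standing in for the line/dot removal), not the authors' Java pipeline.

```python
# Sketch of the framing and preprocessing stages for text extraction: decode frames,
# convert to grayscale, binarize, and remove thin lines/dots with a morphological opening.
import cv2

def preprocess_frames(video_path, every_n=10):
    cap = cv2.VideoCapture(video_path)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    cleaned, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, binary = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            cleaned.append(cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel))
        idx += 1
    cap.release()
    return cleaned      # ready for text localization / segmentation / recognition
```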

  4. EXTRACTING TEXT FROM VIDEO

    Directory of Open Access Journals (Sweden)

    Jayshree Ghorpade

    2011-06-01

    Full Text Available The text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. However, variations of the text due to differences in style, font, size, orientation, and alignment, as well as low image contrast and complex backgrounds, make automatic text extraction an extremely difficult and challenging job. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to design algorithms for each phase of extracting text from a video using Java libraries and classes. First, we frame the input video into a stream of images using the Java Media Framework (JMF), with the input being real-time video or a video from the database. We then apply preprocessing algorithms to convert the images to grayscale and remove disturbances such as lines superimposed over the text, discontinuities, and dots. We continue with algorithms for localization, segmentation, and recognition, for which we use a neural-network pattern-matching technique. The performance of our approach is demonstrated by presenting experimental results for a set of static images.

  5. Déjà vu: Motion Prediction in Static Images

    NARCIS (Netherlands)

    Pintea, S.L.; van Gemert, J.C.; Smeulders, A.W.M.

    2014-01-01

    This paper proposes motion prediction in single still images by learning it from a set of videos. The building assumption is that similar motion is characterized by similar appearance. The proposed method learns local motion patterns given a specific appearance and adds the predicted motion in a num

  6. Video Analytics

    DEFF Research Database (Denmark)

    include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...

  7. Collective motion in a fluid complex plasma induced by interaction with a slow projectile under microgravity conditions

    Science.gov (United States)

    Zhukhovitskii, Dmitry; Ivlev, Alexei; Thomas, Hubertus; Fortov, Vladimir; Lipaev, Andrey; Morfill, Gregor; Molotkov, Vladimir; Naumkin, Vadim

    Subsonic motion of a large particle (projectile) moving through the bulk of a dust crystal formed by negatively charged small particles is investigated using the PK-3 Plus laboratory onboard the International Space Station. Tracing the dust-particle trajectories shows that the projectile moves almost freely through the bulk of the plasma crystal, while dust particles move along characteristic alpha-shaped pathways near the large particle. We develop a theory of nonviscous dust-particle motion around a projectile and calculate particle trajectories. The deformation of a cavity around a subsonic projectile in the cloud of small dust particles is investigated with due regard for friction between the dust particles and the atoms of the neutral gas. The pressure of the dust cloud at the surface of the cavity around the projectile can become negative, which entails the emergence of a considerable asymmetry of the cavity, i.e., cavity deformation. The corresponding threshold velocity is calculated and found to decrease with increasing cavity size. The developed theory makes it possible to estimate the static pressure of dust particles in a cloud on the basis of experimental data. Good agreement with experiment validates our approach.

  8. Spatial-Aided Low-Delay Wyner-Ziv Video Coding

    Directory of Open Access Journals (Sweden)

    Bo Wu

    2009-01-01

    Full Text Available In distributed video coding, the side information (SI) quality plays an important role in Wyner-Ziv (WZ) frame coding. Usually, SI is generated at the decoder by motion-compensated interpolation (MCI) from the past and future key frames, under the assumption that the motion trajectory between adjacent frames is translational with constant velocity. However, this assumption is not always true, and thus the coding efficiency of WZ coding is often unsatisfactory for video with high and/or irregular motion. This situation becomes more serious in low-delay applications, since only motion-compensated extrapolation (MCE) can be applied to yield SI. In this paper, a spatial-aided Wyner-Ziv video coding (SA-WZVC) scheme for low-delay applications is proposed. In SA-WZVC, at the encoder, each WZ frame is coded as in the existing common Wyner-Ziv video coding scheme, and meanwhile the auxiliary information is also coded with low-complexity DPCM. At the decoder, for WZ frame decoding, the auxiliary information is decoded first, and SI is then generated with the help of this auxiliary information by spatial-aided motion-compensated extrapolation (SA-MCE). Theoretical analysis proves that when a good tradeoff between auxiliary information coding and WZ frame coding is achieved, SA-WZVC is able to achieve better rate-distortion performance than conventional MCE-based WZVC without auxiliary information. Experimental results also demonstrate that SA-WZVC can efficiently improve the coding performance of WZVC in low-delay applications.

  9. Spatial-Aided Low-Delay Wyner-Ziv Video Coding

    Directory of Open Access Journals (Sweden)

    Ji Xiangyang

    2009-01-01

    Full Text Available In distributed video coding, the side information (SI) quality plays an important role in Wyner-Ziv (WZ) frame coding. Usually, SI is generated at the decoder by motion-compensated interpolation (MCI) from the past and future key frames, under the assumption that the motion trajectory between adjacent frames is translational with constant velocity. However, this assumption is not always true, and thus the coding efficiency of WZ coding is often unsatisfactory for video with high and/or irregular motion. This situation becomes more serious in low-delay applications, since only motion-compensated extrapolation (MCE) can be applied to yield SI. In this paper, a spatial-aided Wyner-Ziv video coding (SA-WZVC) scheme for low-delay applications is proposed. In SA-WZVC, at the encoder, each WZ frame is coded as in the existing common Wyner-Ziv video coding scheme, and meanwhile the auxiliary information is also coded with low-complexity DPCM. At the decoder, for WZ frame decoding, the auxiliary information is decoded first, and SI is then generated with the help of this auxiliary information by spatial-aided motion-compensated extrapolation (SA-MCE). Theoretical analysis proves that when a good tradeoff between auxiliary information coding and WZ frame coding is achieved, SA-WZVC is able to achieve better rate-distortion performance than conventional MCE-based WZVC without auxiliary information. Experimental results also demonstrate that SA-WZVC can efficiently improve the coding performance of WZVC in low-delay applications.

  10. Classifying Motion.

    Science.gov (United States)

    Duzen, Carl; And Others

    1992-01-01

    Presents a series of activities that utilizes a leveling device to classify constant and accelerated motion. Applies this classification system to uniform circular motion and motion produced by gravitational force. (MDH)

  11. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia: the sensation, perception, and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters elaborate on motion perception, analyzing the apparent movement of pixels in image sequences for both monocular and binocular camera setups. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing them by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing a combined energy consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straight-forward approach to
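    The combined energy referred to here is commonly written in the classical variational form below (a generic statement of the data-plus-smoothness formulation, not a formula quoted from the thesis), with flow field (u, v), image I, and regularization weight lambda:

```latex
E(u, v) = \int_{\Omega}
  \underbrace{\bigl(I_x u + I_y v + I_t\bigr)^{2}}_{\text{data term}}
  + \lambda\, \underbrace{\bigl(\lVert \nabla u \rVert^{2} + \lVert \nabla v \rVert^{2}\bigr)}_{\text{smoothness term}}
  \, d\mathbf{x}.
```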

  12. Research on video motion object segmentation for content-based application

    Institute of Scientific and Technical Information of China (English)

    包红强

    2006-01-01

    With the development of the modern information society, more and more multimedia information is available, so the technology of multimedia processing is becoming an important task for scientists in the relevant areas. Among multimedia data, visual information is the most attractive due to its direct, vivid character, but at the same time the huge amount of video data poses many challenges for video storage, processing and transmission.

  13. Distortion Modeling and Error Robust Coding Scheme for H.26L Video

    Institute of Scientific and Technical Information of China (English)

    CHEN Chuan; YU Songyu; CHENG Lianji

    2004-01-01

    Transmission of hybrid-coded video, including motion compensation and spatial prediction, over error-prone channels results in the well-known problem of error propagation because of the drift in reference frames between encoder and decoder. The prediction loop propagates errors and causes substantial degradation in video quality. In H.26L video especially, both intra and inter prediction strategies are used to improve compression efficiency; however, they make error propagation more serious. This work proposes distortion models for H.26L video to optimally estimate the overall distortion of the decoder frame reconstruction due to quantization, error propagation, and error concealment. Based on these statistical distortion models, our error-robust coding scheme integrates only the distinct distortion between intra and inter macroblocks into a rate-distortion based framework to select a suitable coding mode for each macroblock, so the cost in computational complexity is modest. Simulations under typical 3GPP/3GPP2 channel and Internet channel conditions have shown that our proposed scheme achieves much better performance than those currently used in H.26L. The error propagation estimation and its effect under fractional-pixel prediction have also been tested. All the results demonstrate that our proposed scheme achieves a good balance between compression efficiency and error robustness for H.26L video, at the cost of modest additional complexity.

  14. Hierarchical temporal video segmentation and content characterization

    Science.gov (United States)

    Gunsel, Bilge; Fu, Yue; Tekalp, A. Murat

    1997-10-01

    This paper addresses the segmentation of a video sequence into shots, specification of edit effects and subsequent characterization of shots in terms of color and motion content. The proposed scheme uses DC images extracted from MPEG compressed video and performs an unsupervised clustering for the extraction of camera shots. The specification of edit effects, such as fade-in/out and dissolve is based on the analysis of distribution of mean value for the luminance components. This step is followed by the representation of visual content of temporal segments in terms of key frames selected by similarity analysis of mean color histograms. For characterization of the similar temporal segments, motion and color characteristics are classified into different categories using a set of different features derived from motion vectors of triangular meshes and mean histograms of video shots.

  15. A COMPARISION OF VARIOUS EDGE DETECTION TECHNIQUES IN MOTION PICTURE FOR IDENTIFYING A SHARK FISH

    Directory of Open Access Journals (Sweden)

    Shrivakshan Gopal Thiruvangadan

    2013-01-01

    Full Text Available The aim of this study is to identify a shark in motion-picture (video) frames by removing the background of the image, the key step being foreground detection. Many techniques ignore the fact that background images consist of different image objects whose conditions may change over time. In this study, a motion-picture identification procedure is proposed for real-time video frames by comparing the three key classes of motion-detection methods: background removal (subtraction), temporal differencing, and optical flow. A structured hierarchical background procedure is proposed based on segmenting the background image into objects: the background image is divided into several regions by a support vector machine (SVM), and a structured hierarchical model is then built from a region model and a pixel model. In the region model, the image object is extracted from the histograms of specific regions, similar to a Gaussian-mixture model. In the pixel model, histograms of gradients of sampled pixels in each region are used, based on the co-occurrence of object variations. In addition, a silhouette detection procedure is suggested and used. The experimental results are cross-validated against a video database to illustrate the efficiency of the approach, from static to dynamic scenes, by comparing it with several established motion-detection methods, chiefly temporal differencing and optical flow. Based on the results, a motion-detection procedure for real-time video frames can be created that is cost-effective, achieves a good rate of accuracy, is simple and of low complexity, and is well adapted to several...
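    The background-removal branch of such a comparison is readily sketched with OpenCV's Gaussian-mixture background subtractor; the snippet below is a generic foreground-detection illustration (MOG2 plus a morphological clean-up), not the hierarchical SVM-based model proposed in the study.

```python
# Generic background-subtraction foreground detector for a video stream,
# e.g. to isolate a moving fish against a (mostly) static background.
import cv2

def foreground_masks(video_path):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                                    detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                                # raw foreground
        masks.append(cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel))  # remove speckle
    cap.release()
    return masks
```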

  16. Video object tracking in the compressed domain using spatio-temporal Markov random fields.

    Science.gov (United States)

    Khatoonabadi, Sayed Hossein; Bajić, Ivan V

    2013-01-01

    Despite the recent progress in both pixel-domain and compressed-domain video object tracking, the need for a tracking framework with both reasonable accuracy and reasonable complexity still exists. This paper presents a method for tracking moving objects in H.264/AVC-compressed video sequences using a spatio-temporal Markov random field (ST-MRF) model. An ST-MRF model naturally integrates the spatial and temporal aspects of the object's motion. Built upon such a model, the proposed method works in the compressed domain and uses only the motion vectors (MVs) and block coding modes from the compressed bitstream to perform tracking. First, the MVs are preprocessed through intracoded block motion approximation and global motion compensation. At each frame, the decision of whether a particular block belongs to the object being tracked is made with the help of the ST-MRF model, which is updated from frame to frame in order to follow the changes in the object's motion. The proposed method is tested on a number of standard sequences, and the results demonstrate its advantages over some of the recent state-of-the-art methods.

  17. Compact video synopsis via global spatiotemporal optimization.

    Science.gov (United States)

    Nie, Yongwei; Xiao, Chunxia; Sun, Hanqiu; Li, Ping

    2013-10-01

    Video synopsis aims at providing condensed representations of video data sets that can nowadays be easily captured with digital cameras, especially daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects if the video is condensed too much. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain a high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results show that the compact video synopses we produce can be browsed quickly, preserve relative spatiotemporal relationships, and avoid motion collisions.

  18. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Semantic interpretation of video content has therefore been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion; video features can then be extracted from the key frames. However, most such research performs off-line video processing, in which the whole video scope is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration and news broadcasts. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
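    A common baseline for this kind of on-line scene-change detection is a running comparison of frame color histograms, flagging a cut when the distance between consecutive histograms exceeds a threshold. The sketch below illustrates that baseline; it is not the MBone-specific algorithm of the paper, and the threshold is an assumed value.

```python
# On-line shot-change detection baseline: flag a cut when the Bhattacharyya distance
# between consecutive frame color histograms exceeds a threshold.
import cv2

def detect_cuts(video_path, threshold=0.4):
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if prev_hist is not None:
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                cuts.append(idx)          # candidate shot boundary / key frame
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```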

  19. Bit Serial Architecture for Variable Block Size Motion Estimation

    Directory of Open Access Journals (Sweden)

    Krishna Kaveri Devarinti

    2013-06-01

    Full Text Available H.264/AVC is the latest video coding standard, adopting variable block sizes, quarter-pixel accuracy, motion vector prediction and multi-reference frames for motion estimation. These new features result in higher computational requirements than previous coding standards: the computational complexity of motion estimation is about 60% of the H.264/AVC encoder. In this paper a most-significant-bit (MSB) first, arithmetic-based, bit-serial Variable Block Size Motion Estimation (VBSME) hardware architecture is proposed. The main feature of the MSB-first bit-serial architecture is its early termination of the SAD computation compared with conventional bit-serial architectures. With this early termination technique, the number of computations is reduced drastically, so power consumption is also lower than in parallel architectures. An efficient bit-serial processing element is proposed, and a 2D architecture is developed for processing a 4x4 block in parallel. The interconnect structure is designed in such a way that data reusability is achieved between PEs. Two types of adder trees are employed for variable block size SAD calculation with a small number of adders. The proposed architecture can generate up to 41 motion vectors (MVs) for each macroblock. The interconnection complexity between PEs is reduced drastically compared with parallel architectures. The architecture supports processing of SDTV (640x480) at 30 fps at 172.8 MHz for a search range of [+8, -7]. The early termination technique reduces computations by about 14%.
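    The hardware-level bit-serial arithmetic cannot be reproduced meaningfully in software, but the early-termination idea itself can: abandon a candidate block as soon as its partial SAD already exceeds the best match found so far. The sketch below is a hypothetical software analogue and assumes the reference frame has been padded by the search range.

```python
import numpy as np

def sad_early_exit(block, candidate, best_so_far):
    """Sum of absolute differences with row-wise early termination.

    Accumulates the SAD row by row and stops as soon as the partial sum
    already exceeds the best SAD found so far.
    """
    partial = 0
    for row_b, row_c in zip(block, candidate):
        partial += int(np.abs(row_b.astype(np.int32) - row_c.astype(np.int32)).sum())
        if partial >= best_so_far:
            return None                    # cannot beat the current best
    return partial

def full_search(block, ref, search=8):
    """Exhaustive block matching using the early-exit SAD.

    ref is assumed to be the reference frame region padded by `search`
    pixels on each side, i.e. of shape (h + 2*search, w + 2*search).
    """
    h, w = block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[search + dy:search + dy + h, search + dx:search + dx + w]
            s = sad_early_exit(block, cand, best)
            if s is not None and s < best:
                best, best_mv = s, (dx, dy)
    return best_mv, best
```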

  20. Topology Dictionary for 3D Video Understanding

    OpenAIRE

    2012-01-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted pattern...

  1. Timeline editing of objects in video.

    Science.gov (United States)

    Lu, Shao-Ping; Zhang, Song-Hai; Wei, Jin; Hu, Shi-Min; Martin, Ralph R

    2013-07-01

    We present a video editing technique based on changing the timelines of individual objects in video, which leaves them in their original places but puts them at different times. This allows the production of object-level slow motion effects, fast motion effects, or even time reversal. This is more flexible than simply applying such effects to whole frames, as new relationships between objects can be created. As we restrict object interactions to the same spatial locations as in the original video, our approach can produce high-quality results using only coarse matting of video objects. Coarse matting can be done efficiently using automatic video object segmentation, avoiding tedious manual matting. To design the output, the user interactively indicates the desired new life spans of objects, and may also change the overall running time of the video. Our method rearranges the timelines of objects in the video whilst applying appropriate object interaction constraints. We demonstrate that, while this editing technique is somewhat restrictive, it still allows many interesting results.

  2. Trifocal tensor based side information generation for multi-view distributed video code

    Institute of Scientific and Technical Information of China (English)

    Lin Xin; Liu Haitao; Wei Jianming

    2010-01-01

    Distributed video coding (DVC) is a new video coding approach based on the Wyner-Ziv theorem. The novel uplink-friendly DVC, which offers low-complexity, low-power and low-cost video encoding, has aroused more and more research interest. In this paper a new method based on multiple view geometry is presented for spatial side information generation in an uncalibrated video sensor network. The trifocal tensor encapsulates all the geometric relations among three views that are independent of scene structure; it can be computed from image correspondences alone without requiring knowledge of the motion or calibration. Simulation results show that trifocal-tensor-based spatial side information improves the rate-distortion performance over motion-compensation-based interpolation side information by a maximum gap of around 2 dB. Fusion then merges the different side information (temporal and spatial) in order to improve the quality of the final side information. Simulation results show a rate-distortion gain of about 0.4 dB.

  3. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv’s Distribution for Quadratic Frequency Modulation Signals

    Directory of Open Access Journals (Sweden)

    Fulong Jing

    2017-06-01

    Full Text Available For targets with complex motion, such as ships fluctuating with oceanic waves and high maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities, which need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed—referred to as the Two-Dimensional product modified Lv’s distribution (2D-PMLVD)—for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using fast Fourier transform (FFT) and complex multiplication. These measures are analyzed in the paper, including the principle, the cross term, anti-noise performance, and computational complexity. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.

  4. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv’s Distribution for Quadratic Frequency Modulation Signals

    Science.gov (United States)

    Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu

    2017-01-01

    For targets with complex motion, such as ships fluctuating with oceanic waves and high maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities, which need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed—referred to as the Two-Dimensional product modified Lv’s distribution (2D-PMLVD)—for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using fast Fourier transform (FFT) and complex multiplication. These measures are analyzed in the paper, including the principle, the cross term, anti-noise performance, and computational complexity. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified. PMID:28635640
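    To make the quantities concrete: a QFM component has a cubic phase, so its instantaneous frequency is quadratic in time, with the chirp rate (CR) and quadratic chirp rate (QCR) as the first- and second-order terms. The snippet below only builds such a synthetic multicomponent echo with illustrative parameter values; it does not implement the 2D-PMLVD estimator itself.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz), illustrative value
t = np.arange(0, 1.0, 1.0 / fs)
f0, cr, qcr = 50.0, 30.0, 20.0   # centre frequency, chirp rate, quadratic chirp rate

# Instantaneous frequency f(t) = f0 + cr*t + 0.5*qcr*t**2, so the phase is
# its integral: phi(t) = 2*pi*(f0*t + cr*t**2/2 + qcr*t**3/6).
phase = 2 * np.pi * (f0 * t + cr * t**2 / 2 + qcr * t**3 / 6)
component1 = np.exp(1j * phase)

# A multicomponent return is just a sum of such components plus noise.
component2 = 0.7 * np.exp(1j * 2 * np.pi * (80 * t + 10 * t**2 / 2 - 15 * t**3 / 6))
echo = component1 + component2 + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
```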

  5. Complex-scaled equation-of-motion coupled-cluster method with single and double substitutions for autoionizing excited states: theory, implementation, and examples.

    Science.gov (United States)

    Bravaya, Ksenia B; Zuev, Dmitry; Epifanovsky, Evgeny; Krylov, Anna I

    2013-03-28

    Theory and implementation of a complex-scaled variant of the equation-of-motion coupled-cluster method for excitation energies with single and double substitutions (EOM-EE-CCSD) are presented. The complex-scaling formalism extends the EOM-EE-CCSD model to resonance states, i.e., excited states that are metastable with respect to electron ejection. The method is applied to Feshbach resonances in atomic systems (He, H(-), and Be). The dependence of the results on the one-electron basis set is quantified and analyzed. Energy decomposition and wave function analysis reveal that the origin of the dependence is in electron correlation, which is essential for the lifetime of Feshbach resonances. It is found that the one-electron basis should be sufficiently flexible to describe radial and angular electron correlation in a balanced fashion and at different values of the scaling parameter, θ. Standard basis sets that are optimized for non-complex-scaled calculations (θ = 0) are not sufficiently flexible to describe the θ-dependence of the wave functions even when heavily augmented by additional sets.

  6. Acoustic Neuroma Educational Video

    Medline Plus

  7. Marketing through Video Presentations.

    Science.gov (United States)

    Newhart, Donna

    1989-01-01

    Discusses the advantages of using video presentations as marketing tools. Includes information about video news releases, public service announcements, and sales/marketing presentations. Describes the three stages in creating a marketing video: preproduction planning; production; and postproduction. (JOW)

  8. Harnessing Sound, Light, Color and Image with Dynamic Imagery: A Brief Discussion of Motion Graphic Design

    Institute of Scientific and Technical Information of China (English)

    陈凯晴

    2011-01-01

    Dynamic image design is a comprehensive art within digital design that applies different design categories and elements, combining live-action footage, visual effects, 2D/3D animation, graphic design, typography, interactive design and other fields. Dynamic image design is usually presented through electronic media technology; the difference from static graphic design is that the presentation changes over time. In an era of rapidly developing computer technology, dynamic image design is leading the trend of new media design.

  9. Multimodal Semantic Analysis and Annotation for Basketball Video

    Science.gov (United States)

    Liu, Song; Xu, Min; Yi, Haoran; Chia, Liang-Tien; Rajan, Deepu

    2006-12-01

    This paper presents a new multiple-modality method for extracting semantic information from basketball video. The visual, motion, and audio information is extracted from the video to first generate low-level video segmentation and classification. Domain knowledge is further exploited for detecting interesting events in the basketball video. For video, both visual and motion prediction information are utilized in the shot and scene boundary detection algorithm, followed by scene classification. For audio, keysounds (sets of specific audio sounds related to semantic events) are identified using a classification method based on hidden Markov models (HMMs). Subsequently, by analyzing the multimodal information, the positions of potential semantic events, such as "foul" and "shot at the basket," are located with additional domain knowledge. Finally, a video annotation is generated according to MPEG-7 multimedia description schemes (MDSs). Experimental results demonstrate the effectiveness of the proposed method.

  10. A spatiotemporal decomposition strategy for personal home video management

    Science.gov (United States)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low cost and high performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we developed a content-based image retrieval system and a benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us a better representation of video content at the semantic object and concept levels than image-only based representation. In this paper we propose a bottom-up framework that combines interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.

  11. Key frames extraction in athletic video

    Science.gov (United States)

    Caccia, Giuseppe; Lancini, Rosa; Russo, Stefano

    2003-06-01

    In this paper, we present an effective framework for feature extraction from an athletic sport sequence. We analyze both forward and backward motion vectors from MPEG-2 video sequences for camera movement detection. Features like the beginning and the end of the race and the type of competition are strictly connected to the camera motion. Our algorithm is able to extract the frame number of the investigated feature with very high accuracy.
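    The record does not describe the exact decision rule, so the sketch below only illustrates the general principle of turning per-frame motion-vector fields into a camera-motion profile whose threshold crossings mark candidate events; the threshold and the use of the median are illustrative choices of ours.

```python
import numpy as np

def camera_motion_profile(mv_fields):
    """Per-frame global (camera) motion from block motion-vector fields.

    mv_fields : iterable of (H, W, 2) arrays of motion vectors, one per frame.
    Returns an array of per-frame median MV magnitudes, a crude indicator of
    pans/tilts that can be thresholded to locate events such as the start of a race.
    """
    profile = []
    for mv in mv_fields:
        # Median is more robust than the mean to independently moving athletes.
        global_mv = np.median(mv.reshape(-1, 2), axis=0)
        profile.append(np.hypot(*global_mv))
    return np.asarray(profile)

def find_events(profile, thresh=2.0):
    """Frame indices where camera motion crosses the threshold (event candidates)."""
    active = profile > thresh
    return np.flatnonzero(np.diff(active.astype(int)) != 0) + 1
```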

  12. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    Directory of Open Access Journals (Sweden)

    Valeriya Gritsenko

    Full Text Available To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Descriptive study of motion measured via 2 methods, conducted in an academic cancer center oncology clinic. 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by the Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Outcome measures were the correlation of motion capture with goniometry and the detection of motion limitation. Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
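    The record names two angle computations (body angle and projection angle) without detailing them; a generic way to turn three skeleton joints into a shoulder angle is simply the angle between the upper-arm and trunk vectors, as in the hypothetical sketch below. The coordinates in the example are made up.

```python
import numpy as np

def shoulder_abduction_angle(shoulder, elbow, hip):
    """Angle (degrees) between the upper arm and the trunk direction.

    shoulder, elbow, hip : 3D joint positions (x, y, z) from a skeleton stream,
    e.g. Kinect joints in metres.
    """
    arm = np.asarray(elbow) - np.asarray(shoulder)     # upper-arm vector
    trunk = np.asarray(hip) - np.asarray(shoulder)     # trunk (downward) vector
    cosang = np.dot(arm, trunk) / (np.linalg.norm(arm) * np.linalg.norm(trunk))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: arm raised roughly 90 degrees from the side of the body.
print(shoulder_abduction_angle((0, 1.4, 2.0), (0.3, 1.4, 2.0), (0, 1.0, 2.0)))
```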

  13. Disentangling regular and chaotic motion in the standard map using complex network analysis of recurrences in phase space.

    Science.gov (United States)

    Zou, Yong; Donner, Reik V; Thiel, Marco; Kurths, Jürgen

    2016-02-01

    Recurrence in the phase space of complex systems is a well-studied phenomenon, which has provided deep insights into the nonlinear dynamics of such systems. For dissipative systems, characteristics based on recurrence plots have recently attracted much interest for discriminating qualitatively different types of dynamics in terms of measures of complexity, dynamical invariants, or even structural characteristics of the underlying attractor's geometry in phase space. Here, we demonstrate that the latter approach also provides a corresponding distinction between different co-existing dynamical regimes of the standard map, a paradigmatic example of a low-dimensional conservative system. Specifically, we show that the recently developed approach of recurrence network analysis provides potentially useful geometric characteristics distinguishing between regular and chaotic orbits. We find that chaotic orbits in an intermittent laminar phase (commonly referred to as sticky orbits) have a distinct geometric structure possibly differing in a subtle way from those of regular orbits, which is highlighted by different recurrence network properties obtained from relatively short time series. Thus, this approach can help discriminating regular orbits from laminar phases of chaotic ones, which presents a persistent challenge to many existing chaos detection techniques.

  14. Disentangling regular and chaotic motion in the standard map using complex network analysis of recurrences in phase space

    Science.gov (United States)

    Zou, Yong; Donner, Reik V.; Thiel, Marco; Kurths, Jürgen

    2016-02-01

    Recurrence in the phase space of complex systems is a well-studied phenomenon, which has provided deep insights into the nonlinear dynamics of such systems. For dissipative systems, characteristics based on recurrence plots have recently attracted much interest for discriminating qualitatively different types of dynamics in terms of measures of complexity, dynamical invariants, or even structural characteristics of the underlying attractor's geometry in phase space. Here, we demonstrate that the latter approach also provides a corresponding distinction between different co-existing dynamical regimes of the standard map, a paradigmatic example of a low-dimensional conservative system. Specifically, we show that the recently developed approach of recurrence network analysis provides potentially useful geometric characteristics distinguishing between regular and chaotic orbits. We find that chaotic orbits in an intermittent laminar phase (commonly referred to as sticky orbits) have a distinct geometric structure possibly differing in a subtle way from those of regular orbits, which is highlighted by different recurrence network properties obtained from relatively short time series. Thus, this approach can help discriminating regular orbits from laminar phases of chaotic ones, which presents a persistent challenge to many existing chaos detection techniques.

  15. Improved hand tracking algorithm in video sequences for intelligent rehabilitation

    Institute of Scientific and Technical Information of China (English)

    LI Ling; LUO Yuan; ZHANG Yi; ZHANG Bai-sheng

    2009-01-01

    Intelligent rehabilitation systems are an active research topic, motivated by the increasing number of patients with limb disabilities. Human motion tracking is the key technology of an intelligent rehabilitation system, because the movement of patients with limb disabilities needs to be localized and learned so that any undesired motion behavior can be corrected to reach an expectation. This paper introduces a real-time tracking system for human hand motion, specifically intended for home rehabilitation. A vision sensor (camera) is employed in this system to track the hand movement, and an improved Camshift algorithm and a Kalman filter are used to implement dynamic hand tracking in the video. The CAMSHIFT algorithm is able to track any target color by building a histogram distribution of the H channel in HSV color space from the region of interest selected by the user at the initial stage. The Kalman filter is able to predict the hand location in one image frame based on its location detected in the previous frame. The experimental results show that this system can track 2D hand motion with acceptable accuracy by using the two algorithms properly. The new algorithm proposed in this paper can not only deal with skin color interference problems, but also tracks well against complex backgrounds.
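    The combination described above maps fairly directly onto standard OpenCV building blocks. The sketch below is a minimal version of that pipeline, not the authors' implementation; the video file name, the initial hand rectangle and the noise covariances are placeholder assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("hand.avi")            # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60                 # user-selected hand region (assumed)
track_window = (x, y, w, h)

# Hue histogram of the selected skin region (CAMSHIFT appearance model).
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                 # predicted hand position for this frame
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    ret, track_window = cv2.CamShift(backproj, track_window, crit)
    cx = track_window[0] + track_window[2] / 2.0
    cy = track_window[1] + track_window[3] / 2.0
    kf.correct(np.array([[cx], [cy]], np.float32))   # fuse the CAMSHIFT measurement
```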

  16. Least-Square Prediction for Backward Adaptive Video Coding

    OpenAIRE

    2006-01-01

    Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in im...

  17. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  18. Smoothing Motion Estimates for Radar Motion Compensation.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Simple motion models for complex motion environments are often not adequate for keeping radar data coherent. Even perfect motion samples applied to imperfect models may lead to interim calculations exhibiting errors that lead to degraded processing results. Herein we discuss a specific issue involving calculating motion for groups of pulses, with measurements only available at pulse-group boundaries.

  19. Video Defogging Based on Adaptive Tolerance

    Directory of Open Access Journals (Sweden)

    Yan Xiaoyuan

    2012-11-01

    Full Text Available The dark channel prior is a statistic of haze-free outdoor images that is widely used in image defogging. But when an image contains a large bright region, such as sky or a white object, the prior causes color distortion in these bright regions because of the underestimated transmission. To solve this problem, a video defogging technique based on adaptive tolerance is presented in this paper and applied to video defogging in combination with the guided filter. First, the transmission of each video frame is estimated according to the dark channel prior, and then it is quickly refined by the guided filter for restoration. If a large bright region exists in a video frame, the transmission of that region is corrected according to the adaptive tolerance, which avoids color distortion in the video defogging. For videos of dynamic scenes caused by camera motion, each frame is defogged as a single image. But for videos of static scenes whose background is almost invariant, the transmission of the background is estimated and used for the defogging of all frames instead of estimating the transmission of each frame. In this way, the speed of video defogging is greatly improved. Experimental results show that the algorithm has strong applicability, and the proposed method can be further used for many applications, such as outdoor surveillance, remote sensing and intelligent vehicles.
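    A bare-bones version of the dark-channel transmission estimate is easy to sketch; the lower bound `t_floor` below stands in, very crudely, for the paper's adaptive tolerance on bright regions, and the guided-filter refinement and airlight estimation are not shown.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB channels and over a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, airlight, patch=15, omega=0.95, t_floor=0.4):
    """Transmission map from the dark channel prior.

    img      : float RGB frame in [0, 1].
    airlight : estimated atmospheric light, shape (3,).
    t_floor  : lower bound standing in for the adaptive tolerance, so that
               bright regions (sky, white objects) are not over-corrected.
    """
    t = 1.0 - omega * dark_channel(img / airlight, patch)
    return np.clip(t, t_floor, 1.0)

def recover(img, t, airlight):
    """Invert the haze imaging model I = J*t + A*(1 - t)."""
    return (img - airlight) / t[..., None] + airlight
```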

  20. The Video Genome

    CERN Document Server

    Bronstein, Alexander M; Kimmel, Ron

    2010-01-01

    Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms makes it possible to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.

  1. Scalable Video Transcaling for the Wireless Internet

    Directory of Open Access Journals (Sweden)

    van der Schaar Mihaela

    2004-01-01

    Full Text Available The rapid and unprecedented increase in the heterogeneity of multimedia networks and devices emphasizes the need for scalable and adaptive video solutions both for coding and transmission purposes. However, in general, there is an inherent trade-off between the level of scalability and the quality of scalable video streams. In other words, the higher the bandwidth variation, the lower the overall video quality of the scalable stream that is needed to support the desired bandwidth range. In this paper, we introduce the notion of wireless video transcaling (TS), which is a generalization of (nonscalable) transcoding. With TS, a scalable video stream, that covers a given bandwidth range, is mapped into one or more scalable video streams covering different bandwidth ranges. Our proposed TS framework exploits the fact that the level of heterogeneity changes at different points of the video distribution tree over wireless and mobile Internet networks. This provides the opportunity to improve the video quality by performing the appropriate TS process. We argue that an Internet/wireless network gateway represents a good candidate for performing TS. Moreover, we describe hierarchical TS (HTS), which provides a “Transcaler” with the option of choosing among different levels of TS processes with different complexities. We illustrate the benefits of TS by considering the recently developed MPEG-4 fine granularity scalability (FGS) video coding. Extensive simulation results of video TS over bit rate ranges supported by emerging wireless LANs are presented.

  2. Segmentation-based video coding

    Energy Technology Data Exchange (ETDEWEB)

    Lades, M. [Lawrence Livermore National Lab., CA (United States); Wong, Yiu-fai; Li, Qi [Texas Univ., San Antonio, TX (United States). Div. of Engineering

    1995-10-01

    Low bit rate video coding is gaining attention through a current wave of consumer oriented multimedia applications which aim, e.g., at video conferencing over telephone lines or at wireless communication. In this work we describe a new segmentation-based approach to video coding which belongs to a class of paradigms that appears very promising among the various proposed methods. Our method uses a nonlinear measure of local variance to identify the smooth areas in an image in a more indicative and robust fashion: First, the local minima in the variance image are identified. These minima then serve as seeds for the segmentation of the image with a watershed algorithm. Regions and their contours are extracted. Motion compensation is used to predict the change of regions between previous frames and the current frame. The error signal is then quantized. To reduce the number of regions and contours, we use the motion information to assist the segmentation process and to merge regions, resulting in a further reduction in bit rate. Our scheme has been tested and good results have been obtained.
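    The seeding strategy described above (flood a variance image from its local minima) can be approximated in a few lines with standard image-processing libraries. The sketch below uses a plain local variance rather than the authors' nonlinear measure, assumes a recent scikit-image, and the filter sizes are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_frame(gray, var_size=5, min_distance=5):
    """Segment a frame by flooding a local-variance image from its minima.

    gray : 2D float array (one video frame, luminance).
    """
    # Local variance as the "height" map: smooth areas have low variance.
    mean = ndi.uniform_filter(gray, var_size)
    var = ndi.uniform_filter(gray * gray, var_size) - mean * mean

    # Seeds: local minima of the variance image.
    minima = (var == ndi.minimum_filter(var, size=min_distance))
    markers, _ = ndi.label(minima)

    # Watershed flooding grows each seed until regions meet at contours.
    return watershed(var, markers)
```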

  3. Low-Complexity Video Coding for Wireless Video Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    卓力; 刘博仑; 沈兰荪

    2009-01-01

    To meet the specific requirements of video coding for wireless video sensor networks (WVSN), a low-complexity video coding algorithm based on motion detection is proposed. The algorithm encodes only the moving objects in the current frame and outputs the bitstream in an object-oriented structure. Experimental results show that, compared with H.264 all-I-frame coding, the proposed algorithm increases the encoding speed by about 3 times and improves the coding performance by about 2 dB. Compared with the H.264 baseline profile, although the coding performance drops slightly, the encoding speed is increased by about 8 times on average. The proposed algorithm therefore achieves a good trade-off between coding efficiency and encoding speed and can, to a certain extent, meet the requirements of WVSN.

  4. Resolution enhancement of color video sequences.

    Science.gov (United States)

    Shah, N R; Zakhor, A

    1999-01-01

    We propose a new multiframe algorithm to enhance the spatial resolution of frames in video sequences. Our technique specifically accounts for the possibility that motion estimation will be inaccurate and compensates for these inaccuracies. Experiments show that our multiframe enhancement algorithm yields perceptibly sharper enhanced images with significant signal-to-noise ratio (SNR) improvement over bilinear and cubic B-spline interpolation.

  5. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame...

  6. Tech Tips: Using Video Management/ Analysis Technology in Qualitative Research

    Directory of Open Access Journals (Sweden)

    J.A. Spiers

    2004-03-01

    Full Text Available This article presents tips on how to use video in qualitative research. The author states that, though there are many complex and powerful computer programs for working with video, the work done in qualitative research does not require them. For this work, simple editing software is sufficient. Also presented is an easy and efficient method of transcribing video clips.

  7. A Database Design and Development Case: Home Theater Video

    Science.gov (United States)

    Ballenger, Robert; Pratt, Renee

    2012-01-01

    This case consists of a business scenario of a small video rental store, Home Theater Video, which provides background information, a description of the functional business requirements, and sample data. The case provides sufficient information to design and develop a moderately complex database to assist Home Theater Video in solving their…

  8. Evaporation in motion

    CERN Document Server

    Machrafi, Hatim; Colinet, Pierre; Dauby, Pierre

    2012-01-01

    This work presents fluid dynamics videos obtained via numerical (CFD) calculations using the ComSol (finite element method) software, showing the evaporation of HFE7100 (a 3M refrigerant) into a nitrogen gas flow along the liquid interface. The overall temperature evolution and liquid motion, which is caused by surface-tension (Marangoni) and buoyancy (Rayleigh) instability mechanisms, are shown as well. Flow behavior in the liquid caused by the aforementioned instability mechanisms can be clearly seen. Finally, these observations are made for three liquid thicknesses in order to appreciate the qualitative influence of confinement.

  9. CONTENT BASED VIDEO RETRIEVAL BASED ON HDWT AND SPARSE REPRESENTATION

    Directory of Open Access Journals (Sweden)

    Sajad Mohamadzadeh

    2016-04-01

    Full Text Available Video retrieval has recently attracted a lot of research attention due to the exponential growth of video datasets and the internet. Content based video retrieval (CBVR) systems are very useful for a wide range of applications with several types of data such as visual, audio and metadata. In this paper, we are only using the visual information from the video. Shot boundary detection, key frame extraction, and video retrieval are three important parts of CBVR systems. In this paper, we have modified and proposed new methods for these three important parts of our CBVR system. Meanwhile, the local and global color, texture, and motion features of the video are extracted as features of the key frames. To evaluate the applicability of the proposed technique against various methods, the P(1) metric and the CC_WEB_VIDEO dataset are used. The experimental results show that the proposed method provides better performance and less processing time compared to the other methods.

  10. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer;

    2016-01-01

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  11. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  12. Motion control systems

    CERN Document Server

    Sabanovic, Asif

    2011-01-01

    "Presents a unified approach to the fundamental issues in motion control, starting from the basics and moving through single degree of freedom and multi-degree of freedom systems In Motion Control Systems, Šabanovic and Ohnishi present a unified approach to very diverse issues covered in motion control systems, offering know-how accumulated through work on very diverse problems into a comprehensive, integrated approach suitable for application in high demanding high-tech products. It covers material from single degree of freedom systems to complex multi-body non-redundant and redundant systems. The discussion of the main subject is based on original research results and will give treatment of the issues in motion control in the framework of the acceleration control method with disturbance rejection technique. This allows consistent unification of different issues in motion control ranging from simple trajectory tracking to topics related to haptics and bilateral control without and with delay in the measure...

  13. VISUAL ATTENTION BASED KEYFRAMES EXTRACTION AND VIDEO SUMMARIZATION

    Directory of Open Access Journals (Sweden)

    P.Geetha

    2012-05-01

    Full Text Available Recent developments in digital video and the drastic increase in internet use have increased the number of people searching for and watching videos online. To make searching for videos easy, a summary may be provided along with each video. The summary should be effective enough that the user can grasp the content of the video without having to watch it fully. The summary should consist of key frames that effectively express the content and context of the video. This work suggests a method to extract key frames that express most of the information in the video. This is achieved by quantifying the visual attention each frame commands. The visual attention of each frame is quantified using a descriptor called the attention quantifier. This quantification of visual attention is based on the human attention mechanism, which indicates that color conspicuousness and motion attract more attention. Based on the color conspicuousness and the motion involved, each frame is given an attention parameter. Based on the attention quantifier values, the key frames are extracted and summarized adaptively. This framework thus produces a meaningful video summary.
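    As a toy illustration of the scoring idea (not the paper's attention quantifier), the per-frame score below combines a crude color-conspicuousness term with a frame-difference motion term and keeps the highest-scoring frames; the weights and feature definitions are assumptions of ours.

```python
import numpy as np

def attention_score(frame, prev_frame, w_color=0.5, w_motion=0.5):
    """Crude per-frame attention: color conspicuousness plus motion energy."""
    # Color conspicuousness: mean distance of each pixel from the frame's mean color.
    mean_color = frame.reshape(-1, 3).mean(axis=0)
    color_term = np.linalg.norm(frame - mean_color, axis=2).mean()
    # Motion term: mean absolute luminance change between consecutive frames.
    motion_term = np.abs(frame.mean(axis=2) - prev_frame.mean(axis=2)).mean()
    return w_color * color_term + w_motion * motion_term

def extract_keyframes(frames, n_keyframes=5):
    """Pick the frames with the highest attention scores as the summary."""
    scores = [attention_score(frames[i], frames[i - 1]) for i in range(1, len(frames))]
    order = np.argsort(scores)[::-1][:n_keyframes]
    return sorted(int(i) + 1 for i in order)
```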

  14. A Maximum a Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis.

    Science.gov (United States)

    Li, Yuelong; Lee, Chul; Monga, Vishal

    2017-03-01

    High dynamic range (HDR) image synthesis from multiple low dynamic range exposures continues to be actively researched. The extension to HDR video synthesis is a topic of significant current interest due to potential cost benefits. For HDR video, a stiff practical challenge presents itself in the form of accurate correspondence estimation of objects between video frames. In particular, loss of data resulting from poor exposures and varying intensity makes conventional optical flow methods highly inaccurate. We avoid exact correspondence estimation by proposing a statistical approach via maximum a posteriori estimation, and under appropriate statistical assumptions and choice of priors and models, we reduce it to an optimization problem of solving for the foreground and background of the target frame. We obtain the background through rank minimization and estimate the foreground via a novel multiscale adaptive kernel regression technique, which implicitly captures local structure and temporal motion by solving an unconstrained optimization problem. Extensive experimental results on both real and synthetic data sets demonstrate that our algorithm is more capable of delivering high-quality HDR videos than current state-of-the-art methods, under both subjective and objective assessments. Furthermore, a thorough complexity analysis reveals that our algorithm achieves better complexity-performance tradeoff than conventional methods.

  15. Shipboard Video Observations of Whitecaps

    Science.gov (United States)

    Schwendeman, M.; Thomson, J. M.

    2014-12-01

    Video observations of breaking ocean surface waves in deep water (i.e., whitecaps) are useful for determining which waves are breaking and inferring how much energy these breakers are dissipating. We present shipboard video of breaking waves from a research cruise in the North Pacific. As with airborne systems, motion compensation is essential in geo-rectifying the image. A stabilization method based on the location of the horizon in the image is shown to be effective in correcting pitch and roll motions to within one degree, without an IMU (inertial motion unit). After rectification, whitecaps are identified and measured based on the translation of surface foam patches, which appear as groups of bright pixels. Two standard breaking metrics, whitecap coverage and Phillips' Λ(c) distribution, are calculated for the full dataset and compared with other recent observations. In addition, the growth rate of the whitecap foam patches is used to examine a new dissipation function, independent from Λ(c), which was developed in laboratory experiments. Finally, we present preliminary whitecap results from a stereo system. Stereo imaging has the potential to provide much more information about the geometry and kinematics of breaking surface waves in the field, but significant technical challenges remain.
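    Once frames have been stabilized and rectified, whitecap coverage is essentially the bright-pixel fraction of each image. The sketch below shows only that last, trivial step with a fixed threshold; real pipelines, including the one described above, rely on careful rectification and typically adaptive thresholds.

```python
import numpy as np

def whitecap_coverage(gray_frame, brightness_thresh=0.8):
    """Fraction of sea-surface pixels classified as foam.

    gray_frame : 2D float array in [0, 1], already horizon-stabilized
                 and geo-rectified.
    """
    foam = gray_frame > brightness_thresh
    return foam.mean()

def coverage_series(frames, brightness_thresh=0.8):
    """Whitecap coverage for each frame of a video sequence."""
    return np.array([whitecap_coverage(f, brightness_thresh) for f in frames])
```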

  16. How Do Consumers Evaluate Explainer Videos? An Empirical Study on the Effectiveness and Efficiency of Different Explainer Video Formats

    Science.gov (United States)

    Krämer, Andreas; Böhrs, Sandra

    2017-01-01

    There is a significant rise in the use of videos. More and more people use videos not only as a source of information but also as learning tool. This article explores the future potential of explainer videos, a format that conveys complex facts to a target group within a very short time. The findings are based on an empirical study representative…

  17. Motion Sickness

    Science.gov (United States)

    ... activities, such as playing video games or watching spinning objects. Symptoms can strike without warning and worsen ...

  18. Scanning probe microscopy at video-rate

    Directory of Open Access Journals (Sweden)

    Georg Schitter

    2008-01-01

    Full Text Available Recent results have demonstrated the feasibility of video-rate scanning tunneling microscopy and video-rate atomic force microscopy. The further development of this technology will enable the direct observation of many dynamic processes that are impossible to observe today with conventional Scanning Probe Microscopes (SPMs). Examples are atom and molecule diffusion processes, the motion of molecular motors, real-time film growth, and chemical or catalytic reactions. Video-rate scanning probe technology might also lead to the extended application of SPMs in industry, e.g. for process control. In this paper we discuss the critical aspects that have to be taken into account for improving the imaging speed of SPMs. We point out the required instrumentation efforts, give an overview of the state of the art in high-speed scanning technology and discuss the required future developments for imaging at video-rates.

  19. Motion analysis systems as optimization training tools in combat sports and martial arts

    Directory of Open Access Journals (Sweden)

    Ewa Polak

    2016-01-01

    Full Text Available Introduction: Over the past years, a few review papers about the possibilities of using motion analysis systems in sport have been published, but there are no articles that discuss this problem in the field of combat sports and martial arts. Aim: This study presents the diversity of contemporary motion analysis systems, both those that are used in scientific research and those that can be applied in the daily work of coaches and athletes in combat sports and martial arts. An additional aim is to indicate example applications in scientific research and the range of applications in optimizing the training process. It presents a brief description of each type of system that is currently used in sport, specific examples of systems, and the main advantages and disadvantages of using them. The presentation and discussion take place in the following sections: motion analysis utility for combat sports and martial arts, systems using digital video, and systems using markers, sensors or transmitters. Conclusions: Not all types of motion analysis systems used in sport are suitable for combat sports and martial arts. Scientific studies conducted so far have shown the usefulness of video-based, optical and electromechanical systems. The use of results obtained with complex motion analysis systems, or with simple systems offering local application and immediate visualization, is important for the preparation of training and its optimization. It may lead to technical and tactical improvement in athletes as well as the prevention of injuries in combat sports and martial arts.

  20. Complex motions of grains in dusty plasma with nonuniform magnetic field

    Institute of Scientific and Technical Information of China (English)

    宫卫华; 张永亮; 冯帆; 刘富成; 贺亚峰

    2015-01-01

    We have studied various complex motions of irregular dust grains immersed in a non-uniformly magnetized plasma. The cylindrical magnet that we used for the experiments significantly alters the radial distribution of the sheath potential which confines the negatively charged grains. Grains are horizontally illuminated by a 50 mW, 532 nm laser sheet and imaged by a CCD camera through the upper transparent electrode. Hypocycloid and epicycloid motions of grains are observed, to our knowledge for the first time. Cuspate cycloid motions, circular motion, wave motion, and stationary grains are also observed. Their trajectories can be obtained by using long-time exposure, and the characteristic parameters of the grain movement are measured by image processing with MATLAB. Though the dust grains can move around the magnet steadily in various trajectories, the induced magnetic field is too weak to give rise to cycloid motions of the grains. We therefore propose a new mechanism in which an inverse Magnus force induced by the spin of the irregular grains plays an important role in their cycloid motions. The pine pollen we used in the experiment is not a regular microsphere; there is an asymmetry in its shape. On the basis of Bernoulli's principle, the pressure difference between the left and right sides of a forward-moving grain produces the inverse Magnus effect. Additional comparison experiments with regular microspheres are also performed to confirm that the cycloid motions are distinctive features of an irregular dust grain immersed in the plasma. The periodical change of the cyclotron radius as the grain travels results in the (cuspate) cycloid motions, and the maximal value of the angular velocity of spin is about 105 rad/s. Our experimental observations can be well explained based on the force analysis in the 2D horizontal plane.

  1. Action induction due to visual perception of linear motion in depth.

    Science.gov (United States)

    Classen, Claudia; Kibele, Armin

    2017-01-01

    Visually perceived motion can affect observers' motor control in such a way that an intended action can be activated automatically when it contains similar spatial features. So far, effects have been mostly demonstrated with simple displays where objects were moving in a two-dimensional plane. However, almost all actions we perform and visually perceive in everyday life are much more complex and take place in three-dimensional space. The purpose of this study was to examine action inductions due to visual perception of motion in depth. Therefore, we conducted two Simon experiments where subjects were presented with video displays of a sphere (simple displays, experiment 1) and a real person (complex displays, experiment 2) moving in depth. In both experiments, motion direction towards and away from the observer served as task irrelevant information whereas a color change in the video served as relevant information to choose the correct response (close or far positioned response key). The results show that subjects reacted faster when motion direction of the dynamic stimulus was corresponding to the spatial position of the demanded response. In conclusion, this direction-based Simon effect is modulated by spatial position information, higher sensitivity of our visual system for looming objects, and a high salience of objects being on a collision course.

  2. Fractional motions

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo I., E-mail: eliazar@post.tau.ac.il [Holon Institute of Technology, P.O. Box 305, Holon 58102 (Israel); Shlesinger, Michael F., E-mail: mike.shlesinger@navy.mil [Office of Naval Research, Code 30, 875 N. Randolph St., Arlington, VA 22203 (United States)

    2013-06-10

    Brownian motion is the archetypal model for random transport processes in science and engineering. Brownian motion displays neither wild fluctuations (the “Noah effect”), nor long-range correlations (the “Joseph effect”). The quintessential model for processes displaying the Noah effect is Lévy motion, the quintessential model for processes displaying the Joseph effect is fractional Brownian motion, and the prototypical model for processes displaying both the Noah and Joseph effects is fractional Lévy motion. In this paper we review these four random-motion models–henceforth termed “fractional motions” –via a unified physical setting that is based on Langevin’s equation, the Einstein–Smoluchowski paradigm, and stochastic scaling limits. The unified setting explains the universal macroscopic emergence of fractional motions, and predicts–according to microscopic-level details–which of the four fractional motions will emerge on the macroscopic level. The statistical properties of fractional motions are classified and parametrized by two exponents—a “Noah exponent” governing their fluctuations, and a “Joseph exponent” governing their dispersions and correlations. This self-contained review provides a concise and cohesive introduction to fractional motions.
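    For readers who want to see the "Joseph effect" numerically, the sketch below samples a fractional Brownian motion path from its covariance by Cholesky factorization; it is a generic textbook construction, not taken from the reviewed paper, and the parameter values are illustrative.

```python
import numpy as np

def fbm(n, hurst, T=1.0, seed=0):
    """Sample a fractional Brownian motion path by Cholesky factorization.

    n     : number of time steps.
    hurst : Hurst ("Joseph") exponent in (0, 1); 0.5 recovers ordinary Brownian motion.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    # fBm covariance: C(s, t) = 0.5 * (s^2H + t^2H - |t - s|^2H)
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + tt**(2 * hurst) - np.abs(tt - s)**(2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for numerical stability
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

path = fbm(500, hurst=0.7)   # persistent (long-range correlated) motion
```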

  3. Network video transmission system based on SOPC

    Science.gov (United States)

    Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua

    2008-03-01

    Video systems have been widely used in many fields such as conferencing, public security, military affairs and medical treatment. With the rapid development of FPGAs, SOPC has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for the purpose of video acquisition, video encoding and network transmission. The hardware platform used to design the system is an SOPC board of model Altera DE2, which includes an FPGA chip of model EP2C35F672C6, an Ethernet controller and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data and another module to realize Motion-JPEG have been designed with Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that these two modules work as expected. uClinux, including the TCP/IP protocol stack and the Ethernet controller driver, is chosen as the embedded operating system, and an application program scheme is proposed.

  4. Sports Video Segmentation using Spectral Clustering

    Directory of Open Access Journals (Sweden)

    Xiaohong Zhao

    2014-07-01

    Full Text Available With the rapid development of computer and multimedia technology, video processing techniques are applied in the field of sports in order to analyze sport video. For sports video analysis, how to segment the sports video image has become an important research topic. Nowadays, algorithms for video image segmentation mainly include neural networks, K-means and so on. However, the accuracy and speed of these algorithms for moving object segmentation are not satisfactory, and they are easily influenced by irregular movement of the object, illumination, etc. In view of this, this paper proposes an algorithm for object segmentation in sports video image sequences based on spectral clustering. This algorithm simultaneously considers pixel-level visual features and the edge information of neighboring pixels, so that the calculation of similarity is more intuitive and not affected by factors such as image texture. When clustering the image features, the proposed method (1) preprocesses the video image sequence and extracts the image features; (2) builds and calculates the similarity matrix between pixels using a weight function; (3) extracts the feature vectors; and (4) performs clustering using the spectral clustering algorithm to segment the sports video image. The experimental results indicate that the method proposed in this paper has advantages such as lower complexity, high computational effectiveness and a low computational amount, and it achieves better extraction results on video images.
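    A minimal pixel-level version of this idea can be put together with scikit-learn's spectral clustering; the color-plus-position features, the RBF affinity and the subsampling factor below are our own simplifications, not the similarity measure defined in the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_pixels(frame, n_segments=3, scale=16):
    """Cluster the pixels of a (downsampled) frame with spectral clustering.

    frame : (H, W, 3) RGB array. The frame is subsampled because spectral
            clustering builds a full pixel-affinity matrix.
    """
    small = frame[::scale, ::scale].astype(float)
    h, w, _ = small.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Feature vector per pixel: color plus (weighted) position, so that the
    # affinity reflects both appearance and spatial proximity.
    feats = np.column_stack([small.reshape(-1, 3),
                             2.0 * yy.ravel(), 2.0 * xx.ravel()])
    labels = SpectralClustering(n_clusters=n_segments, affinity="rbf",
                                gamma=1e-3, assign_labels="kmeans",
                                random_state=0).fit_predict(feats)
    return labels.reshape(h, w)
```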

  5. The experiments and analysis of several selective video encryption methods

    Science.gov (United States)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

    This paper presents four methods for selective video encryption based on MPEG-2 video compression, operating on the slices, the I-frames, the motion vectors, and the DCT coefficients. We use the AES encryption method in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be selected arbitrarily and is designed using the double limit counting method, so the accuracy can be increased.

  6. Video segmentation using multiple features based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    张风超; 杨杰; 刘尔琦

    2004-01-01

    Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
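    The core of such a method is an EM fit of a Gaussian mixture to joint color-and-motion features, which scikit-learn provides out of the box. The sketch below is a generic stand-in, not the paper's model; the temporal-consistency speed-up could be imitated by warm-starting each frame's mixture with the previous frame's parameters (e.g. via `means_init`).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_segment(frame, flow, n_classes=2):
    """Unsupervised pixel classification with a Gaussian mixture fitted by EM.

    frame : (H, W, 3) RGB array.
    flow  : (H, W, 2) per-pixel motion field (e.g. optical flow).
    """
    h, w, _ = frame.shape
    # Joint color + motion feature per pixel, echoing the multi-feature model.
    feats = np.concatenate([frame.reshape(-1, 3).astype(float),
                            flow.reshape(-1, 2)], axis=1)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          max_iter=50, random_state=0)
    labels = gmm.fit_predict(feats)
    return labels.reshape(h, w)
```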

  7. Robust Video Watermarking using Multi-Band Wavelet Transform

    CERN Document Server

    Hussein, Jamal

    2009-01-01

    This paper addresses copyright protection as a major security demand in digital marketplaces. Two watermarking techniques are proposed and compared for compressed and uncompressed video with the intention of showing the advantages and the possible weaknesses of schemes working in the frequency domain and in the spatial domain. In this paper a robust video watermarking method is presented. This method embeds data into specific bands in the wavelet domain using a motion estimation approach. The algorithm uses the HL and LH bands to add the watermark, where the motion in these bands does not affect the quality of the extracted watermark even if the video is subjected to different types of malicious attacks. The watermark is embedded additively using a random Gaussian distribution in the video sequences. The method is tested on different types of video (a compressed DVD-quality movie and an uncompressed digital camera movie). The proposed watermarking method in the frequency domain has strong robustness against some attacks such as ...
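    Additive embedding of a keyed Gaussian sequence into wavelet detail bands is straightforward with PyWavelets; the sketch below shows one-level embedding per frame and a simple correlation detector, and leaves out the motion-estimation step the paper uses to decide where to embed. The 'haar' wavelet, the strength value and the detector are assumptions of ours.

```python
import numpy as np
import pywt

def embed_watermark(frame, key=0, strength=2.0):
    """Embed a pseudo-random watermark in the two detail bands of a frame.

    frame : 2D luminance array of one video frame.
    key   : seed of the Gaussian watermark sequence (acts as the secret key).
    """
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    rng = np.random.default_rng(key)
    wm_h = rng.standard_normal(cH.shape)
    wm_v = rng.standard_normal(cV.shape)
    cH_w = cH + strength * wm_h          # additive embedding, one detail band
    cV_w = cV + strength * wm_v          # additive embedding, the other detail band
    return pywt.idwt2((cA, (cH_w, cV_w, cD)), "haar")

def detect_watermark(frame, key=0):
    """Correlation detector: a large value suggests the watermark is present."""
    _, (cH, cV, _) = pywt.dwt2(frame.astype(float), "haar")
    rng = np.random.default_rng(key)
    wm_h = rng.standard_normal(cH.shape)
    wm_v = rng.standard_normal(cV.shape)
    return float((cH * wm_h).mean() + (cV * wm_v).mean())
```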

  8. An improved spatio-temporal-SNR FGS video coding scheme using motion compensation on enhancement layers

    Institute of Scientific and Technical Information of China (English)

    江涛; 张兆扬; 马然; 石旭利

    2006-01-01

    In this paper an effective MC + FGSST structure is explored, which is appropriate for scalable video coding. The structure obtains spatio-temporal-SNR fine granular scalability and achieves high coding efficiency at the same time. Users can acquire the scalability they need by choosing and combining these dimensions. A high-performance codec solution based on this structure is then presented. Subsequently, the key issue of how many bit-planes should be used for motion compensation is discussed, and an algorithm for this issue is presented. The proposed codec saves a lot of hardware expense. Simulation results indicate that the performance of the MC + FGSST structure is superior to that of the FGSST structure.

  9. Collective motion

    Science.gov (United States)

    Vicsek, Tamás; Zafeiris, Anna

    2012-08-01

    We review the observations and the basic laws describing the essential aspects of collective motion - one of the most common and spectacular manifestations of coordinated behavior. Our aim is to provide a balanced discussion of the various facets of this highly multidisciplinary field, including experiments, mathematical methods and models for simulations, so that readers with a variety of backgrounds could get both the basics and a broader, more detailed picture of the field. The observations we report on include systems consisting of units ranging from macromolecules through metallic rods and robots to groups of animals and people. Some emphasis is put on models that are simple and realistic enough to reproduce the numerous related observations and are useful for developing concepts for a better understanding of the complexity of systems consisting of many simultaneously moving entities. As such, these models allow the establishment of a few fundamental principles of flocking. In particular, it is demonstrated that, in spite of considerable differences, a number of deep analogies exist between equilibrium statistical physics systems and those made of self-propelled (in most cases living) units. In both cases only a few well defined macroscopic/collective states occur and the transitions between these states follow a similar scenario, involving discontinuity and algebraic divergences.
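    A concrete example of the kind of minimal flocking model discussed in this review is the Vicsek model. The sketch below implements one update step of its standard form (alignment with neighbours within radius r plus angular noise in a periodic box); the parameter values are chosen only for illustration.

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, v=0.03, r=1.0, eta=0.2, rng=None):
    """One update of the minimal Vicsek flocking model.

    pos   : (N, 2) particle positions in a periodic box of side L.
    theta : (N,) heading angles.
    Each particle adopts the mean heading of its neighbours within radius r,
    perturbed by uniform angular noise of amplitude eta, then moves at speed v.
    """
    rng = rng or np.random.default_rng()
    # Pairwise displacement with periodic boundary conditions.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d**2).sum(-1) < r**2
    # Mean direction of neighbours (including self), plus noise.
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * (rng.random(len(pos)) - 0.5)
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L
    return pos, theta
```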

  10. Serious Video Games for Health: How Behavioral Science Guided the Development of a Serious Video Game.

    Science.gov (United States)

    Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Thompson, Victoria; Jago, Russell; Griffith, Melissa Juliano

    2010-08-01

    Serious video games for health are designed to entertain players while attempting to modify some aspect of their health behavior. Behavior is a complex process influenced by multiple factors, often making it difficult to change. Behavioral science provides insight into factors that influence specific actions that can be used to guide key game design decisions. This article reports how behavioral science guided the design of a serious video game to prevent Type 2 diabetes and obesity among youth, two health problems increasing in prevalence. It demonstrates how video game designers and behavioral scientists can combine their unique talents to create a highly focused serious video game that entertains while promoting behavior change.

  11. Do-It-Yourself Whiteboard-Style Physics Video Lectures

    Science.gov (United States)

    Douglas, Scott Samuel; Aiken, John Mark; Greco, Edwin; Schatz, Michael; Lin, Shih-Yin

    2017-01-01

    Video lectures are increasingly being used in physics instruction. For example, video lectures can be used to "flip" the classroom, i.e., to deliver, via the Internet, content that is traditionally transmitted by in-class lectures (e.g., presenting concepts, working examples, etc.), thereby freeing up classroom time for more interactive instruction. To date, most video lectures are live lecture recordings or screencasts. The hand-animated "whiteboard" video is an alternative to these more common styles and affords unique creative opportunities such as stop-motion animation or visual "demonstrations" of phenomena that would be difficult to demo in a classroom. In the spring of 2013, a series of whiteboard-style videos were produced to provide video lecture content for Georgia Tech introductory physics instruction, including flipped courses and a MOOC. This set of videos (which also includes screencasts and live recordings) can be found on the "Your World is Your Lab" YouTube channel. In this article, we describe this method of video production, which is suitable for an instructor working solo or in collaboration with students; we explore students' engagement with these videos in a separate work. A prominent example of whiteboard animation is the "Minute Physics" video series by Henry Reich, whose considerable popularity and accessible, cartoony style were the original inspiration for our own video lectures.

  12. Perception and discrimination of movement and biological motion patterns in fish.

    Science.gov (United States)

    Schluessel, V; Kortekamp, N; Cortes, J A Ortiz; Klein, A; Bleckmann, H

    2015-09-01

    Vision is of primary importance for many fish species, as is the recognition of movement. With the exception of one study assessing the influence of conspecific movement on shoaling behaviour, the perception of biological motion in fish had not been studied in a cognitive context. The aim of the present study was therefore to assess the discrimination abilities of two teleost species with regard to simple and complex movement patterns of dots and objects, including biological motion patterns using point and point-light displays (PDs and PLDs). In two-alternative forced-choice experiments, in which choosing the designated positive stimulus was food-reinforced, fish were first tested on their ability to distinguish the video of a stationary black dot on a light background from the video of a moving black dot presented at different frequencies and amplitudes. While all fish succeeded in learning the task, performance declined with decreases in either or both parameters. In subsequent tests, cichlids and damselfish distinguished successfully between videos of two dots moving at different speeds and amplitudes, between two moving dot patterns (sinus vs. expiring sinus) and between animated videos of two moving organisms (trout vs. eel). Transfer tests following the training of the latter showed that fish were unable to identify the positive stimulus (trout) by means of its PD alone, indicating that the human ability to spontaneously recognize an organism based on its biological motion may not be present in fish. All participating individuals successfully discriminated between two PDs and two PLDs after a short period of training, indicating that biological motions presented in the form of PLDs are perceived and can be distinguished. Results were the same for dark dots presented on a light background and light dots on a dark background.

  13. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available Multiview video, one of the main types of three-dimensional (3D) video signals, is captured by a set of video cameras from various viewpoints and has attracted much interest recently. Data compression for multiview video has become a major issue. In this paper, a novel high-efficiency fractal multiview video codec is proposed. First, to compress the anchor-viewpoint video, an intraframe algorithm based on the H.264/AVC intra-prediction modes is proposed together with a combined fractal and motion compensation (CFMC) algorithm, in which range blocks are predicted from domain blocks in the previously decoded frame using translational motion with a gray-value transformation. Then a temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are designed to compress the multiview video data. The proposed fractal multiview video codec can exploit temporal and spatial correlations adequately. Experimental results show that it obtains about a 0.36 dB increase in decoding quality and a 36.21% decrease in encoding bitrate compared with JMVC 8.5, while saving 95.71% of the encoding time. Rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.
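    The gray-value transformation at the heart of the fractal prediction step can be sketched as a least-squares fit of a contrast/brightness pair mapping a domain block onto a range block. The sketch below is illustrative only: it omits the translational motion search and the H.264/AVC intra modes, and the function names are hypothetical.

```python
# Sketch: optimal contrast s and brightness o so that s*D + o approximates R,
# plus the resulting collage (matching) error used to pick the best domain block.
import numpy as np

def gray_value_transform(domain, range_block):
    d = domain.astype(float).ravel()
    r = range_block.astype(float).ravel()
    var_d = d.var()
    s = 0.0 if var_d == 0 else float(np.cov(d, r, bias=True)[0, 1] / var_d)
    o = float(r.mean() - s * d.mean())
    return s, o

def collage_error(domain, range_block):
    s, o = gray_value_transform(domain, range_block)
    err = float(((s * domain.astype(float) + o - range_block.astype(float)) ** 2).mean())
    return err, s, o
```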

  14. An unsupervised method for summarizing egocentric sport videos

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are increasingly interested in recording their sport activities using head-worn or hand-held cameras. These videos, called egocentric sport videos, have different motion and appearance patterns compared with life-logging videos. While a life-logging video can be described in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction may fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of the studies.
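    The abstract does not spell out the algorithm, so the sketch below is only a generic stand-in for the idea of unsupervised key-frame selection from combined appearance and motion descriptors; the descriptor, the online-clustering rule, and the threshold tau are all assumptions.

```python
# Generic sketch: per-frame descriptor = colour histogram + mean optical-flow
# magnitude; a frame becomes a key-frame when it is far from all cluster
# centres, so the number of key-frames emerges automatically.
import numpy as np
import cv2

def frame_descriptor(prev_gray, gray, frame_bgr):
    hist = cv2.calcHist([frame_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).ravel()
    hist /= hist.sum() + 1e-9
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.array([np.linalg.norm(flow, axis=2).mean()])
    return np.concatenate([hist, motion])

def select_keyframes(descriptors, tau=0.25):
    centres, keys = [], []
    for i, d in enumerate(descriptors):
        if not centres or min(np.linalg.norm(d - c) for c in centres) > tau:
            centres.append(d)
            keys.append(i)
    return keys
```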

  15. An Adaptive Frame Skipping and VOP Interpolation Algorithm for Video Object Segmentation

    Institute of Scientific and Technical Information of China (English)

    YANG Gaobo; ZHANG Zhaoyang

    2004-01-01

    Video object segmentation is a key step for the successful use of MPEG-4. However, most of the currently available segmentation algorithms are still far from real-time performance. In order to improve the processing speed, an adaptive frame skipping and VOP interpolation algorithm is proposed in this paper. It adaptively determines the number of skipped frames based on the rigidity and motion complexity of the video object. To interpolate the VOPs for skipped frames, a bi-directional projection scheme is adopted. Its principle is to classify the regions obtained by spatial segmentation for every frame in the sequence. It is valid for both rigid and non-rigid objects and achieves good localization of object boundaries. Experimental results show that the proposed approach can improve the processing speed greatly while maintaining visually pleasant results.
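    The paper's actual skipping rule is not given in the abstract; as a rough illustration of the idea, the number of skipped frames can be made to shrink as a crude motion-complexity measure grows. The difference-based measure, the normalisation constant, and max_skip below are assumptions.

```python
# Heuristic sketch: skip more frames when inter-frame change is small (simple,
# rigid motion) and fewer when it is large (complex, non-rigid motion).
import numpy as np

def frames_to_skip(prev_frame, curr_frame, max_skip=4):
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float)).mean()
    complexity = min(diff / 32.0, 1.0)          # crude normalisation to [0, 1]
    return int(round((1.0 - complexity) * max_skip))
```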

  16. Motion-Adaptive Depth Superresolution.

    Science.gov (United States)

    Kamilov, Ulugbek S; Boufounos, Petros T

    2017-04-01

    Multi-modal sensing is becoming increasingly important in a number of applications, providing new capabilities and processing challenges. In this paper, we explore the benefit of combining a low-resolution depth sensor with a high-resolution optical video sensor in order to provide a high-resolution depth map of the scene. We propose a new formulation that is able to incorporate temporal information and exploit the motion of objects in the video to significantly improve the results over existing methods. In particular, our approach exploits the space-time redundancy in the depth and intensity using motion-adaptive low-rank regularization. We provide experiments to validate our approach and confirm that the quality of the estimated high-resolution depth is improved substantially. Our approach can be a first component in systems using vision techniques that rely on high-resolution depth information.

  17. An investigation comparing traditional recitation instruction to computer tutorials which combine three-dimensional animation with varying levels of visual complexity, including digital video in teaching various chemistry topics

    Science.gov (United States)

    Graves, A. Palmer

    This study examines the effect of increasing the visual complexity used in computer-assisted instruction in general chemistry. Traditional recitation instruction was used as a control for the experiment. One tutorial presented a chemistry topic using 3-D animation showing molecular activity and a symbolic representation of the macroscopic view of a chemical phenomenon. A second tutorial presented the same topic but simultaneously presented students with a digital video movie showing the phenomenon and 3-D animation showing the molecular view of the phenomenon. This experimental set-up was used in two different experiments during the first semester of a college-level general chemistry course. The topics covered were the molecular effect of heating water through the solid-liquid phase change and the kinetic molecular theory used in explaining pressure changes. The subjects were 236 college students enrolled in a freshman chemistry course at a large university. The data indicated that the simultaneous presentation of digital video showing the solid-to-liquid phase change of water with a molecular animation showing the molecular behavior during the phase change had a significant effect on students' particulate understanding when compared to traditional recitation. Although the effect of the KMT tutorial was not statistically significant, there was a positive effect on students' particulate understanding. The use of the computer tutorials also had a significant effect on students' attitudes toward their comprehension of the lesson.

  18. Semantic home video categorization

    Science.gov (United States)

    Min, Hyun-Seok; Lee, Young Bok; De Neve, Wesley; Ro, Yong Man

    2009-02-01

    Nowadays, a strong need exists for the efficient organization of an increasing amount of home video content. To create an efficient system for the management of home video content, it is necessary to categorize home video content in a semantic way. A significant amount of research has already been dedicated to semantic video categorization. However, conventional categorization approaches often rely on unnecessary concepts and complicated algorithms that are not suited to the context of home video categorization. To overcome this problem, this paper proposes a novel home video categorization method that adopts semantic home photo categorization. To use home photo categorization in the context of home video, we segment video content into shots and extract key frames that represent each shot. To extract the semantics from key frames, we divide each key frame into ten local regions and extract low-level features. Based on the low-level features extracted for each local region, we can predict the semantics of a particular key frame. To verify the usefulness of the proposed home video categorization method, experiments were performed with 70 home video sequences, labeled with concepts that are part of the MPEG-7 VCE2 dataset. For the home video sequences used, the proposed system produced a recall of 77% and an accuracy of 78%.
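    The local-feature step can be pictured as follows; the 5x2 grid layout used to obtain ten regions, the choice of colour histograms as the low-level feature, and the bin count are assumptions made for illustration.

```python
# Sketch: split a key-frame into ten regions and concatenate one normalised
# colour histogram per region into a single key-frame descriptor.
import numpy as np
import cv2

def region_features(key_frame, rows=2, cols=5, bins=8):
    h, w = key_frame.shape[:2]
    feats = []
    for r in range(rows):
        for c in range(cols):
            patch = key_frame[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
            hist = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                                [0, 256] * 3).ravel()
            feats.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(feats)
```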

  19. A subjective study to evaluate video quality assessment algorithms

    Science.gov (United States)

    Seshadrinathan, Kalpana; Soundararajan, Rajiv; Bovik, Alan C.; Cormack, Lawrence K.

    2010-02-01

    Automatic methods to evaluate the perceptual quality of a digital video sequence have widespread applications wherever the end-user is a human. Several objective video quality assessment (VQA) algorithms exist, whose performance is typically evaluated using the results of a subjective study performed by the video quality experts group (VQEG) in 2000. There is a great need for a free, publicly available subjective study of video quality that embodies the state of the art in video processing technology and that is effective in challenging and benchmarking objective VQA algorithms. In this paper, we present a study and a resulting database, known as the LIVE Video Quality Database, in which 150 distorted video sequences obtained from 10 different source videos were subjectively evaluated by 38 human observers. Our study includes videos that have been compressed by MPEG-2 and H.264, as well as videos obtained by simulated transmission of H.264 compressed streams through error-prone IP and wireless networks. The subjective evaluation was performed using a single-stimulus paradigm with hidden reference removal, where the observers were asked to provide their opinion of video quality on a continuous scale. We also present the performance of several freely available objective, full reference (FR) VQA algorithms on the LIVE Video Quality Database. The recent MOtion-based Video Integrity Evaluation (MOVIE) index emerges as the leading objective VQA algorithm in our study, while the performance of the Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is noteworthy. The LIVE Video Quality Database is freely available for download, and we hope that our study provides researchers with a valuable tool to benchmark and improve the performance of objective VQA algorithms.
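    Objective VQA algorithms are usually benchmarked against such a database by correlating their predicted scores with the subjective opinion scores; a minimal sketch of that comparison (not the study's exact protocol, which may include a logistic fitting step) is shown below.

```python
# Sketch: Spearman rank-order and Pearson linear correlation between an
# objective metric's scores and the mean opinion scores (MOS) of the database.
from scipy.stats import spearmanr, pearsonr

def evaluate_vqa(objective_scores, mos):
    srocc = spearmanr(objective_scores, mos)[0]   # rank-order correlation
    lcc = pearsonr(objective_scores, mos)[0]      # linear correlation
    return float(srocc), float(lcc)
```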

  20. Simple video format for mobile applications

    Science.gov (United States)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

    With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance and video mail using pervasive computing devices.

  1. Spatial constraints of stereopsis in video displays

    Science.gov (United States)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittleson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward, whereas only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereo-depth in video displays.

  2. Complexity

    CERN Document Server

    Gershenson, Carlos

    2011-01-01

    The term complexity derives etymologically from the Latin plexus, which means interwoven. Intuitively, this implies that something complex is composed of elements that are difficult to separate. This difficulty arises from the relevant interactions that take place between components. This lack of separability is at odds with the classical scientific method - which has been used since the times of Galileo, Newton, Descartes, and Laplace - and has also influenced philosophy and engineering. In recent decades, the scientific study of complexity and complex systems has brought about a paradigm shift in science and philosophy, proposing novel methods that take relevant interactions into account.

  3. Acoustic Neuroma Educational Video

    Medline Plus

  4. Video Screen Capture Basics

    Science.gov (United States)

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  5. Acoustic Neuroma Educational Video

    Medline Plus

  6. STUDY ON SEMANTIC-BASED VIDEO WATERMARKING METHOD

    Institute of Scientific and Technical Information of China (English)

    Wang Xuhai; Tong Ming; Qin Kezhen

    2010-01-01

    A new video watermarking method for the Audio Video coding Standard (AVS) is proposed. According to human visual masking properties, this method determines the region of interest for watermark embedding by analyzing video semantics, generates a dynamic robust watermark according to video motion semantics, and embeds the watermark in the Intermediate Frequency (IF) Discrete Cosine Transform (DCT) coefficients of the luminance sub-block prediction residual in the region of interest. The method controls the watermark embedding strength adaptively according to video texture semantics. Experiments show that this method is robust not only to various conventional attacks, but also to re-framing, frame cropping, frame deletion and other video-specific attacks.
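    The embedding step can be pictured as adding a keyed pattern to mid-frequency DCT coefficients of an 8x8 luminance residual block. The zig-zag band, the +/-1 pattern, and the strength value in the sketch are assumptions; the semantic analysis that selects the region of interest and the watermark payload is not shown.

```python
# Sketch: add a keyed +/-1 pattern, scaled by `strength`, to the intermediate-
# frequency (IF) coefficients of an 8x8 DCT of a luminance residual block.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

# coefficients whose (row + col) index falls in an assumed mid-frequency band
IF_MASK = np.fromfunction(lambda u, v: (u + v >= 3) & (u + v <= 6), (8, 8))

def embed_block(residual_block, key, strength=1.5):
    coeffs = dct2(residual_block.astype(float))
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=(8, 8)) * IF_MASK
    return idct2(coeffs + strength * pattern)
```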

  7. MRT letter: visual attention driven framework for hysteroscopy video abstraction.

    Science.gov (United States)

    Ejaz, Naveed; Mehmood, Irfan; Baik, Sung Wook

    2013-06-01

    Diagnostic hysteroscopy is a popular method for investigating the regions in the female reproductive system. The videos generated by hysteroscopy sessions of patients are recurrently archived in medical libraries. Gynecologists often need to browse these libraries in search of similar cases or for reviewing old videos of a patient. Diagnostic hysteroscopy videos contain a lot of information with abundant redundancy. Key frame extraction-based video summarization can be used to reduce this huge amount of data. Moreover, key frames can be used for browsing and indexing of hysteroscopy videos. In this article, a domain specific visual attention driven framework for summarization of hysteroscopy videos is proposed. The visual attention model is materialized by computing saliency based on color, texture, and motion. The experimental results, in comparison with other techniques, demonstrate the efficacy of the proposed framework.
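    The attention model itself is not detailed in the abstract; as a generic stand-in, per-frame saliency can be approximated by fusing simple colour, texture, and motion cues and then keeping the frames at local maxima of the resulting curve. The cues, weights, and normalisation constants below are assumptions.

```python
# Generic sketch: fuse colour richness, edge energy, and mean optical-flow
# magnitude into one saliency score per frame; local maxima become key-frames.
import numpy as np
import cv2

def frame_saliency(prev_gray, gray, frame_bgr, w=(0.4, 0.3, 0.3)):
    colour = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 1].mean() / 255.0
    texture = min(cv2.Laplacian(gray, cv2.CV_64F).var() / 1e4, 1.0)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = min(np.linalg.norm(flow, axis=2).mean() / 10.0, 1.0)
    return w[0] * colour + w[1] * texture + w[2] * motion

def keyframes_from_saliency(scores):
    s = np.asarray(scores)
    return [i for i in range(1, len(s) - 1) if s[i] > s[i - 1] and s[i] > s[i + 1]]
```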

  8. An Algorithm of Extracting I-Frame in Compressed Video

    Directory of Open Access Journals (Sweden)

    Zhu Yaling

    2015-01-01

    Full Text Available MPEG video data includes three types of frames: I-frames, P-frames and B-frames. The I-frame records the main information of the video data, while the P-frames and B-frames are regarded as motion-compensated predictions relative to the I-frame. This paper presents an approach that analyzes the MPEG video stream in the compressed domain and finds the key frames of the stream by extracting the I-frames. Experiments indicate that this method can be applied automatically to compressed MPEG video and lays the foundation for further video processing.
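    The paper parses the MPEG bit-stream itself; as a practical shortcut (not the paper's method), the picture type of every frame can be queried with the ffprobe tool and the I-frame positions read off from its output, assuming the ffmpeg tools are installed.

```python
# Sketch: list the frame indices whose picture type is 'I' using ffprobe.
import subprocess

def iframe_indices(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True).stdout
    return [i for i, line in enumerate(out.splitlines())
            if line.strip().startswith("I")]
```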

  9. 47 CFR 79.3 - Video description of video programming.

    Science.gov (United States)

    2010-10-01

    ... 47 CFR Part 79, Closed Captioning and Video Description of Video Programming, § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply: ...

  10. Video analysis platform

    OpenAIRE

    Flores, Pablo; Arias, Pablo; Lecumberry, Federico; Pardo, Álvaro

    2006-01-01

    In this article we present the Video Analysis Platform (VAP), an open source software framework for video analysis, processing and description. The main goals of VAP are: to provide a multiplatform system that allows the easy implementation of video algorithms; to provide structures and algorithms for the segmentation of video data at its different levels of abstraction (shots, frames, objects, regions, etc.); to permit the generation and comparison of MPEG7-like descriptors; and to develop tes...

  11. Making good physics videos

    Science.gov (United States)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes asking educators to post video pre-labs or to flip their classrooms. In this article, I share my advice on creating engaging physics videos.

  12. Evaluation of Fast-Forward Video Visualization.

    Science.gov (United States)

    Hoferlin, M; Kurzhals, K; Hoferlin, B; Heidemann, G; Weiskopf, D

    2012-12-01

    We evaluate and compare video visualization techniques based on fast-forward. A controlled laboratory user study (n = 24) was conducted to determine the trade-off between support of object identification and motion perception, two properties that have to be considered when choosing a particular fast-forward visualization. We compare four different visualizations: two representing the state-of-the-art and two new variants of visualization introduced in this paper. The two state-of-the-art methods we consider are frame-skipping and temporal blending of successive frames. Our object trail visualization leverages a combination of frame-skipping and temporal blending, whereas predictive trajectory visualization supports motion perception by augmenting the video frames with an arrow that indicates the future object trajectory. Our hypothesis was that each of the state-of-the-art methods satisfies just one of the goals: support of object identification or motion perception. Thus, they represent both ends of the visualization design. The key findings of the evaluation are that object trail visualization supports object identification, whereas predictive trajectory visualization is most useful for motion perception. However, frame-skipping surprisingly exhibits reasonable performance for both tasks. Furthermore, we evaluate the subjective performance of three different playback speed visualizations for adaptive fast-forward, a subdomain of video fast-forward.
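    For reference, the two state-of-the-art baselines compared in the study can be sketched in a few lines: frame-skipping keeps every k-th frame, while temporal blending averages each group of k successive frames, k being the fast-forward factor. The array-based formulation below is an illustrative simplification.

```python
# Sketch of the two baseline fast-forward visualizations.
import numpy as np

def frame_skipping(frames, k):
    return np.asarray(frames)[::k]

def temporal_blending(frames, k):
    frames = np.asarray(frames, dtype=float)
    n = (len(frames) // k) * k                   # drop the incomplete tail group
    groups = frames[:n].reshape(-1, k, *frames.shape[1:])
    return groups.mean(axis=1).astype(np.uint8)
```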

  13. Enhanced Video-Oculography System

    Science.gov (United States)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  14. Smart sensing surveillance video system

    Science.gov (United States)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for applications in remote battlefield, tactical, and civilian settings, including border surveillance, special-force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  15. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  16. Waisda?: video labeling game

    NARCIS (Netherlands)

    Hildebrand, M.; Brinkerink, M.; Gligorov, R.; Steenbergen, M. van; Huijkman, J.; Oomen, J.

    2013-01-01

    The Waisda? video labeling game is a crowdsourcing tool to collect user-generated metadata for video clips. It follows the paradigm of games-with-a-purpose, where two or more users play against each other by entering tags that describe the content of the video. Players score points by entering the same tags.

  17. Video: Modalities and Methodologies

    Science.gov (United States)

    Hadfield, Mark; Haw, Kaye

    2012-01-01

    In this article, we set out to explore what we describe as the use of video in various modalities. For us, modality is a synthesizing construct that draws together and differentiates between the notion of "video" both as a method and as a methodology. It encompasses the use of the term video as both product and process, and as a data collection…

  18. Developing a Promotional Video

    Science.gov (United States)

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  19. Pavideoge: A New Video Processing Method in Video Search Engine

    CERN Document Server

    Yang, Pu; Chen, Guang

    2009-01-01

    In this paper, we study the problems of video processing in a video search engine. Video has become a very important kind of data on the Internet, while searching for video is still a challenging task due to the inherent properties of video: it requires enormous storage space, is independent, and expresses its information implicitly. To handle these properties more effectively, we propose a new video processing method for video search engines. In detail, the core of the new method is creating the pavideoge, a new data type that combines the advantages of video and of webpages. The pavideoge has four attributes: real link, videorank, text information and playnum, each of which combines the properties of video with those of webpages. A video search engine based on the pavideoge can retrieve video more effectively. The experimental results show the encouraging performance of our approach. Based on the pavideoge, our video search engine can retrieve more precise videos in comparison with previous related ...

  20. Human-robot trust. Is motion fluency an effective behavioral style for regulating robot trustworthiness?

    NARCIS (Netherlands)

    Ligthart, M.; Brule, R. van den; Haselager, W.F.G.

    2013-01-01

    Finding good behavioral styles to express robot trustworthiness will optimize the usage of robots. In previous research, motion fluency was studied as a behavioral style: smooth robot motions were compared with trembling robot motions. In a video experiment, an effect of motion fluency on trust was found.