WorldWideScience

Sample records for video segment shows

  1. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear even more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
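
    A minimal sketch of the kind of energy such a method might minimize over per-region labels, combining a unary term driven by human-provided location priors with spatial and temporal smoothness terms. All names, weights and the exact form of the terms are illustrative, not the authors' implementation.

```python
import numpy as np

def segmentation_energy(labels, features, clicks, spatial_edges, temporal_edges,
                        w_prior=1.0, w_spatial=0.5, w_temporal=0.5):
    """Illustrative energy for interactive video object segmentation.

    labels         : (N,) array of 0/1 region labels (background/object)
    features       : (N, D) appearance features, one row per region
    clicks         : (N,) array, 1 where a user marked the region as object, else 0
    spatial_edges  : list of (i, j) pairs of spatially adjacent regions
    temporal_edges : list of (i, j) pairs of temporally linked regions
    """
    # Unary term: penalize disagreeing with the human-provided location priors.
    unary = w_prior * np.sum(clicks * (1 - labels))

    # Pairwise terms: penalize label changes between similar neighboring regions.
    def pairwise(edges, weight):
        cost = 0.0
        for i, j in edges:
            similarity = np.exp(-np.linalg.norm(features[i] - features[j]))
            cost += weight * similarity * float(labels[i] != labels[j])
        return cost

    return unary + pairwise(spatial_edges, w_spatial) + pairwise(temporal_edges, w_temporal)
```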

  2. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieve promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these annotations precisely takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset, half its original size, which show that our method can handle many popular classes from the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest broader testing and combination with other methods to improve this result in the future.
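
    The three-stage pipeline described above could be pictured as the following sketch; the `detector`, `pixel_classifier`, and `propagator` callables are hypothetical wrappers standing in for YOLOv2, PSPNet, and Object Flow, respectively, not actual APIs of those systems.

```python
import numpy as np

def segment_video_by_keywords(frames, keywords, detector, pixel_classifier, propagator):
    """Keyword-driven segmentation pipeline (sketch).

    detector(frame)              -> list of (label, box) detections   (e.g. a YOLOv2 wrapper)
    pixel_classifier(frame, box) -> boolean mask for the box          (e.g. a PSPNet wrapper)
    propagator(frames, mask)     -> list of per-frame masks           (e.g. an Object Flow wrapper)
    """
    first = frames[0]

    # 1. Keep detections in the first frame whose labels match the given keywords.
    boxes = [box for label, box in detector(first) if label in keywords]

    # 2. Classify each pixel inside the kept regions as foreground or background.
    initial_mask = np.zeros(first.shape[:2], dtype=bool)
    for box in boxes:
        initial_mask |= pixel_classifier(first, box)

    # 3. Propagate the first-frame mask through the rest of the video.
    return propagator(frames, initial_mask)
```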

  3. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. The method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. It shares a calculation process between the synthesis and segmentation steps; the matching costs calculated in the synthesis step are adaptively fused with other cues, depending on their reliability, in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and the new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  4. Selecting salient frames for spatiotemporal video modeling and segmentation.

    Science.gov (United States)

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevance of a video frame to the GMM-based spatiotemporal video modeling. This allows us to use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, interestingly, frame saliency can also reveal certain object behaviors, which makes the proposed method applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
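
    A much-simplified sketch of the baseline idea: fit a GMM to 6-D spatiotemporal features (x, y, t, R, G, B) and score each frame by its average log-likelihood as a crude saliency proxy. This uses scikit-learn's standard EM, not the paper's modified EM, and the component count is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def frame_saliency_gmm(video, n_components=8):
    """video: (T, H, W, 3) uint8 array. Returns one saliency score per frame (sketch)."""
    T, H, W, _ = video.shape
    ys, xs = np.mgrid[0:H, 0:W]

    # Build 6-D spatiotemporal features: (x, y, t, R, G, B) for every pixel.
    feats = []
    for t in range(T):
        f = np.column_stack([xs.ravel(), ys.ravel(),
                             np.full(H * W, t),
                             video[t].reshape(-1, 3)])
        feats.append(f.astype(np.float64))
    X = np.vstack(feats)

    gmm = GaussianMixture(n_components=n_components, covariance_type="diag").fit(X)

    # Score each frame by its mean log-likelihood under the fitted model.
    return np.array([gmm.score(feats[t]) for t in range(T)])
```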

  5. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
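
    The ellipse-modeling step for a deformed pupil boundary maps onto standard OpenCV calls; a minimal sketch using a direct least-squares ellipse fit on a thresholded eye image (the threshold value and preprocessing are illustrative, not the authors' pipeline).

```python
import cv2
import numpy as np

def fit_pupil_ellipse(gray_eye, threshold=40):
    """Fit an ellipse to the (dark) pupil region of a grayscale eye image (sketch)."""
    # The pupil is the darkest region; threshold it and clean up the mask.
    _, mask = cv2.threshold(gray_eye, threshold, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Take the largest contour and fit an ellipse by least squares.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:              # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)    # ((cx, cy), (major, minor), angle)
```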

  6. Automatic Story Segmentation for TV News Video Using Multiple Modalities

    Directory of Open Access Journals (Sweden)

    Émilie Dumont

    2012-01-01

    Full Text Available While video content is often stored in rather large files or broadcast in continuous streams, users are often interested in retrieving only a particular passage on a topic of interest to them. It is, therefore, necessary to split video documents or streams into shorter segments corresponding to appropriate retrieval units. We propose here a method for the automatic segmentation of TV news videos into stories. A multiple-descriptor-based segmentation approach is proposed. The selected multimodal features are complementary and give good insights about story boundaries. Once extracted, these features are expanded with a local temporal context and combined by an early fusion process. The story boundaries are then predicted using machine learning techniques. We investigate the system through experiments conducted using the TRECVID 2003 data and protocol of the story boundary detection task, and we show that the proposed approach outperforms the state-of-the-art methods while requiring a very small amount of manual annotation.
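
    A sketch of the "expand with local temporal context, early-fuse, then classify" idea using scikit-learn. The feature layout, window size, and classifier choice here are illustrative, not the features or learner used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def add_temporal_context(features, window=2):
    """Early fusion: concatenate each frame's descriptor with those of its neighbors.

    features: (T, D) array of per-frame multimodal descriptors.
    Returns a (T, D * (2 * window + 1)) array.
    """
    T = len(features)
    padded = np.pad(features, ((window, window), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + T] for i in range(2 * window + 1)])

def train_boundary_classifier(X_train, y_train, window=2):
    """X_train: (T, D) multimodal features; y_train: (T,) 1 at story boundaries, else 0."""
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(add_temporal_context(X_train, window), y_train)
    return clf
```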

  7. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices. It provides the viewer with a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video with high quality at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth and the computational power used to decode the video are wasted, because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the requested viewport segment matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport request time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video
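
    A simplified sketch of the dual-buffer idea: base-layer segments are scheduled well ahead of playback, while high-quality viewport segments are requested as late as the buffer allows so they match the most recent head orientation. The buffer targets and interfaces are illustrative, not the paper's algorithm.

```python
class DualBufferScheduler:
    """Sketch of a dual-buffer segment scheduler for viewport-adaptive 360-degree streaming."""

    def __init__(self, base_target=10.0, viewport_target=2.0):
        self.base_target = base_target          # seconds of low-quality full-picture video to keep
        self.viewport_target = viewport_target  # keep the viewport buffer short so it stays fresh
        self.base_buffer = 0.0                  # seconds currently buffered
        self.viewport_buffer = 0.0

    def next_requests(self, head_orientation):
        """Decide which segments to request, given the latest head orientation."""
        requests = []
        # Keep the base layer well ahead so playback never stalls.
        if self.base_buffer < self.base_target:
            requests.append(("base", None))
        # Request high-quality tiles only for the current viewport, and only
        # just in time, so the tiles match the viewer's latest orientation.
        if self.viewport_buffer < self.viewport_target:
            requests.append(("viewport", head_orientation))
        return requests

    def on_segment_received(self, kind, duration):
        if kind == "base":
            self.base_buffer += duration
        else:
            self.viewport_buffer += duration
```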

  8. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, together with a point insertion process that provides the feature points for the next frame's tracking.
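
    The eigenvalue-based point selection and frame-to-frame tracking steps map naturally onto standard OpenCV calls; a minimal sketch with illustrative parameters, omitting the user-assistance and contour-formation stages.

```python
import cv2
import numpy as np

def track_feature_points(prev_gray, next_gray, roi_mask=None):
    """Select eigenvalue-based (Shi-Tomasi) corners in one frame and track them to the next."""
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01,
                                     minDistance=7, mask=roi_mask)
    if points is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # Pyramidal Lucas-Kanade optical flow refines each point's location in the next frame.
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None,
                                                      winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return points[good].reshape(-1, 2), next_points[good].reshape(-1, 2)
```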

  9. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can be easily trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.

  10. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual features, the proposed technique uses audio features as the baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. Then, taking the audio candidates as cues, it develops different fusion methods that effectively use the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method is efficient and adapts well to different kinds of news video.

  11. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contour is a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of detail desired. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show experiments on real video sequences.
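
    A closed B-spline contour through a fixed set of control points can be evaluated with SciPy's generic spline routines; a minimal sketch of the representation only, not the paper's multiresolution evolution scheme.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def bspline_contour(control_points, n_samples=200):
    """Evaluate a closed cubic B-spline through 2-D control points (sketch).

    control_points: (K, 2) array, e.g. K = 2**j points for multiresolution level j
    (needs at least a handful of points for a cubic spline).
    """
    x, y = control_points[:, 0], control_points[:, 1]
    # per=True closes the curve; s=0 interpolates the control points exactly.
    tck, _ = splprep([x, y], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_samples)
    cx, cy = splev(u, tck)
    return np.column_stack([cx, cy])
```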

  12. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation

    OpenAIRE

    Le Wang; Xuhuan Duan; Qilin Zhang; Zhenxing Niu; Gang Hua; Nanning Zheng

    2018-01-01

    Inspired by the recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), we present a new spatio-temporal action localization detector Segment-tube, which consists of sequences of per-frame segmentation masks. The proposed Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-fr...

  13. A new user-assisted segmentation and tracking technique for an object-based video editing system

    Science.gov (United States)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable and efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  14. Temporally coherent 4D video segmentation for teleconferencing

    Science.gov (United States)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.

  15. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing rate. Shot segmentation and keyframe extraction constitute a fundamental step in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.

  16. GPU-Accelerated Foreground Segmentation and Labeling for Real-Time Video Surveillance

    Directory of Open Access Journals (Sweden)

    Wei Song

    2016-09-01

    Full Text Available Real-time and accurate background modeling is an important research topic in the fields of remote monitoring and video surveillance. Meanwhile, effective foreground detection is a preliminary requirement and decision-making basis for sustainable energy management, especially in smart meters. The environment monitoring results provide a decision-making basis for energy-saving strategies. For real-time moving object detection in video, this paper applies parallel computing technology to develop a feedback foreground–background segmentation method and a parallel connected component labeling (PCCL) algorithm. In the background modeling method, pixel-wise color histograms in graphics processing unit (GPU) memory are generated from sequential images. If a pixel's color in the current image does not lie near the peaks of its histogram, the pixel is segmented as foreground. From the foreground segmentation results, the PCCL algorithm clusters the foreground pixels into several groups in order to distinguish separate blobs. Because noisy spots and sparkle in the foreground segmentation results always contain only a small number of pixels, small blobs are removed as noise in order to refine the segmentation results. The proposed GPU-based image processing algorithms are implemented using the compute unified device architecture (CUDA) toolkit. The testing results show a significant enhancement in both speed and accuracy.
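
    The per-pixel color-histogram test can be sketched in plain NumPy (a CPU version of the idea only; the bin count and peak threshold are illustrative, and the CUDA kernels and PCCL labeling stage are omitted).

```python
import numpy as np

class HistogramBackgroundModel:
    """Per-pixel gray-level histogram background model (CPU sketch of the idea)."""

    def __init__(self, height, width, n_bins=32):
        self.n_bins = n_bins
        self.hist = np.zeros((height, width, n_bins), dtype=np.float32)

    def update(self, gray_frame):
        """Accumulate the current frame's gray values into each pixel's histogram."""
        bins = (gray_frame.astype(np.int32) * self.n_bins) // 256
        h, w = gray_frame.shape
        self.hist[np.arange(h)[:, None], np.arange(w)[None, :], bins] += 1.0

    def foreground(self, gray_frame, peak_fraction=0.1):
        """A pixel is foreground if its current bin holds only a small share of its history."""
        bins = (gray_frame.astype(np.int32) * self.n_bins) // 256
        h, w = gray_frame.shape
        counts = self.hist[np.arange(h)[:, None], np.arange(w)[None, :], bins]
        totals = self.hist.sum(axis=2) + 1e-6
        return (counts / totals) < peak_fraction
```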

  17. USABILITY TESTING OF JAPANESE CAPTIONS SEGMENTATION SYSTEM TO SCAFFOLD BEGINNERS TO COMPREHEND JAPANESE VIDEOS

    Directory of Open Access Journals (Sweden)

    Ya-Fei Yang

    2013-06-01

    Full Text Available A major learning difficulty for Japanese foreign language (JFL) learners is the complex composition of two syllabaries, hiragana and katakana, and kanji characters adopted from logographic Chinese ones. As the number of Japanese language learners increases, computer-assisted Japanese language education gradually gains more attention. This study aimed to adopt a Japanese word segmentation system to help JFL learners overcome literacy problems. The study adopted MeCab, a Japanese morphological analyzer and part-of-speech (POS) tagger, to segment Japanese texts into separate morphemes by adding spaces and to attach POS tags to each morpheme for beginners. The participants were asked to take part in three experimental activities involving watching two Japanese videos with general or segmented Japanese captions, and to complete the Nielsen's Attributes of Usability (NAU) survey and the After Scenario Questionnaire (ASQ) to evaluate the usability of the learning activities. The results of the system evaluation showed that the videos with the segmented captions could increase the participants' learning motivation and willingness to adopt the word segmentation system to learn Japanese.
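
    Using MeCab from Python to space-segment a caption line and attach POS tags might look like the sketch below; it assumes the mecab-python3 package with a dictionary such as unidic-lite installed, and the exact POS output depends on the dictionary (the example sentence is illustrative, not from the study's captions).

```python
import MeCab  # pip install mecab-python3 unidic-lite

tagger = MeCab.Tagger()            # full morpheme + POS output
wakati = MeCab.Tagger("-Owakati")  # space-separated segmentation only

caption = "私は日本語の動画を見ます"

# Insert spaces between morphemes for beginner-friendly captions.
print(wakati.parse(caption).strip())

# One morpheme per line with part-of-speech information.
for line in tagger.parse(caption).splitlines():
    if line != "EOS":
        print(line)
```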

  18. Video segmentation and camera motion characterization using compressed data

    Science.gov (United States)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. To guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
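
    The pan/tilt/zoom estimation reduces to a small linear least-squares problem over the block motion vectors; a sketch with a simplified three-parameter model (pan, tilt, and zoom about the image center), which is an illustrative stand-in rather than the exact model fitted in the paper.

```python
import numpy as np

def estimate_pan_tilt_zoom(positions, motion_vectors, center):
    """Least-squares fit of a simple camera model to block motion vectors.

    Model: mv_x = pan + zoom * (x - cx),  mv_y = tilt + zoom * (y - cy)
    positions:      (N, 2) macroblock centers (x, y)
    motion_vectors: (N, 2) motion vectors (mv_x, mv_y)
    center:         (cx, cy) image center
    Returns (pan, tilt, zoom).
    """
    dx = positions[:, 0] - center[0]
    dy = positions[:, 1] - center[1]
    n = len(positions)

    # Stack the x and y equations into one system A @ [pan, tilt, zoom] = b.
    A = np.vstack([np.column_stack([np.ones(n), np.zeros(n), dx]),
                   np.column_stack([np.zeros(n), np.ones(n), dy])])
    b = np.concatenate([motion_vectors[:, 0], motion_vectors[:, 1]])

    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    pan, tilt, zoom = params
    return pan, tilt, zoom
```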

  19. Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Sajid Nazir

    2012-01-01

    Full Text Available A reliable video communication system is proposed based on the data partitioning feature of H.264/AVC, which is used to create a layered stream, and LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive, low-complexity solution to varying channel conditions. Results of the proposed scheme are also compared with slice-partitioned H.264/AVC data. Simulation results show the competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. The simulation results also demonstrate that high visual quality video transmission can be maintained despite the adverse effects of varying channel conditions, and that the number of decoding failures can be reduced.

  20. Segmentation Based Video Steganalysis to Detect Motion Vector Modification

    Directory of Open Access Journals (Sweden)

    Peipei Wang

    2017-01-01

    Full Text Available This paper presents a steganalytic approach against video steganography that modifies motion vectors (MVs) in a content-adaptive manner. Current video steganalytic schemes extract features from fixed-length frames of the whole video and do not take advantage of the content diversity. Consequently, the effectiveness of the steganalytic feature is influenced by video content, and the problem of cover source mismatch also affects the steganalytic performance. The goal of this paper is to propose a steganalytic method that can suppress the differences in statistical characteristics caused by video content. The given video is segmented into subsequences according to the blocks' motion in every frame. The steganalytic features extracted from each category of subsequences with close motion intensity are used to build one classifier. The final steganalytic result is obtained by fusing the results of the weighted classifiers. The experimental results demonstrate that our method can effectively improve the performance of video steganalysis, especially for videos of low bitrate and low embedding ratio.

  1. Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.

    Science.gov (United States)

    Tambo, Asongu L; Bhanu, Bir

    2016-05-01

    The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions-of-interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real time applications in pollen tube microscopy.

  2. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades the rapid evolution of the Internet has led to a massive rise in video technology, and video consumption now accounts for the bulk of Internet data traffic. Because video consumes so much data, reducing the bandwidth it requires eases the burden on the Internet and lets users access video data more easily. To this end, many video codecs have been developed, such as HEVC/H.265 and V9, which raises the question of which offers the better technology in terms of rate distortion and coding standard. This paper addresses the difficulty of achieving low delay in video compression and in video applications such as ad-hoc video conferencing/streaming or surveillance observation. It also benchmarks the HEVC and V9 compression techniques using subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental idea of dividing a video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.

  3. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    Science.gov (United States)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack; the models include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature, corresponding to the mixing of sounds from different sources. Speech in the foreground and music in the background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
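
    A compact sketch of HMM-based soundtrack labeling with the hmmlearn package. This is an unsupervised stand-in: one multi-state HMM is fit and its decoded states are treated as audio events, rather than training one model per class (music, speech, silence) as the paper does; the state count and feature choice are illustrative.

```python
from hmmlearn import hmm

def segment_soundtrack(features, n_states=3):
    """features: (T, D) array of per-frame audio features (e.g. MFCCs).

    Returns (start, end, state) segments; contiguous runs of one state form a segment.
    """
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(features)
    states = model.predict(features)

    # Collapse consecutive identical states into (start, end, state) segments.
    segments, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or states[t] != states[start]:
            segments.append((start, t, int(states[start])))
            start = t
    return segments
```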

  4. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.

  5. Candidate Smoke Region Segmentation of Fire Video Based on Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Candidate smoke region segmentation is the key step in video-based smoke detection; an effective and prompt method of candidate smoke region segmentation plays a significant role in a smoke recognition system. However, the interference of heavy fog and smoke-colored moving objects greatly degrades the recognition accuracy. In this paper, a novel method of candidate smoke region segmentation based on rough set theory is presented. First, Kalman filtering is used to update the video background in order to exclude the interference of static smoke-colored objects, such as blue sky. Second, in RGB color space smoke regions are segmented by defining the upper approximation, lower approximation, and roughness of the smoke-color distribution. Finally, in HSV color space small smoke regions are merged using the definition of an equivalence relation, so as to distinguish smoke images from heavy fog images in terms of how the V component varies from the center to the edge of a smoke region. The experimental results on smoke region segmentation demonstrate the effectiveness and usefulness of the proposed scheme.

  6. An Adaptive Motion Segmentation for Automated Video Surveillance

    Directory of Open Access Journals (Sweden)

    Hossain MJulius

    2008-01-01

    Full Text Available This paper presents an adaptive motion segmentation algorithm utilizing spatiotemporal information from the three most recent frames. The algorithm initially extracts the moving edges by applying a novel flexible edge matching technique that makes use of a combined distance transformation image. Then a watershed-based iterative algorithm is employed to segment the moving object region from the extracted moving edges. The challenges for existing three-frame-based methods include slow movement, edge localization error, minor camera movement, and homogeneity of the background and foreground regions. The proposed method represents edges as segments and uses a flexible edge matching algorithm to deal with edge localization error and minor camera movement. The combined distance transformation image accumulates gradient information of the overlapping region, which effectively improves the sensitivity to slow movement. The segmentation algorithm uses the watershed, gradient information of the difference image, and the extracted moving edges. It helps to segment the moving object region with a more accurate boundary, even when some parts of the moving edges cannot be detected due to region homogeneity or other reasons during the detection step. Experimental results using different types of video sequences are presented to demonstrate the efficiency and accuracy of the proposed method.

  7. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard , Olivier; Delannay , Fabrice; Ricordel , Vincent; Barba , Dominique

    2007-01-01

    4 pages; International audience; Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  8. Spatio-Temporal Video Object Segmentation via Scale-Adaptive 3D Structure Tensor

    Directory of Open Access Journals (Sweden)

    Hai-Yun Wang

    2004-06-01

    Full Text Available To address the multiple motions and deformable objects' motions encountered in existing region-based approaches, an automatic video object (VO) segmentation methodology is proposed in this paper by exploiting the duality of image segmentation and motion estimation, such that spatial and temporal information can assist each other to jointly yield much improved segmentation results. The key novelties of our method are (1) scale-adaptive tensor computation, (2) spatial-constrained motion mask generation without invoking dense motion-field computation, (3) rigidity analysis, (4) motion mask generation and selection, and (5) motion-constrained spatial region merging. Experimental results demonstrate that these novelties jointly contribute to much more accurate VO segmentation in both the spatial and temporal domains.

  9. Real-time recursive motion segmentation of video data on a programmable device

    NARCIS (Netherlands)

    Wittebrood, R.B; Haan, de G.

    2001-01-01

    We previously reported on a recursive algorithm enabling real-time object-based motion estimation (OME) of standard definition video on a digital signal processor (DSP). The algorithm approximates the motion of the objects in the image with parametric motion models and creates a segmentation mask by

  10. VIDEO EDITOR PROTOTYPE USING DIRECTX AND DIRECTSHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Full Text Available Technological development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn creates the need for an editor application. To address this need, this work describes the process of building a simple application for video editing. The application development uses programming techniques often applied in multimedia applications, especially video. The first part of the application handles video file compression and decompression, followed by the editing of the digital video file. The application is also equipped with the facilities needed for the editing process. It is built with Microsoft Visual C++ using DirectX technology, particularly DirectShow, and provides basic facilities that help with editing a digital video file, producing an AVI format file after the editing process is finished. Testing shows that the application can 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats; the 'cut' and 'insert' operations can only be performed in a fixed order. The application also provides transition effects for each clip, and finally saves the newly edited video file in AVI format.

  11. Video modeling by experts with video feedback to enhance gymnastics skills.

    Science.gov (United States)

    Boyer, Eva; Miltenberger, Raymond G; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill and then viewed a video replay of her own performance of the skill. The results showed that all gymnasts demonstrated improved performance across three gymnastics skills following exposure to the intervention.

  12. Video Object Segmentation through Spatially Accurate and Temporally Dense Extraction of Primary Object Regions (Open Access)

    Science.gov (United States)

    2013-10-03

    Excerpt from the experimental section: following the setup in the literature ([13, 14]), five videos (birdfall, cheetah, girl, monkeydog and parachute) are used for evaluation, comparing the method's segmentation labeling results against the ground-truth labeling of each video. The reported error scores (lower is better) include:

    Video       Ours   [14]   [13]   [20]    [6]
    birdfall     155    189    288    252    454
    cheetah      633    806    905   1142   1217
    girl        1488   1698   1785   1304   1755
    monkeydog    365    472    521    563    683

  13. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

    Full Text Available The algorithm presented in this paper is comprised of three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation, (2) change detection having as reference a fixed frame, an appropriately selected frame or a displaced frame, and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on the colour similarity. Video object segmentation results are shown using the COST 211 data set.

  14. Hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection for non-small cell lung cancer.

    Science.gov (United States)

    Li, Shuben; Chai, Huiping; Huang, Jun; Zeng, Guangqiao; Shao, Wenlong; He, Jianxing

    2014-04-01

    The purpose of the current study is to present the clinical and surgical results in patients who underwent hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection. Thirty-one patients, 27 men and 4 women, underwent segmental-main bronchial sleeve anastomoses for non-small cell lung cancer between May 2004 and May 2011. Twenty-six (83.9%) patients had squamous cell carcinoma, and 5 patients had adenocarcinoma. Six patients were at stage IIB, 24 patients at stage IIIA, and 1 patient at stage IIIB. Secondary sleeve anastomosis was performed in 18 patients, and Y-shaped multiple sleeve anastomosis was performed in 8 patients. Single segmental bronchiole anastomosis was performed in 5 cases. The average time for chest tube removal was 5.6 days. The average length of hospital stay was 11.8 days. No anastomosis fistula developed in any of the patients. The 1-, 2-, and 3-year survival rates were 83.9%, 71.0%, and 41.9%, respectively. Hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection is a complex technique that requires training and experience, but it is an effective and safe operation for selected patients.

  15. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    Science.gov (United States)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
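
    The divergence-based segmentation step can be sketched directly: treat each frame's hidden activations as un-normalized log probabilities, softmax-normalize them, and mark a new context when the KL divergence between consecutive frames exceeds a threshold. The threshold and the choice of softmax normalization here are illustrative, not the paper's exact procedure.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def segment_by_divergence(hidden_activations, threshold=0.5):
    """hidden_activations: (T, D) hidden-layer activations of the caption model, one row per frame.

    Returns indices where a new context (video segment) begins.
    """
    probs = np.array([softmax(h) for h in hidden_activations])
    boundaries = [0]
    for t in range(1, len(probs)):
        p, q = probs[t - 1], probs[t]
        kl = float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
        if kl > threshold:          # high divergence -> different context
            boundaries.append(t)
    return boundaries
```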

  16. Video segmentation for post-production

    Science.gov (United States)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.

  17. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  18. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  19. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
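
    The "3-D stream to 1-D curve" idea can be sketched with a simple continuous frame-difference curve and peak thresholding; here the curve is computed from generic per-frame feature vectors rather than directly from MPEG macroblock statistics, and the threshold is illustrative.

```python
import numpy as np

def frame_difference_curve(frame_features):
    """frame_features: (T, D) per-frame descriptors. Returns a (T-1,) normalized difference curve."""
    diffs = np.linalg.norm(np.diff(frame_features, axis=0), axis=1)
    return diffs / (diffs.max() + 1e-12)   # normalize to [0, 1]

def detect_scene_changes(curve, threshold=0.5):
    """Indices where the difference curve peaks above the threshold (candidate cuts)."""
    cuts = []
    for t in range(1, len(curve) - 1):
        if curve[t] > threshold and curve[t] >= curve[t - 1] and curve[t] >= curve[t + 1]:
            cuts.append(t + 1)   # the cut occurs between frame t and frame t + 1
    return cuts
```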

  20. A content-based news video retrieval system: NVRS

    Science.gov (United States)

    Liu, Huayong; He, Tingting

    2009-10-01

    This paper focuses on TV news programs and presents a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by categories such as politics, finance, amusement, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is efficient.

  1. Part Two: Learning Science Through Digital Video: Student Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2014-12-01

    The use of digital video for science education has become common with the wide availability of video imagery. This study continues research into aspects of using digital video as a primary teaching tool to enhance student learning in undergraduate science courses. Two survey instruments were administered to undergraduate non-science majors. Survey One focused on: a) What science is being learned from watching science videos such as a "YouTube" clip of a volcanic eruption or an informational video on geologic time, and b) What are student preferences with regard to their learning (e.g. using video versus traditional modes of delivery)? Survey Two addressed students' perspectives on the storytelling aspect of the video with respect to: a) sustaining interest, b) providing science information, c) style of video and d) quality of the video. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. The storytelling aspect of each video was also addressed by students. Students watched 15-20 shorter (3-15 minute) science videos created within the last four years. Initial results of this research support that shorter video segments were preferred and that the storytelling quality of each video was related to student learning.

  2. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.

  3. Segmentation of sows in farrowing pens

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Karstoft, Henrik; Pedersen, Lene Juul

    2014-01-01

    The correct segmentation of a foreground object in video recordings is an important task for many surveillance systems. The development of an effective and practical algorithm to segment sows in grayscale video recordings captured under commercial production conditions is described...

  4. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM) ... Good quality reproduction of (low-resolution) coded video of an animated facial mask as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated ...

  5. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We further propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
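
    A simplified stand-in for the temporally-constrained clustering step: recursively merge only temporally consecutive key-frame clusters whose color features are closest, until the requested summary size is reached. This is plain hierarchical merging under a temporal-consecutiveness constraint, not the paper's pairwise K-means.

```python
import numpy as np

def build_keyframe_hierarchy(keyframe_features, target_size):
    """keyframe_features: (N, D) color features of the finest-level key-frames, in temporal order.

    Returns a list of clusters, each a list of original key-frame indices.
    """
    clusters = [[i] for i in range(len(keyframe_features))]
    centroids = [np.asarray(f, dtype=float) for f in keyframe_features]

    while len(clusters) > target_size:
        # Only temporally consecutive clusters may merge (temporal consecutiveness constraint).
        dists = [np.linalg.norm(centroids[i] - centroids[i + 1]) for i in range(len(clusters) - 1)]
        i = int(np.argmin(dists))
        merged = clusters[i] + clusters[i + 1]
        centroid = np.mean([keyframe_features[k] for k in merged], axis=0)
        clusters[i:i + 2] = [merged]
        centroids[i:i + 2] = [centroid]
    return clusters
```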

  6. Learning to Segment Human by Watching YouTube.

    Science.gov (United States)

    Liang, Xiaodan; Wei, Yunchao; Chen, Yunpeng; Shen, Xiaohui; Yang, Jianchao; Lin, Liang; Yan, Shuicheng

    2016-08-05

    An intuition on human segmentation is that when a human is moving in a video, the video-context (e.g., appearance and motion clues) may potentially infer reasonable mask information for the whole human body. Inspired by this, based on popular deep convolutional neural networks (CNN), we explore a very-weakly supervised learning framework for the human segmentation task, where only an imperfect human detector is available along with massive weakly-labeled YouTube videos. In our solution, the video-context guided human mask inference and the CNN-based segmentation network learning iterate to mutually enhance each other until no further improvement is gained. In the first step, each video is decomposed into supervoxels by unsupervised video segmentation. The superpixels within the supervoxels are then classified as human or non-human by graph optimization with unary energies from the imperfect human detection results and the confidence maps predicted by the CNN trained in the previous iteration. In the second step, the video-context derived human masks are used as direct labels to train the CNN. Extensive experiments on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate that the proposed framework achieves superior results to all previous weakly-supervised methods with object class or bounding box annotations. In addition, by augmenting with the annotated masks from PASCAL VOC 2012, our method reaches a new state-of-the-art performance on the human segmentation task.

  7. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  8. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  9. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    Science.gov (United States)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

    Aiming at the problem of video-based smoke detection in the early stage of a fire, this paper proposes a method to extract suspected smoke regions by combining a two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation. Suspected smoke regions are then detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on six videos containing smoke; the experimental results, assessed by visual observation, show the effectiveness of the proposed method.
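
    A minimal sketch of the two ingredients described above: Otsu thresholding to obtain candidate gray/gray-white regions and simple frame differencing standing in for the ViBe background model (ViBe itself is not assumed to be available); the file name, thresholds, and kernel size are illustrative.

    ```python
    import cv2

    def smoke_candidates(prev_gray, curr_gray, motion_thresh=15):
        # Step 1: Otsu segmentation of the grayscale frame (bright, smoke-like regions)
        _, otsu_mask = cv2.threshold(curr_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Step 2: motion mask from frame differencing (simplified stand-in for ViBe)
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, motion_mask = cv2.threshold(diff, motion_thresh, 255, cv2.THRESH_BINARY)
        # Combine segmentation and motion, then clean up with morphological opening
        combined = cv2.bitwise_and(otsu_mask, motion_mask)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel)

    cap = cv2.VideoCapture("smoke_test.avi")  # illustrative file name
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = smoke_candidates(prev_gray, gray)
        prev_gray = gray
    ```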

  10. Adaptive Streaming over HTTP (DASH) for Video Streaming Applications

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) over the Internet, adapting to the Hypertext Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the video source to a lower bit rate using the H.26 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a streaming media format described by the Media Presentation Description (MPD), known as MPEG-DASH. The MPEG-DASH video stream runs on a platform with the bitdash player integrated with bitcoin. With this scheme, the video has several bit-rate variants, which gives rise to the concept of scalability of the streaming video service on the client side. The main target of the mechanism is smooth display of the MPEG-DASH video stream on the client. The simulation results show that the scalable MPEG-DASH video streaming scheme is able to improve the quality of the image displayed on the client side, where video buffering can be kept constant and smooth for the duration of video viewing.
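
    The client-side scalability described here can be illustrated with a throughput-based representation picker. The bitrate ladder, URL template, and download helper below are assumptions for the sketch (a real client reads these from the MPD), not details taken from the paper.

    ```python
    import time
    import urllib.request

    # Illustrative bitrate ladder (kbit/s) and segment URL templates; real values come from the MPD.
    REPRESENTATIONS = {235: "seg_235k_{:04d}.m4s", 750: "seg_750k_{:04d}.m4s", 2300: "seg_2300k_{:04d}.m4s"}

    def pick_bitrate(throughput_kbps, safety=0.8):
        """Choose the highest representation whose bitrate fits under the measured throughput."""
        usable = [b for b in sorted(REPRESENTATIONS) if b <= throughput_kbps * safety]
        return usable[-1] if usable else min(REPRESENTATIONS)

    def fetch_segment(base_url, index, throughput_kbps):
        """Download one segment at the chosen bitrate and return it with the newly measured throughput."""
        bitrate = pick_bitrate(throughput_kbps)
        url = base_url + REPRESENTATIONS[bitrate].format(index)
        start = time.time()
        data = urllib.request.urlopen(url).read()
        elapsed = max(time.time() - start, 1e-3)
        measured_kbps = (len(data) * 8 / 1000.0) / elapsed
        return data, measured_kbps
    ```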

  11. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Rojas Raul

    2007-01-01

    Full Text Available Abstract Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  12. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Raul Rojas

    2008-03-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  13. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    Science.gov (United States)

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolution of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame and the second forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to 3D + time video analysis, where spatio-temporal tubes are 4D objects.

  14. Automatic topics segmentation for TV news video

    Science.gov (United States)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream in two main steps. First, a reference catalogue of video features for visual jingles is built: we exploit the features that characterize the instances of the same program type to identify the different types of programs in the television stream. The role of the video features is to represent the visual invariants of each visual jingle using automatic descriptors appropriate to each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is to compare the visual features of the video signal in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.

  15. Video Game Players Show More Precise Multisensory Temporal Processing Abilities

    OpenAIRE

    Donohue, Sarah E.; Woldorff, Marty G.; Mitroff, Stephen R.

    2010-01-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. The current study examined whether video game players’ benefits generalize beyond vision to multisensory processing by presenting video game players and non-video game players auditory and visual stim...

  16. Video game players show more precise multisensory temporal processing abilities.

    Science.gov (United States)

    Donohue, Sarah E; Woldorff, Marty G; Mitroff, Stephen R

    2010-05-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. In the present study, we examined whether video game players' benefits generalize beyond vision to multisensory processing by presenting auditory and visual stimuli within a short temporal window to video game players and non-video game players. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.

  17. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments when playback is interrupted. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach yields a higher hit ratio than previous work under various environmental parameters.
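
    A toy sketch of the two-layer idea, assuming a simple request-count admission test and LRU eviction within each layer; the capacity split, popularity threshold, and adaptation step are illustrative, not the paper's exact policy.

    ```python
    from collections import OrderedDict

    class TwoTierSegmentCache:
        def __init__(self, capacity, first_fraction=0.5):
            self.capacity = capacity
            self.first_fraction = first_fraction   # grows when segment access becomes frequent
            self.first = OrderedDict()             # layer 1: to-be-played segments (LRU order)
            self.second = OrderedDict()            # layer 2: possibly played segments

        def _evict(self, layer, limit):
            while len(layer) > limit:
                layer.popitem(last=False)          # drop the least recently used segment

        def admit(self, seg_id, data, request_count):
            if request_count < 2:                  # admission control: skip rarely requested segments
                return False
            first_limit = int(self.capacity * self.first_fraction)
            self.first[seg_id] = data
            self._evict(self.first, first_limit)
            self._evict(self.second, self.capacity - first_limit)
            return True

        def mark_played(self, seg_id):
            """Move a segment that has been played from layer 1 to layer 2."""
            if seg_id in self.first:
                self.second[seg_id] = self.first.pop(seg_id)

        def on_frequent_access(self):
            self.first_fraction = min(0.9, self.first_fraction + 0.1)
    ```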

  18. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
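
    The replaceability test in BPCS steganography rests on a bit-plane complexity measure: a block is treated as noise-like, and thus usable for hiding data, when the fraction of 0/1 transitions along its rows and columns is high. A small sketch of that measure follows; the 8x8 block size and the 0.3 threshold are common illustrative choices, not values taken from this paper.

    ```python
    import numpy as np

    def bitplane(block_u8, plane):
        """Extract one bit-plane (0 = least significant) from an 8-bit block."""
        return (block_u8 >> plane) & 1

    def bpcs_complexity(bits):
        """Fraction of horizontal and vertical bit transitions relative to the maximum possible."""
        h = np.abs(np.diff(bits, axis=1)).sum()
        v = np.abs(np.diff(bits, axis=0)).sum()
        rows, cols = bits.shape
        max_changes = rows * (cols - 1) + cols * (rows - 1)
        return (h + v) / max_changes

    block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    noise_like = bpcs_complexity(bitplane(block, 0)) > 0.3  # candidate region for embedding secret bits
    ```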

  19. Learning Science Through Digital Video: Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2013-12-01

    In science, the use of digital video to document phenomena, experiments and demonstrations has rapidly increased during the last decade. The use of digital video for science education also has become common with the wide availability of video over the internet. However, as with using any technology as a teaching tool, some questions should be asked: What science is being learned from watching a YouTube clip of a volcanic eruption or an informational video on hydroelectric power generation? What are student preferences (e.g. multimedia versus traditional mode of delivery) with regard to their learning? This study describes 1) the efficacy of watching digital video in the science classroom to enhance student learning, 2) student preferences of instruction with regard to multimedia versus traditional delivery modes, and 3) the use of creating digital video as a project-based educational strategy to enhance learning. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. Additionally, they were asked about their preference for instruction (e.g. text only, lecture-PowerPoint style delivery, or multimedia-video). A majority of students indicated that well-made video, accompanied with scientific explanations or demonstration of the phenomena was most useful and preferred over text-only or lecture instruction for learning scientific information while video-only delivery with little or no explanation was deemed not very useful in learning science concepts. The use of student generated video projects as learning vehicles for the creators and other class members as viewers also will be discussed.

  20. Automatic generation of pictorial transcripts of video programs

    Science.gov (United States)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.

  1. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    Science.gov (United States)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  2. Excessive users of violent video games do not show emotional desensitization: an fMRI study.

    Science.gov (United States)

    Szycik, Gregor R; Mohammadi, Bahram; Hake, Maria; Kneer, Jonas; Samii, Amir; Münte, Thomas F; Te Wildt, Bert T

    2017-06-01

    Playing violent video games has been linked to long-term emotional desensitization. We hypothesized that desensitization effects in excessive users of violent video games should lead to decreased brain activations to highly salient emotional pictures in emotional-sensitivity brain regions. Twenty-eight male adult subjects showing excessive long-term use of violent video games and age- and education-matched control participants were examined in two experiments using standardized emotional pictures of positive, negative and neutral valence. No group differences were revealed even at reduced statistical thresholds, which speaks against desensitization of emotion-sensitive brain regions as a result of excessive use of violent video games.

  3. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  4. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    Science.gov (United States)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; L Hansen, John H.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos using, as features, signal-processing-based assessments of excitement in the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
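
    One way to mimic the "exciting and rare" criterion is to score each segment by how unlikely its multi-modal feature vector is under a density fitted to the whole match, and rank segments by that rarity. The sketch below uses a Gaussian mixture as that density; the feature layout and component count are illustrative, and this is a stand-in for, not a reproduction of, the paper's excitability measure.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Illustrative per-segment features: [audio energy, speech pitch, scene-cut density, motion activity, replay flag]
    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 5))           # one row per video segment (placeholder data)

    gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
    rarity = -gmm.score_samples(features)          # higher = less likely under the fitted density

    top_segments = np.argsort(rarity)[::-1][:20]   # candidate highlight segments, most "excitable" first
    ```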

  5. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy more and more surgeons are changing over to record and store videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.

  6. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    Science.gov (United States)

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person), and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames, and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF) to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos unlike existing approaches, and can automatically reject false tracklets. Finally we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities, to create a semantically meaningful summary.

  7. Visual hashing of digital video : applications and techniques

    NARCIS (Netherlands)

    Oostveen, J.; Kalker, A.A.C.M.; Haitsma, J.A.; Tescher, A.G.

    2001-01-01

    This paper presents the concept of robust video hashing as a tool for video identification. We present considerations and a technique for (i) extracting essential perceptual features from a moving image sequence and (ii) identifying any sufficiently long unknown video segment by efficiently

  8. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2004-06-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation as well as on the cognitive task (semantic segmentation at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to the sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows to

  9. Moving Shadow Detection in Video Using Cepstrum

    Directory of Open Access Journals (Sweden)

    Fuat Cogun

    2013-01-01

    Full Text Available Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum-based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using well-known benchmark test sets. To show the improvements over previous approaches, quantitative metrics are introduced and comparisons based on these metrics are made.
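
    For reference, the real cepstrum of a block of samples, the quantity the shadow test builds on, can be computed in a few lines; the block here is a 1-D slice of pixel intensities, and the block length is illustrative.

    ```python
    import numpy as np

    def real_cepstrum(block):
        """Real cepstrum: inverse FFT of the log-magnitude spectrum of the block."""
        spectrum = np.fft.fft(block)
        log_mag = np.log(np.abs(spectrum) + 1e-12)   # small epsilon avoids log(0)
        return np.real(np.fft.ifft(log_mag))

    pixels = np.random.rand(64)                      # e.g., intensities across a candidate shadow region
    ceps = real_cepstrum(pixels)
    ```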

  10. Bit-Rate Segmentation Mechanism in Dynamic Adaptive Streaming over HTTP (DASH) for Video Streaming Applications

    Directory of Open Access Journals (Sweden)

    Muhammad Audy Bazly

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) over the Internet, adapting to the Hypertext Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the video source to a lower bit rate using the H.26 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a streaming media format described by the Media Presentation Description (MPD), known as MPEG-DASH. The MPEG-DASH video stream runs on a platform with the bitdash player integrated with bitcoin. With this scheme, the video has several bit-rate variants, which gives rise to the concept of scalability of the streaming video service on the client side. The main target of the mechanism is smooth display of the MPEG-DASH video stream on the client. The simulation results show that the scalable MPEG-DASH video streaming scheme is able to improve the quality of the image displayed on the client side, where video buffering can be kept constant and smooth for the duration of video viewing.

  11. vm119_0601b-- Video mosaic segments

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Canadian ROPOS remotely operated vehicle (ROV) outfitted with video equipment (and other devices) was deployed from the NOAA Ship McAurthurII during May-June...

  12. Brain activity and desire for Internet video game play.

    Science.gov (United States)

    Han, Doug Hyun; Bolo, Nicolas; Daniels, Melissa A; Arenella, Lynn; Lyoo, In Kyoon; Renshaw, Perry F

    2011-01-01

    Recent studies have suggested that the brain circuitry mediating cue-induced desire for video games is similar to that elicited by cues related to drugs and alcohol. We hypothesized that desire for Internet video games during cue presentation would activate similar brain regions to those that have been linked with craving for drugs or pathologic gambling. This study involved the acquisition of diagnostic magnetic resonance imaging and functional magnetic resonance imaging data from 19 healthy male adults (age, 18-23 years) following training and a standardized 10-day period of game play with a specified novel Internet video game, "War Rock" (K2 Network, Irvine, CA). Using segments of videotape consisting of 5 contiguous 90-second segments of alternating resting, matched control, and video game-related scenes, desire to play the game was assessed using a 7-point visual analogue scale before and after presentation of the videotape. In responding to Internet video game stimuli, compared with neutral control stimuli, significantly greater activity was identified in left inferior frontal gyrus, left parahippocampal gyrus, right and left parietal lobe, right and left thalamus, and right cerebellum (false discovery rate corrected). Subjects who played more of the Internet video game showed significantly greater activity in right medial frontal lobe, right and left frontal precentral gyrus, right parietal postcentral gyrus, right parahippocampal gyrus, and left parietal precuneus gyrus. Controlling for total game time, reported desire for the Internet video game in the subjects who played more of the Internet video game was positively correlated with activation in right medial frontal lobe and right parahippocampal gyrus. The present findings suggest that cue-induced activation to Internet video game stimuli may be similar to that observed during cue presentation in persons with substance dependence or pathologic gambling. In particular, cues appear to commonly elicit activity in the dorsolateral prefrontal cortex, orbitofrontal cortex, parahippocampal gyrus, and thalamus.

  13. The Effects of Video Self-Modeling on the Decoding Skills of Children At Risk for Reading Disabilities

    OpenAIRE

    Ayala, Sandra M

    2010-01-01

    Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum instruction. Individual videos were recorded and edited to show students successfully and accurately decoding words and practicing sight word recognition. Each...

  14. AUTOMATIC FAST VIDEO OBJECT DETECTION AND TRACKING ON VIDEO SURVEILLANCE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Arunachalam

    2012-08-01

    Full Text Available This paper describes advanced techniques for object detection and tracking in video. Most visual surveillance systems start with motion detection. Motion detection methods attempt to locate connected regions of pixels that represent the moving objects within the scene; different approaches include frame-to-frame difference, background subtraction and motion analysis. Motion detection can be achieved by Principal Component Analysis (PCA), after which objects are separated from the background using background subtraction. The detected objects can then be segmented; segmentation consists of two schemes, one for spatial segmentation and the other for temporal segmentation. Tracking is then performed on the detected objects in each frame. The pixel labeling problem can be alleviated by the MAP (Maximum a Posteriori) technique.
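
    A compact OpenCV sketch of the motion-detection front end described above, using the library's mixture-of-Gaussians background subtractor in place of the PCA step (so this is a simplified stand-in, not the paper's pipeline); the video path and area threshold are illustrative, and OpenCV 4 is assumed.

    ```python
    import cv2

    cap = cv2.VideoCapture("surveillance.avi")            # illustrative input video
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)                 # moving pixels vs. the learned background
        fg_mask = cv2.medianBlur(fg_mask, 5)              # suppress isolated noise pixels
        contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:                  # keep regions large enough to be objects
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```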

  15. Brain activity and desire for internet video game play

    Science.gov (United States)

    Han, Doug Hyun; Bolo, Nicolas; Daniels, Melissa A.; Arenella, Lynn; Lyoo, In Kyoon; Renshaw, Perry F.

    2010-01-01

    Objective Recent studies have suggested that the brain circuitry mediating cue induced desire for video games is similar to that elicited by cues related to drugs and alcohol. We hypothesized that desire for internet video games during cue presentation would activate similar brain regions to those which have been linked with craving for drugs or pathological gambling. Methods This study involved the acquisition of diagnostic MRI and fMRI data from 19 healthy male adults (ages 18–23 years) following training and a standardized 10-day period of game play with a specified novel internet video game, “War Rock” (K-network®). Using segments of videotape consisting of five contiguous 90-second segments of alternating resting, matched control and video game-related scenes, desire to play the game was assessed using a seven point visual analogue scale before and after presentation of the videotape. Results In responding to internet video game stimuli, compared to neutral control stimuli, significantly greater activity was identified in left inferior frontal gyrus, left parahippocampal gyrus, right and left parietal lobe, right and left thalamus, and right cerebellum (FDR corrected). The more internet game play (MIGP) cohort showed significantly greater activity in right medial frontal lobe, right and left frontal pre-central gyrus, right parietal post-central gyrus, right parahippocampal gyrus, and left parietal precuneus gyrus. Controlling for total game time, reported desire for the internet video game in the MIGP cohort was positively correlated with activation in right medial frontal lobe and right parahippocampal gyrus. Discussion The present findings suggest that cue-induced activation to internet video game stimuli may be similar to that observed during cue presentation in persons with substance dependence or pathological gambling. In particular, cues appear to commonly elicit activity in the dorsolateral prefrontal, orbitofrontal cortex, parahippocampal gyrus, and thalamus. PMID:21220070

  16. Multi-view video segmentation and tracking for video surveillance

    Science.gov (United States)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a Homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the Homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects and this versatility makes it suitable for different application scenarios.
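
    The view-to-view mapping step can be illustrated with OpenCV's homography estimation: given matched ground-plane points in two overlapping views, coordinates from one view are projected into the other. The point values below are placeholders, not calibration data from the paper.

    ```python
    import cv2
    import numpy as np

    # Matched point pairs on the common ground plane, one set per camera view (placeholder values).
    pts_view1 = np.float32([[100, 200], [400, 210], [390, 500], [110, 480]])
    pts_view2 = np.float32([[80, 180], [420, 190], [410, 520], [90, 470]])

    H, _ = cv2.findHomography(pts_view1, pts_view2, cv2.RANSAC)

    # Map an object's footprint detected in view 1 into view 2 coordinates.
    obj_view1 = np.float32([[[250, 350]]])            # shape (N, 1, 2) as perspectiveTransform expects
    obj_view2 = cv2.perspectiveTransform(obj_view1, H)
    ```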

  17. Hierarchical vs non-hierarchical audio indexation and classification for video genres

    Science.gov (United States)

    Dammak, Nouha; BenAyed, Yassine

    2018-04-01

    In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at block level, which has the notable advantage of capturing local temporal information. The main contribution of our study is to show the strong effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video, yielding a classification accuracy of over 99%.
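
    A minimal sketch of block-level MFCC extraction feeding a two-stage classifier (genre first, then sub-genre), assuming labelled audio blocks are available; librosa and scikit-learn, the block length, and the classifier settings are illustrative choices, not the paper's setup.

    ```python
    import librosa
    import numpy as np
    from sklearn.svm import SVC

    def block_mfcc(path, block_seconds=1.0, n_mfcc=13):
        """Mean MFCC vector for each fixed-length audio block of the file."""
        y, sr = librosa.load(path, sr=None)
        block = int(block_seconds * sr)
        feats = []
        for start in range(0, len(y) - block, block):
            mfcc = librosa.feature.mfcc(y=y[start:start + block], sr=sr, n_mfcc=n_mfcc)
            feats.append(mfcc.mean(axis=1))
        return np.array(feats)

    # Stage 1: genre classifier (sports / music / news); stage 2: one sub-genre classifier per genre.
    genre_clf = SVC(kernel="rbf")
    subgenre_clfs = {"news": SVC(kernel="rbf"), "sports": SVC(kernel="rbf"), "music": SVC(kernel="rbf")}
    # genre_clf.fit(X_blocks, y_genre); subgenre_clfs["news"].fit(X_news_blocks, y_dialect)  # given labels
    ```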

  18. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against the changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to resolve a globally optimum extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  19. Kalman Filter Based Tracking in an Video Surveillance System

    Directory of Open Access Journals (Sweden)

    SULIMAN, C.

    2010-05-01

    Full Text Available In this paper we have developed a Matlab/Simulink based model for monitoring a contact in a video surveillance sequence. For the segmentation process and correct identification of a contact in a surveillance video, we have used the Horn-Schunck optical flow algorithm. The position and the behavior of the correctly detected contact were monitored with the help of the traditional Kalman filter. We then compared the results obtained from the optical flow method with those obtained from the Kalman filter, and we show the correct functionality of the Kalman filter-based tracking. The tests were performed using video data taken with the help of a fixed camera. The tested algorithm has shown promising results.
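
    A constant-velocity Kalman filter over 2-D positions, the kind of tracker paired above with optical-flow detections; the process and measurement noise covariances are illustrative tuning values, not those of the Matlab/Simulink model.

    ```python
    import numpy as np

    dt = 1.0                                      # one frame between measurements
    F = np.array([[1, 0, dt, 0],                  # state [x, y, vx, vy], constant-velocity model
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],                   # only the position is measured
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 0.01                          # process noise (illustrative)
    R = np.eye(2) * 1.0                           # measurement noise (illustrative)

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measured contact position z = [x, y]
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.zeros(4), np.eye(4)
    for z in [np.array([10.0, 5.0]), np.array([11.2, 5.4]), np.array([12.1, 6.1])]:
        x, P = kalman_step(x, P, z)               # z would come from the optical-flow detector per frame
    ```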

  20. Design Effectiveness Analysis of a Media Literacy Intervention to Reduce Violent Video Games Consumption Among Adolescents: The Relevance of Lifestyles Segmentation.

    Science.gov (United States)

    Rivera, Reynaldo; Santos, David; Brändle, Gaspar; Cárdaba, Miguel Ángel M

    2016-04-01

    Exposure to media violence might have detrimental effects on psychological adjustment and is associated with aggression-related attitudes and behaviors. As a result, many media literacy programs were implemented to tackle that major public health issue. However, there is little evidence about their effectiveness. Evaluating design effectiveness, particularly regarding targeting process, would prevent adverse effects and improve the evaluation of evidence-based media literacy programs. The present research examined whether or not different relational lifestyles may explain the different effects of an antiviolence intervention program. Based on relational and lifestyles theory, the authors designed a randomized controlled trial and applied an analysis of variance 2 (treatment: experimental vs. control) × 4 (lifestyle classes emerged from data using latent class analysis: communicative vs. autonomous vs. meta-reflexive vs. fractured). Seven hundred and thirty-five Italian students distributed in 47 classes participated anonymously in the research (51.3% females). Participants completed a lifestyle questionnaire as well as their attitudes and behavioral intentions as the dependent measures. The results indicated that the program was effective in changing adolescents' attitudes toward violence. However, behavioral intentions toward consumption of violent video games were moderated by lifestyles. Those with communicative relational lifestyles showed fewer intentions to consume violent video games, while a boomerang effect was found among participants with problematic lifestyles. Adolescents' lifestyles played an important role in influencing the effectiveness of an intervention aimed at changing behavioral intentions toward the consumption of violent video games. For that reason, audience lifestyle segmentation analysis should be considered an essential technique for designing, evaluating, and improving media literacy programs. © The Author(s) 2016.

  1. Unsupervised Object Modeling and Segmentation with Symmetry Detection for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Jui-Yuan Su

    2015-04-01

    Full Text Available In this paper we present a novel unsupervised approach to detecting and segmenting objects as well as their constituent symmetric parts in an image. Traditional unsupervised image segmentation is limited by two obvious deficiencies: the object detection accuracy degrades with the misaligned boundaries between the segmented regions and the target, and pre-learned models are required to group regions into meaningful objects. To tackle these difficulties, the proposed approach aims at incorporating the pair-wise detection of symmetric patches to achieve the goal of segmenting images into symmetric parts. The skeletons of these symmetric parts then provide estimates of the bounding boxes to locate the target objects. Finally, for each detected object, the graphcut-based segmentation algorithm is applied to find its contour. The proposed approach has significant advantages: no a priori object models are used, and multiple objects are detected. To verify the effectiveness of the approach based on the cues that a face part contains an oval shape and skin colors, human objects are extracted from among the detected objects. The detected human objects and their parts are finally tracked across video frames to capture the object part movements for learning the human activity models from video clips. Experimental results show that the proposed method gives good performance on publicly available datasets.

  2. IBES: A Tool for Creating Instructions Based on Event Segmentation

    Directory of Open Access Journals (Sweden)

    Katharina eMura

    2013-12-01

    Full Text Available Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, twenty participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, ten and twelve participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  3. IBES: a tool for creating instructions based on event segmentation.

    Science.gov (United States)

    Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra

    2013-12-26

    Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  4. 'Comparable to MTV - but better': The impact of The Chart Show on British music video culture, 1986-1998

    OpenAIRE

    Smith, Justin

    2017-01-01

    Open access article The Chart Show was a weekly UK TV programme showcasing music videos from the Media Research Information Bureau (MRIB) Network Chart and a range of independent and specialist pop music charts. It began broadcasting on Friday evenings on Channel 4 in April 1986 and ran for three series until September 1988. Its production company, Video Visuals, subsequently found a new home for The Chart Show with Yorkshire Television on ITV, where it went out on Saturday mornings betwee...

  5. Storyboard-Based Video Browsing Using Color and Concept Indices

    NARCIS (Netherlands)

    Hürst, W.O.; Ip Vai Ching, Algernon; Schoeffmann, K.; Primus, Manfred J.

    2017-01-01

    We present an interface for interactive video browsing where users visually skim storyboard representations of the files in search for known items (known-item search tasks) and textually described subjects, objects, or events (ad-hoc search tasks). Individual segments of the video are represented as

  6. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    Science.gov (United States)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or of shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional intersequences inclusion constraint by adding directed infinite links between pixels of dependent image structures.

  7. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
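
    For readers unfamiliar with the data structure, a minimal Bloom filter looks like the sketch below; in the retrieval setting sketched by the abstract, each long video segment would get one filter and a query image's quantized features would be tested for membership. The hash construction, sizes, and feature identifiers are illustrative.

    ```python
    import hashlib

    class BloomFilter:
        def __init__(self, num_bits=1 << 16, num_hashes=4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, item):
            for i in range(self.num_hashes):
                digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    segment_index = BloomFilter()
    for visual_word in ["w102", "w77", "w9031"]:   # quantized features of one long video segment (illustrative)
        segment_index.add(visual_word)
    hit = "w77" in segment_index                   # a query-image word tested against the segment's filter
    ```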

  8. Hierarchical video summarization based on context clustering

    Science.gov (United States)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.

  9. Perioperative outcomes of video- and robot-assisted segmentectomies.

    Science.gov (United States)

    Rinieri, Philippe; Peillon, Christophe; Salaün, Mathieu; Mahieu, Julien; Bubenheim, Michael; Baste, Jean-Marc

    2016-02-01

    Video-assisted thoracic surgery appears to be technically difficult for segmentectomy. Conversely, robotic surgery could facilitate the performance of segmentectomy. The aim of this study was to compare the early results of video- and robot-assisted segmentectomies. Data were collected prospectively on videothoracoscopy from 2010 and on robotic procedures from 2013. Fifty-one patients who were candidates for minimally invasive segmentectomy were included in the study. Perioperative outcomes of video-assisted and robotic segmentectomies were compared. The minimally invasive segmentectomies included 32 video- and 16 robot-assisted procedures; 3 segmentectomies (2 video-assisted and 1 robot-assisted) were converted to lobectomies. Four conversions to thoracotomy were necessary for anatomical reason or arterial injury, with no uncontrolled bleeding in the robotic arm. There were 7 benign or infectious lesions, 9 pre-invasive lesions, 25 lung cancers, and 10 metastatic diseases. Patient characteristics, type of segment, conversion to thoracotomy, conversion to lobectomy, operative time, postoperative complications, chest tube duration, postoperative stay, and histology were similar in the video and robot groups. Estimated blood loss was significantly higher in the video group (100 vs. 50 mL, p = 0.028). The morbidity rate of minimally invasive segmentectomy was low. The short-term results of video-assisted and robot-assisted segmentectomies were similar, and more data are required to show any advantages between the two techniques. Long-term oncologic outcomes are necessary to evaluate these new surgical practices. © The Author(s) 2016.

  10. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports on people segmentation in video using background subtraction, by two different approaches: frame differences, and blind signal separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)
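
    For readers unfamiliar with the frame-difference approach mentioned above, the following is a minimal, generic sketch (OpenCV/NumPy, not the authors' system): consecutive grayscale frames are subtracted, thresholded, cleaned morphologically, and sufficiently large blobs are reported as candidate people. The threshold and minimum area are illustrative values.

```python
# Minimal frame-difference background subtraction sketch for a fixed camera.
import cv2
import numpy as np

def segment_people(video_path, diff_threshold=25, min_area=500):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Absolute difference between consecutive frames highlights moving pixels.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        # Keep only blobs large enough to plausibly be a person.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
        yield frame, mask, boxes
        prev_gray = gray
    cap.release()
```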

  11. People detection in nuclear plants by video processing for safety purpose

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Seixas, Jose M.; Silva, Eduardo Antonio B.; Cota, Raphael E.; Ramos, Bruno L.

    2011-01-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports on people segmentation in video using background subtraction, by two different approaches: frame differences, and blind signal separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)

  12. Surgical gesture classification from video and kinematic data.

    Science.gov (United States)

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
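
    The bag-of-features pipeline described above can be made concrete with a short sketch. This is not the authors' implementation; it assumes spatio-temporal descriptors have already been extracted per clip and uses scikit-learn's KMeans and LinearSVC as generic stand-ins for the visual vocabulary and the classifier.

```python
# Bag-of-features gesture classification sketch: vocabulary, histograms, linear SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(descriptor_sets, k=100, seed=0):
    """descriptor_sets: list of (n_i, d) arrays, one per training clip."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_desc)

def bow_histogram(descriptors, vocab):
    """L1-normalized histogram of visual-word occurrences for one clip."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_gesture_classifier(train_desc_sets, train_labels, k=100):
    vocab = build_vocabulary(train_desc_sets, k)
    X = np.array([bow_histogram(d, vocab) for d in train_desc_sets])
    clf = LinearSVC().fit(X, train_labels)
    return vocab, clf

def classify_clip(descriptors, vocab, clf):
    return clf.predict(bow_histogram(descriptors, vocab)[None, :])[0]
```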

  13. Spontaneous Brain Activity Did Not Show the Effect of Violent Video Games on Aggression: A Resting-State fMRI Study

    Science.gov (United States)

    Pan, Wei; Gao, Xuemei; Shi, Shuo; Liu, Fuqu; Li, Chao

    2018-01-01

    Many empirical studies have shown that long-term exposure to violent video games can lead to a range of negative effects. Although research has focused on the neural basis of the correlation between violent video games and aggression, little is known about whether spontaneous brain activity is associated with violent video game exposure. To address this question, we measured spontaneous brain activity using resting-state functional magnetic resonance imaging (fMRI). We used the amplitude of low-frequency fluctuations (ALFF) and fractional ALFF (fALFF) to quantify spontaneous brain activity. The results showed no significant difference in ALFF or fALFF between the violent video game group and the control group, indicating that long-term exposure to violent video games does not significantly influence spontaneous brain activity, especially in core brain regions involved in executive control, moral judgment and short-term memory. This implies that the adverse impact of violent video games may be exaggerated. PMID:29375416

  14. Spontaneous Brain Activity Did Not Show the Effect of Violent Video Games on Aggression: A Resting-State fMRI Study.

    Science.gov (United States)

    Pan, Wei; Gao, Xuemei; Shi, Shuo; Liu, Fuqu; Li, Chao

    2017-01-01

    Many empirical studies have shown that long-term exposure to violent video games can lead to a range of negative effects. Although research has focused on the neural basis of the correlation between violent video games and aggression, little is known about whether spontaneous brain activity is associated with violent video game exposure. To address this question, we measured spontaneous brain activity using resting-state functional magnetic resonance imaging (fMRI). We used the amplitude of low-frequency fluctuations (ALFF) and fractional ALFF (fALFF) to quantify spontaneous brain activity. The results showed no significant difference in ALFF or fALFF between the violent video game group and the control group, indicating that long-term exposure to violent video games does not significantly influence spontaneous brain activity, especially in core brain regions involved in executive control, moral judgment and short-term memory. This implies that the adverse impact of violent video games may be exaggerated.

  15. Spontaneous Brain Activity Did Not Show the Effect of Violent Video Games on Aggression: A Resting-State fMRI Study

    Directory of Open Access Journals (Sweden)

    Wei Pan

    2018-01-01

    Full Text Available Many empirical studies have shown that long-term exposure to violent video games can lead to a range of negative effects. Although research has focused on the neural basis of the correlation between violent video games and aggression, little is known about whether spontaneous brain activity is associated with violent video game exposure. To address this question, we measured spontaneous brain activity using resting-state functional magnetic resonance imaging (fMRI). We used the amplitude of low-frequency fluctuations (ALFF) and fractional ALFF (fALFF) to quantify spontaneous brain activity. The results showed no significant difference in ALFF or fALFF between the violent video game group and the control group, indicating that long-term exposure to violent video games does not significantly influence spontaneous brain activity, especially in core brain regions involved in executive control, moral judgment and short-term memory. This implies that the adverse impact of violent video games may be exaggerated.

  16. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics of video databases, their basic structure and several typical video data models, a segmentation-based multi-level data model is used to describe a landscape-information video database, a road network database model and a road network management database system. The detailed design and implementation of the landscape information management system are also presented.

  17. Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos

    KAUST Repository

    Heilbron, Fabian Caba; Niebles, Juan Carlos; Ghanem, Bernard

    2016-01-01

    In many large-scale video analysis scenarios, one is interested in localizing and recognizing human activities that occur in short temporal intervals within long untrimmed videos. Current approaches for activity detection still struggle to handle large-scale video collections and the task remains relatively unexplored. This is in part due to the computational complexity of current action recognition approaches and the lack of a method that proposes fewer intervals in the video, where activity processing can be focused. In this paper, we introduce a proposal method that aims to recover temporal segments containing actions in untrimmed videos. Building on techniques for learning sparse dictionaries, we introduce a learning framework to represent and retrieve activity proposals. We demonstrate the capabilities of our method in not only producing high quality proposals but also in its efficiency. Finally, we show the positive impact our method has on recognition performance when it is used for action detection, while running at 10FPS.

  18. Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos

    KAUST Repository

    Heilbron, Fabian Caba

    2016-12-13

    In many large-scale video analysis scenarios, one is interested in localizing and recognizing human activities that occur in short temporal intervals within long untrimmed videos. Current approaches for activity detection still struggle to handle large-scale video collections and the task remains relatively unexplored. This is in part due to the computational complexity of current action recognition approaches and the lack of a method that proposes fewer intervals in the video, where activity processing can be focused. In this paper, we introduce a proposal method that aims to recover temporal segments containing actions in untrimmed videos. Building on techniques for learning sparse dictionaries, we introduce a learning framework to represent and retrieve activity proposals. We demonstrate the capabilities of our method in not only producing high quality proposals but also in its efficiency. Finally, we show the positive impact our method has on recognition performance when it is used for action detection, while running at 10FPS.

  19. Video Film Piracy in Nigeria: Interfacing to Integrate the Pirate ...

    African Journals Online (AJOL)

    It recommends the adoption of a market segmentation policy for integrating the pirate, emphasises running video films in cinemas, halls, etc. before they go to market, and calls for better synergy between producers and marketers, among other measures. Key words: Identification, Interface, Integration, Market Segmentation ...

  20. Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning

    Science.gov (United States)

    Kenworthy-U'Ren, Amy; Erickson, Anthony

    2009-01-01

    In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…

  1. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems or have less correspondence to the true motion of objects when compared to block-based approaches or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the

  2. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades quality. Noise reduction is therefore essential for improving visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound images and video as well as the theoretical background, algorithmic steps, and the MatlabTM code for the following group of despeckle filters:

  3. A low false negative filter for detecting rare bird species from short video segments using a probable observation data set-based EKF method.

    Science.gov (United States)

    Song, Dezhen; Xu, Yiliang

    2010-09-01

    We report a new filter to assist the search for rare bird species. Since a rare bird only appears in front of a camera with very low occurrence (e.g., less than ten times per year) and for very short duration (e.g., a fraction of a second), our algorithm must have a very low false negative rate. We verify the bird body axis information against the known bird flying dynamics from the short video segment. Since a regular extended Kalman filter (EKF) cannot converge due to high measurement error and limited data, we develop a novel probable observation data set (PODS)-based EKF method. The new PODS-EKF searches the measurement error range for all probable observation data that ensure the convergence of the corresponding EKF in a short time frame. The algorithm has been extensively tested using both simulated inputs and real video data of four representative bird species. In the physical experiments, our algorithm has been tested on rock pigeons and red-tailed hawks with 119 motion sequences. The area under the ROC curve is 95.0%. During the one-year search for ivory-billed woodpeckers, the system reduces 29.41 TB of raw video data to only 146.7 MB (a reduction rate of 99.9995%).
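
    To make the dynamics-checking idea concrete, here is an illustrative constant-velocity Kalman filter over 2-D detections; the paper's PODS-EKF goes further by searching the measurement-error range for a probable observation set that guarantees convergence, which is not reproduced here. All noise parameters are placeholder values.

```python
# Constant-velocity Kalman filter sketch: smooth a short track and report innovation sizes.
import numpy as np

def smooth_track(observations, dt=1.0 / 30.0, meas_std=5.0, accel_std=50.0):
    """observations: (T, 2) pixel positions; returns filtered positions and innovation distances."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])  # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])                                 # observe position only
    Q = accel_std ** 2 * np.eye(4) * dt                                        # process noise (crude)
    R = meas_std ** 2 * np.eye(2)                                              # measurement noise
    x = np.array([*observations[0], 0.0, 0.0])
    P = np.eye(4) * 100.0
    filtered, innovations = [], []
    for z in observations[1:]:
        x, P = F @ x, F @ P @ F.T + Q                         # predict
        y = z - H @ x                                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                         # Kalman gain
        x, P = x + K @ y, (np.eye(4) - K @ H) @ P              # update
        filtered.append(x[:2].copy())
        innovations.append(float(y @ np.linalg.solve(S, y)))   # squared Mahalanobis distance
    return np.array(filtered), np.array(innovations)
```

    Large, persistent innovations would indicate that a candidate track is inconsistent with smooth flight dynamics, which is the spirit of the consistency check described above.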

  4. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Full Text Available Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin feature segmentation has been widely employed in computer vision applications including face detection and hand gesture recognition systems. This is mostly due to the attractive characteristics of skin colour and its effectiveness for object segmentation. However, there are challenges in using human skin colour as a feature to segment dynamic hand gestures, due to varying illumination conditions, complex environments, and computation-time or real-time constraints. These challenges limit many skin colour segmentation approaches. Therefore, to produce simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme. The scheme includes two procedures for calculating generic threshold ranges in Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region, while the second, offline training procedure uses thresholds trained from skin samples and a weighted equation. The experimental results showed that the proposed scheme achieves good performance in terms of efficiency and computation time.
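
    A minimal sketch of threshold-based skin segmentation in the Cb-Cr plane is shown below. The numeric ranges are commonly cited generic defaults, not the thresholds trained by the proposed online or offline procedures.

```python
# Fixed-threshold Cb-Cr skin mask sketch (generic ranges, illustrative only).
import cv2
import numpy as np

def skin_mask(bgr_frame, cr_range=(133, 173), cb_range=(77, 127)):
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1])).astype(np.uint8) * 255
    # Remove small speckles before using the mask for hand-gesture detection.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```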

  5. Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

    Science.gov (United States)

    Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, In.

    2012-02-01

    Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.

  6. Joint Optimization in UMTS-Based Video Transmission

    Directory of Open Access Journals (Sweden)

    Attila Zsiros

    2007-01-01

    Full Text Available A software platform is presented, which was developed to enable demonstration and capacity testing. The platform simulates jointly optimized wireless video transmission. The development took place within the framework of the IST-PHOENIX project and is based on the system optimization model of the project. One of the constitutive parts of the model, the wireless network segment, is replaced by a detailed, standard UTRA network simulation module. This paper consists of (1) a brief description of the project's simulation chain, (2) a brief description of the UTRAN system, and (3) the integration of the two segments. The role of the UTRAN part in the joint optimization is described, together with the configuration and control of this element. Finally, some simulation results are shown. In the conclusion, we show how our simulation results translate into real-world performance gains.

  7. Polyp Detection and Segmentation from Video Capsule Endoscopy: A Review

    Directory of Open Access Journals (Sweden)

    V. B. Surya Prasath

    2016-12-01

    Full Text Available Video capsule endoscopy (VCE) is widely used nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are usually prescribed as an additional monitoring mechanism and can help in identifying polyps, bleeding, etc. To analyze the large-scale video data produced by VCE exams, automatic image processing, computer vision, and learning algorithms are required. Recently, automatic polyp detection algorithms have been proposed with varying degrees of success. Although polyp detection in colonoscopy and other traditional endoscopy images is becoming a mature field, automatically detecting polyps in VCE remains a hard problem because of its unique imaging characteristics. We review different polyp detection approaches for VCE imagery and provide a systematic analysis of the challenges faced by standard image processing and computer vision methods.

  8. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy to developmental science to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  9. Dutch-Cantonese Bilinguals Show Segmental Processing during Sinitic Language Production

    Directory of Open Access Journals (Sweden)

    Kalinka Timmer

    2017-07-01

    Full Text Available This study addressed the debate on the primacy of the syllable vs. the segment (i.e., phoneme) as a functional unit of phonological encoding in syllabic languages by investigating both behavioral and neural responses of Dutch-Cantonese (DC) bilinguals in a color-object picture naming task. Specifically, we investigated whether DC bilinguals exhibit the phonemic processing strategy, evident in monolingual Dutch speakers, during planning of their Cantonese speech production. Participants named the color of colored line-drawings in Cantonese faster when color and object matched in the first segment than when they were mismatched (e.g., 藍駱駝, /laam4/ /lok3to4/, "blue camel"; 紅駱駝, /hung4/ /lok3to4/, "red camel"). This is in contrast to previous studies in Sinitic languages that did not reveal such phoneme-only facilitation. Phonemic overlap also modulated the event-related potentials (ERPs) in the 125–175, 200–300, and 300–400 ms time windows, suggesting earlier ERP modulations than in previous studies with monolingual Sinitic speakers or unbalanced Sinitic-Germanic bilinguals. Taken together, our results suggest that, while the syllable may be considered the primary unit of phonological encoding in Sinitic languages, the phoneme can serve as the primary unit of phonological encoding, both behaviorally and neurally, for DC bilinguals. The presence/absence of a segment onset effect in Sinitic languages may be related to the bilinguals' proficiency in the Germanic language.

  10. Celiac Family Health Education Video Series

    Medline Plus

    Full Text Available ... Boston Children's Hospital will teach you and your family about a healthful celiac lifestyle. Education is key in making parents feel more at ease and allowing children with celiac disease to live happy and productive lives. Each of our video segments ... I. Introduction : Experiencing ...

  11. Making Sense of Video Analytics: Lessons Learned from Clickstream Interactions, Attitudes, and Learning Outcome in a Video-Assisted Course

    Directory of Open Access Journals (Sweden)

    Michail N. Giannakos

    2015-02-01

    Full Text Available Online video lectures have been considered an instructional medium for various pedagogic approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity for recording student clickstream patterns within a video lecture. Video analytics within lecture videos may provide insights into student learning performance and inform the improvement of video-assisted teaching tactics. Nevertheless, video analytics are not accessible to learning stakeholders, such as researchers and educators, mainly because online video platforms do not broadly share the interactions of the users with their systems. For this purpose, we have designed an open-access video analytics system for use in a video-assisted course. In this paper, we present a longitudinal study, which provides valuable insights through the lens of the collected video analytics. In particular, we found that there is a relationship between video navigation (repeated views) and the level of cognition/thinking required for a specific video segment. Our results indicated that learning performance progress was slightly improved and stabilized after the third week of the video-assisted course. We also found that attitudes regarding easiness, usability, usefulness, and acceptance of this type of course remained at the same levels throughout the course. Finally, we triangulate analytics from diverse sources, discuss them, and provide the lessons learned for further development and refinement of video-assisted courses and practices.

  12. Coding Transparency in Object-Based Video

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation are...

  13. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in computer vision. Most of the existing methods follow the registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large block and merge the consistent models. Finally, for all small blocks we calculate, pixel by pixel, the degree of membership to the multiple background models. Moving objects are segmented by an energy optimization method solved via graph cuts. Extensive experimental results on public aerial videos show that, thanks to the estimation of multiple background models and the analysis of each pixel's membership to these models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.
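
    A simplified sketch of the multi-model idea (not the paper's full pipeline, which also uses graph-cut energy minimization) is given below: one affine model is fitted per large region from dense optical flow, and pixels whose flow fits none of the models are flagged as candidate moving objects. The segmentation map `labels` and the tolerance are assumptions for the example.

```python
# Per-region affine background models from dense optical flow, plus a residual-based moving mask.
import cv2
import numpy as np

def region_affine_models(prev_gray, gray, labels, min_pixels=2000):
    """labels: integer segmentation map of the previous frame (e.g., from color clustering)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    src = np.stack([xs, ys], axis=-1).astype(np.float32)
    dst = src + flow
    models = {}
    for region in np.unique(labels):
        idx = labels == region
        if idx.sum() < min_pixels:
            continue                            # small regions are assigned to a model later
        A, _ = cv2.estimateAffine2D(src[idx], dst[idx], method=cv2.RANSAC)
        if A is not None:
            models[int(region)] = A
    return models, src, dst

def residual_to_model(A, src, dst):
    """Per-pixel distance between the observed flow target and the affine prediction."""
    pred = src @ A[:, :2].T + A[:, 2]
    return np.linalg.norm(dst - pred, axis=-1)

def moving_mask(models, src, dst, tol=2.0):
    if not models:
        return np.zeros(src.shape[:2], np.uint8)
    residuals = np.stack([residual_to_model(A, src, dst) for A in models.values()], axis=0)
    return (residuals.min(axis=0) > tol).astype(np.uint8) * 255   # fits no background model
```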

  14. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies carried out to develop a real-time system for monitoring traffic flow with monocular video cameras and estimating vehicle speeds for safe travel are presented. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose, a sufficient number of points on the vehicle are selected, and these points must be accurately tracked over at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and displacements are measured in pixel units. The magnitudes of the computed vectors are then transformed from image space to object space to obtain their absolute values. The accuracy of the estimated speed is approximately ±1–2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language, which has been used for all of the computations and test applications.
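
    The core arithmetic, converting pixel displacement in the rectified image into a speed, can be sketched as follows; the scale, frame rate and point tracks are illustrative inputs, not values from the paper.

```python
# Speed from tracked-point displacement in a rectified image with a known metric scale.
import numpy as np

def speed_kmh(points_prev, points_curr, metres_per_pixel, fps):
    """points_*: (N, 2) arrays of tracked point positions in the rectified image."""
    displacements = np.linalg.norm(points_curr - points_prev, axis=1)   # pixels per frame
    mean_disp_m = displacements.mean() * metres_per_pixel               # metres per frame
    return mean_disp_m * fps * 3.6                                      # m/s -> km/h

# Example: a 12 m lane marking spanning 300 px gives 0.04 m/px; at 25 fps a mean
# displacement of 8 px/frame corresponds to 0.32 m/frame = 8 m/s = 28.8 km/h.
print(speed_kmh(np.zeros((4, 2)), np.tile([8.0, 0.0], (4, 1)), 12.0 / 300.0, 25.0))
```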

  15. Video repairing under variable illumination using cyclic motions.

    Science.gov (United States)

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  16. Motion video analysis using planar parallax

    Science.gov (United States)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis, for instance independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
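
    As a rough sketch of the planar-parallax idea under simplifying assumptions, two frames can be aligned by a homography of the dominant reference plane (estimated from feature matches with RANSAC); the residual difference after alignment then highlights off-plane structure and independently moving objects. This is a generic OpenCV illustration, not the author's method.

```python
# Align two frames on the dominant plane and expose residual (parallax/independent) motion.
import cv2
import numpy as np

def residual_parallax(frame_a, frame_b, diff_threshold=30):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # The RANSAC homography is dominated by the reference plane when it covers most matches.
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    warped_a = cv2.warpPerspective(gray_a, H, (gray_b.shape[1], gray_b.shape[0]))
    residual = cv2.absdiff(warped_a, gray_b)
    _, mask = cv2.threshold(residual, diff_threshold, 255, cv2.THRESH_BINARY)
    return mask   # nonzero where motion is not explained by the reference plane
```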

  17. Evaluating Two Oral Health Video Interventions with Early Head Start Families

    Directory of Open Access Journals (Sweden)

    Lynn B. Wilson

    2013-01-01

    Full Text Available Poor oral health in early childhood can have long-term consequences, and parents often are unaware of the importance of preventive measures for infants and toddlers. Children in rural, low-income families suffer disproportionately from the effects of poor oral health. Participants were 91 parents of infants and toddlers enrolled in Early Head Start (EHS) living in rural Hawai'i, USA. In this quasi-experimental design, EHS home visitors were assigned to use either a didactic or family-centered video with parents they served. Home visitors reviewed short segments of the assigned videos with parents over an eight-week period. Both groups showed significant pre-post gains on knowledge and attitudes/behaviors relating to early oral health as well as self-reported changes in family oral health routines at a six-week follow-up. Controlling for pretest levels, parents in the family-centered video group showed larger changes in attitudes/behaviors at posttest and a higher number of positive changes in family oral health routines at follow-up. Results suggest that family-centered educational videos are a promising method for providing anticipatory guidance to parents regarding early childhood oral health. Furthermore, establishing partnerships between dental care, early childhood education, and maternal health systems offers a model that broadens potential reach with minimal cost.

  18. Watch it! The Influence of Forced Pre-roll Video Ads on Consumer Perceptions

    NARCIS (Netherlands)

    Hegner, Sabrina; Hegner, Sabrina M.; Kusse, Daniel C.; Pruyn, Adriaan T.H.; Verlegh, Peeter; Voorveld, Hilde; Eisend, Martin

    2016-01-01

    The internet is the fastest growing advertising segment in the world (Gambaro and Puglisi, 2012). One specific online advertising format that is growing very rapidly is online video advertising. This advertising format owes its explosive growth to the rapid acceleration of online video viewing and

  19. Viewer Discussion is Advised. Video Clubs Focus Teacher Discussion on Student Learning

    Directory of Open Access Journals (Sweden)

    Elizabeth A. van Es

    2014-06-01

    Full Text Available Video is being used widely in professional development. Yet, little is known about how to design video-based learning environments that are productive for teacher learning. One promising model is a video club (Sherin, 2000). Video clubs bring teachers together to view and analyze video segments from one another's classrooms. The idea is that by watching and discussing video segments focused on student thinking, teachers will learn practices for identifying and analyzing noteworthy student thinking during instruction and can use what they learn to inform their instructional decisions. This paper addresses issues to consider when setting up a video club for teacher education, such as defining goals for using video, establishing norms for viewing and discussing one another's teaching, selecting clips for analysis, and facilitating teacher discussions. Viewer discussion is advised. In video clubs, teachers focus on how students learn. Video has been used widely in professional development. However, little is known about how to design video-based learning environments that are effective for teacher education. A promising model is the "video club" (Sherin, 2000). Video clubs bring together teachers who watch and analyze video segments from their own classrooms. The idea is that teachers, by watching and discussing video segments centred on students' thinking, learn to adopt practices for identifying and analysing noteworthy student thinking during instruction and can then use what they have learned in their instructional decisions. This article addresses the issues to consider when setting up a video club for teacher education, such as defining goals for using video, norms for viewing and discussing one another's videos, the selection

  20. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.

  1. Activity-based exploitation of Full Motion Video (FMV)

    Science.gov (United States)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content and on video metadata to provide filtering and locate segments of interest in the context of an analyst query. Our approach uses machine vision to index FMV, based on object recognition and tracking and on event and activity detection. This approach enables FMV exploitation in real time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  2. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
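
    The multiple-instance intuition behind such an approach can be illustrated with a toy scorer: a video is a bag of segment descriptors, the bag score is the maximum segment score, so a sequence-level label supervises the classifier while the arg-max weakly localizes the painful segment. The linear scorer below is a stand-in for the example, not the paper's model.

```python
# Toy multiple-instance scoring: max-pooled segment scores give the bag label and a localization.
import numpy as np

def bag_scores(segment_features, w, b=0.0):
    """segment_features: (n_segments, d); returns per-segment scores for one video."""
    return segment_features @ w + b

def classify_and_localize(segment_features, w, b=0.0):
    scores = bag_scores(segment_features, w, b)
    best = int(np.argmax(scores))        # most pain-like segment (weak localization)
    is_pain = scores[best] > 0           # bag label from the max-pooled score
    return is_pain, best, scores
```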

  3. Social Practices around Personal Videos using the Web

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick); I. Kegel; P. Ljungstrand

    2011-01-01

    Social multimedia is changing the way people interact with each other, transforming old practices in political activism, social participation and interpersonal relationships. Sharing dynamically created video segments is a prime example of this social transformation. This paper reports

  4. Qualitative and quantitative analyses of the morphological-dynamics of early cardiac pumping function using video densitometry and optical coherence tomography (OCT)

    DEFF Research Database (Denmark)

    Happel, C.; Männer, J.; Thommes, J.

    has become a matter of dispute. Uncovering of the pumping mechanism of tubular embryonic hearts requires detailed information about the hemodynamics as well as morphological dynamics of the pump action. We have analyzed the morphological dynamics of cardiac pump action in chick embryos (HH-stage 16......) of the embryonic heart segments (common atrium, AV-canal, embryonic ventricles, outflow tract). Video densitometric M-mode curves show remarkable similarities to OCT M-mode recordings. OCT M-mode recordings can only be taken at one site at a time whereas video densitometry allows simultaneous recordings at any...... striking differences in contraction behavior of different heart segments of the tubular embryonic heart. These findings are important for the understanding of the pumping mechanism of the developing valveless embryonic heart....

  5. Gradual cut detection using low-level vision for digital video

    Science.gov (United States)

    Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae

    1996-09-01

    Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to many practical applications, such as video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames. However, they could not detect special effects such as dissolves, wipes, fade-ins, fade-outs, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results on commercial video are then presented and evaluated.
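
    For context, the classical hard-cut baseline that such work builds on can be written in a few lines: compare color histograms of consecutive frames and declare a cut when the distance exceeds a threshold. The sketch below uses a generic Bhattacharyya distance and an illustrative threshold; detecting gradual transitions, as the paper targets, requires more than this.

```python
# Hard-cut detection by histogram differencing between consecutive frames.
import cv2

def detect_cuts(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hist = cv2.normalize(hist, None)
        if prev_hist is not None:
            # Bhattacharyya distance is near 0 for similar frames, near 1 across a hard cut.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                cuts.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return cuts
```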

  6. Concurrent Calculations on Reconfigurable Logic Devices Applied to the Analysis of Video Images

    Directory of Open Access Journals (Sweden)

    Sergio R. Geninatti

    2010-01-01

    Full Text Available This paper presents the design and implementation on FPGA devices of an algorithm for computing similarities between neighboring frames in a video sequence using luminance information. By taking advantage of the well-known flexibility of Reconfigurable Logic Devices, we have designed a hardware implementation of the algorithm used in video segmentation and indexing. The experimental results show the trade-off between concurrent and sequential resources and the functional blocks needed to achieve maximum operational speed with minimum silicon area usage. To evaluate system efficiency, we compare the performance of the hardware solution to that of calculations done via software using general-purpose processors with and without an SIMD instruction set.

  7. Porn video shows, local brew, and transactional sex: HIV risk among youth in Kisumu, Kenya.

    Science.gov (United States)

    Njue, Carolyne; Voeten, Helene A C M; Remes, Pieter

    2011-08-08

    Kisumu has shown a rising HIV prevalence over the past sentinel surveillance surveys, and most new infections are occurring among youth. We conducted a qualitative study to explore risk situations that can explain the high HIV prevalence among youth in Kisumu town, Kenya. We conducted in-depth interviews with 150 adolescents aged 15 to 20, held 4 focus group discussions, and made 48 observations at places where youth spend their free time. Porn video shows and local brew dens were identified as popular events where unprotected multipartner, concurrent, coerced and transactional sex occurs between adolescents. Video halls - rooms with a TV and VCR - often show pornography at night for a very small fee, and minors are allowed. Forced sex, gang rape and multiple concurrent relationships characterised the sexual encounters of youth, frequently facilitated by the abuse of alcohol, which is available for minors at low cost in local brew dens. For many sexually active girls, their vulnerability to STI/HIV infection is enhanced due to financial inequality, gender-related power difference and cultural norms. The desire for love and sexual pleasure also contributed to their multiple concurrent partnerships. A substantial number of girls and young women engaged in transactional sex, often with much older working partners. These partners had a stronger socio-economic position than young women, enabling them to use money/gifts as leverage for sex. Condom use was irregular during all types of sexual encounters. In Kisumu, local brew dens and porn video halls facilitate risky sexual encounters between youth. These places should be regulated and monitored by the government. Our study strongly points to female vulnerabilities and the role of men in perpetuating the local epidemic. Young men should be targeted in prevention activities, to change their attitudes related to power and control in relationships. Girls should be empowered to negotiate safe sex, and their poverty should

  8. Porn video shows, local brew, and transactional sex: HIV risk among youth in Kisumu, Kenya

    Directory of Open Access Journals (Sweden)

    Voeten Helene ACM

    2011-08-01

    Full Text Available Abstract. Background: Kisumu has shown a rising HIV prevalence over the past sentinel surveillance surveys, and most new infections are occurring among youth. We conducted a qualitative study to explore risk situations that can explain the high HIV prevalence among youth in Kisumu town, Kenya. Methods: We conducted in-depth interviews with 150 adolescents aged 15 to 20, held 4 focus group discussions, and made 48 observations at places where youth spend their free time. Results: Porn video shows and local brew dens were identified as popular events where unprotected multipartner, concurrent, coerced and transactional sex occurs between adolescents. Video halls - rooms with a TV and VCR - often show pornography at night for a very small fee, and minors are allowed. Forced sex, gang rape and multiple concurrent relationships characterised the sexual encounters of youth, frequently facilitated by the abuse of alcohol, which is available for minors at low cost in local brew dens. For many sexually active girls, their vulnerability to STI/HIV infection is enhanced due to financial inequality, gender-related power difference and cultural norms. The desire for love and sexual pleasure also contributed to their multiple concurrent partnerships. A substantial number of girls and young women engaged in transactional sex, often with much older working partners. These partners had a stronger socio-economic position than young women, enabling them to use money/gifts as leverage for sex. Condom use was irregular during all types of sexual encounters. Conclusions: In Kisumu, local brew dens and porn video halls facilitate risky sexual encounters between youth. These places should be regulated and monitored by the government. Our study strongly points to female vulnerabilities and the role of men in perpetuating the local epidemic. Young men should be targeted in prevention activities, to change their attitudes related to power and control in relationships. Girls

  9. ISOMER: Informative Segment Observations for Multimedia Event Recounting

    NARCIS (Netherlands)

    Sun, C.; Burns, B.; Nevatia, R.; Snoek, C.; Bolles, B.; Myers, G.; Wang, W.; Yeh, E.

    2014-01-01

    This paper describes a system for multimedia event detection and recounting. The goal is to detect a high level event class in unconstrained web videos and generate event oriented summarization for display to users. For this purpose, we detect informative segments and collect observations for them,

  10. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  11. Self Occlusion and Disocclusion in Causal Video Object Segmentation

    Science.gov (United States)

    2015-12-18

    (Abstract not available. The indexed text is an extraction fragment noting that an explicit 3D reconstruction of the scene produces, as a side effect, a partition of the video into regions, followed by a table of per-sequence segmentation accuracies for the Soldier, Monkey, Bird of Paradise and BMXPerson sequences.)

  12. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playing back, displaying, and processing video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study, namely indoor surveillance.

  13. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
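
    For reference, the classical 1-D Viterbi algorithm that the paper generalizes to multiple dimensions is sketched below in log-space; it is included only to make the building block concrete and is not the distributed, noncausal multidimensional formulation.

```python
# Standard 1-D Viterbi decoding in log-space.
import numpy as np

def viterbi(log_pi, log_A, log_B, observations):
    """log_pi: (S,) initial, log_A: (S, S) transition, log_B: (S, V) emission log-probabilities."""
    S = log_pi.shape[0]
    T = len(observations)
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, observations[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # indexed by (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, observations[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                      # backtrack the best state sequence
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta[-1].max()
```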

  14. Real-Time Adaptive Foreground/Background Segmentation

    Directory of Open Access Journals (Sweden)

    Sridha Sridharan

    2005-08-01

    Full Text Available The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing 320×240 PAL video at full frame rate using only 35%–40% of a 1.8 GHz Pentium 4 computer.
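
    A greatly simplified, grayscale sketch of a per-pixel cluster background model in the spirit of the algorithm described above is given below: each pixel keeps a few cluster centres with weights, the matched cluster is adapted and reinforced, and a pixel is labelled background when it matches a sufficiently heavy cluster. Parameter values and the exact update rules are illustrative, not those of the paper.

```python
# Per-pixel cluster background model (simplified, grayscale, vectorized with NumPy).
import numpy as np

class PixelClusterBackground:
    """Each pixel keeps k cluster centres with weights; heavy, matched clusters model the background."""

    def __init__(self, shape, k=3, match_dist=15.0, lr=0.05, bg_threshold=0.5):
        self.k, self.match_dist, self.lr, self.bg_threshold = k, match_dist, lr, bg_threshold
        self.centres = np.zeros(shape + (k,), np.float32)           # grayscale cluster centres
        self.weights = np.full(shape + (k,), 1.0 / k, np.float32)   # likelihood of modelling background

    def apply(self, gray):
        g = gray.astype(np.float32)[..., None]                       # (H, W, 1)
        dist = np.abs(self.centres - g)                              # (H, W, k)
        best = dist.argmin(axis=-1)[..., None]                       # closest cluster per pixel
        matched = np.take_along_axis(dist, best, axis=-1) < self.match_dist

        # Adapt: move the matched cluster towards the pixel, or re-seed it when nothing matches.
        centre_best = np.take_along_axis(self.centres, best, axis=-1)
        new_centre = np.where(matched, centre_best + self.lr * (g - centre_best), g)
        np.put_along_axis(self.centres, best, new_centre, axis=-1)

        # Decay all weights, reinforce the matched cluster, then renormalize.
        self.weights *= (1.0 - self.lr)
        reinforced = np.take_along_axis(self.weights, best, axis=-1) + self.lr * matched
        np.put_along_axis(self.weights, best, reinforced, axis=-1)
        self.weights /= self.weights.sum(axis=-1, keepdims=True)

        # Background if the pixel matched a cluster that carries enough weight.
        background = matched & (np.take_along_axis(self.weights, best, axis=-1) >= self.bg_threshold)
        return (~background[..., 0]).astype(np.uint8) * 255          # foreground mask

# Usage sketch: model = PixelClusterBackground(gray.shape); mask = model.apply(gray)
```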

  15. Discovery and fusion of salient multimodal features toward news story segmentation

    Science.gov (United States)

    Hsu, Winston; Chang, Shih-Fu; Huang, Chih-Wei; Kennedy, Lyndon; Lin, Ching-Yung; Iyengar, Giridharan

    2003-12-01

    In this paper, we present our new results in news video story segmentation and classification in the context of TRECVID video retrieval benchmarking event 2003. We applied and extended the Maximum Entropy statistical model to effectively fuse diverse features from multiple levels and modalities, including visual, audio, and text. We have included various features such as motion, face, music/speech types, prosody, and high-level text segmentation information. The statistical fusion model is used to automatically discover relevant features contributing to the detection of story boundaries. One novel aspect of our method is the use of a feature wrapper to address different types of features -- asynchronous, discrete, continuous and delta ones. We also developed several novel features related to prosody. Using the large news video set from the TRECVID 2003 benchmark, we demonstrate satisfactory performance (F1 measures up to 0.76 in ABC news and 0.73 in CNN news), present how these multi-level multi-modal features construct the probabilistic framework, and more importantly observe an interesting opportunity for further improvement.

  16. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that, after applying inVideo to the existing video material, student-student and student-faculty interactions increased significantly across 24 sections program-wide.

  17. Geographic Video 3d Data Model And Retrieval

    Science.gov (United States)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video content. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. Based on video segments and video frames, we define three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. The query methods are designed in detail using the structured query language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.

  18. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of a particular scalability type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this end, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing, for each temporal segment, the scaling type that results in minimum visual distortion according to this objective function, given the content type of the segment. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps using the proposed content-aware selection of scalability type were found visually superior to those scaled using a single scalability option over the whole sequence.
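    Assuming per-option artifact scores and shot-type weights are available (in the paper they are learned from subjective tests), the final selection step can be sketched as picking the scaling option with the lowest weighted distortion; all numbers below are invented.

```python
# Hedged sketch of the selection step only: given per-option artefact scores
# (flatness, blockiness, blurriness, jerkiness) for one temporal segment, pick
# the scaling option with the lowest weighted distortion.  Weights and scores
# are invented; in the paper they depend on shot type and bitrate.
def select_scalability(options, weights):
    """options: {name: (flatness, blockiness, blurriness, jerkiness) scores}."""
    def distortion(scores):
        return sum(w * s for w, s in zip(weights, scores))
    return min(options, key=lambda name: distortion(options[name]))

weights = (0.2, 0.35, 0.25, 0.2)                 # illustrative shot-type weights
options = {"spatial": (0.1, 0.4, 0.6, 0.1),      # illustrative artefact scores per option
           "temporal": (0.1, 0.2, 0.2, 0.7),
           "SNR": (0.3, 0.5, 0.3, 0.1)}
print(select_scalability(options, weights))      # option with minimum weighted distortion
```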

  19. Combination of Accumulated Motion and Color Segmentation for Human Activity Analysis

    Directory of Open Access Journals (Sweden)

    Briassouli Alexia

    2008-01-01

    Full Text Available Abstract The automated analysis of activity in digital multimedia, and especially video, is gaining more and more importance due to the evolution of higher-level video processing systems and the development of relevant applications such as surveillance and sports. This paper presents a novel algorithm for the recognition and classification of human activities, which employs motion and color characteristics in a complementary manner, so as to extract the most information from both sources, and overcome their individual limitations. The proposed method accumulates the flow estimates in a video, and extracts "regions of activity" by processing their higher-order statistics. The shape of these activity areas can be used for the classification of the human activities and events taking place in a video and the subsequent extraction of higher-level semantics. Color segmentation of the active and static areas of each video frame is performed to complement this information. The color layers in the activity and background areas are compared using the earth mover's distance, in order to achieve accurate object segmentation. Thus, unlike much existing work on human activity analysis, the proposed approach is based on general color and motion processing methods, and not on specific models of the human body and its kinematics. The combined use of color and motion information increases the method robustness to illumination variations and measurement noise. Consequently, the proposed approach can lead to higher-level information about human activities, but its applicability is not limited to specific human actions. We present experiments with various real video sequences, from sports and surveillance domains, to demonstrate the effectiveness of our approach.
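    The color-comparison step can be illustrated with a 1-D earth mover's (Wasserstein) distance between region histograms, as sketched below; the paper compares color layers, whereas this simplification uses grayscale histograms and synthetic data.

```python
# Sketch of the colour-comparison step only: intensity histograms of an
# "activity" region and a "background" region are compared with a 1-D earth
# mover's (Wasserstein) distance.  The paper applies EMD to colour layers; the
# grayscale histograms and synthetic data here are a simplification.
import numpy as np
from scipy.stats import wasserstein_distance

def region_histogram(pixels, bins=32):
    hist, edges = np.histogram(pixels, bins=bins, range=(0, 256), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

def emd_between_regions(active_pixels, background_pixels):
    c1, h1 = region_histogram(active_pixels)
    c2, h2 = region_histogram(background_pixels)
    return wasserstein_distance(c1, c2, u_weights=h1, v_weights=h2)

# toy data: a darker "object" region vs. a brighter "background" region
rng = np.random.default_rng(0)
print(emd_between_regions(rng.normal(80, 10, 1000), rng.normal(170, 10, 1000)))
```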

  20. Combination of Accumulated Motion and Color Segmentation for Human Activity Analysis

    Directory of Open Access Journals (Sweden)

    Ioannis Kompatsiaris

    2008-03-01

    Full Text Available The automated analysis of activity in digital multimedia, and especially video, is gaining more and more importance due to the evolution of higher-level video processing systems and the development of relevant applications such as surveillance and sports. This paper presents a novel algorithm for the recognition and classification of human activities, which employs motion and color characteristics in a complementary manner, so as to extract the most information from both sources, and overcome their individual limitations. The proposed method accumulates the flow estimates in a video, and extracts “regions of activity” by processing their higher-order statistics. The shape of these activity areas can be used for the classification of the human activities and events taking place in a video and the subsequent extraction of higher-level semantics. Color segmentation of the active and static areas of each video frame is performed to complement this information. The color layers in the activity and background areas are compared using the earth mover's distance, in order to achieve accurate object segmentation. Thus, unlike much existing work on human activity analysis, the proposed approach is based on general color and motion processing methods, and not on specific models of the human body and its kinematics. The combined use of color and motion information increases the method robustness to illumination variations and measurement noise. Consequently, the proposed approach can lead to higher-level information about human activities, but its applicability is not limited to specific human actions. We present experiments with various real video sequences, from sports and surveillance domains, to demonstrate the effectiveness of our approach.

  1. A Method of Sharing Tacit Knowledge by a Bulletin Board Link to Video Scene and an Evaluation in the Field of Nursing Skill

    Science.gov (United States)

    Shimada, Satoshi; Azuma, Shouzou; Teranaka, Sayaka; Kojima, Akira; Majima, Yukie; Maekawa, Yasuko

    We developed a system with which knowledge can be discovered and shared cooperatively within an organization, based on the SECI model of knowledge management. The system realizes three processes by the following methods. (1) A video that demonstrates a skill is segmented into a number of scenes according to its content, and tacit knowledge is shared in each scene. (2) Tacit knowledge is extracted through a bulletin board linked to each scene. (3) Knowledge is acquired by repeatedly viewing a video scene together with the comments that describe the technical content to be practiced. We conducted experiments in which the system was used by nurses working in general hospitals. The experimental results show that practical nursing know-how can be collected by using a bulletin board linked to video scenes. The results of this study confirm the possibility of expressing the tacit knowledge behind nurses' empirical nursing skills with video images as a cue.

  2. An EM based approach for motion segmentation of video sequence

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico; Pan, Zhigeng; Skala, Vaclav

    2016-01-01

    Motions are important features for robot vision as we live in a dynamic world. Detecting moving objects is crucial for mobile robots and computer vision systems. This paper investigates an architecture for the segmentation of moving objects from image sequences. Objects are represented as groups of

  3. SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos

    KAUST Repository

    Giancola, Silvio; Amine, Mohieddine; Dghaily, Tarek; Ghanem, Bernard

    2018-01-01

    In this paper, we introduce SoccerNet, a benchmark for action spotting in soccer videos. The dataset is composed of 500 complete soccer games from six main European leagues, covering three seasons from 2014 to 2017 and a total duration of 764 hours. A total of 6,637 temporal annotations are automatically parsed from online match reports at a one minute resolution for three main classes of events (Goal, Yellow/Red Card, and Substitution). As such, the dataset is easily scalable. These annotations are manually refined to a one second resolution by anchoring them at a single timestamp following well-defined soccer rules. With an average of one event every 6.9 minutes, this dataset focuses on the problem of localizing very sparse events within long videos. We define the task of spotting as finding the anchors of soccer events in a video. Making use of recent developments in the realm of generic action recognition and detection in video, we provide strong baselines for detecting soccer events. We show that our best model for classifying temporal segments of length one minute reaches a mean Average Precision (mAP) of 67.8%. For the spotting task, our baseline reaches an Average-mAP of 49.7% for tolerances $\delta$ ranging from 5 to 60 seconds.

  4. SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos

    KAUST Repository

    Giancola, Silvio

    2018-04-12

    In this paper, we introduce SoccerNet, a benchmark for action spotting in soccer videos. The dataset is composed of 500 complete soccer games from six main European leagues, covering three seasons from 2014 to 2017 and a total duration of 764 hours. A total of 6,637 temporal annotations are automatically parsed from online match reports at a one minute resolution for three main classes of events (Goal, Yellow/Red Card, and Substitution). As such, the dataset is easily scalable. These annotations are manually refined to a one second resolution by anchoring them at a single timestamp following well-defined soccer rules. With an average of one event every 6.9 minutes, this dataset focuses on the problem of localizing very sparse events within long videos. We define the task of spotting as finding the anchors of soccer events in a video. Making use of recent developments in the realm of generic action recognition and detection in video, we provide strong baselines for detecting soccer events. We show that our best model for classifying temporal segments of length one minute reaches a mean Average Precision (mAP) of 67.8%. For the spotting task, our baseline reaches an Average-mAP of 49.7% for tolerances $\delta$ ranging from 5 to 60 seconds.

  5. 77 FR 48102 - Closed Captioning and Video Description of Video Programming

    Science.gov (United States)

    2012-08-13

    ... Captioning and Video Description of Video Programming. AGENCY: Federal Communications Commission. ... show that providing captions on their programming would be economically burdensome. DATES: Effective... establishing requirements for closed captioning on video programming to ensure access by persons with hearing...

  6. Video sensor architecture for surveillance applications.

    Science.gov (United States)

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  7. Video Sensor Architecture for Surveillance Applications

    Directory of Open Access Journals (Sweden)

    José E. Simó

    2012-02-01

    Full Text Available This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  8. Motion Segments Decomposition of RGB-D Sequences for Human Behavior Understanding

    OpenAIRE

    Devanne, Maxime; Berretti, Stefano; Pala, Pietro; Wannous, Hazem; Daoudi, Mohamed; Bimbo, Alberto

    2017-01-01

    International audience; In this paper, we propose a framework for analyzing and understanding human behavior from depth videos. The proposed solution first employs shape analysis of the human pose across time to decompose the full motion into short temporal segments representing elementary motions. Then, each segment is characterized by human motion and depth appearance around hand joints to describe the change in pose of the body and the interaction with objects. Finally, the sequence of te...

  9. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
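    Omitting the latent loss and the EM-like training, the core pair-wise ranking constraint (a segment kept in the edited video should score above a trimmed one) can be sketched with a simple hinge-loss ranker on synthetic features; this is an illustration, not the authors' model.

```python
# Simplified sketch of pair-wise ranking with a hinge loss: features of a
# segment kept in the edited video (positive) should score above features of a
# trimmed segment (negative).  The paper's latent loss and EM-like training are
# omitted; the data below are synthetic.
import numpy as np

def train_ranker(pos, neg, lr=0.01, epochs=100, margin=1.0):
    """pos, neg: (n, d) feature matrices forming ranking pairs row by row."""
    w = np.zeros(pos.shape[1])
    for _ in range(epochs):
        for xp, xn in zip(pos, neg):
            if w @ xp - w @ xn < margin:          # violated ranking constraint
                w += lr * (xp - xn)               # sub-gradient step on the hinge loss
    return w

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, (50, 8))   # synthetic "highlight" features
neg = rng.normal(0.0, 1.0, (50, 8))   # synthetic "non-highlight" features
w = train_ranker(pos, neg)
print("fraction of pairs ranked correctly:", float(np.mean(pos @ w > neg @ w)))
```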

  10. An optimized video system for augmented reality in endodontics: a feasibility study.

    Science.gov (United States)

    Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P

    2013-03-01

    We propose an augmented reality system for the reliable detection of root canals in video sequences, based on a k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices, using a k-nearest neighbor algorithm. The locations of the root canal orifices were then determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy ranged from 65.0% to 81.2% for molars and from 85.7% to 96.7% for premolars. The software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification. Automatic storage of the location, size, and orientation of the found structures can be used for future anatomical studies. Thus, statistical tables of canal locations can be derived, which will improve anatomical knowledge of the teeth and facilitate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
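    As an illustration of the k-nearest-neighbor color classification mentioned in the abstract, the sketch below votes among training colors in RGB space to label pixels as tooth or background; the training colors and the value of k are invented, and the geometric tooth criterion and orifice segmentation are not shown.

```python
# Sketch of the colour-classification idea only (training colours and k are
# invented): label pixels as tooth vs. background with a k-nearest-neighbour
# vote in RGB space, as a stand-in for the kind of k-NN colour classifier the
# paper uses to restrict the search for root canal orifices.
import numpy as np

def knn_classify(pixels, train_colors, train_labels, k=5):
    """pixels: (n, 3) RGB values; returns one predicted label per pixel."""
    preds = []
    for p in pixels:
        d = np.linalg.norm(train_colors - p, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

# toy training set: bright, low-saturation colours as "tooth" (1), darker as "background" (0)
train_colors = np.array([[230, 225, 210], [210, 205, 190], [60, 40, 35], [90, 70, 60]], float)
train_labels = np.array([1, 1, 0, 0])
print(knn_classify(np.array([[220, 215, 200], [70, 50, 40]], float), train_colors, train_labels, k=3))
```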

  11. Detection of illegal transfer of videos over the Internet

    Science.gov (United States)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed using a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature: the first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, on the other hand, directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between its signature and those in the database, based on the ordinal signature and the SIFT descriptors separately. For the similarity measure, besides the Euclidean distance, Boolean operators are also utilized during the matching process. We have tested our system in several experiments on 50 videos (each about half an hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions provided by the TRECVID 2009 "content-based copy detection" task. In addition, we also refer to the requirements issued in the call for proposals by the MPEG standard on a similar task. Initial results show that our framework is effective and robust. Compared with our previous work, in addition to the reduction in storage space and processing time achieved in the ordinal-based method, introducing the SIFT features yields an overall accuracy, in terms of F1 measure, of about 96% (an improvement of about 8%).
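    An ordinal-based signature of the kind referred to in the abstract can be illustrated as the rank order of block mean intensities over a fixed grid, which stays stable under global brightness and contrast changes; the grid size below is arbitrary, and the two-level bitmap indexing is not shown.

```python
# Illustration of an ordinal-based keyframe signature (grid size is arbitrary):
# the frame is divided into blocks, and the signature is the rank order of the
# blocks' mean intensities, which is robust to global brightness/contrast changes.
import numpy as np

def ordinal_signature(frame, grid=(3, 3)):
    """frame: 2-D grayscale array; returns the rank of each block's mean, row-major."""
    h, w = frame.shape
    gh, gw = grid
    means = [frame[i * h // gh:(i + 1) * h // gh,
                   j * w // gw:(j + 1) * w // gw].mean()
             for i in range(gh) for j in range(gw)]
    return np.argsort(np.argsort(means))   # rank of each block

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (90, 120)).astype(float)
print(ordinal_signature(frame))
# identical ranks under a global brightness/contrast change:
print(ordinal_signature(frame * 0.8 + 20))
```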

  12. Infrared video based gas leak detection method using modified FAST features

    Science.gov (United States)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to detect invisible leaking gas, which is dangerous and can easily lead to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, existing infrared video based gas leak detection methods can flag any moving region of a video frame as leaking gas, without discriminating the properties of each detected region; for example, a walking person may also be detected as gas. To solve this problem, we propose a novel infrared video based gas leak detection method that is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features From Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm effectively suppresses most strong motion disturbances and achieves real-time leaking gas detection.
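    A rough sketch of the pipeline follows, using off-the-shelf OpenCV pieces as stand-ins for the paper's components: MOG2 for the GMM background model, the stock FAST detector instead of the modified mFAST, and an invented points-per-pixel threshold loosely mirroring the PPP selection rule.

```python
# Sketch with stand-in components (not the paper's mFAST or PPP definition):
# a GMM background model gives moving regions, FAST corners are counted per
# region, and regions with many corners per pixel are rejected as rigid-object
# motion rather than diffuse, gas-like motion.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
fast = cv2.FastFeatureDetector_create(threshold=20)

def candidate_gas_regions(frame_gray, max_points_per_pixel=0.02):
    """frame_gray: 8-bit grayscale frame; returns bounding boxes of gas-like regions."""
    mask = bg_model.apply(frame_gray)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    mask = cv2.medianBlur(mask, 5)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    regions = []
    for i in range(1, n_labels):                  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 100:
            continue
        roi = frame_gray[y:y + h, x:x + w]
        keypoints = fast.detect(roi, None)
        if len(keypoints) / float(area) < max_points_per_pixel:
            regions.append((x, y, w, h))          # few corners: diffuse, gas-like region
    return regions
```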

  13. An extended framework for adaptive playback-based video summarization

    Science.gov (United States)

    Peker, Kadir A.; Divakaran, Ajay

    2003-11-01

    In our previous work, we described an adaptive fast playback framework for video summarization where we changed the playback rate using the motion activity feature so as to maintain a constant "pace." This method provides an effective way of skimming through video, especially when the motion is not too complex and the background is mostly still, such as in surveillance video. In this paper, we present an extended summarization framework that, in addition to motion activity, uses semantic cues such as face or skin color appearance, speech and music detection, or other domain dependent semantically significant events to control the playback rate. The semantic features we use are computationally inexpensive and can be computed in compressed domain, yet are robust, reliable, and have a wide range of applicability across different content types. The presented framework also allows for adaptive summaries based on preference, for example, to include more dramatic vs. action elements, or vice versa. The user can switch at any time between the skimming and the normal playback modes. The continuity of the video is preserved, and complete omission of segments that may be important to the user is avoided by using adaptive fast playback instead of skipping over long segments. The rule-set and the input parameters can be further modified to fit a certain domain or application. Our framework can be used by itself, or as a subsequent presentation stage for a summary produced by any other summarization technique that relies on generating a sub-set of the content.
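    The playback-rate rule implied by the framework (keep the product of playback rate and motion activity roughly constant, and drop back to normal speed when a semantic cue is active) can be sketched as follows; the constants are invented.

```python
# Toy sketch of an adaptive playback-rate rule in the spirit of the framework
# (constants are invented): low motion activity allows faster playback, while
# segments with an active semantic cue (face, speech, music, etc.) are kept at
# normal speed so they are not skimmed over.
def playback_rate(motion_activity, semantic_cue_active,
                  target_pace=1.0, max_rate=8.0):
    if semantic_cue_active:
        return 1.0                                    # keep important segments at normal speed
    rate = target_pace / max(motion_activity, 1e-3)   # low activity -> faster playback
    return min(max(rate, 1.0), max_rate)

for activity, cue in [(0.2, False), (1.0, False), (0.5, True)]:
    print(activity, cue, playback_rate(activity, cue))
```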

  14. The impact of video technology on learning: A cooking skills experiment.

    Science.gov (United States)

    Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira

    2017-07-01

    This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Practical and Scalable Transmission of Segmented Video Sequences to Multiple Players Using H.264

    Science.gov (United States)

    Quax, Peter; di Fiore, Fabian; Issaris, Panagiotis; Lamotte, Wim; van Reeth, Frank

    We present a practical way to distribute viewports on the same video sequence to large amounts of players. Each of them has personal preferences to be met or is limited by the physical properties of his/her device (e.g., screen size of a PDA or processing power of a mobile phone). Instead of taking the naïve approach, in which sections of the video sequence are decoded and re-encoded for each of the clients, we have exploited advanced features offered by the H.264 codec to enable selection of parts of the video sequence by directly manipulating the encoder-generated bitstream. At the same time, we have overcome several practical issues presented by the fact that support for these features is sadly lacking from the state-of-the-art encoders available on the market. Two alternative solutions are discussed and have been implemented, enabling the generation of measurement results and comparison to alternative approaches.

  16. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).

  17. Effects of micro transactions on video games industry

    Directory of Open Access Journals (Sweden)

    Tomić Nenad

    2017-01-01

    Full Text Available During the twentieth century, the entertainment industry recorded steady revenue growth. The progress of information and communication technology (ICT) influenced the creation of a new segment of the industry at the beginning of the 1980s, known as the video game industry. During the first two decades, the dominant earning model for video game publishers was the sale of a full game, meaning that users had to pay in order to play the game (the pay-to-play concept). In the past ten years, publishers have developed a new approach which, instead of selling the entire game content at once, tends to decompose the sale into several smaller transactions. The prices of these supplements are often quoted in a virtual currency specific to the video game, rather than in a convertible currency, which creates additional confusion. The subject of the paper is to explain the essence of microtransactions as a type of electronic payment created in the video game industry and to observe their role in the process of industry transformation.

  18. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras aboard a sub-orbital rocket were controlled in real time from the ground during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the analog video signal sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  19. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.

  20. Segmentation of dance movement: Effects of expertise, visual familiarity, motor experience and music

    Directory of Open Access Journals (Sweden)

    Bettina E. Bläsing

    2015-01-01

    Full Text Available According to event segmentation theory, action perception depends on sensory cues and prior knowledge, and the segmentation of observed actions is crucial for understanding and memorizing these actions. While most activities in everyday life are characterized by external goals and interaction with objects or persons, this does not necessarily apply to dance-like actions. We investigated to what extent visual familiarity of the observed movement and accompanying music influence the segmentation of a dance phrase in dancers of different skill level and non-dancers. In Experiment 1, dancers and non-dancers repeatedly watched a video clip showing a dancer performing a choreographed dance phrase and indicated segment boundaries by key press. Dancers generally defined less segment boundaries than non-dancers, specifically in the first trials in which visual familiarity with the phrase was low. Music increased the number of segment boundaries in the non-dancers and decreased it in the dancers. The results suggest that dance expertise reduces the number of perceived segment boundaries in an observed dance phrase, and that the ways visual familiarity and music affect movement segmentation are modulated by dance expertise. In a second experiment, motor experience was added as factor, based on empirical evidence suggesting that action perception is modified by visual and motor expertise in different ways. In Experiment 2, the same task as in Experiment 1 was performed by dance amateurs, and was repeated by the same participants after they had learned to dance the presented dance phrase. Less segment boundaries were defined in the middle trials after participants had learned to dance the phrase, and music reduced the number of segment boundaries before learning. The results suggest that specific motor experience of the observed movement influences its perception and anticipation and makes segmentation broader, but not to the same degree as dance expertise

  1. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image

    International Nuclear Information System (INIS)

    Nova, Joao Luiz Leocadio da; Lopes, Ricardo Tadeu

    1996-01-01

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures with an online video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from the video image.

  2. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  3. A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications

    Science.gov (United States)

    2006-01-01

    segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive...multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial

  4. Adaptive deblocking and deringing of H.264/AVC video sequences

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Forchhammer, Søren

    2013-01-01

    We present a method to reduce blocking and ringing artifacts in H.264/AVC video sequences. For deblocking, the proposed method uses a quality measure of a block-based coded image to find filtering modes. Based on filtering modes, the images are segmented into three classes and a specific deblocking...

  5. The effects of video self-modeling on the decoding skills of children at risk for reading disabilities

    OpenAIRE

    Ayala, SM; O'Connor, R

    2013-01-01

    Ten first grade students who had responded poorly to a Tier 2 reading intervention in a response to intervention (RTI) model received an intervention of video self-modeling to improve decoding skills and sight word recognition. Students were video recorded blending and segmenting decodable words and reading sight words. Videos were edited and viewed a minimum of four times per week. Data were collected twice per week using curriculum-based measures. A single subject multiple baseline across p...

  6. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    Science.gov (United States)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using features of the video itself to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method of automatic video shot boundary detection with K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
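    A simplified sketch of the described scheme follows, with illustrative choices (histogram differences as the per-frame feature, a 1-D two-class k-means, and thresholds derived directly from the cluster centers) that are not necessarily the paper's exact design.

```python
# Simplified sketch (feature choice and threshold derivation are illustrative):
# frame-to-frame histogram differences are split into "significant" and
# "insignificant" groups with a 1-D two-class k-means, and a dual threshold
# derived from those groups marks abrupt and candidate gradual boundaries.
import numpy as np

def shot_boundaries(frame_hists):
    """frame_hists: (T, B) per-frame histograms; returns (abrupt, gradual) frame indices."""
    diffs = np.abs(np.diff(frame_hists, axis=0)).sum(axis=1)

    # 1-D two-class k-means on the difference magnitudes
    centers = np.array([diffs.min(), diffs.max()], dtype=float)
    for _ in range(20):
        assign = np.abs(diffs[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centers[k] = diffs[assign == k].mean()

    low, high = centers[0], centers[1]            # adaptive dual thresholds
    abrupt = np.where(diffs >= high)[0] + 1
    gradual = np.where((diffs > (low + high) / 2) & (diffs < high))[0] + 1
    return abrupt, gradual
```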

  7. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters at both the physical and application layers, over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. This work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.

  8. Long-Time Exposure to Violent Video Games Does Not Show Desensitization on Empathy for Pain: An fMRI Study.

    Science.gov (United States)

    Gao, Xuemei; Pan, Wei; Li, Chao; Weng, Lei; Yao, Mengyun; Chen, Antao

    2017-01-01

    As a typical form of empathy, empathy for pain refers to the perception and appraisal of others' pain, as well as the corresponding affective responses. Numerous studies have investigated the factors affecting empathy for pain, among which exposure to violent video games (VVGs) could change players' empathic responses to painful situations. However, it remains unclear whether exposure to VVGs influences the empathy for pain. In the present study, based on their exposure to VVGs, two groups of participants (18 in the VVG group, VG; 17 in the non-VVG group, NG) were screened from nearly 200 video game experience questionnaires. Functional magnetic resonance imaging data were then recorded while they viewed painful and non-painful stimuli. The results showed that the perception of others' pain did not differ significantly between groups in the examined brain regions, from which we infer that the desensitization effect of VVGs has been overrated.

  9. Self-Occlusions and Disocclusions in Causal Video Object Segmentation

    KAUST Repository

    Yang, Yanchao

    2016-02-19

    We propose a method to detect disocclusion in video sequences of three-dimensional scenes and to partition the disoccluded regions into objects, defined by coherent deformation corresponding to surfaces in the scene. Our method infers deformation fields that are piecewise smooth by construction without the need for an explicit regularizer and the associated choice of weight. It then partitions the disoccluded region and groups its components with objects by leveraging on the complementarity of motion and appearance cues: Where appearance changes within an object, motion can usually be reliably inferred and used for grouping. Where appearance is close to constant, it can be used for grouping directly. We integrate both cues in an energy minimization framework, incorporate prior assumptions explicitly into the energy, and propose a numerical scheme. © 2015 IEEE.

  10. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  11. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  12. Boundary error analysis and categorization in the TRECVID news story segmentation task

    NARCIS (Netherlands)

    Arlandis, J.; Over, P.; Kraaij, W.

    2005-01-01

    In this paper, an error analysis based on boundary error popularity (frequency), including semantic boundary categorization, is applied in the context of the news story segmentation task from TRECVID. Clusters of systems were defined based on the input resources they used, including video, audio and

  13. Long-Time Exposure to Violent Video Games Does Not Show Desensitization on Empathy for Pain: An fMRI Study

    Directory of Open Access Journals (Sweden)

    Xuemei Gao

    2017-05-01

    Full Text Available As a typical form of empathy, empathy for pain refers to the perception and appraisal of others' pain, as well as the corresponding affective responses. Numerous studies have investigated the factors affecting empathy for pain, among which exposure to violent video games (VVGs) could change players' empathic responses to painful situations. However, it remains unclear whether exposure to VVGs influences the empathy for pain. In the present study, based on their exposure to VVGs, two groups of participants (18 in the VVG group, VG; 17 in the non-VVG group, NG) were screened from nearly 200 video game experience questionnaires. Functional magnetic resonance imaging data were then recorded while they viewed painful and non-painful stimuli. The results showed that the perception of others' pain did not differ significantly between groups in the examined brain regions, from which we infer that the desensitization effect of VVGs has been overrated.

  14. Long-Time Exposure to Violent Video Games Does Not Show Desensitization on Empathy for Pain: An fMRI Study

    Science.gov (United States)

    Gao, Xuemei; Pan, Wei; Li, Chao; Weng, Lei; Yao, Mengyun; Chen, Antao

    2017-01-01

    As a typical form of empathy, empathy for pain refers to the perception and appraisal of others' pain, as well as the corresponding affective responses. Numerous studies have investigated the factors affecting empathy for pain, among which exposure to violent video games (VVGs) could change players' empathic responses to painful situations. However, it remains unclear whether exposure to VVGs influences the empathy for pain. In the present study, based on their exposure to VVGs, two groups of participants (18 in the VVG group, VG; 17 in the non-VVG group, NG) were screened from nearly 200 video game experience questionnaires. Functional magnetic resonance imaging data were then recorded while they viewed painful and non-painful stimuli. The results showed that the perception of others' pain did not differ significantly between groups in the examined brain regions, from which we infer that the desensitization effect of VVGs has been overrated. PMID:28512439

  15. Infertilitas feminis caused by salpingemphraxis: therapeutic alliances of oviduct recanalization and video-laparoscope

    International Nuclear Information System (INIS)

    Din Xinxue; Fan Xuemei; Chen Tianwu; Ren Chaofeng; Zhou Dan; You Haiyan

    2010-01-01

    Objective: To explore the clinical value of the therapeutic alliance of oviduct recanalization and video-laparoscopy in the treatment of infertilitas feminis caused by multiple salpingemphraxis. Methods: Sixty-seven patients with salpingemphraxis in 127 oviducts, complicated by adhesions of the fimbriated extremities, were enrolled in the study. All patients underwent separation of adhesions of the fimbriated extremities and neostomy using a video-laparoscope 2 to 3 days after selective oviduct recanalization. The therapeutic effects were retrospectively reviewed, focusing on the recanalization rate of the proximal three segments, the complete recanalization rate, the pregnancy rate and relevant complications during the follow-up period. Patients who remained infertile during the follow-up period underwent repeated salpingography to determine whether the oviduct was obstructed again. Results: The therapeutic alliance of oviduct recanalization and video-laparoscopy was performed successfully in this cohort. Owing to oviduct recanalization, the recanalization rate of the proximal three segments was 97.6% (124/127). With the combination of oviduct recanalization and video-laparoscopy, the complete recanalization rate was 98.4% (122/124). One year after the operation, the pregnancy rate, ectopic pregnancy rate and non-pregnancy rate were 58.2% (39/67), 4.5% (3/67) and 37.3% (25/67), respectively. The non-pregnant patients comprised those with repeated oviduct obstruction (25.4%, 17/67) and those without obstruction (11.9%, 8/67). Conclusion: The therapeutic alliance of oviduct recanalization and video-laparoscopy could be an effective method for the treatment of infertilitas feminis caused by multiple salpingemphraxis and is helpful in increasing the pregnancy rate. (authors)

  16. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  17. Layer-based buffer aware rate adaptation design for SHVC video streaming

    Science.gov (United States)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer based buffer aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such layer based coding structure allows fine granularity rate adaptation for the video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and the performance comparison between proposed layer-based streaming approach and conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
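    Assuming hypothetical layer bitrates and buffer thresholds, the layer-based request scheduling idea (secure the base layer first, then add enhancement layers only when the estimated bandwidth and buffer allow) can be sketched as follows; this is an illustration, not the paper's algorithm.

```python
# Hedged sketch of layer-based request scheduling (bitrates, buffer thresholds
# and the decision rule are invented): always secure the next base-layer
# segment first, then spend the remaining estimated bandwidth on enhancement
# layers, lowest layer first, while the buffer is comfortable.
def schedule_requests(est_bandwidth_kbps, buffer_sec,
                      layer_bitrates_kbps=(800, 1200, 2500),   # BL, EL1, EL2 (hypothetical)
                      min_buffer_sec=4.0):
    requests = [0]                                   # base layer is always fetched
    if buffer_sec < min_buffer_sec:
        return requests                              # protect against re-buffering
    budget = est_bandwidth_kbps - layer_bitrates_kbps[0]
    for layer, rate in enumerate(layer_bitrates_kbps[1:], start=1):
        if budget >= rate:
            requests.append(layer)                   # an EL depends on all lower layers
            budget -= rate
        else:
            break                                    # higher layers cannot be decoded without this one
    return requests

print(schedule_requests(est_bandwidth_kbps=3000, buffer_sec=8.0))   # -> [0, 1]
print(schedule_requests(est_bandwidth_kbps=6000, buffer_sec=8.0))   # -> [0, 1, 2]
print(schedule_requests(est_bandwidth_kbps=6000, buffer_sec=2.0))   # -> [0]
```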

  18. Query by example video based on fuzzy c-means initialized by fixed clustering center

    Science.gov (United States)

    Hou, Sujuan; Zhou, Shangbo; Siddique, Muhammad Abubakar

    2012-04-01

    Currently, the high complexity of video content poses two major challenges for fast retrieval: (1) efficient similarity measurement, and (2) efficient indexing of compact representations. A video-retrieval strategy based on fuzzy c-means (FCM) is presented for query by example. Initially, the query video is segmented and represented by a set of shots; each shot is represented by a key frame, and video processing techniques are used to extract visual cues that represent the key frame. Next, because the FCM algorithm is sensitive to initialization, the cluster centers are initialized with the shots of the query video so that appropriate convergence can be achieved. After the FCM clusters have been initialized by the query video, each shot of the query video is treated as a benchmark point in its cluster, and each shot in the database receives a class label. The similarity between a database shot and the benchmark point with the same class label can then be expressed as the distance between them. Finally, the similarity between the query video and a video in the database is expressed as the number of similar shots. Our experimental results demonstrate the performance of the proposed approach.

  19. Using learning analytics to evaluate a video-based lecture series.

    Science.gov (United States)

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA) - analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR; the percentage of viewers watching at a time point compared to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
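    A minimal sketch of the analysis described (fitting a linear trend to the average audience-retention curve and flagging time points that rise above it) is given below, using synthetic retention values rather than the study's data.

```python
# Minimal sketch of the kind of analysis described: fit a linear model to the
# average audience-retention curve and flag time points where retention rises
# above the fitted trend (candidate "core concept" segments).  The retention
# values below are synthetic.
import numpy as np

t = np.arange(0, 600, 30)                                  # seconds into the video (20 samples)
retention = 100 - 0.08 * t + np.array(                     # synthetic AR curve (%) with two bumps
    [0, 0, 1, 0, 8, 9, 0, 0, 0, 6, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0])

slope, intercept = np.polyfit(t, retention, 1)             # uniform linear decline
trend = slope * t + intercept
highlights = t[retention - trend > 3]                      # transient AR increases
print(f"decline: {slope:.3f} %/s; candidate segments at t = {highlights.tolist()} s")
```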

  20. Remote Video Monitor of Vehicles in Cooperative Information Platform

    Science.gov (United States)

    Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan

    Detection of vehicles plays an important role in modern intelligent traffic management, and pattern recognition is a hot issue in computer vision. An auto-recognition system within a cooperative information platform is studied. In the cooperative platform, a 3G wireless network, GPS, GPRS (CDMA), Internet (Intranet), remote video monitoring and M-DMB networks are integrated. Remote video is captured at the terminals and sent to the cooperative platform, where it is analyzed by the auto-recognition system. The images are preprocessed and segmented, followed by feature extraction, template matching and pattern recognition. The system identifies different vehicle models and produces vehicular traffic statistics. Finally, the implementation of the system is described.

  1. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... in three layers: binary shape layer, opaque layer, and intermediate layer. Thus, the latter two layers replace the single transparency layer of MPEG-4 Part 2. Different encoding schemes are specifically designed for each layer, utilizing cross-layer correlations to reduce the bit rate. First, the binary...... demonstrating that the proposed techniques provide substantial bit rate savings coding shape and transparency when compared to the tools adopted in MPEG-4 Part 2....

  2. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7+/-0.3) pixels and a mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
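
    For context, mean re-projection error as quoted above can be computed with a standard OpenCV calibration workflow. The paper uses its own automated, endoscope-specific procedure; the checkerboard pipeline and file paths below are generic assumptions, shown only to illustrate how the error metric is obtained.

```python
# Generic camera calibration sketch with OpenCV; not the paper's pipeline.
import cv2
import glob
import numpy as np

pattern = (9, 6)                                   # inner corners of a checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_frames/*.png"):       # hypothetical frame dump
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)

# Mean re-projection error in pixels (the kind of figure quoted as (0.7 +/- 0.3) px).
errors = []
for op, ip, rv, tv in zip(obj_points, img_points, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
    errors.append(cv2.norm(ip, proj, cv2.NORM_L2) / len(proj))
print("mean re-projection error (px):", np.mean(errors))
```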

  3. Video Bioinformatics Analysis of Human Embryonic Stem Cell Colony Growth

    Science.gov (United States)

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-01-01

    Because video data are complex and are comprised of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating that the CL-Quant recipes were reliable. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
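
    A rough open-source analogue of the pixel-counting idea behind the recipes: segment each frame into colony versus background and track colony area over time. Otsu thresholding and the file name are stand-in assumptions for the proprietary CL-Quant segmentation recipe.

```python
# Segment each time-lapse frame and count colony pixels over time (sketch).
import cv2
import numpy as np

def colony_area(frame_gray):
    blur = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return int(np.count_nonzero(mask))

cap = cv2.VideoCapture("hesc_timelapse.avi")        # hypothetical time-lapse file
areas = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    areas.append(colony_area(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()

# Growth rate ~ slope of colony area (in pixels) against frame index.
if len(areas) > 1:
    slope = np.polyfit(np.arange(len(areas)), areas, 1)[0]
    print(f"colony grows by ~{slope:.1f} px per frame")
```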

  4. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    Science.gov (United States)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimation across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and is able to improve the subjective rendering quality.

  5. Automatic Video-based Analysis of Human Motion

    DEFF Research Database (Denmark)

    Fihl, Preben

    The human motion contains valuable information in many situations and people frequently perform an unconscious analysis of the motion of other people to understand their actions, intentions, and state of mind. An automatic analysis of human motion will facilitate many applications and thus has...... received great interest from both industry and research communities. The focus of this thesis is on video-based analysis of human motion and the thesis presents work within three overall topics, namely foreground segmentation, action recognition, and human pose estimation. Foreground segmentation is often...... the first important step in the analysis of human motion. By separating foreground from background the subsequent analysis can be focused and efficient. This thesis presents a robust background subtraction method that can be initialized with foreground objects in the scene and is capable of handling...

  6. Physical activity patterns across time-segmented youth sport flag football practice.

    Science.gov (United States)

    Schlechter, Chelsey R; Guagliano, Justin M; Rosenkranz, Richard R; Milliken, George A; Dzewaltowski, David A

    2018-02-08

    Youth sport (YS) reaches a large number of children world-wide and contributes substantially to children's daily physical activity (PA), yet less than half of YS time has been shown to be spent in moderate-to-vigorous physical activity (MVPA). Physical activity during practice is likely to vary depending on practice structure that changes across YS time, therefore the purpose of this study was 1) to describe the type and frequency of segments of time, defined by contextual characteristics of practice structure, during YS practices and 2) determine the influence of these segments on PA. Research assistants video-recorded the full duration of 28 practices from 14 boys' flag football teams (2 practices/team) while children concurrently (N = 111, aged 5-11 years, mean 7.9 ± 1.2 years) wore ActiGraph GT1M accelerometers to measure PA. Observers divided videos of each practice into continuous context time segments (N = 204; mean-segments-per-practice = 7.3, SD = 2.5) using start/stop points defined by change in context characteristics, and assigned a value for task (e.g., management, gameplay, etc.), member arrangement (e.g., small group, whole group, etc.), and setting demand (i.e., fosters participation, fosters exclusion). Segments were then paired with accelerometer data. Data were analyzed using a multilevel model with segment as unit of analysis. Whole practices averaged 34 ± 2.4% of time spent in MVPA. Free-play (51.5 ± 5.5%), gameplay (53.6 ± 3.7%), and warm-up (53.9 ± 3.6%) segments had greater percentage of time (%time) in MVPA compared to fitness (36.8 ± 4.4%) segments (p ≤ .01). Greater %time was spent in MVPA during free-play segments compared to scrimmage (30.2 ± 4.6%), strategy (30.6 ± 3.2%), and sport-skill (31.6 ± 3.1%) segments (p ≤ .01), and in segments that fostered participation (36.1 ± 2.7%) than segments that fostered exclusion (29.1 ± 3.0%; p ≤ .01

  7. Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks.

    Directory of Open Access Journals (Sweden)

    Zobeida Jezabel Guzman-Zavaleta

    Full Text Available Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness. Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes

  8. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  9. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians in going through the abnormal contents of a video more effectively. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. For the experiments, we build a new gastroscopic video dataset captured from 30 volunteers, with more than 400k images, and compare our method with state-of-the-art methods using content consistency, index consistency and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of the 30 videos evaluated based on content consistency, 24 of the 30 videos evaluated based on index consistency, and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Daily Digest Generation of Kindergartner from Surveillance Video

    Science.gov (United States)

    Ishikawa, Tomoya; Wang, Yu; Kato, Jien

    Nowadays, children spend most of their time in kindergartens and nursery schools. This directly brings a requirement to the parents: they want to see how every day goes with their kids. To meet this requirement, in this paper we propose a method to automatically generate a video digest that records kids' daily life in kindergarten. Our method involves two steps. The first is to efficiently narrow down the search space by analyzing the noisy RFID tag log which records the kids' temporal locations, while the second is to use visual features and time constraints to recognize events and pick out video segments for each individual event. The accuracy of our method was evaluated with a quantitative experiment, and the superiority of the digest generated by our method was confirmed via a questionnaire survey.

  11. Gender and video games: How is female gender generally represented in various genres of video games?

    Directory of Open Access Journals (Sweden)

    Xeniya Kondrat

    2015-06-01

    Full Text Available Gender representation in video games is a sensitive current topic in entertainment media. Gender studies in video games look at the difference between the portrayal of female and male characters. Most video games tend to over-represent stereotypes and, in general, use extensive violence and cruelty (Maietti, 2008). Some video games use wrong, disrespectful and sometimes even violent representations of both genders. This research paper focuses on the current representation of the female gender in video games: how women are represented, stereotyped and used as characters in games. The results show that there is a difference between how women were portrayed in the past and how they are portrayed in the present. This research paper is based on previous academic research and on results obtained from an online questionnaire among game players and two interviews with professionals in the field of game design. The results show that there is still negative stereotyping of the female gender. At the same time, however, the respondents' answers show that the target audience of video games desires improvements in the presentation of the female gender as well as the male.

  12. The IXV Ground Segment design, implementation and operations

    Science.gov (United States)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

    The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed a successful re-entry demonstration mission on 11 February 2015. The project objectives were the design, development, manufacturing and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during the pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it toward the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice and video exchange. This paper describes the concept, architecture, development, implementation and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV mission.

  13. Segmentation and packaging reactor vessels internals

    International Nuclear Information System (INIS)

    Boucau, Joseph

    2014-01-01

    Document available in abstract form only, full text follows: With more than 25 years of experience in the development of reactor vessel internals and reactor vessel segmentation and packaging technology, Westinghouse has accumulated significant know-how in the reactor dismantling market. The primary challenges of a segmentation and packaging project are to separate the highly activated materials from the less-activated materials and package them into appropriate containers for disposal. Since disposal cost is a key factor, it is important to plan and optimize waste segmentation and packaging. The choice of the optimum cutting technology is also important for a successful project implementation and depends on some specific constraints. Detailed 3-D modeling is the basis for tooling design and provides invaluable support in determining the optimum strategy for component cutting and disposal in waste containers, taking account of the radiological and packaging constraints. The usual method is to start at the end of the process, by evaluating handling of the containers, the waste disposal requirements, what type and size of containers are available for the different disposal options, and working backwards to select a cutting method and finally the cut geometry required. The 3-D models can include intelligent data such as weight, center of gravity, curie content, etc, for each segmented piece, which is very useful when comparing various cutting, handling and packaging options. The detailed 3-D analyses and thorough characterization assessment can draw the attention to material potentially subject to clearance, either directly or after certain period of decay, to allow recycling and further disposal cost reduction. Westinghouse has developed a variety of special cutting and handling tools, support fixtures, service bridges, water filtration systems, video-monitoring systems and customized rigging, all of which are required for a successful reactor vessel internals

  14. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by certain camera operations. We also compare the performance of our algorithm with that of the standard twin-comparison method.
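
    A compact sketch of the two cues combined above: the twin-comparison test on frame-to-frame histogram differences, plus the absolute difference in dominant-colour pixel ratio. The green-field colour range and the thresholds are illustrative assumptions, not values from the paper.

```python
# Twin-comparison shot boundary detection with a dominant-colour-ratio check.
import cv2
import numpy as np

T_HIGH, T_LOW, RATIO_T = 0.5, 0.15, 0.2        # illustrative thresholds

def hist_diff(a, b):
    ha = cv2.calcHist([a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    hb = cv2.calcHist([b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.compareHist(ha, hb, cv2.HISTCMP_BHATTACHARYYA)

def dominant_ratio(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    field = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # "green" pixels
    return np.count_nonzero(field) / field.size

def detect_boundaries(frames):
    """frames: list of BGR frames; returns detected cuts and gradual transitions."""
    boundaries, gradual_start = [], None
    for i in range(1, len(frames)):
        d = hist_diff(frames[i - 1], frames[i])
        r = abs(dominant_ratio(frames[i - 1]) - dominant_ratio(frames[i]))
        if d > T_HIGH and r > RATIO_T:
            boundaries.append(("cut", i))
        elif d > T_LOW and gradual_start is None:
            gradual_start = i                                  # candidate gradual transition
        elif d <= T_LOW and gradual_start is not None:
            # twin comparison: accumulated difference across the candidate span
            acc = hist_diff(frames[gradual_start - 1], frames[i])
            r_acc = abs(dominant_ratio(frames[gradual_start - 1]) - dominant_ratio(frames[i]))
            if acc > T_HIGH and r_acc > RATIO_T:
                boundaries.append(("gradual", gradual_start, i))
            gradual_start = None
    return boundaries
```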

  15. Mediastinoscopic Bilateral Bronchial Release for Long Segmental Resection and Anastomosis of the Trachea

    OpenAIRE

    Kang, Jeong-Han; Park, In Kyu; Bae, Mi-Kyung; Hwang, Yoohwa

    2011-01-01

    The extent of resection and release of the trachea is important for successful anastomosis. Bilateral bronchial dissection is one of the release techniques for resection of the lower trachea. We present the experience of cervical video-assisted mediastinoscopic bilateral bronchial release for long segmental resection and anastomosis of the lower trachea.

  16. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. During video transmission distortion often occurs, so the received video has poor quality. Key-frame selection algorithms are flexible with respect to changes in the video, but they omit the temporal information of the video sequence. To minimize distortion between the original and received video, we added a methodology based on a sequential distortion minimization algorithm. Its aim is to create a new, improved video without significant loss of content relative to the original, corrected sequentially. The reliability of video transmission was observed on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. Video transmission was also investigated using the SEDIM (Sequential Distortion Minimization) method and without SEDIM. The experimental results showed that the average PSNR (Peak Signal-to-Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and the comparison with the proposed method show good performance. A USRP board was used as the RF front-end at 2.2 GHz.
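
    The PSNR and SSIM figures quoted above can be reproduced with a frame-wise comparison between the original and the received/reconstructed sequence, averaged over all frames. The file names below are placeholders; this is only a measurement sketch, not the SEDIM algorithm itself.

```python
# Frame-wise PSNR/SSIM between an original and a received video, averaged.
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def average_quality(path_ref, path_test):
    ref, test = cv2.VideoCapture(path_ref), cv2.VideoCapture(path_test)
    psnrs, ssims = [], []
    while True:
        ok1, f1 = ref.read()
        ok2, f2 = test.read()
        if not (ok1 and ok2):
            break
        g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
        psnrs.append(peak_signal_noise_ratio(g1, g2, data_range=255))
        ssims.append(structural_similarity(g1, g2, data_range=255))
    ref.release()
    test.release()
    return np.mean(psnrs), np.mean(ssims)

print(average_quality("original_640x480.avi", "received_sedim.avi"))
```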

  17. Improved people detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Carvalho, Paulo Victor R., E-mail: calexandre@ien.gov.br, E-mail: mol@ien.gov.br, E-mail: paulov@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.br, E-mail: eduardo@smt.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Eletrica; Waintraub, Fabio, E-mail: fabiowaintraub@hotmail.com [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola Politecnica. Departamento de Engenharia Eletronica e de Computacao

    2013-07-01

    This work describes improvements in a surveillance system for safety purposes in nuclear plants. The objective is to track people online in video in order to estimate the dose received by personnel during working tasks executed in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a nuclear research reactor, Argonauta. Cameras have been installed within the Argonauta room, supplying the data needed. Video processing methods were combined for detecting and tracking people in video. More specifically, segmentation, performed by background subtraction, was combined with a tracking method based on color distribution. The use of both methods improved the overall results. An alternative approach, based on blind source signal separation, was also evaluated. Results are discussed, along with perspectives. (author)
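
    A rough sketch of the two combined steps described above: background subtraction to detect a person, then colour-distribution tracking of the detected region. OpenCV's MOG2 model and CamShift back-projection stand in for the authors' actual implementation, and the video file name is hypothetical.

```python
# Background subtraction for detection + colour-distribution tracking (sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("argonauta_cam1.avi")           # hypothetical camera feed
bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
track_window, roi_hist = None, None
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    if track_window is None:
        # Detection phase: largest foreground blob becomes the person to track.
        fg = bg.apply(frame)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            if w * h > 2000:
                track_window = (x, y, w, h)
                roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [32], [0, 180])
                cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # Tracking phase: follow the colour distribution of the detected person.
        back = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.CamShift(back, track_window, term)
        x, y, w, h = track_window
        print("person at", (x + w // 2, y + h // 2))    # position for dose mapping
cap.release()
```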

  18. Improved people detection in nuclear plants by video processing for safety purpose

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Carvalho, Paulo Victor R.; Seixas, Jose M.; Silva, Eduardo Antonio B.; Waintraub, Fabio

    2013-01-01

    This work describes improvements in a surveillance system for safety purposes in nuclear plants. The objective is to track people online in video in order to estimate the dose received by personnel during working tasks executed in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a nuclear research reactor, Argonauta. Cameras have been installed within the Argonauta room, supplying the data needed. Video processing methods were combined for detecting and tracking people in video. More specifically, segmentation, performed by background subtraction, was combined with a tracking method based on color distribution. The use of both methods improved the overall results. An alternative approach, based on blind source signal separation, was also evaluated. Results are discussed, along with perspectives. (author)

  19. Spontaneous Brain Activity Did Not Show the Effect of Violent Video Games on Aggression: A Resting-State fMRI Study

    OpenAIRE

    Wei Pan; Wei Pan; Wei Pan; Xuemei Gao; Shuo Shi; Fuqu Liu; Chao Li

    2018-01-01

    A great deal of empirical research has shown that long-term exposure to violent video games can lead to a series of negative effects. Although research has focused on the neural basis of the correlation between violent video games and aggression, little is known about whether spontaneous brain activity is associated with violent video game exposure. To address this question, we measured spontaneous brain activity using resting-state functional magnetic resonance imaging (fMRI). We used the...

  20. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  1. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
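
    A minimal character-level CRF sketch in the spirit of the two-step framework: a baseline tag can be fed in as an extra feature, and a second CRF learns to map it toward the domain segmentation. The sklearn-crfsuite library is used as a stand-in for the authors' CRF toolkit, and the one-sentence training corpus is obviously a toy.

```python
# Character-level BMES tagging with a CRF (toy sketch, not GeoSegmenter itself).
import sklearn_crfsuite

def char_features(sent, i, base_tags=None):
    feats = {"char": sent[i],
             "prev": sent[i - 1] if i > 0 else "<s>",
             "next": sent[i + 1] if i < len(sent) - 1 else "</s>"}
    if base_tags is not None:              # step 2: include the baseline segmenter's tag
        feats["base_tag"] = base_tags[i]
    return feats

# Toy corpus: gold BMES tags (B=begin, M=middle, E=end, S=single-character word).
train = [("石灰岩形成于海洋", list("BMEBESBE"))]

X = [[char_features(s, i) for i in range(len(s))] for s, _ in train]
y = [tags for _, tags in train]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict([[char_features("石灰岩", i) for i in range(3)]]))
```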

  2. Pollen Bearing Honey Bee Detection in Hive Entrance Video Recorded by Remote Embedded System for Pollination Monitoring

    Science.gov (United States)

    Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.

    2016-06-01

    Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for pollen-bearing honey bee detection in surveillance video obtained at the entrance of a hive. The proposed system can be used as part of a more complex system for tracking and counting of honey bees, with remote pollination monitoring as the final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees that do not have a pollen load, is performed using a nearest mean classifier, with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. That favors the proposed method, particularly having in mind that real-time video transmission to a remote high-performance computing workstation is still an issue, and the transfer of the obtained parameters of the pollination process is much easier.
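
    A sketch of the classification step only: a nearest-mean classifier on a two-dimensional descriptor (colour variance, eccentricity). The feature values are synthetic and the upstream bee segmentation is assumed to be done already.

```python
# Nearest mean classifier on a (colour variance, eccentricity) descriptor (sketch).
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
# Pollen sacs add a bright colour patch, so pollen-bearing bees tend to show
# higher colour variance (synthetic values, for illustration only).
pollen = rng.normal([0.08, 0.75], [0.02, 0.05], size=(50, 2))
no_pollen = rng.normal([0.03, 0.80], [0.01, 0.05], size=(50, 2))

X = np.vstack([pollen, no_pollen])
y = np.array([1] * 50 + [0] * 50)

clf = NearestCentroid().fit(X, y)                  # the nearest mean classifier
print("predicted:", clf.predict([[0.07, 0.74], [0.02, 0.82]]))
```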

  3. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
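
    A condensed sketch of the keyframe pipeline described above: score frames by colour-histogram change, keep the strongest changes as candidates, then cluster the candidates and return one frame index per cluster. The semantic cues used in the paper (faces, motion, audio events) are omitted here, and all thresholds are assumptions.

```python
# Candidate keyframe selection by histogram change + clustering (sketch).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def histogram(frame):
    h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(h, h).flatten()

def extract_keyframes(path, n_keyframes=5):
    cap, hists = cv2.VideoCapture(path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hists.append(histogram(frame))
    cap.release()
    hists = np.array(hists)

    diffs = np.linalg.norm(np.diff(hists, axis=0), axis=1)
    candidates = np.where(diffs > diffs.mean() + diffs.std())[0] + 1
    if len(candidates) < n_keyframes:                 # fall back to even sampling
        candidates = np.linspace(0, len(hists) - 1, n_keyframes, dtype=int)

    km = KMeans(n_clusters=min(n_keyframes, len(candidates)), n_init=10)
    labels = km.fit_predict(hists[candidates])
    # Pick one representative candidate per cluster (here simply the first).
    return sorted(int(candidates[labels == k][0]) for k in range(km.n_clusters))

print(extract_keyframes("clip.mp4"))                  # hypothetical short clip
```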

  4. A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications

    National Research Council Canada - National Science Library

    Richer, Justin; Drury, Jill L

    2006-01-01

    .... This paper segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general...

  5. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data is collected from a real 4G SCM testbed for investigating factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor affecting video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.
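
    An illustrative regression setup on synthetic data (not the testbed measurements): predict a quality score from network-side features and inspect which feature dominates, mirroring the finding about channel attenuation. Feature names and the model choice are assumptions.

```python
# Predict video quality from network features and rank feature importance (sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
attenuation = rng.uniform(0, 30, n)           # dB, hypothetical channel attenuation
throughput = rng.uniform(1, 20, n)            # Mbps
packet_loss = rng.uniform(0, 5, n)            # %
quality = (90 - 2.0 * attenuation - 3.0 * packet_loss + 0.5 * throughput
           + rng.normal(0, 3, n))             # synthetic PSNR-like score

X = np.column_stack([attenuation, throughput, packet_loss])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, quality)
print(dict(zip(["attenuation", "throughput", "packet_loss"],
               np.round(model.feature_importances_, 2))))
```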

  6. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong; Zhang, Xiangliang; Shihada, Basem

    2013-01-01

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data is collected from a real 4G SCM testbed for investigating factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor affecting video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.

  7. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Full Text Available Video surveillance systems are based on the video and image processing research areas within computer science. Video processing covers various methods used to detect changes in the scene of a given video. Nowadays, video processing is one of the important areas of computer science. Two-dimensional videos are used in various segmentation, object detection and tracking processes found in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking, and similar methods exist in the literature. In this research study, a more efficient method is proposed as an addition to the existing ones. Based on the model produced using adaptive background subtraction (ABS), an object detection and tracking system is implemented in software. The performance of the developed system is tested experimentally on related video datasets. The experimental results and discussion are given in the study.
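
    A minimal adaptive background subtraction detector in the spirit of the system described above. OpenCV's adaptive KNN background model stands in for the paper's own ABS model; the input file name and the blob-size threshold are placeholders.

```python
# Adaptive background subtraction + contour filtering for moving objects (sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")             # placeholder input video
subtractor = cv2.createBackgroundSubtractorKNN(history=500, dist2Threshold=400,
                                               detectShadows=True)
kernel = np.ones((5, 5), np.uint8)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # background model adapts every frame
    mask[mask == 127] = 0                              # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    print(frame_idx, boxes)                            # boxes would feed a tracker
    frame_idx += 1
cap.release()
```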

  8. Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks

    Science.gov (United States)

    Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya

    2016-01-01

    This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…

  9. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose a special software program for the subjective assessment of the quality of all the tested video sequences was developed. It was developed in accordance with Recommendation ITU-T P.910, since it is suitable for the testing of multimedia applications. The obtained results show that, with the proposed selective intra prediction and optimized inter prediction algorithm, there is only a small difference in picture quality (signal-to-noise ratio) between the decoded original and modified video sequences.

  10. Moving object detection in top-view aerial videos improved by image stacking

    Science.gov (United States)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
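
    A sketch of the stacking idea: register a few consecutive frames to a reference with a translation-only ECC alignment, then median-stack them so the static background is denoised while misaligned content averages out. The file name, frame count and motion model are assumptions; the paper applies registration to the small moving objects themselves, which is not reproduced here.

```python
# Align consecutive frames and median-stack them (generic image stacking sketch).
import cv2
import numpy as np

def stack(frames):
    ref = cv2.cvtColor(frames[len(frames) // 2], cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5)
    aligned = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)
        try:
            _, warp = cv2.findTransformECC(ref, gray, warp, cv2.MOTION_TRANSLATION,
                                           criteria, None, 5)
        except cv2.error:
            pass                                       # keep identity warp on failure
        aligned.append(cv2.warpAffine(gray, warp, ref.shape[::-1],
                                      flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)

cap = cv2.VideoCapture("uav_topview.avi")              # placeholder UAV video
frames = [cap.read()[1] for _ in range(7)]             # assumes at least 7 frames exist
cap.release()
cv2.imwrite("stacked.png", stack(frames))
```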

  11. Development of a video-delivered relaxation treatment of late-life anxiety for veterans.

    Science.gov (United States)

    Gould, Christine E; Zapata, Aimee Marie L; Bruce, Janine; Bereknyei Merrell, Sylvia; Wetherell, Julie Loebach; O'Hara, Ruth; Kuhn, Eric; Goldstein, Mary K; Beaudreau, Sherry A

    2017-10-01

    Behavioral treatments reduce anxiety, yet many older adults may not have access to these efficacious treatments. To address this need, we developed and evaluated the feasibility and acceptability of a video-delivered anxiety treatment for older Veterans. This treatment program, BREATHE (Breathing, Relaxation, and Education for Anxiety Treatment in the Home Environment), combines psychoeducation, diaphragmatic breathing, and progressive muscle relaxation training with engagement in activities. A mixed methods concurrent study design was used to examine the clarity of the treatment videos. We conducted semi-structured interviews with 20 Veterans (M age = 69.5, SD = 7.3 years; 55% White, Non-Hispanic) and collected ratings of video clarity. Quantitative ratings revealed that 100% of participants generally or definitely could follow breathing and relaxation video instructions. Qualitative findings, however, demonstrated more variability in the extent to which each video segment was clear. Participants identified both immediate benefits and motivation challenges associated with a video-delivered treatment. Participants suggested that some patients may need encouragement, whereas others need face-to-face therapy. Quantitative ratings of video clarity and qualitative findings highlight the feasibility of a video-delivered treatment for older Veterans with anxiety. Our findings demonstrate the importance of ensuring patients can follow instructions provided in self-directed treatments and the role that an iterative testing process has in addressing these issues. Next steps include testing the treatment videos with older Veterans with anxiety disorders.

  12. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    Science.gov (United States)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract a large amount of information for analyzing traffic scenes. The rapidly growing number of vehicles on the road, as well as the significant increase in the number of cameras, has dictated the need for traffic surveillance systems. Such systems can take over the burdensome tasks performed by human operators in traffic monitoring centres. The main technique proposed in this paper concentrates on developing multiple vehicle detection and segmentation for monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from heavy traffic scenes by optical flow estimation alongside a blob analysis technique in order to detect the moving vehicles. Prior to segmentation, the blob analysis technique computes the region of interest corresponding to each moving vehicle, which is then used to create a bounding box around that particular vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
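
    A sketch of motion-based vehicle segmentation along these lines: dense Farneback optical flow gives per-pixel motion, moving pixels are thresholded into blobs, and blob analysis produces one bounding box per vehicle. The flow parameters, motion threshold and blob-size limit are toy assumptions.

```python
# Optical flow + blob analysis for moving vehicle detection (sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("cctv_traffic.avi")              # placeholder CCTV feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    moving = (magnitude > 1.0).astype(np.uint8) * 255    # pixels moving > 1 px/frame
    moving = cv2.morphologyEx(moving, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

    contours, _ = cv2.findContours(moving, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 800:                     # keep vehicle-sized blobs only
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
cap.release()
```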

  13. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
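
    A toy PyTorch sketch of the core input/output structure only: a CNN that takes a stack of neighbouring frames (here 5 RGB frames, i.e. 15 input channels) and predicts the deblurred centre frame. The real network architecture, alignment strategy, training data and losses in the paper are far more elaborate; everything below is an assumption for illustration.

```python
# Minimal frame-stack-to-frame CNN for video deblurring (toy sketch).
import torch
import torch.nn as nn

class FrameStackDeblur(nn.Module):
    def __init__(self, n_frames=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * n_frames, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, stacked):                  # stacked: (B, 3*n_frames, H, W)
        return self.net(stacked)

model = FrameStackDeblur()
blurry_stack = torch.rand(2, 15, 128, 128)       # two synthetic training patches
sharp_center = torch.rand(2, 3, 128, 128)        # corresponding sharp centre frames

loss = nn.functional.mse_loss(model(blurry_stack), sharp_center)
loss.backward()                                  # one supervised step (optimiser omitted)
print(float(loss))
```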

  14. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... findings: 1) They are based on a collaborative approach. 2) The sketches act as a mean to externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and shows how the factors relate to steps, where...... the participants: shape, record, review and edit their work, leading the participants to new insights about their work....

  15. Computer simulation of orthognathic surgery with video imaging

    Science.gov (United States)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

    Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and an aesthetical correction of the face profile. A new procedure will be presented which supports the maxillo-facial surgeon in planning the operation and which also presents the patient the result of the treatment by video images. Once an x-ray has been digitized it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters, and a new soft tissue profile can be calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987-91, 76 out of 121 patients were able to be evaluated. The deviation in profile change varied between .0 and 1.6mm. A side effect of the practical applications was an increase in patient compliance.

  16. Assessment of YouTube videos as a source of information on medication use in pregnancy.

    Science.gov (United States)

    Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara N D; Garcia, Amanda P; Gilboa, Suzanne M

    2016-01-01

    When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly accessible YouTube videos that discuss medication use in pregnancy. Using 2023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be "safe" or "unsafe" in pregnancy and compared that assessment with the medication's Teratogen Information System (TERIS) rating. After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated that they were unsafe for use in pregnancy. However, the TERIS ratings for medication products in this class range from "unlikely" to "minimal" teratogenic risk. For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a platform for communicating evidence-based medication safety information. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  18. Bullet-Block Science Video Puzzle

    Science.gov (United States)

    Shakur, Asif

    2015-01-01

    A science video blog, which has gone viral, shows a wooden block shot by a vertically aimed rifle. The video shows that the block hit dead center goes exactly as high as the one shot off-center. (Fig. 1). The puzzle is that the block shot off-center carries rotational kinetic energy in addition to the gravitational potential energy. This leads a…
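
    For reference, a standard textbook resolution of the puzzle (this is not quoted from the article, whose text is truncated above): the collision conserves linear momentum regardless of where the bullet hits, so the centre of mass leaves with the same speed and rises to the same height; the extra rotational kinetic energy in the off-centre case is paid for by less energy being lost to deformation during penetration.

```latex
% Same bullet momentum -> same centre-of-mass velocity -> same rise height.
\[
  m v_0 = (M+m)\,v_{\mathrm{cm}}
  \quad\Longrightarrow\quad
  v_{\mathrm{cm}} = \frac{m v_0}{M+m}
  \quad\text{(identical for centre and off-centre shots)},
\]
\[
  h = \frac{v_{\mathrm{cm}}^{2}}{2g},
  \qquad\text{independent of the rotational energy } \tfrac{1}{2} I \omega^{2}
  \text{ acquired in the off-centre shot.}
\]
```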

  19. Why segmentation matters: Experience-driven segmentation errors impair "morpheme" learning.

    Science.gov (United States)

    Finn, Amy S; Hudson Kam, Carla L

    2015-09-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner's native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner's native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. (c) 2015 APA, all rights reserved.

  20. VideoSET: Video Summary Evaluation through Text

    OpenAIRE

    Yeung, Serena; Fathi, Alireza; Fei-Fei, Li

    2014-01-01

    In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text ...

  1. QLab 3 show control projects for live performances & installations

    CERN Document Server

    Hopgood, Jeromy

    2013-01-01

    Used from Broadway to Britain's West End, QLab software is the tool of choice for many of the world's most prominent sound, projection, and integrated media designers. QLab 3 Show Control: Projects for Live Performances & Installations is a project-based book on QLab software covering sound, video, and show control. With information on both sound and video system basics and the more advanced functions of QLab such as MIDI show control, new OSC capabilities, networking, video effects, and microphone integration, each chapter's specific projects will allow you to learn the software's capabilities.

  2. Adaptive block online learning target tracking based on super pixel segmentation

    Science.gov (United States)

    Cheng, Yue; Li, Jianzeng

    2018-04-01

    Video target tracking has made considerable progress through sustained research, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation. First, we divide the selected region into superpixels using the simple linear iterative clustering (SLIC) algorithm; then we group them into blocks with an improved density-based spatial clustering of applications with noise (DBSCAN) algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then discards sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete target tracking. The experimental results show that, compared with current mainstream algorithms, our algorithm works effectively under occlusion, rotation change, scale change and many other challenging conditions in target tracking.
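
    A sketch of the partitioning step only: SLIC superpixels inside the selected target region, then DBSCAN grouping of superpixels (by mean colour and position) into the sub-blocks that would each get their own tracker. The feature design, DBSCAN parameters and the random demo image are assumptions.

```python
# SLIC superpixels + DBSCAN grouping into sub-blocks (partitioning sketch).
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def partition_target(region_rgb, n_segments=80):
    """region_rgb: (H, W, 3) uint8 crop of the selected target region."""
    labels = slic(region_rgb, n_segments=n_segments, compactness=10, start_label=0)
    feats = []
    for sp in np.unique(labels):
        ys, xs = np.nonzero(labels == sp)
        mean_rgb = region_rgb[ys, xs].mean(axis=0)
        feats.append(np.concatenate([mean_rgb / 255.0,
                                     [ys.mean() / region_rgb.shape[0],
                                      xs.mean() / region_rgb.shape[1]]]))
    blocks = DBSCAN(eps=0.15, min_samples=2).fit_predict(np.array(feats))
    return labels, blocks            # superpixel map and a block id per superpixel

rng = np.random.default_rng(0)
demo = (rng.random((120, 80, 3)) * 255).astype(np.uint8)
labels, blocks = partition_target(demo)
print("sub-blocks found:", len(set(blocks) - {-1}))
```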

  3. Indexing Motion Detection Data for Surveillance Video

    DEFF Research Database (Denmark)

    Vind, Søren Juhl; Bille, Philip; Gørtz, Inge Li

    2014-01-01

    We show how to compactly index video data to support fast motion detection queries. A query specifies a time interval T, an area A in the video and two thresholds v and p. The answer to a query is a list of timestamps in T where ≥ p% of A has changed by ≥ v values. Our results show that by building...... a small index, we can support queries with a speedup of two to three orders of magnitude compared to motion detection without an index. For high resolution video, the index size is about 20% of the compressed video size....
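
    A direct, un-indexed formalisation of the query semantics described above: report the timestamps t in T where at least p% of the pixels in area A have changed by at least v intensity values relative to the previous frame. The paper's contribution is an index that answers such queries much faster; this brute-force version only fixes the semantics on synthetic data.

```python
# Brute-force motion detection query over a (frames, H, W) video array (sketch).
import numpy as np

def motion_query(frames, T, A, v, p):
    """frames: (N, H, W) uint8; T=(t0, t1); A=(y0, y1, x0, x1); v: value threshold; p: percent."""
    t0, t1 = T
    y0, y1, x0, x1 = A
    hits = []
    for t in range(max(t0, 1), t1 + 1):
        prev = frames[t - 1, y0:y1, x0:x1].astype(np.int16)
        cur = frames[t, y0:y1, x0:x1].astype(np.int16)
        changed = np.abs(cur - prev) >= v
        if 100.0 * changed.mean() >= p:
            hits.append(t)
    return hits

video = np.random.default_rng(0).integers(0, 256, size=(100, 120, 160), dtype=np.uint8)
print(motion_query(video, T=(10, 50), A=(20, 60, 30, 90), v=30, p=40))
```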

  4. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in the ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  5. Video game training and the reward system

    Science.gov (United States)

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  6. Video-Assisted Minithoracotomy for Pulmonary Laceration with a Massive Hemothorax

    Directory of Open Access Journals (Sweden)

    Hideki Ota

    2014-01-01

    Severe intrathoracic hemorrhage from pulmonary parenchyma is the most serious complication of pulmonary laceration after blunt trauma, requiring immediate surgical hemostasis through open thoracotomy. The safety and efficacy of video-assisted thoracoscopic surgery (VATS) techniques for this life-threatening condition have not been fully evaluated yet. We report a case of pulmonary laceration with a massive hemothorax after blunt trauma successfully treated using a combination of muscle-sparing minithoracotomy with VATS techniques (video-assisted minithoracotomy). A 22-year-old man was transferred to our department after a fall. A diagnosis of right-sided pneumothorax was made on physical examination and urgent chest decompression was performed with a tube thoracostomy. A chest computed tomographic scan revealed pulmonary laceration with hematoma in the right lung. The pulmonary hematoma, extending along the segmental pulmonary artery in the hilum of the middle lobe, ruptured suddenly into the thoracic cavity, resulting in hemorrhagic shock on the fourth day after admission. Emergency right middle lobectomy was performed through video-assisted minithoracotomy. We used two cotton dissectors as chopsticks to achieve compression hemostasis during surgery. The patient recovered satisfactorily. Video-assisted minithoracotomy can be an alternative approach for the treatment of pulmonary lacerations with a massive hemothorax in hemodynamically unstable patients.

  7. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity, while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.

  8. Lecture Videos in Online Courses: A Follow-Up

    Science.gov (United States)

    Evans, Heather K.; Cordova, Victoria

    2015-01-01

    In a recent study regarding online lecture videos, Evans (2014) shows that lecture videos are not superior to still slides. Using two Introduction to American Government courses, taught in a 4-week summer session, she shows that students in a non-video course had higher satisfaction with the course and instructor and performed better on exams than…

  9. Multiple Moving Object Detection for Fast Video Content Description in Compressed Domain

    Directory of Open Access Journals (Sweden)

    Boris Mansencal

    2007-11-01

    Indexing deals with the automatic extraction of information with the objective of automatically describing and organizing the content. Thinking of a video stream, different types of information can be considered semantically important. Since we can assume that the most relevant one is linked to the presence of moving foreground objects, their number, their shape, and their appearance can constitute a good means of content description. For this reason, we propose to combine both motion information and region-based color segmentation to extract moving objects from an MPEG2 compressed video stream, starting by considering only low-resolution data. This approach, which we refer to as “rough indexing,” consists of processing P-frame motion information first and then performing I-frame color segmentation. Next, since many details can be lost due to the low-resolution data, to improve the object detection results, a novel spatiotemporal filter has been developed, based on a quadric surface that models the object trace over time. This method effectively corrects earlier detection errors without heavily increasing the computational effort.

  10. Body Segment Kinematics and Energy Expenditure in Active Videogames.

    Science.gov (United States)

    Böhm, Birgit; Hartmann, Michael; Böhm, Harald

    2016-06-01

    Energy expenditure (EE) in active videogames (AVGs) is a component for assessing its benefit for cardiovascular health. Existing evidence suggests that AVGs are able to increase EE above rest and when compared with playing passive videogames. However, the association between body movement and EE remains unclear. Furthermore, for goal-directed game design, it is important to know the contribution of body segments to EE. This knowledge will help to acquire a certain level of exercise intensity during active gaming. Therefore, the purpose of this study was to determine the best predictors of EE from body segment energies, acceleration, and heart rate during different game situations. EE and body segment movement of 17 subjects, aged 22.1 ± 2.5 years, were measured in two different AVGs. In randomized order, the subjects played a handheld-controlled Nintendo(®) Wii™ tennis (NWT) game and a whole body-controlled Sony EyeToy(®) waterfall (ETW) game. Body segment movement was analyzed using a three-dimensional motion capture system. From the video data, mean values of mechanical energy change and acceleration of 10 body segments were analyzed. Measured EE was significantly higher in ETW (7.8 ± 1.4 metabolic equivalents [METs]) than in NWT (3.4 ± 1.0 METs). The best prediction parameter for the more intense ETW game was the energy change of the right thigh and for the less intense hand-controlled NWT game was the energy change of the upper torso. Segment acceleration was less accurate in predicting EE. The best predictors of metabolic EE were the thighs and the upper torso in whole body and handheld-controlled games, respectively. Increasing movement of these body segments would lead to higher physical activity intensity during gaming, reducing sedentary behavior.

  11. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied for the preprocessing of automatic video summarization or the retrieval and classification of broadcasting video contents. Through a statistical analysis of low-level and middle-level audio-visual features in video, the proposed method can achieve good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the performance of the classification by feeding the features into a decision tree-based classifier which is trained by CART. The experimental results show that the proposed method can recognize several broadcasting video genres with a high accuracy and the classification performance with multimodal features is superior to the one with unimodal features in the genre classification.
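
    A hedged sketch of the final classification step: multimodal descriptor vectors fed to a CART-style decision tree (scikit-learn's DecisionTreeClassifier implements CART). The random feature vectors below merely stand in for MPEG-7 audio-visual descriptors and carry no real genre signal.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

GENRES = ["cartoon", "drama", "music video", "news", "sports"]

# Placeholder multimodal feature vectors (e.g., concatenated audio + visual
# descriptors); in the paper these would be MPEG-7 audio-visual descriptors.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 24))
y = rng.integers(0, len(GENRES), size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# CART is the algorithm behind scikit-learn's DecisionTreeClassifier.
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```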

  12. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  13. Illusory control, gambling, and video gaming: an investigation of regular gamblers and video game players.

    Science.gov (United States)

    King, Daniel L; Ejova, Anastasia; Delfabbro, Paul H

    2012-09-01

    There is a paucity of empirical research examining the possible association between gambling and video game play. In two studies, we examined the association between video game playing, erroneous gambling cognitions, and risky gambling behaviour. One hundred and fifteen participants, including 65 electronic gambling machine (EGM) players and 50 regular video game players, were administered a questionnaire that examined video game play, gambling involvement, problem gambling, and beliefs about gambling. We then assessed each group's performance on a computerised gambling task that involved real money. A post-game survey examined perceptions of the skill and chance involved in the gambling task. The results showed that video game playing itself was not significantly associated with gambling involvement or problem gambling status. However, among those persons who both gambled and played video games, video game playing was uniquely and significantly positively associated with the perception of direct control over chance-based gambling events. Further research is needed to better understand the nature of this association, as it may assist in understanding the impact of emerging digital gambling technologies.

  14. Inferring segmented dense motion layers using 5D tensor voting.

    Science.gov (United States)

    Min, Changki; Medioni, Gérard

    2008-09-01

    We present a novel local spatiotemporal approach to produce motion segmentation and dense temporal trajectories from an image sequence. A common representation of image sequences is a 3D spatiotemporal volume, (x,y,t), and its corresponding mathematical formalism is the fiber bundle. However, directly enforcing the spatiotemporal smoothness constraint is difficult in the fiber bundle representation. Thus, we convert the representation into a new 5D space (x,y,t,vx,vy) with an additional velocity domain, where each moving object produces a separate 3D smooth layer. The smoothness constraint is now enforced by extracting 3D layers using the tensor voting framework in a single step that solves both correspondence and segmentation simultaneously. Motion segmentation is achieved by identifying those layers, and the dense temporal trajectories are obtained by converting the layers back into the fiber bundle representation. We proceed to address three applications (tracking, mosaic, and 3D reconstruction) that are hard to solve from the video stream directly because of the segmentation and dense matching steps, but become straightforward with our framework. The approach does not make restrictive assumptions about the observed scene or camera motion and is therefore generally applicable. We present results on a number of data sets.

  15. Why segmentation matters: experience-driven segmentation errors impair “morpheme” learning

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. PMID:25730305

  16. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for the experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable
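
    The indexing-and-retrieval core can be illustrated with a toy inverted index over extracted text clues; the real system relies on a high-performance search engine and much richer audio, visual, and textual analysis. All records below are invented.

```python
from collections import defaultdict

# Toy metadata records standing in for clues extracted from OER videos
# (speech transcript snippets, slide text, titles); contents are invented.
videos = {
    "vid001": "introduction to human anatomy skeletal system",
    "vid002": "gene expression and protein synthesis lecture",
    "vid003": "clinical anatomy of the knee joint",
}

index = defaultdict(set)          # term -> set of video ids
for vid, text in videos.items():
    for term in text.lower().split():
        index[term].add(vid)

def search(query: str):
    """Return ids of videos whose metadata contains every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

print(search("anatomy knee"))     # -> {'vid003'}
```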

  17. Blind prediction of natural video quality.

    Science.gov (United States)

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
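
    A small sketch of the kind of DCT-domain statistics such models build on: block DCTs of a frame difference summarised by coefficient spread and peakedness. The exact Video BLIINDS features, the motion-coherency model, and the quality-prediction stage are not reproduced; block size and the synthetic frames are placeholders.

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import kurtosis

def dct_diff_features(prev, cur, block=16):
    """Kurtosis and spread of block-DCT coefficients of a frame difference.

    High-level stand-in for the spatio-temporal DCT statistics that
    no-reference models such as Video BLIINDS build on.
    """
    diff = cur.astype(np.float64) - prev.astype(np.float64)
    h, w = diff.shape
    coeffs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(diff[y:y + block, x:x + block], norm="ortho")
            coeffs.append(c.ravel()[1:])          # drop the DC term
    coeffs = np.concatenate(coeffs)
    return {"kurtosis": kurtosis(coeffs), "std": coeffs.std()}

rng = np.random.default_rng(1)
f0 = rng.integers(0, 256, (144, 176), dtype=np.uint8)
f1 = np.clip(f0.astype(int) + rng.integers(-5, 6, f0.shape), 0, 255)
print(dct_diff_features(f0, f1))
```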

  18. An Efficient Periodic Broadcasting with Small Latency and Buffer Demand for Near Video on Demand

    Directory of Open Access Journals (Sweden)

    Ying-Nan Chen

    2012-01-01

    Broadcasting protocols can efficiently transmit videos that are simultaneously shared by clients by partitioning the videos into segments. Many studies focus on decreasing clients' waiting time, such as the fixed-delay pagoda broadcasting (FDPB) and the harmonic broadcasting schemes. However, limited-capability client devices such as PDAs and set-top boxes (STBs) suffer from having to store a significant fraction of each video while it is being watched. How to reduce clients' buffer demands is thus an important issue. Related works include the staircase broadcasting (SB), the reverse fast broadcasting (RFB), and the hybrid broadcasting (HyB) schemes. This work improves FDPB to save client buffering space as well as waiting time. In comparison with SB, RFB, and HyB, the improved FDPB scheme can yield the smallest waiting time under the same buffer requirements.

  19. Satisfaction with Online Teaching Videos: A Quantitative Approach

    Science.gov (United States)

    Meseguer-Martinez, Angel; Ros-Galvez, Alejandro; Rosa-Garcia, Alfonso

    2017-01-01

    We analyse the factors that determine the number of clicks on the "Like" button in online teaching videos, with a sample of teaching videos in the area of Microeconomics across Spanish-speaking countries. The results show that users prefer short online teaching videos. Moreover, some features of the videos have a significant impact on…

  20. Can student-produced video transform university teaching?

    DEFF Research Database (Denmark)

    2011-01-01

    as preparation for the two-week intensive field course. The overall objective of the redesign was to modernize and improve the quality of the students' learning experience, by exploring the potentials of video and online tools to create flexible, student-centered and student-activating education. The students produced three types of videos during the course: Video 1 was independently produced by the students, guided by online tasks and instructions. These videos were student-produced learning material, showing cases from all over Europe. The videos were collected and presented in a "visual database" in Google...

  1. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  2. Objectively Determining the Educational Potential of Computer and Video-Based Courseware; or, Producing Reliable Evaluations Despite the Dog and Pony Show.

    Science.gov (United States)

    Barrett, Andrew J.; And Others

    The Center for Interactive Technology, Applications, and Research at the College of Engineering of the University of South Florida (Tampa) has developed objective and descriptive evaluation models to assist in determining the educational potential of computer and video courseware. The computer-based courseware evaluation model and the video-based…

  3. Nuclear information for video presentation

    International Nuclear Information System (INIS)

    Dalton, J.

    1979-01-01

    In an effort to help calm the turbulence left in the wake of the Three Mile Island (TMI) nuclear accident, the Georgia Society of Professional Engineers sponsored the production of a video tape on the inner workings of a nuclear power plant. A 30-minute segment was shown on public television and a longer version is being prepared for use on a commercial network. The tape is neither pro nor con in the multitude of issues surrounding the future of nuclear energy. It simply gives a layman's tour of a nuclear power plant and hopes to provide the public with objective information on how nuclear power is generated. The article discusses the background of the taping program project, and how it was put together

  4. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
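
    A hedged illustration of seeded segmentation plus the reported evaluation metric, using scikit-image's random walker on a synthetic image and a Dice coefficient; the paper's specific combination of random walker with region growing on CT data is not reproduced, and all parameters are placeholders.

```python
import numpy as np
from skimage.segmentation import random_walker

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic "organ": a bright disc on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (128, 128))
yy, xx = np.mgrid[:128, :128]
truth = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
img[truth] += 0.5

# Seeds: 1 = object, 2 = background, 0 = unlabeled (to be filled in).
seeds = np.zeros_like(img, dtype=np.int32)
seeds[60:68, 60:68] = 1
seeds[:5, :] = 2

labels = random_walker(img, seeds, beta=130)
print("Dice vs. ground truth:", round(dice(labels == 1, truth), 3))
```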

  5. Video monitoring of wood transport in a free-meandering piedmont river

    Science.gov (United States)

    MacVicar, B. J.; Piégay, H.; Tougne, L.; Ali, I.

    2009-12-01

    Wood in rivers exerts an important influence on riverine habitat, sediment transport, geomorphological form, and human infrastructure. There is a need to quantify wood transport within river systems in order to understand the relevant processes and develop wood budgets at local and watershed scales. Here we present a study that uses a riverside video camera to monitor wood passage. The camera was installed at a gauging station on the Ain River, a 3500 km2 piedmont river (France), in early 2007. Video was obtained during 12 floods, including 5 that were at or greater than the bankfull discharge and one flood at twice the bankfull discharge with a return period between 2 and 5 years. An image analysis algorithm is presented that uses an intersection of intensity, gradient and image difference masks to detect moving wood objects on the surface of the water. The algorithm is compared to the results from manual detection of wood in a selection of video segments. Manual detection is also used to estimate the length, diameter, velocity, and rotation of wood pieces and to note the presence of roots and branches. Agreement between the detection algorithm and the manual detection procedure is on the order of 90%. Despite considerable scatter, results show a threshold of wood transport at approximately two-thirds bankfull, a linear relation between wood transport volume and flow discharge beyond the wood transport threshold, and a strong hysteresis effect such that wood transport is an order of magnitude higher on the rising limb than on the falling limb. (Figure: wood transport vs. discharge for two floods.)
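
    The mask-intersection idea can be sketched as follows; the thresholds, the assumption that wood appears darker than the water surface, and the synthetic frames are all illustrative rather than taken from the study.

```python
import numpy as np
from scipy import ndimage

def wood_candidates(prev, cur, i_thr=90, g_thr=15, d_thr=20):
    """Intersect intensity, gradient and frame-difference masks to flag
    pixels that may belong to floating wood (thresholds are illustrative)."""
    cur_f = cur.astype(np.float64)
    intensity_mask = cur_f < i_thr                       # assume wood darker than water
    gx = ndimage.sobel(cur_f, axis=0)
    gy = ndimage.sobel(cur_f, axis=1)
    gradient_mask = gx ** 2 + gy ** 2 > g_thr ** 2       # edges of the object
    difference_mask = np.abs(cur_f - prev.astype(np.float64)) > d_thr
    return intensity_mask & gradient_mask & difference_mask

rng = np.random.default_rng(3)
frame0 = rng.integers(120, 160, (240, 320), dtype=np.uint8)
frame1 = frame0.copy()
frame1[100:110, 50:120] = 40                             # a dark "log" drifting in
mask = wood_candidates(frame0, frame1)
print("candidate pixels:", int(mask.sum()))
```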

  6. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
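
    A simplified stand-in for the signature matching: per-frame coarse block means (a DC-like thumbnail) compared with an L1 distance. Extraction from the compressed stream and the motion ('M') part of the 'DC+M' signature are omitted, and equal-length clips are assumed.

```python
import numpy as np

def dc_signature(frame, block=8):
    """Coarse 'DC-like' signature: mean of each block x block tile."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

def clip_distance(clip_a, clip_b):
    """Average L1 distance between the frame signatures of two clips
    (clips are assumed to be the same length here for simplicity)."""
    return float(np.mean([np.abs(dc_signature(a) - dc_signature(b)).mean()
                          for a, b in zip(clip_a, clip_b)]))

rng = np.random.default_rng(5)
query = [rng.integers(0, 256, (96, 128), dtype=np.uint8) for _ in range(8)]
near_dup = [np.clip(f.astype(int) + 3, 0, 255) for f in query]   # slightly altered copy
other = [rng.integers(0, 256, (96, 128), dtype=np.uint8) for _ in range(8)]

print("distance to near-duplicate:", round(clip_distance(query, near_dup), 2))
print("distance to unrelated clip:", round(clip_distance(query, other), 2))
```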

  7. Critical Assessment of Video Production in Teacher Education: Can Video Production Foster Community-Engaged Scholarship?

    Science.gov (United States)

    Yang, Kyung-Hwa

    2014-01-01

    In the theoretical framework of production pedagogy, I reflect on a video production project conducted in a teacher education program and discuss the potential of video production to foster community-engaged scholarship among pre-service teachers. While the importance of engaging learners in creating media has been emphasized, studies show little…

  8. Video Measurements: Quantity or Quality

    Science.gov (United States)

    Zajkov, Oliver; Mitrevski, Boce

    2012-01-01

    Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…

  9. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  10. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  11. Longer you play, the more hostile you feel: examination of first person shooter video games and aggression during video game play.

    Science.gov (United States)

    Barlett, Christopher P; Harris, Richard J; Baldassaro, Ross

    2007-01-01

    This study investigated the effects of video game play on aggression. Using the General Aggression Model, as applied to video games by Anderson and Bushman [2002], this study measured physiological arousal, state hostility, and how aggressively participants would respond to three hypothetical scenarios. In addition, this study measured each of these variables multiple times to gauge how aggression would change with increased video game play. Results showed a significant increase from baseline in hostility and aggression (based on two of the three story stems), which is consistent with the General Aggression Model. This study adds to the existing literature on video games and aggression by showing that increased play of a violent first-person shooter video game can significantly increase aggression from baseline. 2007 Wiley-Liss, Inc.

  12. Selectively De-animating and Stabilizing Videos

    Science.gov (United States)

    2014-12-11

    motions intact. Video textures [97, 65, 7, 77] are a well-known approach for seamlessly looping stochastic motions. Like cinemagraphs, a video ... domain of input videos to portraits. We all use portrait photographs to express our identities online. Portraits are often the first visuals seen by ... quality of our result, we show some comparisons of our automated cinemagraphs against our user-driven method described in Chapter 3 in Figure 4.7.

  13. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  14. Combating bad weather part I rain removal from video

    CERN Document Server

    Mukhopadhyay, Sudipta

    2015-01-01

    Current vision systems are designed to perform in normal weather conditions. However, no one can escape from severe weather conditions. Bad weather reduces scene contrast and visibility, which results in degradation in the performance of various computer vision algorithms such as object tracking, segmentation and recognition. Thus, current vision systems must include some mechanisms that enable them to perform up to the mark in bad weather conditions such as rain and fog. Rain causes spatial and temporal intensity variations in images or video frames. These intensity changes are due to the

  15. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Circuits and systems able to process high quality video in real time are fundamental in today's imaging systems. The circuit proposed in the paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit targets commercial FPGA devices and achieves speed and logic resource occupation that surpass previously proposed implementations. The circuit, when implemented on Virtex6 or StratixIV, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.
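
    The abstract references the improved GMM formulation included in OpenCV; a software-side sketch using OpenCV's MOG2 background subtractor (OpenCV's improved GMM implementation) is shown below. The FPGA-oriented arithmetic optimisations are of course not represented, and the history length, threshold, and synthetic frames are placeholders.

```python
import cv2
import numpy as np

# MOG2 is OpenCV's improved Gaussian Mixture Model background subtractor.
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                                detectShadows=False)

rng = np.random.default_rng(7)
background = rng.integers(80, 120, (240, 320), dtype=np.uint8)

for t in range(200):
    frame = background.copy()
    if t > 100:                       # a bright square "object" enters the scene
        frame[100:140, 100 + t - 100:140 + t - 100] = 250
    fg_mask = subtractor.apply(frame)

print("foreground pixels in last frame:", int((fg_mask > 0).sum()))
```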

  16. The emerging High Efficiency Video Coding standard (HEVC)

    International Nuclear Information System (INIS)

    Raja, Gulistan; Khan, Awais

    2013-01-01

    High definition video (HDV) is becoming popular day by day. This paper describes the performance analysis of the latest upcoming video standard known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements for future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  17. A video for teaching english tenses

    Directory of Open Access Journals (Sweden)

    Frida Unsiah

    2017-04-01

    Students of the English Language Education Program in the Faculty of Cultural Studies, Universitas Brawijaya, ideally master Grammar before taking the degree of Sarjana Pendidikan. However, the facts show that they are still weak in Grammar, especially tenses. Therefore, the researchers set out to develop a video as a medium to teach tenses. The objective is that, by using video, students gain a better understanding of tenses so that they can communicate in English accurately and contextually. To develop the video, the researchers used the ADDIE model (Analysis, Design, Development, Implementation, Evaluation). First, the researchers analyzed the students' learning needs to determine the product to be developed, in this case a movie about English tenses. Then, the researchers developed the video as the product. The product was then validated by a media expert, who assessed attractiveness, typography, audio, image, and usefulness, and by a content expert, who assessed the language aspects and the English tenses used by the actors in the video, covering grammar content, pronunciation, and fluency. The result of the validation shows that the developed video was considered good. Theoretically, it is appropriate for use in English Grammar classes. However, the media expert suggests that it still needs some improvement in the next development, especially regarding the synchronization between lip movement and sound in the scenes, while the content expert suggests that the Grammar content of the video should focus on one tense only to provide a more detailed concept of that tense.

  18. Novel dynamic caching for hierarchically distributed video-on-demand systems

    Science.gov (United States)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that will always support all special playback functions for all available programs to all the contents with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment-caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.
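
    As a generic illustration of segment caching at an edge server (not the paper's variable-sized-quantum mechanism), a byte-budgeted LRU cache over video segments might look like this; identifiers and sizes are invented.

```python
from collections import OrderedDict

class SegmentCache:
    """Byte-budgeted LRU cache for variable-sized video segments."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()            # (video_id, segment_no) -> size

    def get(self, key):
        if key not in self.items:
            return False                      # miss: fetch from the central server
        self.items.move_to_end(key)           # mark as recently used
        return True

    def put(self, key, size):
        if key in self.items:
            self.used -= self.items.pop(key)
        while self.items and self.used + size > self.capacity:
            _, evicted = self.items.popitem(last=False)
            self.used -= evicted
        if size <= self.capacity:
            self.items[key] = size
            self.used += size

cache = SegmentCache(capacity_bytes=50_000_000)
cache.put(("movie42", 0), 20_000_000)
cache.put(("movie42", 1), 20_000_000)
cache.put(("movie99", 0), 20_000_000)         # evicts the least recently used segment
print(cache.get(("movie42", 0)), cache.get(("movie99", 0)))
```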

  19. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
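
    A minimal illustration of the predictability cue, assuming forward transitional probabilities between adjacent syllables and a boundary wherever the probability dips to a local minimum; this is a classic simplification, not the incremental model evaluated in the paper, and the syllable stream below is invented.

```python
from collections import Counter

def tp_segment(syllables):
    """Insert a word boundary where the forward transitional probability
    P(next | current) is a local minimum (a classic predictability cue)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unit_counts = Counter(syllables[:-1])
    tp = [pair_counts[(a, b)] / unit_counts[a]
          for a, b in zip(syllables, syllables[1:])]

    boundaries = set()
    for i in range(1, len(tp) - 1):
        if tp[i] < tp[i - 1] and tp[i] < tp[i + 1]:
            boundaries.add(i + 1)             # boundary before syllable i+1

    words, word = [], [syllables[0]]
    for i, syl in enumerate(syllables[1:], start=1):
        if i in boundaries:
            words.append("".join(word))
            word = []
        word.append(syl)
    words.append("".join(word))
    return words

# Artificial-language stream built from the "words" bidaku, padoti, golabu.
stream = ("bi da ku pa do ti go la bu bi da ku go la bu pa do ti "
          "bi da ku pa do ti go la bu").split()
print(tp_segment(stream))
```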

  20. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. How to provide suitable assistance for the human operator is therefore an important issue. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, for example, real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  1. A holistic image segmentation framework for cloud detection and extraction

    Science.gov (United States)

    Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe

    2013-05-01

    Atmospheric clouds are commonly encountered phenomena affecting visual tracking from air-borne or space-borne sensors. Generally clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering-game-theoretic image segmentation approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain evolutionarily stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use the boundary and shape features to refine the cloud segments. This step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip, and it provides supportive results.
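
    One of the evolutionary dynamics named above, replicator dynamics, can be sketched on a toy similarity matrix: the support of the converged distribution is read off as one cluster (dominant-set style). The matrix values are invented, and the boundary/shape refinement and background-recovery steps are not shown.

```python
import numpy as np

def replicator_cluster(A, iters=200, tol=1e-8):
    """Run discrete replicator dynamics x_i <- x_i (A x)_i / (x' A x) on a
    non-negative similarity matrix A; the support of the fixed point is one
    cluster (an equilibrium of the underlying 'clustering game')."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        Ax = A @ x
        new = x * Ax / (x @ Ax)
        if np.abs(new - x).sum() < tol:
            x = new
            break
        x = new
    return x

# Toy similarity matrix: items 0-2 are mutually similar ("cloud" patches),
# items 3-4 form a weaker second group.
A = np.array([[0, 9, 9, 1, 1],
              [9, 0, 9, 1, 1],
              [9, 9, 0, 1, 1],
              [1, 1, 1, 0, 5],
              [1, 1, 1, 5, 0]], dtype=float)

x = replicator_cluster(A)
print("cluster membership weights:", np.round(x, 3))
print("first extracted cluster:", np.nonzero(x > 1e-3)[0])
```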

  2. Using Video in the English Language Clasroom

    Directory of Open Access Journals (Sweden)

    Amado Vicente

    2002-08-01

    Video is a popular and potentially motivating medium in schools. Using video in the language classroom helps the language teachers in many different ways. Video, for instance, brings the outside world into the language classroom, providing the class with many different topics and reasons to talk. It can provide comprehensible input to the learners through contextualised models of language use. It also offers good opportunities to introduce native English speech into the language classroom. Through this article I will try to show what the benefits of using video are and, at the end, I present an instrument to select and classify video materials.

  3. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences such as biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  4. Rhythm-based segmentation of Popular Chinese Music

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2005-01-01

    We present a new method to segment popular music based on rhythm. By computing a shortest path based on the self-similarity matrix calculated from a model of rhythm, segmentation boundaries are found along the diagonal of the matrix. The cost of a new segment is optimized by matching manual and automatic segment boundaries. We compile a small song database of 21 randomly selected popular Chinese songs which come from Chinese Mainland, Taiwan and Hong Kong. The segmentation results on the small corpus show that 78% of manual segmentation points are detected and 74% of automatic segmentation points...
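
    In the same spirit, segment boundaries can be found by dynamic programming (a shortest path) over a dissimilarity matrix with a fixed cost per new segment; the rhythm model, feature extraction, and the exact cost used in the paper are replaced here by placeholders.

```python
import numpy as np

def segment_ssm(features, new_segment_cost=0.6):
    """Segment a feature sequence by minimising total within-segment
    dissimilarity plus a fixed cost per new segment (shortest-path DP)."""
    n = len(features)
    D = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)

    def block_cost(i, j):                      # mean dissimilarity inside [i, j)
        return D[i:j, i:j].mean() + new_segment_cost

    best = np.full(n + 1, np.inf)
    back = np.zeros(n + 1, dtype=int)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + block_cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i

    bounds, j = [], n                          # recover boundaries by backtracking
    while j > 0:
        bounds.append(j)
        j = back[j]
    return sorted(bounds)

# Toy rhythm features: three homogeneous sections of a song.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (20, 4)),
                   rng.normal(2, 0.1, (15, 4)),
                   rng.normal(4, 0.1, (25, 4))])
print("segment boundaries (frame indices):", segment_ssm(feats))
```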

  5. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  6. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  7. Automatic Moving Object Segmentation for Freely Moving Cameras

    Directory of Open Access Journals (Sweden)

    Yanli Wan

    2014-01-01

    This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are very common in outdoor surveillance systems, car built-in surveillance systems, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera motion compensation, where the outer-layer iteration filters out non-background feature points and the inner-layer iteration estimates a refined affine model with the RANSAC method. Then the feature points are classified into foreground and background according to the detected motion information. A geodesic-based graph cut algorithm is then employed to extract the moving foreground based on the classified features. Unlike existing methods based on global optimization or long-term feature point tracking, our algorithm operates on only two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. Experimental results demonstrate the effectiveness of our algorithm in terms of both high accuracy and fast speed.
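
    A hedged OpenCV sketch of the camera-compensation idea: track sparse features, fit a global affine model with RANSAC, and flag points with large residuals as moving-foreground candidates. The two-layer iteration and the geodesic graph cut of the paper are not included, and the synthetic frames and thresholds are placeholders.

```python
import cv2
import numpy as np

def classify_features(prev_gray, cur_gray, max_corners=400, resid_thr=3.0):
    """Fit a global affine camera model between two frames (RANSAC) and
    label tracked feature points as background (inlier) or foreground."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
    ok = status.ravel() == 1
    p0, p1 = pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)

    M, inliers = cv2.estimateAffine2D(p0, p1, method=cv2.RANSAC,
                                      ransacReprojThreshold=resid_thr)
    warped = (p0 @ M[:, :2].T) + M[:, 2]          # apply the affine camera model
    residual = np.linalg.norm(p1 - warped, axis=1)
    foreground = residual > resid_thr             # moving-object candidates
    return p1, foreground, M

# Synthetic frame pair: global shift (camera pan) plus one locally moving patch.
rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (240, 320), dtype=np.uint8)
prev = cv2.GaussianBlur(prev, (9, 9), 0)
cur = np.roll(prev, shift=(0, 5), axis=(0, 1))    # 5-pixel camera pan
cur[120:150, 120:150] = np.roll(cur[120:150, 120:150], 8, axis=1)  # local motion

pts, fg, M = classify_features(prev, cur)
print("tracked points:", len(pts), "| flagged as foreground:", int(fg.sum()))
```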

  8. Sentinel lymph node mapping in minimally invasive surgery: Role of imaging with color-segmented fluorescence (CSF).

    Science.gov (United States)

    Lopez Labrousse, Maite I; Frumovitz, Michael; Guadalupe Patrono, M; Ramirez, Pedro T

    2017-09-01

    Sentinel lymph node mapping, alone or in combination with pelvic lymphadenectomy, is considered a standard approach in staging of patients with cervical or endometrial cancer [1-3]. The goal of this video is to demonstrate the use of indocyanine green (ICG) and color-segmented fluorescence when performing lymphatic mapping in patients with gynecologic malignancies. Injection of ICG is performed in two cervical sites using 1mL (0.5mL superficial and deep, respectively) at the 3 and 9 o'clock position. Sentinel lymph nodes are identified intraoperatively using the Pinpoint near-infrared imaging system (Novadaq, Ontario, CA). Color-segmented fluorescence is used to image different levels of ICG uptake demonstrating higher levels of perfusion. A color key on the side of the monitor shows the colors that coordinate with different levels of ICG uptake. Color-segmented fluorescence may help surgeons identify true sentinel nodes from fatty tissue that, although absorbing fluorescent dye, does not contain true nodal tissue. It is not intended to differentiate the primary sentinel node from secondary sentinel nodes. The key ranges from low levels of ICG uptake (gray) to the highest rate of ICG uptake (red). Bilateral sentinel lymph nodes are identified along the external iliac vessels using both standard and color-segmented fluorescence. No evidence of disease was noted after ultra-staging was performed in each of the sentinel nodes. Use of ICG in sentinel lymph node mapping allows for high bilateral detection rates. Color-segmented fluorescence may increase accuracy of sentinel lymph node identification over standard fluorescent imaging. The following are the supplementary data related to this article. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Deception Detection in Videos

    OpenAIRE

    Wu, Zhe; Singh, Bharat; Davis, Larry S.; Subrahmanian, V. S.

    2017-01-01

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely ...

  10. [The Questionnaire of Experiences Associated with Video games (CERV): an instrument to detect the problematic use of video games in Spanish adolescents].

    Science.gov (United States)

    Chamarro, Andres; Carbonell, Xavier; Manresa, Josep Maria; Munoz-Miralles, Raquel; Ortega-Gonzalez, Raquel; Lopez-Morron, M Rosa; Batalla-Martinez, Carme; Toran-Monserrat, Pere

    2014-01-01

    The aim of this study is to validate the Video Game-Related Experiences Questionnaire (CERV in Spanish). The questionnaire consists of 17 items, developed from the CERI (Internet-Related Experiences Questionnaire; Beranuy et al.), and assesses the problematic use of non-massive video games. It was validated for adolescents in Compulsory Secondary Education. To validate the questionnaire, a confirmatory factor analysis (CFA) and an internal consistency analysis were carried out. The factor structure shows two factors: (a) Psychological dependence and use for evasion; and (b) Negative consequences of using video games. Two cut-off points were established for people with no problems in their use of video games (NP), with potential problems in their use of video games (PP), and with serious problems in their use of video games (SP). Results show that problematic use is more prevalent among males and decreases with age. The CERV seems to be a good instrument for the screening of adolescents with difficulties deriving from video game use. Further research should relate problematic video game use with difficulties in other life domains, such as the academic field.

  11. Contagious Content: Viral Video Ads Identification of Content Characteristics that Help Online Video Advertisements Go Viral

    Directory of Open Access Journals (Sweden)

    Yentl Knossenburg

    2016-12-01

    Why do some online video advertisements go viral while others remain unnoticed? What kind of video content keeps the viewer interested and motivated to share? Many companies have realized the need to innovate their marketing strategies and have embraced the newest ways of using technology, such as the Internet, to their advantage, as in the example of virality. Yet few marketers actually understand how, and academic literature on this topic is still in development. This study investigated which content characteristics distinguish successful from non-successful online viral video advertisements by analyzing 641 cases using Structural Equation Modeling. Results show that Engagement and Surprise are two main content characteristics that significantly increase the chance of online video advertisements going viral.

  12. Correlation Between Arthroscopy Simulator and Video Game Performance: A Cross-Sectional Study of 30 Volunteers Comparing 2- and 3-Dimensional Video Games.

    Science.gov (United States)

    Jentzsch, Thorsten; Rahm, Stefan; Seifert, Burkhardt; Farei-Campagna, Jan; Werner, Clément M L; Bouaicha, Samy

    2016-07-01

    To investigate the association between arthroscopy simulator performance and video game skills. This study compared the performances of 30 volunteers without experience performing arthroscopies in 3 different tasks of a validated virtual reality knee arthroscopy simulator with the video game experience using a questionnaire and actual performances in 5 different 2- and 3-dimensional (D) video games of varying genres on 2 different platforms. Positive correlations between knee arthroscopy simulator and video game performances (ρ = 0.63, P video game skills, they show a correlation with 2-D tile-matching puzzle games only for easier tasks with a rather limited focus, and highly correlate with 3-D sports and first-person shooter video games. These findings show that experienced and good 3-D gamers are better arthroscopists than nonexperienced and poor 3-D gamers. Level II, observational cross-sectional study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  13. GIF Video Sentiment Detection Using Semantic Sequence

    Directory of Open Access Journals (Sweden)

    Dazhen Lin

    2017-01-01

    With the development of social media, an increasing number of people use short videos in social media applications to express their opinions and sentiments. However, sentiment detection of short videos is a very challenging task because of the semantic gap problem and the sequence-based sentiment understanding problem. In this context, we propose a SentiPair Sequence based GIF video sentiment detection approach with two contributions. First, we propose a Synset Forest method to extract sentiment-related semantic concepts from WordNet to build a robust SentiPair label set. This approach considers the semantic gap between label words and selects a robust label subset which is related to sentiment. Secondly, we propose a SentiPair Sequence based GIF video sentiment detection approach that learns the semantic sequence to understand the sentiment from GIF videos. Our experiment results on GSO-2016 (GIF Sentiment Ontology) data show that our approach not only outperforms four state-of-the-art classification methods but also shows better performance than the state-of-the-art middle-level sentiment ontology features, Adjective Noun Pairs (ANPs).

  14. Correlates of video games playing among adolescents in an Islamic country

    Science.gov (United States)

    2010-01-01

    Background No study has ever explored the prevalence and correlates of video game playing among children in the Islamic Republic of Iran. This study describes patterns and correlates of excessive video game use in a random sample of middle-school students in Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing. Methods This cross-sectional study was performed with a random sample of 444 adolescents recruited from eight middle schools. A self-administered, anonymous questionnaire covered socio-demographics, video gaming behaviors, mental health status, self-reported aggressive behaviors, and perceived side effects of video game playing. Results Overall, participants spent an average of 6.3 hours per week playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games. Non-gamers reported suffering poorer mental health compared to excessive gamers. Both non-gamers and excessive gamers overall reported suffering poorer mental health compared to low or moderate players. Participants who initiated gaming at younger ages were more likely to score poorer in mental health measures. Participants' self-reported aggressive behaviors were associated with length of gaming. Boys, but not girls, who reported playing video games excessively showed more aggressive behaviors. A multiple binary logistic regression shows that when controlling for other variables, older students, those who perceived less serious side effects of video gaming, and those who have personal computers, were more likely to report that they had played video games excessively. Conclusion Our data show a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best and "excessive" gamers showing mild increases in problematic behaviors. Interestingly, "non-gamers" clearly show the worst outcomes.

  15. Correlates of video games playing among adolescents in an Islamic country.

    Science.gov (United States)

    Allahverdipour, Hamid; Bazargan, Mohsen; Farhadinasab, Abdollah; Moeini, Babak

    2010-05-27

    No study has ever explored the prevalence and correlates of video game playing among children in the Islamic Republic of Iran. This study describes patterns and correlates of excessive video game use in a random sample of middle-school students in Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing. This cross-sectional study was performed with a random sample of 444 adolescents recruited from eight middle schools. A self-administered, anonymous questionnaire covered socio-demographics, video gaming behaviors, mental health status, self-reported aggressive behaviors, and perceived side effects of video game playing. Overall, participants spent an average of 6.3 hours per week playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games. Non-gamers reported suffering poorer mental health compared to excessive gamers. Both non-gamers and excessive gamers overall reported suffering poorer mental health compared to low or moderate players. Participants who initiated gaming at younger ages were more likely to score poorer in mental health measures. Participants' self-reported aggressive behaviors were associated with length of gaming. Boys, but not girls, who reported playing video games excessively showed more aggressive behaviors. A multiple binary logistic regression shows that when controlling for other variables, older students, those who perceived less serious side effects of video gaming, and those who have personal computers, were more likely to report that they had played video games excessively. Our data show a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best and "excessive" gamers showing mild increases in problematic behaviors. Interestingly, "non-gamers" clearly show the worst outcomes. Therefore

  16. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding standard and estimate the video coding parameters for MPEG-2 and H.264/AVC, which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is performed without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods.
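
    The frame-by-frame "extract features, then pool" idea can be sketched as follows. This is not the authors' NR metric: the Laplacian-variance sharpness proxy, the mean pooling, and the decoded_clip.avi file name are assumptions used only to illustrate the pipeline, and OpenCV plus NumPy are assumed to be available.

```python
# Minimal sketch of per-frame feature extraction followed by temporal pooling.
# The Laplacian-variance score is a crude sharpness proxy, not the paper's metric.
import cv2
import numpy as np

def pooled_quality(path):
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())  # frame-level feature
    cap.release()
    return float(np.mean(scores)) if scores else float('nan')  # simple pooling

print(pooled_quality('decoded_clip.avi'))  # hypothetical file name
```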

  17. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
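
    The registration step relies on a modified iterative closest point (ICP) technique. A minimal, generic point-to-point ICP sketch is shown below, assuming two N-by-3 NumPy point clouds and SciPy; the paper's modified ICP and its image-based surface tracking are considerably more elaborate than this.

```python
# Minimal rigid point-to-point ICP sketch with an SVD-based update.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)              # closest target point for each source point
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                   # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```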

  18. Students’ Perception on Teaching Practicum Evaluation using Video Technology

    Science.gov (United States)

    Chee Sern, Lai; ‘Ain Helan Nor, Nurul; Foong, Lee Ming; Hassan, Razali

    2017-08-01

    Video technology has been widely used in education, especially in teaching and learning. However, the use of video technology for evaluation purposes, especially in teaching practicum, is extremely scarce, and the benefits of video technology in teaching practicum evaluation have not yet been fully discovered. For that reason, this quantitative research aimed at identifying the perceptions of trainee teachers towards teaching practicum evaluation via video technology. A total of 260 students of the Teacher Certification Programme (Program Pensiswazahan Guru - PPG) from the Faculty of Technical and Vocational Education (FPTV) of Universiti Tun Hussein Onn Malaysia (UTHM) had been randomly selected as respondents. A questionnaire was developed to assess the suitability, effectiveness and satisfaction of using video technology for teaching practicum. In conclusion, this research showed that the trainee teachers have positive perceptions in all three aspects related to teaching practicum evaluation using video technology. Apart from that, no significant racial difference was found in the measured aspects. In addition, the trainee teachers also showed an understanding of the vast importance of teaching practicum evaluation via video. These research findings suggest that video technology can be a feasible and practical means of teaching practicum evaluation, especially for distance learning programs.

  19. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large-scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated background distribution. We demonstrate the method on images of particles recorded under a microscope and show how it can handle transparent particles with significant glare points. The method generalizes to other problems; this is illustrated by applying it to camera calibration images and to MRI of the midsagittal plane for gray- and white-matter separation and segmentation.
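
    A minimal sketch of the "segmentation as outlier detection" idea follows, assuming NumPy and SciPy: the background distribution is estimated robustly and pixels with a sufficiently small upper-tail p-value are flagged as foreground. The median/MAD estimate and the alpha level are illustrative assumptions, not the paper's threshold-selection procedure.

```python
# Segment bright outliers against a robustly estimated Gaussian background model.
import numpy as np
from scipy import stats

def segment_outliers(image, alpha=1e-4):
    med = np.median(image)
    mad = np.median(np.abs(image - med))
    sigma = 1.4826 * mad                      # robust standard-deviation estimate
    z = (image - med) / max(sigma, 1e-9)
    p = stats.norm.sf(z)                      # upper-tail p-value per pixel
    return p < alpha                          # boolean foreground mask

mask = segment_outliers(np.random.default_rng(0).normal(100, 5, (64, 64)))
```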

  20. No-Reference Video Quality Assessment Model for Distortion Caused by Packet Loss in the Real-Time Mobile Video Services

    Directory of Open Access Journals (Sweden)

    Jiarun Song

    2014-01-01

    Full Text Available Packet loss introduces severe errors due to the corruption of related video data. For most video streams, because predictive coding structures are employed, the transmission errors in one frame will not only cause decoding failure of the frame itself at the receiver side, but also propagate to its subsequent frames along the motion prediction path, which brings a significant degradation of end-to-end video quality. To quantify the effects of packet loss on video quality, a no-reference objective quality assessment model is presented in this paper. Considering the fact that the degradation of video quality significantly relies on the video content, the temporal complexity is estimated to reflect the varying characteristic of video content, using the macroblocks with different motion activities in each frame. Then, the quality of the frame affected by the reference frame loss, by error propagation, or by both of them is evaluated, respectively. Utilizing a two-level temporal pooling scheme, the video quality is finally obtained. Extensive experimental results show that the video quality estimated by the proposed method matches well with the subjective quality.
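
    The propagation-plus-pooling idea can be illustrated with a toy sketch. The decay factor, the degradation rule, and the window size below are assumptions chosen only to show how a lost reference frame can degrade subsequent frames before scores are pooled over time; they are not the parameters of the proposed model.

```python
# Toy illustration: degrade frames downstream of a lost reference frame, then
# pool frame scores in two stages (per-window means, then a global mean).
import numpy as np

def propagate_and_pool(frame_quality, lost, decay=0.8, window=10):
    q = np.asarray(frame_quality, dtype=float).copy()
    impact = 0.0
    for i in range(len(q)):
        if lost[i]:
            impact = 1.0                      # full degradation at the lost frame
        q[i] -= impact * q[i] * 0.5           # degrade frames on the propagation path
        impact *= decay                       # error attenuates over successive frames
    segments = [q[i:i + window].mean() for i in range(0, len(q), window)]
    return float(np.mean(segments))           # second-level temporal pooling

print(propagate_and_pool([4.5] * 60, [i == 12 for i in range(60)]))
```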

  1. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  2. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    Science.gov (United States)

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  3. Geotail Video News Release

    Science.gov (United States)

    1992-01-01

    The Geotail mission, part of the International Solar Terrestrial Physics (ISTP) program, measures global energy flow and transformation in the magnetotail to increase understanding of fundamental magnetospheric processes. The satellite was launched on July 24, 1992 onboard a Delta II rocket. This video shows, with animation, the solar wind and its effect on the Earth. The narrator explains that the Geotail spacecraft was designed and built by the Institute of Space and Astronautical Science (ISAS), the Japanese Space Agency. The mission objectives are reviewed by one of the scientists in a live view. The video also shows an animation of the orbit, while the narrator explains the orbit and the reason for the small launch window.

  4. Video game addiction, ADHD symptomatology, and video game reinforcement.

    Science.gov (United States)

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68). The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of the type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.

  5. Video Tutorial of Continental Food

    Science.gov (United States)

    Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.

    2018-02-01

    This research is motivated by the belief in the importance of media in a learning process. Media as an intermediary serves to focus the attention of learners. Selection of appropriate learning media is very influential on the success of the delivery of information itself, both in terms of cognitive, affective and skill outcomes. Continental food is a course that studies food that comes from Europe and is very complex. To reduce verbalism and provide more realistic learning, tutorial media are needed. Tutorial media that are audio-visual can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is the development method, with the stages of analyzing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard and making the video tutorial media. The results show that the making of storyboards should be very thorough and detailed, in accordance with the learning objectives, to reduce errors in video capture so as to save time, cost and effort. In video capturing, lighting, shooting angles, and soundproofing make an excellent contribution to the quality of the tutorial video produced. Shooting should focus more on tools, materials, and processing. Video tutorials should be interactive and two-way.

  6. Video context-dependent recall.

    Science.gov (United States)

    Smith, Steven M; Manzano, Isabel

    2010-02-01

    In two experiments, we used an effective new method for experimentally manipulating local and global contexts to examine context-dependent recall. The method included video-recorded scenes of real environments, with target words superimposed over the scenes. In Experiment 1, we used a within-subjects manipulation of video contexts and compared the effects of reinstatement of a global context (15 words per context) with effects of less overloaded context cues (1 and 3 words per context) on recall. The size of the reinstatement effects in Experiment 1 show how potently video contexts can cue recall. A strong effect of cue overload was also found; reinstatement effects were smaller, but still quite robust, in the 15 words per context condition. The powerful reinstatement effect was replicated for local contexts in Experiment 2, which included a no-contexts-reinstated group, a control condition used to determine whether reinstatement of half of the cues caused biased output interference for uncued targets. The video context method is a potent way to investigate context-dependent memory.

  7. The Important Elements of a Science Video

    Science.gov (United States)

    Harned, D. A.; Moorman, M.; McMahon, G.

    2012-12-01

    New technologies have revolutionized the use of video as a means of communication. Films have become easier to create and to distribute. Video is omnipresent in our culture and supplements or even replaces writing in many applications. How can scientists and educators best use video to communicate scientific results? Video podcasts are being used in addition to journal, print, and online publications to communicate the relevance of scientific findings of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) program to general audiences such as resource managers, educational groups, public officials, and the general public. In an effort to improve the production of science videos, a survey was developed to provide insight into effective science communication with video. Viewers of USGS podcast videos were surveyed using Likert response scaling to identify the important elements of science videos. The surveys covered 120 scientists and educators attending the 2010 and 2011 Fall Meetings of the American Geophysical Union and the 2012 meeting of the National Monitoring Council. The median age of the respondents was 44 years, with an education level of a Bachelor's Degree or higher. Respondents reported that their primary sources for watching science videos were YouTube and science websites. Video length was the single most important element associated with reaching the greatest number of viewers. The surveys indicated a median length of 5 minutes as appropriate for a web video, with 5-7 minutes the 25th-75th percentiles. An illustration of the effect of length: a 5-minute and a 20-minute version of a USGS film on the effect of urbanization on water quality were made available on the same website. The short film has been downloaded 3 times more frequently than the longer version. The survey showed that the most important elements to include in a science film are style elements, including strong visuals, an engaging story, and a simple message.

  8. Tackling action-based video abstraction of animated movies for video browsing

    Science.gov (United States)

    Ionescu, Bogdan; Ott, Laurent; Lambert, Patrick; Coquin, Didier; Pacureanu, Alexandra; Buzuloiu, Vasile

    2010-07-01

    We address the issue of producing automatic video abstracts in the context of the video indexing of animated movies. For a quick browse of a movie's visual content, we propose a storyboard-like summary, which follows the movie's events by retaining one key frame for each specific scene. To capture the shot's visual activity, we use histograms of cumulative interframe distances, and the key frames are selected according to the distribution of the histogram's modes. For a preview of the movie's exciting action parts, we propose a trailer-like video highlight, whose aim is to show only the most interesting parts of the movie. Our method is based on a relatively standard approach, i.e., highlighting action through the analysis of the movie's rhythm and visual activity information. To suit every type of movie content, including predominantly static movies or movies without exciting parts, the concept of action depends on the movie's average rhythm. The efficiency of our approach is confirmed through several end-user studies.
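
    A rough sketch of key-frame selection from inter-frame distances is given below, assuming OpenCV and NumPy. The paper works with histograms of cumulative inter-frame distances and the distribution of their modes; the bin-based selection and the animated_movie.mp4 file name here are simplifying assumptions.

```python
# Pick candidate key frames from the histogram of cumulative inter-frame distances.
import cv2
import numpy as np

def key_frames(path, n_bins=10, n_keys=5):
    cap = cv2.VideoCapture(path)
    prev, dists = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
        if prev is not None:
            dists.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    cum = np.cumsum(dists)
    hist, edges = np.histogram(cum, bins=n_bins)
    picks = []
    for b in np.argsort(hist)[::-1][:n_keys]:   # crude stand-in for the histogram modes
        centre = 0.5 * (edges[b] + edges[b + 1])
        picks.append(int(np.argmin(np.abs(cum - centre))))
    return sorted(picks)

print(key_frames('animated_movie.mp4'))  # hypothetical input file
```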

  9. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images has great significance for military and medical areas, but nighttime video images have such poor quality that we cannot recognize the target and background. Thus we enhance the nighttime video image by fusing an infrared video image and a visible video image. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix is deduced from the improved SIFT algorithm; this transfer matrix rapidly registers the heterologous nighttime images, and the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, we used the transfer matrix to register every frame and then used the αβ-weighted method to fuse every frame, which meets the timing requirements of video. The fused video image not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays fluently.
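
    Assuming the 3x3 transfer (homography) matrix from registration is already known, the fusion step can be sketched with OpenCV as a weighted blend of the registered infrared frame and the visible frame. The fixed alpha below stands in for the paper's αβ weighting and is an assumption, not the authors' weighting rule.

```python
# Register the infrared frame with a known transfer matrix, then blend it with
# the visible frame using a fixed weight.
import cv2
import numpy as np

def fuse_frames(ir_frame, vis_frame, H, alpha=0.5):
    h, w = vis_frame.shape[:2]
    ir_gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY)
    ir_registered = cv2.warpPerspective(ir_gray, H, (w, h))   # apply transfer matrix
    ir_bgr = cv2.cvtColor(ir_registered, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(vis_frame, alpha, ir_bgr, 1.0 - alpha, 0.0)

# Hypothetical usage on one synthetic frame pair with an identity transform:
H = np.eye(3)
fused = fuse_frames(np.zeros((480, 640, 3), np.uint8),
                    np.zeros((480, 640, 3), np.uint8), H)
```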

  10. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
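
    A much-simplified sketch of wavelet-domain embedding is shown below, assuming PyWavelets: bits are hidden by forcing the parity of quantized detail coefficients of a frame. The Haar wavelet, the quantization step, and the parity rule are illustrative assumptions; the paper's scheme additionally uses bit-plane, index, and Huffman coding.

```python
# Hide bits in the parity of quantized diagonal-detail wavelet coefficients.
import numpy as np
import pywt

def embed_bits(frame_gray, bits, step=8.0):
    LL, (LH, HL, HH) = pywt.dwt2(frame_gray.astype(float), 'haar')
    flat = HH.flatten()
    for i, bit in enumerate(bits[:flat.size]):
        q = np.round(flat[i] / step)
        if int(q) % 2 != bit:                 # force parity of the quantized value
            q += 1
        flat[i] = q * step
    HH = flat.reshape(HH.shape)
    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')  # watermarked frame

stego = embed_bits(np.zeros((64, 64)), [1, 0, 1, 1])
```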

  11. Sustainable Transportation Attitudes and Health Behavior Change: Evaluation of a Brief Stage-Targeted Video Intervention

    Directory of Open Access Journals (Sweden)

    Norbert Mundorf

    2018-01-01

    Full Text Available Promoting physical activity and sustainable transportation is essential in the face of rising health care costs, obesity rates, and other public health threats resulting from lack of physical activity. Targeted communications can encourage distinct population segments to adopt active and sustainable transportation modes. Our work is designed to promote the health, social, and environmental benefits of sustainable/active transportation (ST) using the Transtheoretical Model of Change (TTM), which has been successfully applied to a range of health, and more recently, sustainability behaviors. Earlier, measurement development confirmed both the structure of ST pros and cons and efficacy measures as well as the relationship between these constructs and ST stages of change, replicating results found for many other behaviors. The present paper discusses a brief pre-post video pilot intervention study designed for precontemplators and contemplators (N = 604) that was well received, effective in moving respondents towards increased readiness for ST behavior change, and improving some ST attitudes, significantly reducing the cons of ST. This research program shows that a brief stage-targeted behavior change video can increase readiness and reduce the cons for healthy transportation choices.

  12. Sustainable Transportation Attitudes and Health Behavior Change: Evaluation of a Brief Stage-Targeted Video Intervention.

    Science.gov (United States)

    Mundorf, Norbert; Redding, Colleen A; Paiva, Andrea L

    2018-01-18

    Promoting physical activity and sustainable transportation is essential in the face of rising health care costs, obesity rates, and other public health threats resulting from lack of physical activity. Targeted communications can encourage distinct population segments to adopt active and sustainable transportation modes. Our work is designed to promote the health, social, and environmental benefits of sustainable/active transportation (ST) using the Transtheoretical Model of Change (TTM), which has been successfully applied to a range of health, and more recently, sustainability behaviors. Earlier, measurement development confirmed both the structure of ST pros and cons and efficacy measures as well as the relationship between these constructs and ST stages of change, replicating results found for many other behaviors. The present paper discusses a brief pre-post video pilot intervention study designed for precontemplators and contemplators (N = 604) that was well received, effective in moving respondents towards increased readiness for ST behavior change, and improving some ST attitudes, significantly reducing the cons of ST. This research program shows that a brief stage-targeted behavior change video can increase readiness and reduce the cons for healthy transportation choices.

  13. Sustainable Transportation Attitudes and Health Behavior Change: Evaluation of a Brief Stage-Targeted Video Intervention

    Science.gov (United States)

    Mundorf, Norbert; Redding, Colleen A.; Paiva, Andrea L.

    2018-01-01

    Promoting physical activity and sustainable transportation is essential in the face of rising health care costs, obesity rates, and other public health threats resulting from lack of physical activity. Targeted communications can encourage distinct population segments to adopt active and sustainable transportation modes. Our work is designed to promote the health, social, and environmental benefits of sustainable/active transportation (ST) using the Transtheoretical Model of Change (TTM), which has been successfully applied to a range of health, and more recently, sustainability behaviors. Earlier, measurement development confirmed both the structure of ST pros and cons and efficacy measures as well as the relationship between these constructs and ST stages of change, replicating results found for many other behaviors. The present paper discusses a brief pre-post video pilot intervention study designed for precontemplators and contemplators (N = 604) that was well received, effective in moving respondents towards increased readiness for ST behavior change, and improving some ST attitudes, significantly reducing the cons of ST. This research program shows that a brief stage-targeted behavior change video can increase readiness and reduce the cons for healthy transportation choices. PMID:29346314

  14. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for some auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  15. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  16. An unsupervised method for summarizing egocentric sport videos

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are getting more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, which is called egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information and it automatically finds the number of key-frames. Our blind user study on the new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.

  17. Efficient Foreground Extraction From HEVC Compressed Video for Application to Real-Time Analysis of Surveillance 'Big' Data.

    Science.gov (United States)

    Dey, Bhaskar; Kundu, Malay K

    2015-11-01

    While surveillance video is the biggest source of unstructured Big Data today, the emergence of high-efficiency video coding (HEVC) standard is poised to have a huge role in lowering the costs associated with transmission and storage. Among the benefits of HEVC over the legacy MPEG-4 Advanced Video Coding (AVC), is a staggering 40 percent or more bitrate reduction at the same visual quality. Given the bandwidth limitations, video data are compressed essentially by removing spatial and temporal correlations that exist in its uncompressed form. This causes compressed data, which are already de-correlated, to serve as a vital resource for machine learning with significantly fewer samples for training. In this paper, an efficient approach to foreground extraction/segmentation is proposed using novel spatio-temporal de-correlated block features extracted directly from the HEVC compressed video. Most related techniques, in contrast, work on uncompressed images claiming significant storage and computational resources not only for the decoding process prior to initialization but also for the feature selection/extraction and background modeling stage following it. The proposed approach has been qualitatively and quantitatively evaluated against several other state-of-the-art methods.
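
    The proposed method operates directly on HEVC-domain block features, which cannot be reproduced in a short snippet. For contrast, the conventional pixel-domain baseline that such compressed-domain methods aim to avoid can be written in a few lines with OpenCV's MOG2 background subtractor; the history and variance-threshold values and the surveillance.mp4 file name below are assumptions.

```python
# Conventional pixel-domain foreground extraction baseline (not the HEVC-domain
# method described above): decode frames and apply a MOG2 background subtractor.
import cv2

cap = cv2.VideoCapture('surveillance.mp4')    # hypothetical input
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # per-frame foreground mask
cap.release()
```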

  18. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  19. Videos for Science Communication and Nature Interpretation: The TIB|AV-Portal as Resource.

    Science.gov (United States)

    Marín Arraiza, Paloma; Plank, Margret; Löwe, Peter

    2016-04-01

    Videos can be linked to related objects (for example, the relevant article or further supplement materials). By using media fragment identifiers, not only the whole video can be cited, but also individual parts of it. In doing so, users are also likely to find high-quality related content (for instance, a video abstract and the corresponding article or an expedition documentary and its field notebook). Based on automatic analysis of speech, images and texts within the videos, a large amount of metadata associated with the segments of the video is automatically generated. These metadata enhance the searchability of the video and make it easier to retrieve and interlink meaningful parts of the video. This new and reliable library-driven infrastructure allows all different types of data to be discoverable, accessible, citable, freely reusable, and interlinked. Therefore, it simplifies science communication.

  20. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. Trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed that the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  1. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. Trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed that the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  2. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users.

  3. Winter Video Series Coming in January | Poster

    Science.gov (United States)

    The Scientific Library’s annual Summer Video Series was so successful that it will be offering a new Winter Video Series beginning in January. For this inaugural event, the staff is showing the eight-part series from National Geographic titled “American Genius.” 

  4. Structure-properties relationships of novel poly(carbonate-co-amide) segmented copolymers with polyamide-6 as hard segments and polycarbonate as soft segments

    Science.gov (United States)

    Yang, Yunyun; Kong, Weibo; Yuan, Ye; Zhou, Changlin; Cai, Xufu

    2018-04-01

    Novel poly(carbonate-co-amide) (PCA) block copolymers are prepared with polycarbonate diol (PCD) as soft segments, polyamide-6 (PA6) as hard segments and 4,4'-diphenylmethane diisocyanate (MDI) as coupling agent through reactive processing. The reactive processing strategy is eco-friendly and resolves the incompatibility between polyamide segments and PCD segments during preparation. The chemical structure, crystalline properties, thermal properties, mechanical properties and water resistance were extensively studied by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), differential scanning calorimetry (DSC), thermal gravimetric analysis (TGA), dynamic mechanical analysis (DMA), tensile testing, water contact angle and water absorption measurements, respectively. The as-prepared PCAs exhibit obvious microphase separation between the crystalline hard PA6 phase and the amorphous PCD soft segments. Meanwhile, the PCAs showed outstanding mechanical properties, with a maximum tensile strength of 46.3 MPa and elongation at break of 909%. The contact angle and water absorption results indicate that the PCAs demonstrate outstanding water resistance even though they possess hydrophilic surfaces. The TGA measurements prove that the thermal stability of PCA can satisfy the requirements of multiple processing cycles without decomposition.

  5. Dynamics in international market segmentation of new product growth

    NARCIS (Netherlands)

    Lemmens, A.; Croux, C.; Stremersch, S.

    2012-01-01

    Prior international segmentation studies have been static in that they have identified segments that remain stable over time. This paper shows that country segments in new product growth are intrinsically dynamic. We propose a semiparametric hidden Markov model to dynamically segment countries based on new product growth.

  6. Correlates of video games playing among adolescents in an Islamic country

    Directory of Open Access Journals (Sweden)

    Moeini Babak

    2010-05-01

    Full Text Available Abstract Background No study has ever explored the prevalence and correlates of video game playing among children in the Islamic Republic of Iran. This study describes patterns and correlates of excessive video game use in a random sample of middle-school students in Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing. Methods This cross-sectional study was performed with a random sample of 444 adolescents recruited from eight middle schools. A self-administered, anonymous questionnaire covered socio-demographics, video gaming behaviors, mental health status, self-reported aggressive behaviors, and perceived side effects of video game playing. Results Overall, participants spent an average of 6.3 hours per week playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games. Non-gamers reported suffering poorer mental health compared to excessive gamers. Both non-gamers and excessive gamers overall reported suffering poorer mental health compared to low or moderate players. Participants who initiated gaming at younger ages were more likely to score poorer in mental health measures. Participants' self-reported aggressive behaviors were associated with length of gaming. Boys, but not girls, who reported playing video games excessively showed more aggressive behaviors. A multiple binary logistic regression shows that when controlling for other variables, older students, those who perceived less serious side effects of video gaming, and those who have personal computers, were more likely to report that they had played video games excessively. Conclusion Our data show a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best and "excessive" gamers showing mild increases in problematic behaviors

  7. Video Synchronization With Bit-Rate Signals and Correntropy Function

    Directory of Open Access Journals (Sweden)

    Igor Pereira

    2017-09-01

    Full Text Available We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams that are extracted in the form of a univariate signal known as variable bit-rate (VBR. The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC. This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81 % in time reference classification against the equivalent 81 % of the state-of-the-art approach, requiring much less computational power.
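
    The core idea of picking the time offset that maximizes correntropy between the two variable-bit-rate (VBR) signals can be sketched as follows, assuming NumPy. The Gaussian kernel width, the lag range, and the z-score normalization are illustrative assumptions rather than values from the paper.

```python
# Estimate the offset between two VBR signals by maximizing correntropy over lags.
import numpy as np

def correntropy(x, y, sigma=1.0):
    d = x - y
    return float(np.mean(np.exp(-d * d / (2.0 * sigma ** 2))))  # Gaussian-kernel mean

def estimate_offset(vbr_a, vbr_b, max_lag=250):
    a = np.asarray(vbr_a, dtype=float)
    b = np.asarray(vbr_b, dtype=float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best_lag, best_v = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xa, xb = a[lag:], b[:len(b) - lag]
        else:
            xa, xb = a[:len(a) + lag], b[-lag:]
        n = min(len(xa), len(xb))
        if n < 10:
            continue
        v = correntropy(xa[:n], xb[:n])
        if v > best_v:
            best_lag, best_v = lag, v
    return best_lag
```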

  8. Competitive action video game players display rightward error bias during on-line video game play.

    Science.gov (United States)

    Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria

    2017-09-12

    Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research in the future, however further study is required before one can determine whether these results are an artefact of the method applied, or representative of a genuine rightward bias.

  9. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of the surgeon, so as to make the viewer aware of the difference with the naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupes magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functionality is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer the exact perspective of the microsurgeon and shows accurately his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Multi-Task Video Captioning with Video and Entailment Generation

    OpenAIRE

    Pasunuru, Ramakanth; Bansal, Mohit

    2017-01-01

    Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware vid...

  11. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

    Videos and DVDs give the teachers more flexibility than slide projectors, filmstrips, and 16mm films but teachers and students are excited about a new technology called streaming. Streaming allows the educators to view videos on demand via the Internet, which works through the transfer of digital media like video, and voice data that is received…

  12. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
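
    The comparison step can be illustrated with a small sketch: both ends derive the same pseudo-random sample points from a shared seed and compare gray-scale values within a tolerance. The seed, the number of points, the tolerance, and the mismatch budget below are assumptions, not the parameters of the described system.

```python
# Authenticate a recorded frame by comparing gray values at shared sample points.
import numpy as np

def authenticate(frame_camera, frame_recorder, seed=1234, n_points=64, tol=12):
    rng = np.random.default_rng(seed)          # both ends share this seed
    h, w = frame_camera.shape[:2]
    ys = rng.integers(0, h, n_points)
    xs = rng.integers(0, w, n_points)
    diff = np.abs(frame_camera[ys, xs].astype(int) - frame_recorder[ys, xs].astype(int))
    mismatches = int(np.count_nonzero(diff > tol))
    return mismatches <= n_points // 20        # authenticate if only a few points disagree

frame = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
print(authenticate(frame, frame))              # identical frames authenticate
```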

  13. Single-incision video-assisted anatomical segmentectomy with handsewn bronchial closure for endobronchial lipoma.

    Science.gov (United States)

    Galvez, Carlos; Sesma, Julio; Bolufer, Sergio; Lirio, Francisco; Navarro-Martinez, Jose; Galiana, Maria; Baschwitz, Benno; Rivera, Maria Jesus

    2016-08-01

    Endobronchial lipomas are rare benign tumors whose symptoms are usually confused with recurrent infections or even an asthma diagnosis, and are mostly caused by the endobronchial obstructive component, which also conditions severity. We report the case of a 60-year-old man with a right-lower-lobe upper-segment endobronchial myxoid tumor of uncertain diagnosis. We performed a single-incision video-assisted anatomical segmentectomy and wedge bronchoplasty with handsewn closure to achieve complete resection and a definitive diagnosis. During the postoperative period no air leak was observed and there were no complications, with low pain scores and complete recovery. The final pathological exam showed endobronchial lipoma. Single-incision (SI) anatomical segmentectomies are lung-sparing resections for benign or low-grade malignancies with diagnostic and therapeutic value, and the need for a wedge bronchoplasty is not a necessary indication for conversion to multiport or open thoracotomy.

  14. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2 900 g · mol-1 as soft segments. The aramide: PTMO segment ratio was increased from 1:1 to 2:1 thereby changing the structure from a high molecular weight multi-block

  15. The Video Interaction Guidance approach applied to teaching communication skills in dentistry.

    Science.gov (United States)

    Quinn, S; Herron, D; Menzies, R; Scott, L; Black, R; Zhou, Y; Waller, A; Humphris, G; Freeman, R

    2016-05-01

    To examine dentists' views of a novel video review technique to improve communication skills in complex clinical situations. Dentists (n = 3) participated in a video review known as Video Interaction Guidance to encourage more attuned interactions with their patients (n = 4). Part of this process is to identify where dentists and patients reacted positively and effectively. Each dentist was presented with short segments of video footage taken during an appointment with a patient with intellectual disabilities and communication difficulties. Having observed their interactions with patients, dentists were asked to reflect on their communication strategies with the assistance of a trained VIG specialist. Dentists reflected that their VIG session had been insightful and considered the review process as beneficial to communication skills training in dentistry. They believed that this technique could significantly improve the way dentists interact and communicate with patients. The VIG sessions increased their awareness of the communication strategies they use with their patients and were perceived as neither uncomfortable nor threatening. The VIG session was beneficial in this exploratory investigation because the dentists could identify when their interactions were most effective. Awareness of their non-verbal communication strategies and the need to adopt these behaviours frequently were identified as key benefits of this training approach. One dentist suggested that the video review method was supportive because it was undertaken by a behavioural scientist rather than a professional counterpart. Some evidence supports the VIG approach in this specialist area of communication skills and dental training. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Improved chaos-based video steganography using DNA alphabets

    Directory of Open Access Journals (Sweden)

    Nirmalya Kar

    2018-03-01

    Full Text Available DNA based steganography plays a vital role in the field of privacy and secure communication. Here, we propose a DNA properties-based mechanism to send data hidden inside a video file. Initially, the video file is converted into image frames. Random frames are then selected and data is hidden in these at random locations by using the Least Significant Bit substitution method. We analyze the proposed architecture in terms of peak signal-to-noise ratio as well as mean squared error measured between the original and steganographic files averaged over all video frames. The results show minimal degradation of the steganographic video file. Keywords: Chaotic map, DNA, Linear congruential generator, Video steganography, Least significant bit
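
    A minimal sketch of Least Significant Bit embedding at pseudo-random pixel positions chosen by a linear congruential generator is given below, assuming NumPy. Frame selection, the chaotic map, and the DNA encoding layer from the paper are omitted; the LCG constants and seed are illustrative assumptions.

```python
# Embed a short bit payload into the LSBs of pseudo-randomly chosen pixels.
import numpy as np

def lcg(seed, modulus, a=1103515245, c=12345):
    state = seed
    while True:
        state = (a * state + c) % modulus       # classic linear congruential update
        yield state

def embed_lsb(frame_gray, bits, seed=42):
    out = frame_gray.copy()
    flat = out.reshape(-1)                      # view into the copied frame
    gen = lcg(seed, flat.size)
    used = set()
    for bit in bits:                            # payload assumed much smaller than frame
        pos = next(gen)
        while pos in used:                      # avoid overwriting an earlier bit
            pos = next(gen)
        used.add(pos)
        flat[pos] = (flat[pos] & 0xFE) | bit    # replace the least significant bit
    return out

frame = np.zeros((64, 64), dtype=np.uint8)
stego = embed_lsb(frame, [1, 0, 1, 1, 0])
```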

  17. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable goods offer suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey and the discovery of market segments…

  18. The LivePhoto Physics videos and video analysis site

    Science.gov (United States)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  19. Quantitation of left ventricular dimensions and function by digital video subtraction angiography

    International Nuclear Information System (INIS)

    Higgins, C.B.; Norris, S.L.; Gerber, K.H.; Slutsky, R.A.; Ashburn, W.L.; Baily, N.

    1982-01-01

    Digital video subtraction angiography (DVSA) after central intravenous administration of contrast media was used in experimental animals and in patients with suspected coronary artery disease to quantitate left ventricular dimensions and regional and global contractile function. In animals, measurements of left ventricular (LV) volumes, wall thickness, ejection fraction, segmental contraction, and cardiac output correlated closely with sonocardiometry or thermodilution measurements. In patients, volumes and ejection fractions calculated from mask mode digital images correlated closely with direct left ventriculography. Global and segmental contractile function was displayed in patients by ejection shell images, stroke volume images, and time interval difference images. Central cardiovascular function was also quantitated by measurement of pulmonary transit time and calculation of pulmonary blood volume from digital fluoroscopic images. DVSA was shown to be useful and accurate in the quantitation of central cardiovascular physiology

  20. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  1. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Science.gov (United States)

    2010-10-01

    ... service showing that the Notice of Intent has been served on all local cable franchising authorities... video programming provider within five business days of receiving a written request from the provider...

  2. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

Full Text Available Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. A great deal of work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution, a problem exacerbated by undersegmentation or oversegmentation of regions in an image. The paper addresses this problem with a novel fuzzy-based postprocessing segmentation method that can handle variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply demonstrates the strength of the recommended method.

  3. Playing violent video games increases intergroup bias.

    Science.gov (United States)

    Greitemeyer, Tobias

    2014-01-01

    Previous research has shown how, why, and for whom violent video game play is related to aggression and aggression-related variables. In contrast, less is known about whether some individuals are more likely than others to be the target of increased aggression after violent video game play. The present research examined the idea that the effects of violent video game play are stronger when the target is a member of an outgroup rather than an ingroup. In fact, a correlational study revealed that violent video game exposure was positively related to ethnocentrism. This relation remained significant when controlling for trait aggression. Providing causal evidence, an experimental study showed that playing a violent video game increased aggressive behavior, and that this effect was more pronounced when the target was an outgroup rather than an ingroup member. Possible mediating mechanisms are discussed.

  4. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Field (CRF) modeling and statistical conditional sampling in order to further improve compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability given by the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.
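A much-simplified sketch of the variable-resolution idea, under the assumption that a nearest-neighbour patch lookup can stand in for the paper's CRF-based statistical conditional sampling: the key-frame is kept at full resolution, the other frames are stored decimated, and each decimated patch is restored by copying the full-resolution patch whose low-resolution version matches it best. Patch size, decimation factor, and the toy frames are illustrative.

```python
import numpy as np

PATCH, SCALE = 4, 2          # low-res patch size and decimation factor (illustrative)

def decimate(img, s=SCALE):
    return img[::s, ::s]

def patches(img, p):
    """Non-overlapping p x p patches and their top-left coordinates."""
    H, W = img.shape
    out = [(img[y:y+p, x:x+p].ravel(), (y, x))
           for y in range(0, H - p + 1, p) for x in range(0, W - p + 1, p)]
    vecs, coords = zip(*out)
    return np.stack(vecs).astype(np.float64), list(coords)

def build_dictionary(keyframe):
    """Low-resolution patches of the key-frame, kept alongside the full-resolution frame."""
    return patches(decimate(keyframe), PATCH), keyframe

def restore(low_frame, dictionary):
    (key_vecs, key_coords), keyframe = dictionary
    vecs, coords = patches(low_frame.astype(np.float64), PATCH)
    out = np.zeros((low_frame.shape[0] * SCALE, low_frame.shape[1] * SCALE))
    for v, (y, x) in zip(vecs, coords):
        best = np.argmin(((key_vecs - v) ** 2).sum(axis=1))   # nearest low-res patch
        ky, kx = key_coords[best]
        out[y*SCALE:(y+PATCH)*SCALE, x*SCALE:(x+PATCH)*SCALE] = \
            keyframe[ky*SCALE:(ky+PATCH)*SCALE, kx*SCALE:(kx+PATCH)*SCALE]
    return out

# Toy usage: a key-frame and a similar non-key frame, both 64 x 64.
rng = np.random.default_rng(1)
key = rng.integers(0, 256, (64, 64)).astype(np.float64)
frame = np.roll(key, 3, axis=1)                 # the "next" frame, slightly shifted
restored = restore(decimate(frame), build_dictionary(key))
print("reconstruction RMSE:", np.sqrt(np.mean((restored - frame) ** 2)))
```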

  5. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

Full Text Available Multiview video, one of the main types of three-dimensional (3D) video signals, is captured by a set of video cameras from various viewpoints and has attracted much interest recently. Data compression for multiview video has become a major issue. In this paper, a novel high-efficiency fractal multiview video codec is proposed. First, an intraframe algorithm based on the H.264/AVC intraprediction modes and a combined fractal and motion compensation (CFMC) algorithm, in which range blocks are predicted by domain blocks in the previously decoded frame using translational motion with a gray-value transformation, are proposed for compressing the anchor viewpoint video. A temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are then designed to compress the multiview video data. The proposed fractal multiview video codec can adequately exploit temporal and spatial correlations. Experimental results show that it can obtain about a 0.36 dB increase in decoding quality and a 36.21% decrease in encoding bitrate compared with JMVC8.5, while the encoding time is reduced by 95.71%. Rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.
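The CFMC prediction step can be sketched as classic inter-frame fractal coding: each range block of the current frame is approximated by a domain block of the previously decoded frame under the gray-value transform s*D + o, with s and o solved in the least-squares sense. Block size, search window, and the toy frames below are assumptions; the actual codec adds H.264/AVC intra modes and disparity estimation on top.

```python
import numpy as np

def best_domain_block(range_blk, prev_frame, y, x, search=8):
    """Search a window of the previous decoded frame for the domain block that best
    predicts the range block under the gray-value transform s*D + o."""
    B = range_blk.shape[0]
    H, W = prev_frame.shape
    r = range_blk.astype(np.float64).ravel()
    best = None
    for dy in range(max(0, y - search), min(H - B, y + search) + 1):
        for dx in range(max(0, x - search), min(W - B, x + search) + 1):
            d = prev_frame[dy:dy+B, dx:dx+B].astype(np.float64).ravel()
            var = d.var()
            s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var  # least-squares scale
            o = r.mean() - s * d.mean()                                    # least-squares offset
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, dy, dx, s, o)
    return best   # (error, dy, dx, scale, offset)

# Toy usage: encode one 8x8 range block of the current frame against the previous frame.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64)).astype(np.float64)
curr = np.clip(0.9 * np.roll(prev, 2, axis=0) + 10, 0, 255)   # shifted, re-lit version
err, dy, dx, s, o = best_domain_block(curr[16:24, 16:24], prev, 16, 16)
print(f"matched domain block at ({dy},{dx}) with s={s:.2f}, o={o:.2f}, SSE={err:.1f}")
```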

  6. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To do this, we previously proposed a packet-layer model that can be used for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user using the average video quality because video quality depends on the video content. To accurately monitor the video quality per user, a model that estimates the video quality per video content rather than the average video quality should be developed. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference between the video quality of the estimation-target video and the average video quality estimated using a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modeled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.

  7. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic chromosome-banding techniques. Two approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to bring out the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation that appears systematically and is reproducible. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophasic and prometaphasic cells. In addition, the possibility of inducing R-banding segmentations on these cells with BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it allowed a segmentation similar to the one obtained with BrdU to be induced and identified heterochromatic areas rich in G-C base pairs [fr

  8. Deficit in figure-ground segmentation following closed head injury.

    Science.gov (United States)

    Baylis, G C; Baylis, L L

    1997-08-01

    Patient CB showed a severe impairment in figure-ground segmentation following a closed head injury. Unlike normal subjects, CB was unable to parse smaller and brighter parts of stimuli as figure. Moreover, she did not show the normal effect that symmetrical regions are seen as figure, although she was able to make overt judgments of symmetry. Since she was able to attend normally to isolated objects, CB demonstrates a dissociation between figure ground segmentation and subsequent processes of attention. Despite her severe impairment in figure-ground segmentation, CB showed normal 'parallel' single feature visual search. This suggests that figure-ground segmentation is dissociable from 'preattentive' processes such as visual search.

  9. Script Design for Information Film and Video.

    Science.gov (United States)

    Shelton, S. M. (Marty); And Others

    1993-01-01

    Shows how the empathy created in the audience by each of the five genres of film/video is a function of the five elements of film design: camera angle, close up, composition, continuity, and cutting. Discusses film/video script designing. Illustrates these concepts with a sample script and story board. (SR)

  10. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  11. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents.

    Science.gov (United States)

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J M; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. We assigned 270 gaming (i.e. ≥ 2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multi-level-intention to treat-regression analyses were conducted. The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp (β = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥ 1 hour/week during the whole intervention period. The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month-period. Even some effects in the unexpected direction were found, with the control group showing lower BMI-SDS and skin folds than the intervention group

  12. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents.

    Directory of Open Access Journals (Sweden)

    Monique Simons

    Full Text Available The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight.We assigned 270 gaming (i.e. ≥ 2 hours/week non-active video game time adolescents randomly to an intervention group (n = 140 (receiving active video games and encouragement to play or a waiting-list control group (n = 130. BMI-SDS (SDS = adjusted for mean standard deviation score, waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes. Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline were assessed with self-reports at baseline, one, four and ten months follow-up. Multi-level-intention to treat-regression analyses were conducted.The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14, and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17 (overall effects. The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32 and total sedentary screen time (Exp (β = 0.81, 95%CI: 0.74;0.88 than the control group (overall effects. The process evaluation showed that 14% of the adolescents played the Move video games every week ≥ 1 hour/week during the whole intervention period.The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week who primarily were of healthy weight compared to a control group throughout a ten-month-period. Even some effects in the unexpected direction were found, with the control group showing lower BMI-SDS and skin folds than the intervention

  13. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents

    Science.gov (United States)

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J. M.; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    Objective The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. Methods We assigned 270 gaming (i.e. ≥2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multi-level-intention to treat-regression analyses were conducted. Results The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp (β = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥1 hour/week during the whole intervention period. Conclusions The active video game intervention did not result in lower values on anthropometrics in a group of ‘excessive’ non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month-period. Even some effects in the unexpected direction were found, with the control group showing lower BMI

  14. Study of the morphology exhibited by linear segmented polyurethanes

    International Nuclear Information System (INIS)

    Pereira, I.M.; Orefice, R.L.

    2009-01-01

    Five series of segmented polyurethanes with different hard segment content were prepared by the prepolymer mixing method. The nano-morphology of the obtained polyurethanes and their microphase separation were investigated by infrared spectroscopy, modulated differential scanning calorimetry and small-angle X-ray scattering. Although highly hydrogen bonded hard segments were formed, high hard segment contents promoted phase mixture and decreased the chain mobility, decreasing the hard segment domain precipitation and the soft segments crystallization. The applied techniques were able to show that the hard-segment content and the hard-segment interactions were the two controlling factors for determining the structure of segmented polyurethanes. (author)

  15. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  16. Scale selection for supervised image segmentation

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M J; Loog, Marco

    2012-01-01

    schemes are usually unsupervised, as they do not take into account the actual segmentation problem at hand. In this paper, we consider the problem of selecting scales, which aims at an optimal discrimination between user-defined classes in the segmentation. We show the deficiency of the classical...

  17. Enhance Video Film using Retnix method

    Science.gov (United States)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as criteria in this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 lux), which approximate the conditions of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip to obtain the enhanced film; second, to every individual image to obtain enhanced images that are then compiled into the enhanced film. This paper shows that the enhancement technique yields good-quality video film based on a statistical method, and its use in different applications is recommended.
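A minimal sketch of the kind of statistics-driven enhancement the abstract describes: each frame is linearly remapped to a target mean and standard deviation, and the per-frame statistics used as the paper's criterion can be compared before and after. The target values and the synthetic clip are assumptions; the exact enhancement operator used in the paper is not specified in the abstract.

```python
import numpy as np

def enhance_frame(frame, target_mean=128.0, target_std=60.0):
    """Linearly remap a grayscale frame to a target mean and standard deviation."""
    f = frame.astype(np.float64)
    std = f.std()
    if std < 1e-6:                      # flat frame: only shift the mean
        out = f - f.mean() + target_mean
    else:
        out = (f - f.mean()) / std * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)

def clip_statistics(frames):
    """Per-frame mean and standard deviation, the criteria used to judge enhancement."""
    return [(float(f.mean()), float(f.std())) for f in frames]

# Toy usage: a dim, low-contrast clip of 5 frames (stand-ins for the 80 images per clip).
rng = np.random.default_rng(0)
clip = [rng.integers(20, 60, (120, 160), dtype=np.uint8) for _ in range(5)]
enhanced = [enhance_frame(f) for f in clip]
print("before:", clip_statistics(clip)[0])
print("after: ", clip_statistics(enhanced)[0])
```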

  18. Alleviating travel anxiety through virtual reality and narrated video technology.

    Science.gov (United States)

    Ahn, J C; Lee, O

    2013-01-01

This study presents empirical evidence of the benefit of narrated video clips embedded in hotel virtual reality websites for relieving travel anxiety. Even though virtual reality functions have been shown to provide some relief from travel anxiety, a stronger virtual reality website can be built when it includes video clips with narration about important aspects of the hotel. We posit that these important aspects are 1. the escape route and 2. surrounding neighborhood information, both derived from existing research on anxiety disorders as well as travel anxiety. We therefore created one video clip that showed and narrated the escape route from the hotel room and another that showed and narrated the surrounding neighborhood. We then conducted experiments with this enhanced virtual reality website of a hotel by having human subjects interact with the website and fill out a questionnaire. The result confirms our hypothesis that there is a statistically significant relationship between the degree of travel anxiety and the psychological relief produced by the use of embedded virtual reality functions with narrated video clips on a hotel website (Tab. 2, Fig. 3, Ref. 26).

  19. The Optimiser: monitoring and improving switching delays in video conferencing

    NARCIS (Netherlands)

    S. Gunkel (Simon); A.J. Jansen (Jack); I. Kegel; D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago)

    2014-01-01

With the growing popularity of video communication systems, more people are using group video chat, rather than only one-to-one video calls. In such multi-party sessions, remote participants compete for the available screen space and bandwidth. A common solution is showing the current

  20. Statistical motion vector analysis for object tracking in compressed video streams

    Science.gov (United States)

    Leny, Marc; Prêteux, Françoise; Nicholson, Didier

    2008-02-01

Compressed video is the digital raw material provided by video-surveillance systems and used for archiving and indexing purposes. Multimedia standards therefore have a direct impact on such systems. Whereas MPEG-2 used to be the coding standard, MPEG-4 (part 2) has now replaced it in most installations, and MPEG-4 AVC/H.264 solutions are now being released. Finely analysing the complex and rich MPEG-4 streams is the challenging issue addressed in this paper. The system we designed is based on five modules: low-resolution decoder, motion estimation generator, object motion filtering, low-resolution object segmentation, and cooperative decision. Our contributions are the statistical analysis of the spatial distribution of the motion vectors, the computation of DCT-based confidence maps, automatic motion activity detection in the compressed file, and a rough indexation by dedicated descriptors. The robustness and accuracy of the system are evaluated on a large corpus (hundreds of hours of indoor and outdoor videos with pedestrians and vehicles). Objective benchmarking of the performance is carried out with respect to five metrics that estimate the error contribution of each module and for different implementations. This evaluation establishes that our system analyses up to 200 frames (720x288) per second (2.66 GHz CPU).
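The object-motion-filtering stage can be sketched as a purely statistical pass over the decoded motion-vector field: median-filter the vectors to suppress coding noise, threshold their magnitude against the global statistics of the frame, and label connected regions as moving-object candidates. The threshold rule and the synthetic field are illustrative; the full system also uses DCT-based confidence maps and a cooperative decision module.

```python
import numpy as np
from scipy import ndimage

def moving_object_mask(mv_field, k=2.0):
    """mv_field: (H, W, 2) array of per-block motion vectors from the compressed stream."""
    smoothed = np.stack([ndimage.median_filter(mv_field[..., c], size=3)
                         for c in range(2)], axis=-1)      # suppress isolated outliers
    mag = np.linalg.norm(smoothed, axis=-1)
    thr = mag.mean() + k * mag.std()                        # statistical threshold (illustrative)
    labels, n = ndimage.label(mag > thr)                    # connected moving regions
    return labels, n

# Toy usage: a mostly static 36 x 45 macroblock grid with one moving region.
rng = np.random.default_rng(0)
field = rng.normal(0, 0.3, (36, 45, 2))                     # background coding noise
field[10:16, 20:28] += np.array([4.0, -2.0])                # a moving object
labels, n = moving_object_mask(field)
print("candidate moving objects:", n)
```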

  1. Osmotic and Heat Stress Effects on Segmentation.

    Directory of Open Access Journals (Sweden)

    Julian Weiss

    Full Text Available During vertebrate embryonic development, early skin, muscle, and bone progenitor populations organize into segments known as somites. Defects in this conserved process of segmentation lead to skeletal and muscular deformities, such as congenital scoliosis, a curvature of the spine caused by vertebral defects. Environmental stresses such as hypoxia or heat shock produce segmentation defects, and significantly increase the penetrance and severity of vertebral defects in genetically susceptible individuals. Here we show that a brief exposure to a high osmolarity solution causes reproducible segmentation defects in developing zebrafish (Danio rerio embryos. Both osmotic shock and heat shock produce border defects in a dose-dependent manner, with an increase in both frequency and severity of defects. We also show that osmotic treatment has a delayed effect on somite development, similar to that observed in heat shocked embryos. Our results establish osmotic shock as an alternate experimental model for stress, affecting segmentation in a manner comparable to other known environmental stressors. The similar effects of these two distinct environmental stressors support a model in which a variety of cellular stresses act through a related response pathway that leads to disturbances in the segmentation process.

  2. The Reliability of the Segmental Assessment of Trunk Control (SATCo) in Children with Cerebral Palsy

    DEFF Research Database (Denmark)

    Hansen, Lisbeth; Erhardsen, Katrine Thingholm; Bencke, Jesper

    2018-01-01

Aims: To assess the live-versus-video, intrarater interday and interrater interday reliability of the Segmental Assessment of Trunk Control (SATCo), a test that seeks to estimate the degree of sitting trunk control in children with cerebral palsy (CP). Method: Thirty-one children with CP between 9 months and 16 years of age (22 males, mean age 8y 10mo [SD 3y 5mo], Gross Motor Function Classification System level I [n = 13], II [n = 4], III [n = 4], IV [n = 3], and V [n = 7]) were included. Children were tested twice by two raters and tests were video recorded. Wilcoxon Signed-Rank Test, ICC [2...] ... testing method could potentially improve the reliability and the value of the test in research and in clinical practice.

  3. Playing Action Video Games Improves Visuomotor Control.

    Science.gov (United States)

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.

  4. Relacije umetnosti i video igara / Relations of Art and Video Games

    OpenAIRE

    Manojlo Maravić

    2012-01-01

When discussing the art of video games, three different contexts need to be considered: 'high' art (video games and art), commercial video games (video games as art), and fan art. Video games are a legitimate artistic medium, subject to modifications and recontextualisations in the process of creating a specific experience for the player/user/audience and of political action by referring to particular social problems. They represent a highly technological medium that increases, with p...

  5. Rare Disease Video Portal

    OpenAIRE

    Sánchez Bocanegra, Carlos Luis

    2011-01-01

Rare Disease Video Portal (RD Video) is a web portal that contains videos from YouTube, including all details from 12 YouTube channels.

  6. Medical students' perceptions of video-linked lectures and video-streaming

    Directory of Open Access Journals (Sweden)

    Karen Mattick

    2010-12-01

Full Text Available Video-linked lectures allow healthcare students across multiple sites, and between university and hospital bases, to come together for the purposes of shared teaching. Recording and streaming video-linked lectures allows students to view them at a later date and provides an additional resource to support student learning. As part of a UK Higher Education Academy-funded Pathfinder project, this study explored medical students' perceptions of video-linked lectures and video-streaming, and their impact on learning. The methodology involved semi-structured interviews with 20 undergraduate medical students across four sites and five year groups. Several key themes emerged from the analysis. Students generally preferred live lectures at the home site and saw interaction between sites as a major challenge. Students reported that their attendance at live lectures was not affected by the availability of streamed lectures and tended to be influenced more by the topic and speaker than the technical arrangements. These findings will inform other educators interested in employing similar video technologies in their teaching. Keywords: video-linked lecture; video-streaming; student perceptions; decisionmaking; cross-campus teaching.

  7. Modifying the affective behavior of preschoolers with autism using in-vivo or video modeling and reinforcement contingencies.

    Science.gov (United States)

    Gena, Angeliki; Couloura, Sophia; Kymissis, Effie

    2005-10-01

    The purpose of this study was to modify the affective behavior of three preschoolers with autism in home settings and in the context of play activities, and to compare the effects of video modeling to the effects of in-vivo modeling in teaching these children contextually appropriate affective responses. A multiple-baseline design across subjects, with a return to baseline condition, was used to assess the effects of treatment that consisted of reinforcement, video modeling, in-vivo modeling, and prompting. During training trials, reinforcement in the form of verbal praise and tokens was delivered contingent upon appropriate affective responding. Error correction procedures differed for each treatment condition. In the in-vivo modeling condition, the therapist used modeling and verbal prompting. In the video modeling condition, video segments of a peer modeling the correct response and verbal prompting by the therapist were used as corrective procedures. Participants received treatment in three categories of affective behavior--sympathy, appreciation, and disapproval--and were presented with a total of 140 different scenarios. The study demonstrated that both treatments--video modeling and in-vivo modeling--systematically increased appropriate affective responding in all response categories for the three participants. Additionally, treatment effects generalized across responses to untrained scenarios, the child's mother, new therapists, and time.

  8. The Aesthetics of the Ambient Video Experience

    Directory of Open Access Journals (Sweden)

    Jim Bizzocchi

    2008-01-01

    Full Text Available Ambient Video is an emergent cultural phenomenon, with roots that go deeply into the history of experimental film and video art. Ambient Video, like Brian Eno's ambient music, is video that "must be as easy to ignore as notice" [9]. This minimalist description conceals the formidable aesthetic challenge that faces this new form. Ambient video art works will hang on the walls of our living rooms, corporate offices, and public spaces. They will play in the background of our lives, living video paintings framed by the new generation of elegant, high-resolution flat-panel display units. However, they cannot command attention like a film or television show. They will patiently play in the background of our lives, yet they must always be ready to justify our attention in any given moment. In this capacity, ambient video works need to be equally proficient at rewarding a fleeting glance, a more direct look, or a longer contemplative gaze. This paper connects a series of threads that collectively illuminate the aesthetics of this emergent form: its history as a popular culture phenomenon, its more substantive artistic roots in avant-garde cinema and video art, its relationship to new technologies, the analysis of the viewer's conditions of reception, and the work of current artists who practice within this form.

  9. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

Several attempts have been made in the past few years to develop and implement an automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structure and cleaning the edges of the cortical grey matter. This method shows a promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, further improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired from different manufacturers.

  10. NEI You Tube Videos: Amblyopia

    Medline Plus


  11. Blood Sampling in Newborns: A Systematic Review of YouTube Videos.

    Science.gov (United States)

    Bueno, Mariana; Nishi, Érika Tihemi; Costa, Taine; Freire, Laís Machado; Harrison, Denise

The objective of this study was to conduct a systematic review of YouTube videos showing neonatal blood sampling, and to evaluate the pain management and comforting interventions used. Inclusion criteria were: consumer- or professional-produced videos showing human newborns undergoing heel lancing or venipuncture for blood sampling; videos showing the entire blood sampling procedure (from the first attempt or puncture to the application of a cotton ball or bandage); publication date prior to October 2014; Portuguese titles; and available audio. Search terms included "neonate," "newborn," "neonatal screening," and "blood collection." Two reviewers independently screened the videos and extracted the data. A total of 13 140 videos were retrieved, of which 1354 were further evaluated, and 68 were included. Videos were mostly consumer produced (97%). Heel lancing was performed in 62 (91%). Forty-nine infants (72%) were held by an adult during the procedure. Median pain score immediately after puncture was 4 (interquartile range [IQR] = 0-5), and median length of cry throughout the procedure was 61 seconds (IQR = 88). Posted YouTube videos in Portuguese of newborns undergoing blood collection demonstrate minimal use of pain treatment, with breastfeeding (3%) and swaddling (1.5%) rarely implemented, and maximal distress during procedures. Knowledge translation strategies are needed to implement effective measures for neonatal pain relief and comfort.

  12. Identifying hidden voice and video streams

    Science.gov (United States)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

Given the rising popularity of voice and video services over the Internet, accurately identifying voice and video traffic that traverse their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for the one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, that detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features, and subsequently constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
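The fingerprinting idea can be sketched as follows: a flow is turned into a bytes-per-interval time series, its power spectral density is estimated with Welch's method, and normalized PSDs are matched against fingerprints learned during training (here by a simple inner product). The interval rate, series length, and matching rule are assumptions; the actual VVS-I features and classifier are richer.

```python
import numpy as np
from scipy.signal import welch

def psd_fingerprint(bytes_per_interval, fs=100.0):
    """Normalized PSD of a flow's bytes-per-interval series (fs = intervals per second)."""
    _, pxx = welch(bytes_per_interval, fs=fs, nperseg=128)
    return pxx / (np.linalg.norm(pxx) + 1e-12)

def match(fingerprint, references):
    """Return the best-matching reference label by inner product of fingerprints."""
    scores = {label: float(np.dot(fingerprint, ref)) for label, ref in references.items()}
    return max(scores, key=scores.get), scores

# Toy usage: a "voice-like" flow with a strong 20 Hz packetization rhythm vs. a bulk transfer.
rng = np.random.default_rng(0)
t = np.arange(2048) / 100.0
voice = 200 + 150 * (np.sin(2 * np.pi * 20 * t) > 0) + rng.normal(0, 10, t.size)
bulk = rng.uniform(1200, 1500, t.size)
refs = {"voice": psd_fingerprint(voice), "bulk": psd_fingerprint(bulk)}
unknown = 200 + 150 * (np.sin(2 * np.pi * 20 * t + 0.3) > 0) + rng.normal(0, 12, t.size)
print(match(psd_fingerprint(unknown), refs)[0])   # expected: "voice"
```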

  13. Effect of video server topology on contingency capacity requirements

    Science.gov (United States)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
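The abstract's telephone-system blocking model is assumed here to be the classical Erlang-B formula; under that assumption, the sketch below contrasts one monolithic server holding all streams with the same capacity partitioned into independent servers, which illustrates the efficiency loss from partitioning that the paper quantifies.

```python
def erlang_b(servers, offered_load):
    """Blocking probability for `servers` stream slots and an offered load in Erlangs,
    via the numerically stable recursion B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

offered = 900.0                         # average number of streams requested (Erlangs)
monolithic = erlang_b(1000, offered)    # one server image with capacity for 1000 streams
# Same 1000-stream capacity split into 4 independent partitions, each seeing 1/4 of the load.
partitioned = erlang_b(250, offered / 4)
print(f"monolithic server blocking:  {monolithic:.3e}")
print(f"partitioned server blocking: {partitioned:.3e}")
```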

  14. Guerrilla Video: A New Protocol for Producing Classroom Video

    Science.gov (United States)

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  15. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high death rate from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach based on the fluorescence characteristics of scorpions under ultraviolet (UV) light for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination and to separate colour space channels. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are proposed in this work, namely, a simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
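Both segmentation routes mentioned in the abstract are easy to sketch: thresholding the green channel at its average intensity, and clustering green-channel intensities into two classes with K-means. The synthetic "UV image" and the use of scikit-learn's KMeans are illustrative stand-ins for the fluorescing scorpion images.

```python
import numpy as np
from sklearn.cluster import KMeans

def average_threshold_mask(rgb):
    """Simple average segmentation: threshold the green channel at its mean intensity."""
    green = rgb[..., 1].astype(np.float64)
    return green > green.mean()

def kmeans_mask(rgb, seed=0):
    """Two-class K-means on green-channel intensities; the brighter cluster is 'scorpion'."""
    green = rgb[..., 1].astype(np.float64).reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(green)
    bright = np.argmax(km.cluster_centers_.ravel())
    return (km.labels_ == bright).reshape(rgb.shape[:2])

# Toy UV image: dark background with one bright (fluorescing) blob in the green channel.
rng = np.random.default_rng(0)
img = rng.integers(0, 40, (100, 100, 3), dtype=np.uint8)
img[40:60, 30:55, 1] = rng.integers(180, 255, (20, 25), dtype=np.uint8)
m1, m2 = average_threshold_mask(img), kmeans_mask(img)
print("pixels agreeing between the two masks: %.1f%%" % (100 * (m1 == m2).mean()))
```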

  16. MO-DE-BRA-01: Flipped Physics Courses Within a Radiologic Technologist Program: Video Production and Long Term Outcomes

    International Nuclear Information System (INIS)

    Oshiro, T; Donaghy, M; Slechta, A

    2016-01-01

    Purpose: To determine if the flipped class format has an effect on examination results for a radiologic technologist (RT) program and discuss benefits from creating video resources. Methods: From 2001–2015, students had taken both a radiological physics and quality control (QC) class as a part of their didactic training. In 2005/2006, the creation of videos of didactic lectures and QC test demonstrations allowed for a flip where content was studied at home while exercises and reviews were done in-class. Final examinations were retrospectively reviewed from this timeframe. 12 multiple choice physics questions (MCP) and 5 short answer QC questions (SAQC) were common to pre and post flip exams. The RT program’s ARRT exam scores were also obtained and compared to national averages. Results: In total, 36 lecture videos and 65 quality control videos were created for the flipped content. Data was ∼2.4GB and distributed to students via USB or CD media. For MCP questions, scores improved by 7.9% with the flipped format and significance (Student’s t-test, p<0.05) was found for 3 of the 12 questions. SAQC questions showed improvement by 14.6% and significance was found for 2 of the 5 questions. Student enrollment increased from ∼14 (2001–2004) to ∼23 students (2005–15). Content was continuously added post-flip due to the efficiency of delivery. The QC class in 2003 covered 45 test setups in-class while 65 were covered with video segments in 2014. Flipped materials are currently being repurposed. In 2015, this video content was restructured into an ARRT exam review guide and in 2016, the content was reorganized for fluoroscopy training for physicians. Conclusion: We believe that flipped classes can improve efficiency of content delivery and improve student performance even with an increase in class size. This format allows for flexibility in learning as well as re-use in multiple applications.

  17. MO-DE-BRA-01: Flipped Physics Courses Within a Radiologic Technologist Program: Video Production and Long Term Outcomes

    Energy Technology Data Exchange (ETDEWEB)

    Oshiro, T [UCLA, Los Angeles, CA (United States); Donaghy, M [California State University, Northridge, Northridge, CA (United States); Slechta, A [California State University, Northridge, Northridge, CA (United States)

    2016-06-15

    Purpose: To determine if the flipped class format has an effect on examination results for a radiologic technologist (RT) program and discuss benefits from creating video resources. Methods: From 2001–2015, students had taken both a radiological physics and quality control (QC) class as a part of their didactic training. In 2005/2006, the creation of videos of didactic lectures and QC test demonstrations allowed for a flip where content was studied at home while exercises and reviews were done in-class. Final examinations were retrospectively reviewed from this timeframe. 12 multiple choice physics questions (MCP) and 5 short answer QC questions (SAQC) were common to pre and post flip exams. The RT program’s ARRT exam scores were also obtained and compared to national averages. Results: In total, 36 lecture videos and 65 quality control videos were created for the flipped content. Data was ∼2.4GB and distributed to students via USB or CD media. For MCP questions, scores improved by 7.9% with the flipped format and significance (Student’s t-test, p<0.05) was found for 3 of the 12 questions. SAQC questions showed improvement by 14.6% and significance was found for 2 of the 5 questions. Student enrollment increased from ∼14 (2001–2004) to ∼23 students (2005–15). Content was continuously added post-flip due to the efficiency of delivery. The QC class in 2003 covered 45 test setups in-class while 65 were covered with video segments in 2014. Flipped materials are currently being repurposed. In 2015, this video content was restructured into an ARRT exam review guide and in 2016, the content was reorganized for fluoroscopy training for physicians. Conclusion: We believe that flipped classes can improve efficiency of content delivery and improve student performance even with an increase in class size. This format allows for flexibility in learning as well as re-use in multiple applications.

  18. A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery.

    Science.gov (United States)

    Ahmidi, Narges; Tao, Lingling; Sefati, Shahin; Gao, Yixin; Lea, Colin; Haro, Benjamin Bejar; Zappella, Luca; Khudanpur, Sanjeev; Vidal, Rene; Hager, Gregory D

    2017-09-01

    State-of-the-art techniques for surgical data analysis report promising results for automated skill assessment and action recognition. The contributions of many of these techniques, however, are limited to study-specific data and validation metrics, making assessment of progress across the field extremely challenging. In this paper, we address two major problems for surgical data analysis: First, lack of uniform-shared datasets and benchmarks, and second, lack of consistent validation processes. We address the former by presenting the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a public dataset that we have created to support comparative research benchmarking. JIGSAWS contains synchronized video and kinematic data from multiple performances of robotic surgical tasks by operators of varying skill. We address the latter by presenting a well-documented evaluation methodology and reporting results for six techniques for automated segmentation and classification of time-series data on JIGSAWS. These techniques comprise four temporal approaches for joint segmentation and classification: hidden Markov model, sparse hidden Markov model (HMM), Markov semi-Markov conditional random field, and skip-chain conditional random field; and two feature-based ones that aim to classify fixed segments: bag of spatiotemporal features and linear dynamical systems. Most methods recognize gesture activities with approximately 80% overall accuracy under both leave-one-super-trial-out and leave-one-user-out cross-validation settings. Current methods show promising results on this shared dataset, but room for significant progress remains, particularly for consistent prediction of gesture activities across different surgeons. The results reported in this paper provide the first systematic and uniform evaluation of surgical activity recognition techniques on the benchmark database.

  19. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

Full Text Available In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality when scaling large datasets from low-resolution frames to high-resolution frames. We compare our outcomes with multiple existing algorithms. The extensive results of the proposed technique, RemCNN (Reconstruction error minimization Convolutional Neural Network), show that our model outperforms existing approaches such as bicubic, bilinear, and MCResNet and provides better reconstructed motion images and video frames. The experimental results show that our average PSNR is 47.80474 for upscale-2, 41.70209 for upscale-3 and 36.24503 for upscale-4 on the Myanmar dataset, which is very high compared to other existing techniques. These results prove the high efficiency and better performance of our proposed real-time video scaling model based on a convolutional neural network architecture.
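The abstract describes RemCNN only at a high level, so the sketch below shows a generic three-layer, SRCNN-style network of the kind such scalers build on, together with the PSNR measure used in the evaluation. The layer sizes, the bicubic pre-upscaling, and the untrained forward pass are assumptions, not the actual RemCNN architecture or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNLike(nn.Module):
    """Three-layer super-resolution CNN applied to a bicubically upscaled luminance frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

def psnr(reference, reconstruction, peak=1.0):
    mse = torch.mean((reference - reconstruction) ** 2)
    return 10 * torch.log10(peak ** 2 / mse)

# Toy usage: upscale a low-resolution frame by a factor of 2 and refine it with the network.
model = SRCNNLike()
low_res = torch.rand(1, 1, 90, 120)                          # one grayscale video frame
upscaled = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
refined = model(upscaled)                                    # untrained: illustrates data flow only
print(refined.shape, float(psnr(upscaled, refined)))
```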

  20. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  1. Multidimensional Brain MRI segmentation using graph cuts

    International Nuclear Information System (INIS)

    Lecoeur, Jeremy

    2010-01-01

This thesis deals with the segmentation of multimodal brain MRIs by the graph cuts method. First, we propose a method that utilizes three MRI modalities by merging them. The border information given by the spectral gradient is then balanced against region information, given by the seeds selected by the user, using a graph cut algorithm. We then propose three enhancements of this method. The first consists in finding an optimal spectral space, because the spectral gradient is designed for natural images and is therefore inadequate for multimodal medical images; this results in a learning-based segmentation method. We then explore the automation of the graph cut method. Here, the various pieces of information usually given by the user are inferred from a robust expectation-maximization algorithm. We show the performance of these two enhanced versions on multiple sclerosis lesions. Finally, we integrate atlases for the automatic segmentation of deep brain structures. These three new techniques show the adaptability of our method to various problems. Our segmentation methods outperform most current techniques in terms of computation time and segmentation accuracy. (authors)
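A toy illustration of the seeded graph-cut formulation on a tiny grayscale image: terminal capacities encode how well each pixel matches the user seeds (the region term), neighbouring pixels are linked with capacities that fall off with intensity difference (a stand-in for the spectral-gradient border term), and the minimum s-t cut gives the segmentation. The capacity formulas and the networkx min-cut solver are illustrative; the thesis works on multimodal MRI volumes with learned gradients and EM-inferred seeds.

```python
import numpy as np
import networkx as nx

def graph_cut_segment(img, obj_seeds, bkg_seeds, lam=1.0, sigma=0.1):
    """Seeded binary segmentation of a 2-D image via a minimum s-t cut."""
    H, W = img.shape
    mu_obj = np.mean([img[p] for p in obj_seeds])
    mu_bkg = np.mean([img[p] for p in bkg_seeds])
    G = nx.DiGraph()
    big = 1e6
    for y in range(H):
        for x in range(W):
            # Region term: the source link carries the penalty for a background label,
            # the sink link the penalty for an object label (hard constraints at seeds).
            cap_src = big if (y, x) in obj_seeds else lam * (img[y, x] - mu_bkg) ** 2
            cap_snk = big if (y, x) in bkg_seeds else lam * (img[y, x] - mu_obj) ** 2
            G.add_edge("S", (y, x), capacity=cap_src)
            G.add_edge((y, x), "T", capacity=cap_snk)
            # Boundary term: neighbours with similar intensity are expensive to separate.
            for yy, xx in ((y + 1, x), (y, x + 1)):
                if yy < H and xx < W:
                    w = np.exp(-((img[y, x] - img[yy, xx]) ** 2) / (2 * sigma ** 2))
                    G.add_edge((y, x), (yy, xx), capacity=w)
                    G.add_edge((yy, xx), (y, x), capacity=w)
    _, (source_side, _) = nx.minimum_cut(G, "S", "T")
    mask = np.zeros((H, W), dtype=bool)
    for node in source_side:
        if node != "S":
            mask[node] = True
    return mask

# Toy usage: a bright square on a dark background with one seed in each region.
img = np.zeros((20, 20)); img[6:14, 6:14] = 1.0
img += np.random.default_rng(0).normal(0, 0.05, img.shape)
mask = graph_cut_segment(img, obj_seeds={(10, 10)}, bkg_seeds={(1, 1)})
print("object pixels found:", int(mask.sum()))   # expected to be close to 64
```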

  2. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-12-01

When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
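A software sketch of the trigger idea, with a plain frame-difference score standing in for the paper's parallel fuzzy-logic state machine: frames stream through a fixed-length pre-trigger ring buffer, and when the change between consecutive frames exceeds a threshold, the buffered pre-trigger frames plus a run of post-trigger frames are archived while the redundant static scene is discarded. Buffer sizes and the threshold are illustrative.

```python
import numpy as np
from collections import deque

def capture_event(frame_stream, pre=30, post=60, threshold=8.0):
    """Return the pre- and post-trigger frames around the first detected video event."""
    ring = deque(maxlen=pre)                 # pre-trigger storage
    prev = None
    for frame in frame_stream:
        if prev is not None:
            score = np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16)))
            if score > threshold:            # activity begins: fire the trigger
                kept = list(ring) + [frame]
                for _ in range(post):        # post-trigger storage
                    nxt = next(frame_stream, None)
                    if nxt is None:
                        break
                    kept.append(nxt)
                return kept
        ring.append(frame)
        prev = frame
    return []                                # no event: nothing stored

# Toy usage: 200 static frames, then an "event" of 40 brighter frames.
def stream():
    rng = np.random.default_rng(0)
    for _ in range(200):                     # the static scene
        yield rng.normal(100, 2, (120, 160)).astype(np.uint8)
    for _ in range(40):                      # the event: the scene brightens
        yield rng.normal(180, 2, (120, 160)).astype(np.uint8)

frames = capture_event(stream())
print("frames archived around the event:", len(frames))   # roughly pre + 1 + post, capped by the stream
```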

  3. Subjective video quality comparison of HDTV monitors

    Science.gov (United States)

    Seo, G.; Lim, C.; Lee, S.; Lee, C.

    2009-01-01

HDTV broadcasting services have become widely available. Furthermore, HDTV services are important in upcoming IPTV services, where quality monitoring becomes a particular issue. Consequently, there have been great efforts to develop video quality measurement methods for HDTV. On the other hand, most HDTV programs will be watched on digital TV monitors, which include LCD and PDP monitors. In general, LCD and PDP monitors have different color characteristics and response times. Furthermore, most commercial TV monitors include post-processing to improve video quality. In this paper, we compare the subjective video quality of several commercial HDTV monitors to investigate the impact of monitor type on perceptual video quality. We used the ACR method as the subjective testing method. Experimental results show that the correlation coefficients among the HDTV monitors are reasonably high. However, for some video sequences and impairments, some differences in subjective scores were observed.

  4. Radiation between segments of the seated human body

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft

    2002-01-01

Detailed radiation properties for a thermal manikin were predicted numerically. The view factors between individual body-segments and between the body-segments and the outer surfaces were tabulated. On an integral basis, the findings compared well to other studies, and the results showed that situations exist for which radiation between individual body segments is important.

  5. Video game playing and its relations with aggressive and prosocial behaviour.

    Science.gov (United States)

    Wiegman, O; van Schie, E G

    1998-09-01

    In this study of 278 children from the seventh and eighth grade of five elementary schools in Enschede, The Netherlands, the relationship between the amount of time children spent on playing video games and aggressive as well as prosocial behaviour was investigated. In addition, the relationship between the preference for aggressive video games and aggressive and prosocial behaviour was studied. No significant relationship was found between video game use in general and aggressive behaviour, but a significant negative relationship with prosocial behaviour was supported. However, separate analyses for boys and girls did not reveal this relationship. More consistent results were found for the preference for aggressive video games: children, especially boys, who preferred aggressive video games were more aggressive and showed less prosocial behaviour than those with a low preference for these games. Further analyses showed that children who preferred playing aggressive video games tended to be less intelligent.

  6. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns using an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
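
    The following sketch illustrates the idea of genetic control over a segmentation threshold. It is not the authors' algorithm: the fitness here is plain between-class variance of the thresholded thermal frame, standing in for the paper's first-order statistical fitness, and the GA parameters are arbitrary.

    ```python
    import numpy as np

    def fitness(image, t):
        """Between-class variance of the split at threshold t (stand-in fitness)."""
        fg, bg = image[image >= t], image[image < t]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w_fg, w_bg = fg.size / image.size, bg.size / image.size
        return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

    def genetic_threshold(image, pop_size=20, generations=40, mutation=5.0, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = float(image.min()), float(image.max())
        population = rng.uniform(lo, hi, pop_size)
        for _ in range(generations):
            scores = np.array([fitness(image, t) for t in population])
            parents = population[np.argsort(scores)[-pop_size // 2:]]       # selection
            children = rng.choice(parents, pop_size - parents.size)
            children = np.clip(children + rng.normal(0.0, mutation, children.size), lo, hi)  # mutation
            population = np.concatenate([parents, children])
        return max(population, key=lambda t: fitness(image, t))

    # usage: crack_mask = thermal_frame >= genetic_threshold(thermal_frame)
    ```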

  7. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) method. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps the features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show that the quality scores computed by the proposed method are highly correlated with the subjective assessment.
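
    A minimal sketch of the feature-to-score mapping step is shown below, assuming scikit-learn's ElasticNet and hypothetical feature/score files; in the paper the features are derived from transform coefficients and distortion estimates of Intra frames.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet
    from sklearn.model_selection import train_test_split

    # hypothetical files: per-sequence feature vectors and subjective scores
    X = np.load("hevc_intra_features.npy")     # shape (n_sequences, n_features)
    y = np.load("subjective_mos.npy")          # mean opinion scores

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_train, y_train)

    predicted = model.predict(X_test)
    print("Pearson correlation:", np.corrcoef(predicted, y_test)[0, 1])
    ```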

  8. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Full Text Available Video sharing on social clouds is popular among users around the world. High-Definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to the client are big problems for service providers. Social clouds compress the videos to save storage and stream them over slow networks to provide quality of service (QoS). Compression decreases the quality compared to the original video, and parameters are changed during online play as well as after download. Degradation of video quality due to compression decreases the quality of experience (QoE) level of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload and play videos online for users. The QoE was recorded using a questionnaire given to users to report their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than Tumblr. However, Facebook gives a better quality of compressed videos compared to Twitter. Therefore, users assigned lower ratings to Twitter for online video quality than to Tumblr, which provided high-quality online play of videos with less compression.

  9. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    Science.gov (United States)

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancement of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.
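
    As a rough illustration of multi-frame fusion (not the paper's robust maximum a-posteriori scheme with photometric registration), the sketch below registers frames by phase correlation, upsamples them, and averages.

    ```python
    import numpy as np
    from skimage.registration import phase_cross_correlation
    from skimage.transform import rescale
    from scipy.ndimage import shift as nd_shift

    def naive_super_resolution(frames, factor=2):
        """Register all frames to the first one, upsample, and average (shift-and-add)."""
        reference = frames[0]
        accumulated = np.zeros(np.array(reference.shape) * factor, dtype=float)
        for frame in frames:
            offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
            aligned = nd_shift(frame.astype(float), offset)     # compensate eye motion
            accumulated += rescale(aligned, factor, order=3)    # bicubic upsampling
        return accumulated / len(frames)
    ```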

  10. Playing a first-person shooter video game induces neuroplastic change.

    Science.gov (United States)

    Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian

    2012-06-01

    Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.

  11. An Analysis of Video Navigation Behavior for Web Leisure

    Directory of Open Access Journals (Sweden)

    Ying-Han Chang

    2012-12-01

    Full Text Available People nowadays put much emphasis on leisure activities, and web video has gradually become one of the main sources of popular leisure. This article introduces the related concepts of leisure and navigation behavior as well as some recent research topics. Moreover, using YouTube as an experimental setting, the authors invited experienced web video users and conducted an empirical study on their navigation of web videos for leisure purposes. The study used questionnaires, navigation logs, diaries, and interviews to collect data. Major results show: the subjects watched a variety of video content on the web, from both traditional media and user-generated video; these videos can meet leisure needs covering both broad and personal interests; during the navigation process, each subject focused intently on video leisure and was willing to explore unknown videos; however, within a limited amount of time for leisure, balancing leisure and rest becomes an issue for achieving real relaxation, which is worthy of further attention. [Article content in Chinese]

  12. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as the search indexing feature. As the deployment of video cameras has increased greatly in recent years, face recognition is a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing real-life videos, and it is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
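
    A simplified sketch of subspace-based reconstruction of occluded faces is shown below; plain PCA from scikit-learn stands in for the fuzzy PCA (FPCA) model of the paper, and the alternating-projection fill-in is an illustrative choice rather than the authors' procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def train_subspace(face_vectors, n_components=50):
        """face_vectors: (n_faces, n_pixels) array of aligned, unoccluded training faces."""
        return PCA(n_components=n_components).fit(face_vectors)

    def reconstruct(pca, occluded_face, known_mask, iterations=10):
        """Fill occluded pixels by alternating projection onto the face subspace."""
        estimate = occluded_face.astype(float).copy()
        estimate[~known_mask] = pca.mean_[~known_mask]          # initialise missing pixels
        for _ in range(iterations):
            coeffs = pca.transform(estimate[None, :])           # project onto the subspace
            rebuilt = pca.inverse_transform(coeffs)[0]
            estimate[~known_mask] = rebuilt[~known_mask]        # keep observed pixels fixed
        return estimate
    ```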

  13. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop in which 25 educators, as part of a digital fabrication and design program, were able to critically reflect on their teaching practice.

  14. Process Segmentation Typology in Czech Companies

    Directory of Open Access Journals (Sweden)

    Tucek David

    2016-03-01

    Full Text Available This article describes process segmentation typology during business process management implementation in Czech companies. Process typology is important for a manager’s overview of process orientation as well as for a manager’s general understanding of business process management. This article provides insight into a process-oriented organizational structure. The first part analyzes process segmentation typology itself as well as some original results of quantitative research evaluating process segmentation typology in the specific context of Czech company strategies. Widespread data collection was carried out in 2006 and 2013. The analysis of this data showed that managers have more options regarding process segmentation and its selection. In terms of practicality and ease of use, the most frequently used method of process segmentation (managerial, main, and supportive) stems directly from the requirements of ISO 9001. Because of ISO 9001:2015, managers must now apply risk planning in relation to the selection of processes that are subjected to process management activities. It is for this fundamental reason that this article focuses on process segmentation typology.

  15. Games people play: How video games improve probabilistic learning.

    Science.gov (United States)

    Schenk, Sabrina; Lech, Robert K; Suchan, Boris

    2017-09-29

    Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance of video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed for video gamers stronger activation clusters in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas that are related to attention and executive functions as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the usage of declarative knowledge as well as hippocampal involvement and enhances overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance, independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is a joint utilization of different data modalities, including those captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users which fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.

  17. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
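
    The block-level step can be illustrated with a small worked example: fit PSNR as a quadratic function of bit-depth and take the stationary point where the derivative vanishes. The measurements below are invented for illustration; in the paper the model parameters are fitted on training data and updated online.

    ```python
    import numpy as np

    bit_depths = np.array([4, 5, 6, 7, 8])
    block_psnr = np.array([28.1, 31.9, 34.6, 36.2, 36.9])   # hypothetical measurements

    c2, c1, c0 = np.polyfit(bit_depths, block_psnr, deg=2)  # PSNR(b) ~ c2*b^2 + c1*b + c0
    optimal_bit_depth = -c1 / (2.0 * c2)                    # set d PSNR / d b = 0
    print("optimal quantization bit-depth:", round(optimal_bit_depth, 2))
    ```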

  18. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    Directory of Open Access Journals (Sweden)

    Carolina Wählby

    2002-01-01

    Full Text Available Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm provides a lot more challenge. A new combination of image analysis algorithms for segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step, a general segmentation and merging step followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape descriptive features. Objects that have features that differ from those of correctly segmented single cells can be further processed by a splitting step. By statistical analysis we therefore get a feedback system for separation of clustered cells. After the segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, the algorithm is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation as compared to manual segmentation.
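
    The segment-measure-split feedback loop can be sketched with scikit-image as below; the shape-descriptor thresholds are placeholders rather than the statistically learned values described in the paper.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gaussian, threshold_otsu
    from skimage.measure import label, regionprops
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    def segment_cells(image, max_area=2500, min_solidity=0.85):
        smoothed = gaussian(image, sigma=2)                    # pre-processing step
        labels = label(smoothed > threshold_otsu(smoothed))    # general segmentation
        for region in regionprops(labels):
            # quality measurement: odd shape descriptors suggest a cell cluster
            if region.area > max_area or region.solidity < min_solidity:
                mask = labels == region.label
                distance = ndi.distance_transform_edt(mask)
                peaks = peak_local_max(distance, labels=mask.astype(int), min_distance=10)
                if len(peaks) < 2:
                    continue
                markers = np.zeros_like(labels)
                markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
                split = watershed(-distance, markers, mask=mask)   # splitting step
                offset = labels.max()
                labels[mask] = 0
                labels[split > 0] = split[split > 0] + offset
        return labels
    ```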

  19. Privacy enabling technology for video surveillance

    Science.gov (United States)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We address more specifically the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and negligible computational complexity increase. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or 2G/3G mobile phone networks. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
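
    The reversible sign-flipping idea can be sketched as follows. The paper scrambles selected Motion JPEG 2000 transform coefficients; here plain 8x8 DCT blocks and a seeded pseudo-random generator (acting as the shared key) illustrate the same principle.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def scramble_block(block, rng):
        coeffs = dctn(block, norm="ortho")
        signs = rng.choice([-1.0, 1.0], size=coeffs.shape)   # key-dependent sign pattern
        return idctn(coeffs * signs, norm="ortho"), signs

    def descramble_block(scrambled, signs):
        coeffs = dctn(scrambled, norm="ortho")
        return idctn(coeffs * signs, norm="ortho")           # flipping twice restores the signs

    rng = np.random.default_rng(seed=1234)                   # the seed plays the role of a key
    roi_block = np.random.rand(8, 8)                         # an 8x8 region-of-interest block
    protected, signs = scramble_block(roi_block, rng)
    restored = descramble_block(protected, signs)
    assert np.allclose(restored, roi_block)
    ```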

  20. Transesophageal Echocardiography-Guided Epicardial Left Ventricular Lead Placement by Video-Assisted Thoracoscopic Surgery in Nonresponders to Biventricular Pacing and Previous Chest Surgery.

    Science.gov (United States)

    Schroeder, Carsten; Chung, Jane M; Mackall, Judith A; Cakulev, Ivan T; Patel, Aaron; Patel, Sunny J; Hoit, Brian D; Sahadevan, Jayakumar

    2018-06-14

    The aim of the study was to study the feasibility, safety, and efficacy of transesophageal echocardiography-guided intraoperative left ventricular lead placement via a video-assisted thoracoscopic surgery approach in patients with failed conventional biventricular pacing. Twelve patients who could not have the left ventricular lead placed conventionally underwent epicardial left ventricular lead placement by video-assisted thoracoscopic surgery. Eight patients had previous chest surgery (66%). Operative positioning was a modified far lateral supine exposure with 30-degree bed tilt, allowing for groin and sternal access. To determine the optimal left ventricular location for lead placement, the left ventricular surface was divided arbitrarily into nine segments. These segments were transpericardially paced using a hand-held malleable pacing probe identifying the optimal site verified by transesophageal echocardiography. The pacing leads were screwed into position via a limited pericardiotomy. The video-assisted thoracoscopic surgery approach was successful in all patients. Biventricular pacing was achieved in all patients and all reported symptomatic benefit with reduction in New York Heart Association class from III to I-II (P = 0.016). Baseline ejection fraction was 23 ± 3%; within 1-year follow-up, the ejection fraction increased to 32 ± 10% (P = 0.05). The mean follow-up was 566 days. The median length of hospital stay was 7 days with chest tube removal between postoperative days 2 and 5. In patients who are nonresponders to conventional biventricular pacing, intraoperative left ventricular lead placement using anatomical and functional characteristics via a video-assisted thoracoscopic surgery approach is effective in improving heart failure symptoms. This optimized left ventricular lead placement is feasible and safe. Previous chest surgery is no longer an exclusion criterion for a video-assisted thoracoscopic surgery approach.

  1. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unmanned aerial vehicles (UAV)

    Science.gov (United States)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision applications, multiple object detection and tracking in real-time operation is an important research field that has gained a lot of attention in the last few years for finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object in a video, and representation of the object is the step that enables tracking it. Recognizing multiple objects is one of the challenging tasks in detecting multiple objects from a video sequence. Image registration has long been used as a basis for detecting moving objects: the registration technique finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling occlusion events, which can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using region adjacency graphs of visual appearance and geometric properties. Multigraph matching is then performed between the graph sequences, after which a matching region labelling is obtained by a proposed graph colouring algorithm that assigns a foreground label to each corresponding region. The proposed design is robust to unknown transformations, with a significant improvement over existing work on detecting multiple moving objects under real-time parameters.

  2. Pnrc2 regulates 3'UTR-mediated decay of segmentation clock-associated transcripts during zebrafish segmentation.

    Science.gov (United States)

    Gallagher, Thomas L; Tietz, Kiel T; Morrow, Zachary T; McCammon, Jasmine M; Goldrich, Michael L; Derr, Nicolas L; Amacher, Sharon L

    2017-09-01

    Vertebrate segmentation is controlled by the segmentation clock, a molecular oscillator that regulates gene expression and cycles rapidly. The expression of many genes oscillates during segmentation, including hairy/Enhancer of split-related (her or Hes) genes, which encode transcriptional repressors that auto-inhibit their own expression, and deltaC (dlc), which encodes a Notch ligand. We previously identified the tortuga (tor) locus in a zebrafish forward genetic screen for genes involved in cyclic transcript regulation and showed that cyclic transcripts accumulate post-splicing in tor mutants. Here we show that cyclic mRNA accumulation in tor mutants is due to loss of pnrc2, which encodes a proline-rich nuclear receptor co-activator implicated in mRNA decay. Using an inducible in vivo reporter system to analyze transcript stability, we find that the her1 3'UTR confers Pnrc2-dependent instability to a heterologous transcript. her1 mRNA decay is Dicer-independent and likely employs a Pnrc2-Upf1-containing mRNA decay complex. Surprisingly, despite accumulation of cyclic transcripts in pnrc2-deficient embryos, we find that cyclic protein is expressed normally. Overall, we show that Pnrc2 promotes 3'UTR-mediated decay of developmentally-regulated segmentation clock transcripts and we uncover an additional post-transcriptional regulatory layer that ensures oscillatory protein expression in the absence of cyclic mRNA decay. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to estimate accurately physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.

  4. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer of the scalable video, without the need for overhead. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  5. The Measurement of Intelligence in the XXI Century using Video Games.

    Science.gov (United States)

    Quiroga, M A; Román, F J; De La Fuente, J; Privado, J; Colom, R

    2016-12-05

    This paper reviews the use of video games for measuring intelligence differences and reports two studies analyzing the relationship between intelligence and performance on a leisure video game. In the first study, the main focus was to design an Intelligence Test using puzzles from the video game. Forty-seven young participants played "Professor Layton and the curious village"® for a maximum of 15 hours and completed a set of intelligence standardized tests. Results show that the time required for completing the game interacts with intelligence differences: the higher the intelligence, the lower the time (d = .91). Furthermore, a set of 41 puzzles showed excellent psychometric properties. The second study, done seven years later, confirmed the previous findings. We finally discuss the pros and cons of video games as tools for measuring cognitive abilities with commercial video games, underscoring that psychologists must develop their own intelligence video games and delineate their key features for the measurement devices of next generation.

  6. Research on compression performance of ultrahigh-definition videos

    Science.gov (United States)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The resulting storage and transmission problems cannot be solved only by expanding hard disk capacity and upgrading transmission devices. Based on the full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, by making use of the above idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, with super-resolution reconstruction technology, the reconstructed video quality is further improved. The experiments show that the proposed compression method for a single image (frame I) and for video sequences outperforms HEVC in a low bit rate environment.

  7. The Association Between Video Game Play and Cognitive Function: Does Gaming Platform Matter?

    Science.gov (United States)

    Huang, Vivian; Young, Michaelia; Fiocco, Alexandra J

    2017-11-01

    Despite consumer growth, few studies have evaluated the cognitive effects of gaming using mobile devices. This study examined the association between video game play platform and cognitive performance. Furthermore, the differential effect of video game genre (action versus nonaction) was explored. Sixty undergraduate students completed a video game experience questionnaire, and we divided them into three groups: mobile video game players (MVGPs), console/computer video game players (CVGPs), and nonvideo game players (NVGPs). Participants completed a cognitive battery to assess executive function, and learning and memory. Controlling for sex and ethnicity, analyses showed that frequent video game play is associated with enhanced executive function, but not learning and memory. MVGPs were significantly more accurate on working memory performances than NVGPs. Both MVGPs and CVGPs were similarly associated with enhanced cognitive function, suggesting that platform does not significantly determine the benefits of frequent video game play. Video game platform was found to differentially associate with preference for action video game genre and motivation for gaming. Exploratory analyses show that sex significantly affects frequent video game play, platform and genre preference, and cognitive function. This study represents a novel exploration of the relationship between mobile video game play and cognition and adds support to the cognitive benefits of frequent video game play.

  8. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and could successfully acquire color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry of the optical design that is found in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field of 20° in diameter to a rectangular field of 68° by 18°. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  9. Joint Segmentation and Shape Regularization with a Generalized Forward Backward Algorithm.

    Science.gov (United States)

    Stefanoiu, Anca; Weinmann, Andreas; Storath, Martin; Navab, Nassir; Baust, Maximilian

    2016-05-11

    This paper presents a method for the simultaneous segmentation and regularization of a series of shapes from a corresponding sequence of images. Such series arise as time series of 2D images when considering video data, or as stacks of 2D images obtained by slicewise tomographic reconstruction. We first derive a model where the regularization of the shape signal is achieved by a total variation prior on the shape manifold. The method employs a modified Kendall shape space to facilitate explicit computations together with the concept of Sobolev gradients. For the proposed model, we derive an efficient and computationally accessible splitting scheme. Using a generalized forward-backward approach, our algorithm treats the total variation atoms of the splitting via proximal mappings, whereas the data terms are dealt with by gradient descent. The potential of the proposed method is demonstrated on various application examples dealing with 3D data. We explain how to extend the proposed combined approach to shape fields which, for instance, arise in the context of 3D+t imaging modalities, and show an application in this setup as well.
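
    A heavily simplified, Euclidean illustration of forward-backward splitting is given below: the smooth data term is handled by a gradient (forward) step and the non-smooth term by a proximal (backward) step, here an l1 prox. The paper's shape-manifold total variation and Sobolev gradients are not reproduced by this sketch.

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def forward_backward(A, b, lam=0.1, iterations=300):
        """Forward-backward (ISTA) splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iterations):
            gradient = A.T @ (A @ x - b)                # forward (gradient) step on the data term
            x = soft_threshold(x - step * gradient, step * lam)   # backward (proximal) step
        return x

    # usage: x_hat = forward_backward(np.random.randn(40, 100), np.random.randn(40))
    ```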

  10. Videos and Animations for Vocabulary Learning: A Study on Difficult Words

    Science.gov (United States)

    Lin, Chih-cheng; Tseng, Yi-fang

    2012-01-01

    Studies on using still images and dynamic videos in multimedia annotations produced inconclusive results. A further examination, however, showed that the principle of using videos to explain complex concepts was not observed in the previous studies. This study was intended to investigate whether videos, compared with pictures, better assist…

  11. The Impact of Consumers' Attitude on Online Video Advertising Towards Product Branding

    OpenAIRE

    Gunawan, Lunardi

    2015-01-01

    Online video advertising is becoming much more common, and data show that companies' demand for online video advertising is increasing rapidly. However, there is a scarcity of research on the consumers' perspective on online video advertising, and there is still little evidence that online video advertising is beneficial for companies and the products being advertised. This research aims to find out the impact of consumers' attitude on online video advertising towards product branding.

  12. Data Partitioning Technique for Improved Video Prioritization

    Directory of Open Access Journals (Sweden)

    Ismail Amin Ali

    2017-07-01

    Full Text Available A compressed video bitstream can be partitioned according to the coding priority of the data, allowing prioritized wireless communication or selective dropping in a congested channel. Known as data partitioning in the H.264/Advanced Video Coding (AVC) codec, this paper introduces a further sub-partition of one of the H.264/AVC codec’s three data-partitions. Results show a 5 dB improvement in Peak Signal-to-Noise Ratio (PSNR) through this innovation. In particular, the data partition containing intra-coded residuals is sub-divided into data from: those macroblocks (MBs) naturally intra-coded, and those MBs forcibly inserted for non-periodic intra-refresh. Interactive user-to-user video streaming can benefit, as then HTTP adaptive streaming is inappropriate and the High Efficiency Video Coding (HEVC) codec is too energy demanding.

  13. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses the question of video codec enhancement for wireless video transmission of high definition video data taking into account constraints on memory and complexity. Starting from parameter adjustment for JPEG2000 compression algorithm used for wireless transmission and achieving...

  14. Feedback from Westinghouse experience on segmentation of reactor vessel internals - 59013

    International Nuclear Information System (INIS)

    Kreitman, Paul J.; Boucau, Joseph; Segerud, Per; Fallstroem, Stefan

    2012-01-01

    With more than 25 years of experience in the development of reactor vessel internals segmentation and packaging technology, Westinghouse has accumulated significant know-how in the reactor dismantling market. Building on tooling concepts and cutting methodologies developed decades ago for the successful removal of nuclear fuel from the damaged Three Mile Island Unit 2 reactor (TMI-2), Westinghouse has continuously improved its approach to internals segmentation and packaging by incorporating lessons learned and best practices into each successive project. Westinghouse has developed several concepts to dismantle reactor internals based on safe and reliable techniques, including plasma arc cutting (PAC), abrasive water-jet cutting (AWJC), metal disintegration machining (MDM), or mechanical cutting. Westinghouse has applied its technology to all types of reactors covering Pressurized Water Reactors (PWR's), Boiling Water Reactors (BWR's), Gas Cooled Reactors (GCR's) and sodium reactors. The primary challenges of a segmentation and packaging project are to separate the highly activated materials from the less-activated materials and package them into appropriate containers for disposal. Since space is almost always a limiting factor it is therefore important to plan and optimize the available room in the segmentation areas. The choice of the optimum cutting technology is important for a successful project implementation and depends on some specific constraints like disposal costs, project schedule, available areas or safety. Detailed 3-D modeling is the basis for tooling design and provides invaluable support in determining the optimum strategy for component cutting and disposal in waste containers, taking account of the radiological and packaging constraints. Westinghouse has also developed a variety of special handling tools, support fixtures, service bridges, water filtration systems, video-monitoring systems and customized rigging, all of which are required for a

  15. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    Full Text Available This paper presents effective quality-of-service renegotiating schemes for streaming video. The conventional network supporting quality of service generally allows a negotiation at a call setup. However, it is not efficient for the video application since the compressed video traffic is statistically nonstationary. Thus, we consider the network supporting quality-of-service renegotiations during the data transmission and study effective quality-of-service renegotiating schemes for streaming video. The token bucket model, whose parameters are token filling rate and token bucket size, is adopted for the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of compressed video traffic. In this paper, two renegotiating approaches, that is, fixed renegotiating interval case and variable renegotiating interval case, are examined. Finally, the experimental results are provided to show the performance of the proposed schemes.
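
    The per-interval parameter computation can be sketched as follows for the fixed renegotiating interval case: given the frame sizes of a window, a candidate token filling rate is chosen and the bucket size is set to cover the worst-case backlog. The mean-rate choice below is an assumption for illustration, not the paper's statistical procedure.

    ```python
    import numpy as np

    def bucket_size_for_rate(frame_sizes, rate):
        """Smallest bucket (in bits) that absorbs the burstiness at a given token rate."""
        backlog = np.cumsum(frame_sizes) - rate * np.arange(1, len(frame_sizes) + 1)
        return max(float(backlog.max()), 0.0)

    def renegotiation_schedule(frame_sizes, interval=250):
        """Fixed renegotiating interval case: one (rate, bucket size) pair per window."""
        schedule = []
        for start in range(0, len(frame_sizes), interval):
            window = np.asarray(frame_sizes[start:start + interval], dtype=float)
            rate = window.mean()                        # assumed choice of token filling rate
            schedule.append((rate, bucket_size_for_rate(window, rate)))
        return schedule
    ```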

  16. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form the boundaries of different objects at different scales. However, for remote sensing images that cover wide areas with complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from the remote sensing image. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. According to the different regions, which consist of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
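
    A simplified, pixel-level analogue of the NDVI-assisted idea is sketched below: compute NDVI and grow a region while neighbours stay within an NDVI similarity threshold. The paper instead uses the threshold to iteratively select segmentation scales per local area.

    ```python
    import numpy as np
    from collections import deque

    def ndvi(nir, red, eps=1e-6):
        return (nir - red) / (nir + red + eps)

    def grow_region(ndvi_map, seed, threshold=0.05):
        """Flood-fill the 4-connected region whose NDVI stays close to the seed value."""
        h, w = ndvi_map.shape
        visited = np.zeros((h, w), dtype=bool)
        visited[seed] = True
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w and not visited[rr, cc]
                        and abs(ndvi_map[rr, cc] - ndvi_map[seed]) < threshold):
                    visited[rr, cc] = True
                    queue.append((rr, cc))
        return visited          # boolean mask of one segmented object
    ```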

  17. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures, and it requires a precise segmentation result, which is necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour on the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with all of the colour maps can be done successfully, even for blurred and noisy images. Also, the size of the segmented abnormality region is reduced when compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest percentage of relative error (11.367%).
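
    A minimal sketch of the colour-map-then-cluster pipeline follows; a matplotlib colormap stands in for the study's colour maps, and the fuzzy C-means implementation is a compact textbook version rather than the authors' code.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def fuzzy_c_means(pixels, n_clusters=3, m=2.0, iterations=50, seed=0):
        rng = np.random.default_rng(seed)
        u = rng.dirichlet(np.ones(n_clusters), size=len(pixels))        # fuzzy memberships
        for _ in range(iterations):
            um = u ** m
            centers = (um.T @ pixels) / um.sum(axis=0)[:, None]         # weighted cluster centres
            dist = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2) + 1e-9
            u = 1.0 / dist ** (2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)                           # normalise memberships
        return u.argmax(axis=1)

    gray = np.random.rand(64, 64)                        # placeholder for a mammogram
    coloured = plt.get_cmap("Greens")(gray)[..., :3]     # apply a green colour map (keep RGB)
    labels = fuzzy_c_means(coloured.reshape(-1, 3))
    segmentation = labels.reshape(gray.shape)
    ```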

  18. Violent Video Games and Children’s Aggressive Behaviors

    Directory of Open Access Journals (Sweden)

    Luca Milani

    2015-08-01

    Full Text Available The literature provides some evidence that the use of violent video games increases the risk for young people to develop aggressive cognitions and even behaviors. We aimed to verify whether exposure to violent video games is linked to problems of aggression in a sample of Italian children. Four questionnaires were administered to 346 children between 7 and 14 years of age, attending primary and secondary schools in Northern Italy. Variables measured were externalization, quality of interpersonal relationships, aggression, quality of coping strategies, and parental stress. Participants who preferred violent games showed higher scores for externalization and aggression. The use of violent video games and age were linked to higher levels of aggression, coping strategies, and the habitual video game weekly consumption of participants. Our data confirm the role of violent video games as risk factors for problems of aggressive behavior and of externalization in childhood and early adolescence.

  19. Division-Free Multiquantization Scheme for Modern Video Codecs

    Directory of Open Access Journals (Sweden)

    Mousumi Das

    2012-01-01

    Full Text Available The current trend of digital convergence leads to the need for a video encoder/decoder (codec) that supports multiple video standards on a single platform, as it is expensive to use a dedicated video codec chip for each standard. The paper presents a high-performance circuit-shared architecture that can perform the quantization of five popular video codecs: H.264/AVC, AVS, VC-1, MPEG-2/4, and JPEG. The proposed quantizer architecture is completely division-free, as the division operation is replaced by shift and addition operations for all the standards. The design is implemented on FPGA and later synthesized in CMOS 0.18 μm technology. The results show that the proposed design satisfies the requirements of all five codecs with a maximum decoding capability of 60 fps at 187 MHz on a Xilinx FPGA platform for 1080p HD video.
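
    The division-free idea can be shown with a small worked example: division by the quantization step Q is replaced by multiplication with a precomputed integer and a right shift, since (c*M) >> S approximates c/Q when M = round(2^S / Q). The shift width and step size below are illustrative, not the paper's hardware parameters.

    ```python
    SHIFT = 16                                     # fixed-point precision of the multiplier

    def make_multiplier(qstep):
        return int(round((1 << SHIFT) / qstep))    # M = round(2**SHIFT / Q)

    def quantize(coefficient, multiplier):
        sign = -1 if coefficient < 0 else 1
        return sign * ((abs(coefficient) * multiplier) >> SHIFT)   # no division at run time

    qstep = 28                                     # example quantization step size
    M = make_multiplier(qstep)
    for c in (-250, -29, 0, 57, 300):
        print(c, quantize(c, M), "reference:", (1 if c >= 0 else -1) * (abs(c) // qstep))
    ```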

  20. Effective Educational Videos: Principles and Guidelines for Maximizing Student Learning from Video Content

    Science.gov (United States)

    Brame, Cynthia J.

    2016-01-01

    Educational videos have become an important part of higher education, providing an important content-delivery tool in many flipped, blended, and online classes. Effective use of video as an educational tool is enhanced when instructors consider three elements: how to manage cognitive load of the video; how to maximize student engagement with the video; and how to promote active learning from the video. This essay reviews literature relevant to each of these principles and suggests practical ways instructors can use these principles when using video as an educational tool. PMID:27789532

  1. Using Short Videos to Teach Research Ethics

    Science.gov (United States)

    Loui, M. C.

    2014-12-01

    Created with support from the National Science Foundation, EthicsCORE (www.natonalethicscenter.org) is an online resource center for ethics in science and engineering. Among the resources, EthicsCORE hosts short video vignettes produced at the University of Nebraska - Lincoln that dramatize problems in the responsible conduct of research, such as peer review of journal submissions, and mentoring relationships between faculty and graduate students. I will use one of the video vignettes in an interactive pedagogical demonstration. After showing the video, I will ask participants to engage in a think-pair-share activity on the professional obligations of researchers. During the sharing phase, participants will supply the reasons for these obligations.

  2. Retina image–based optic disc segmentation

    Directory of Open Access Journals (Sweden)

    Ching-Lin Wang

    2016-05-01

    Full Text Available Changes in the optic disc can be used to diagnose many eye diseases, such as glaucoma, diabetic retinopathy and macular degeneration. Moreover, the retinal blood vessel pattern is unique to each human being, even for identical twins, and it is a highly stable pattern in biometric identification. Since the optic disc is the origin of the optic nerve and of the main blood vessels in the retina, it can be used as a reference point for identification. Therefore, optic disc segmentation is an important technique for developing human identity recognition systems and eye disease diagnostic systems. This article hence presents an optic disc segmentation method to extract the optic disc from a retina image. The experimental results show that the method gives impressive results in segmenting the optic disc from a retina image.

  3. International Good Market Segmentation and Financial Market Structure

    OpenAIRE

    Basak, Suleyman; Croitoru, Benjamin

    2003-01-01

    While financial markets have recently become more complete and international capital flows well liberalized, markets for goods remain segmented. To investigate how more complete security markets may relieve the effects of this segmentation, we examine a series of two-country economies with internationally segmented good markets, distinguished by the available financial securities. We show that, under heterogeneity within countries, the financial structure matters: even with internationally co...

  4. Multifractal-based nuclei segmentation in fish images.

    Science.gov (United States)

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    The method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Holder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Holder exponents by applying a predefined hard threshold; then the user evaluates the result and is able to refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) scoring can be determined in the usual way: by counting red and green dots within segmented nuclei, and finding their ratio. The IMFA segmentation method is tested over 100 clinical cases, evaluated by a skilled pathologist. Testing results show that the new method has advantages compared to already reported methods.
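
    One common way to build a per-pixel matrix of Holder exponents is sketched below: regress the logarithm of a local box measure against the logarithm of the box size and take the slope. The choice of box measure (sum of intensities) and of radii is an assumption for illustration, not necessarily the authors' exact IMFA implementation.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def holder_exponent_map(channel, radii=(1, 2, 3, 4)):
        """Per-pixel Holder exponent: slope of log(box measure) versus log(box size)."""
        channel = channel.astype(float) + 1e-9
        log_sizes = np.log([2 * r + 1 for r in radii])
        log_measures = []
        for r in radii:
            size = 2 * r + 1
            box_sum = uniform_filter(channel, size=size) * size * size   # sum over the box
            log_measures.append(np.log(box_sum))
        log_measures = np.stack(log_measures)                            # (n_radii, H, W)
        x = log_sizes - log_sizes.mean()                                 # centred regressor
        y = log_measures - log_measures.mean(axis=0)
        return (x[:, None, None] * y).sum(axis=0) / (x ** 2).sum()       # least-squares slope

    # usage: alpha = holder_exponent_map(fish_image[..., 2])   # blue channel of the RGB image
    #        initial_nuclei = alpha < predefined_threshold     # hard thresholding step
    ```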

  5. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  6. Open-source telemedicine platform for wireless medical video communication.

    Science.gov (United States)

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.

  7. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Science.gov (United States)

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082

  8. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Directory of Open Access Journals (Sweden)

    A. Panayides

    2013-01-01

    Full Text Available An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.

  9. Open segmental fracture of both bone forearm and dislocation of ipsilateral elbow with extruded middle segment radius

    Directory of Open Access Journals (Sweden)

    Pawan Kumar

    2013-01-01

    Full Text Available An extruded middle segment of the radius with open segmental fracture of both bones of the forearm and dislocation of the ipsilateral elbow is a rare injury. A 12-year-old child presented to us within 4 hours following a fall from a tree. The child's mother was carrying a 12-cm-long extruded, soiled segment of radius. The extruded bone was thoroughly washed. The medullary cavity was properly syringed with antiseptic solution. The bone was autoclaved and put in the muscle plane of the distal forearm after debridement of the wound. After 5 days, a 2.5-mm K-wire was introduced by the retrograde method into the proximal radius by passing through the extruded segment. Another 2.5-mm K-wire was passed in the ulna. The limb was evaluated clinicoradiologically every 2 weeks. The wound healed by primary intention. At 4 months, the reposed bone appeared less dense radiologically and the K-wire seemed to be out of the bone. In the subsequent months, the roentgenograms showed remodeling of the extruded fragment. After 20 weeks, the K-wires were removed (first ulnar and then radial). Complete union was achieved with full range of movement except loss of a few degrees of extension of the elbow and thumb. This case is reported to show a good outcome following successful incorporation of an extruded segment of radius in an open fracture.

  10. The Effects of Captioning Videos Used for Foreign Language Listening Activities

    Directory of Open Access Journals (Sweden)

    Paula Winke

    2010-02-01

    Full Text Available This study investigated the effects of captioning during video-based listening activities. Second- and fourth-year learners of Arabic, Chinese, Spanish, and Russian watched three short videos with and without captioning in randomized order. Spanish learners had two additional groups: one watched the videos twice with no captioning, and another watched them twice with captioning. After the second showing of the video, learners took comprehension and vocabulary tests based on the video. Twenty-six learners participated in interviews following the actual experiment. They were asked about their general reactions to the videos (captioned and noncaptioned). Results from t-tests and two-way ANOVAs indicated that captioning was more effective than no captioning. Captioning during the first showing of the videos was more effective for performance on aural vocabulary tests. For Spanish and Russian, captioning first was generally more effective than captioning second, while for Arabic and Chinese there was a trend toward captioning second being more effective. The interview data revealed that learners used captions to increase their attention, improve processing, reinforce previous knowledge, and analyze language. Learners also reported using captions as a crutch.

  11. High Definition Video Streaming Using H.264 Video Compression

    OpenAIRE

    Bechqito, Yassine

    2009-01-01

    This thesis presents high definition video streaming using H.264 codec implementation. The experiment carried out in this study was done for an offline streaming video but a model for live high definition streaming is introduced, as well. Prior to the actual experiment, this study describes digital media streaming. Also, the different technologies involved in video streaming are covered. These include streaming architecture and a brief overview on H.264 codec as well as high definition t...

  12. Investigating MCTS Modifications in General Video Game Playing

    DEFF Research Database (Denmark)

    Frydenberg, Frederik; Andersen, Kasper; Risi, Sebastian

    2015-01-01

    While Monte Carlo tree search (MCTS) methods have shown promise in a variety of different board games, more complex video games still present significant challenges. Recently, several modifications to the core MCTS algorithm have been proposed with the hope to increase its effectiveness on arcade-style video games. This paper investigates how well these modifications perform in general video game playing using the general video game AI (GVG-AI) framework and introduces a new MCTS modification called UCT reverse penalty that penalizes the MCTS controller for exploring recently visited children. The results of our experiments show that a combination of two MCTS modifications can improve the performance of the vanilla MCTS controller, but the effectiveness of the modifications highly depends on the particular game being played.
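
    The record does not give the exact formulation of the UCT reverse penalty, so the sketch below is a hypothetical Python illustration of the general idea only: the standard UCT score is reduced by a term that grows when a child has been visited recently, pushing exploration toward other children. The node attributes (value, visits, last_visit_step, children) are assumed for illustration.

      import math

      def uct_score(child, parent_visits, c=1.4, penalty_weight=0.5):
          """Standard UCT value minus an assumed penalty for recently visited children."""
          exploit = child.value / child.visits
          explore = c * math.sqrt(math.log(parent_visits) / child.visits)
          # Assumed reverse penalty: largest right after a visit, decaying as the
          # parent accumulates visits to other children.
          recency = 1.0 / (1.0 + (parent_visits - child.last_visit_step))
          return exploit + explore - penalty_weight * recency

      def select_child(node):
          """Pick the child maximizing the penalized UCT score."""
          return max(node.children, key=lambda ch: uct_score(ch, node.visits))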

  13. Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries

    Science.gov (United States)

    Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.

    2012-01-01

    Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in patients within the first 24 hours after the injury. It is very time consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians to analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of hemorrhage segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433

  14. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    Science.gov (United States)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  15. Online video examination

    DEFF Research Database (Denmark)

    Qvist, Palle

    have large influence on their own teaching, learning and curriculum. The programme offers streamed videos in combination with other learning resources. It is a concept which offers video as pure presentation - video lectures - but also as an instructional tool which gives the students the possibility to construct their knowledge, collaboration and communication. In its first years the programme has used Skype video communication for collaboration and communication within and between groups, group members and their facilitators. Also exams have been mediated with the help of Skype and have for all students, examiners and external examiners been a challenge and opportunity and has brought new knowledge and experience. This paper brings results from a questionnaire focusing on how the students experience the video examination.

  16. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  17. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. The platform can receive the wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and control the camera's holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas and so on. Testing results show that the platform can share ...

  18. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing a resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses the same type of CCD for capturing different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet for capturing higher-resolution and higher-frame-rate information separately. We build a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  19. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module, which permits communication between the various segments.

  20. Background fluorescence estimation and vesicle segmentation in live cell imaging with conditional random fields.

    Science.gov (United States)

    Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles

    2015-02-01

    Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology since it enables characterization of biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Furthermore, segmentation of vesicles and background estimation are alternately performed by energy minimization using a min cut-max flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images. This approach permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with the state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.
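
    As a rough illustration of the kind of detection measure mentioned above (intensity contrasts between neighboring blocks), the numpy sketch below scores each block of a 2D frame by the difference between its mean intensity and the mean of its four neighboring blocks. This is a deliberately simplified stand-in, not the C-CRAFT implementation; the block size and the contrast definition are assumptions.

      import numpy as np

      def block_contrast(frame: np.ndarray, block: int = 8) -> np.ndarray:
          """Mean-intensity contrast of each block against its 4-neighbour blocks."""
          h, w = frame.shape
          bh, bw = h // block, w // block
          means = (frame[:bh * block, :bw * block]
                   .reshape(bh, block, bw, block)
                   .mean(axis=(1, 3)))
          padded = np.pad(means, 1, mode="edge")
          neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                        padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
          return means - neighbours  # large positive values flag bright-spot candidates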

  1. Bidirectional uncompressed HD video distribution over fiber employing VCSELs

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Vegas Olmos, Juan José; Rodes, G. A.

    2012-01-01

    We report on a bidirectional system in which VCSELs are simultaneously modulated with two uncompressed HD video signals. The results show a large power budget and a negligible penalty over 10 km long transmission links.

  2. The Daily Show with Jon Stewart: Part 2

    Science.gov (United States)

    Trier, James

    2008-01-01

    "The Daily Show With Jon Stewart" is one of the best critical literacy programs on television, and in this Media Literacy column the author suggests ways that teachers can use video clips from the show in their classrooms. (For Part 1, see EJ784683.)

  3. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    Science.gov (United States)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding in supporting high-quality visual communications in such a demanding context.

  4. Industrial-Strength Streaming Video.

    Science.gov (United States)

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  5. Bayesian segmentation of brainstem structures in MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Van Leemput, Koen; Bhatt, Priyanka

    2015-01-01

    the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy...

  6. Designing online audiovisual heritage services: an empirical study of two comparable online video services

    Science.gov (United States)

    Ongena, G.; van de Wijngaert, L. A. L.; Huizer, E.

    2013-03-01

    The purpose of this study is to seek input for a new online audiovisual heritage service. In doing so, we assess comparable online video services to gain insights into the motivations and perceptual innovation characteristics of the video services. The research is based on data from a Dutch survey held among 1,939 online video service users. The results show that the online video services have overlapping antecedents but differ in motivations and in perceived innovation characteristics. Hence, in general, one can state that, in comparison, online video services serve different needs and differ in perceived innovation characteristics. This implies that one can design online video services for different needs. In addition to scientific implications, the outcomes also provide guidance for practitioners in implementing new online video services.

  7. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting that region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  8. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  9. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model performs better than the state-of-the-art techniques.
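
    To make the encoder side concrete, here is a minimal, hypothetical PyTorch sketch of a bidirectional LSTM running over per-frame CNN features; the actual CNN backbone, the hierarchical attention decoder and all hyperparameters are not specified in this record and are assumed here.

      import torch
      import torch.nn as nn

      class VideoEncoder(nn.Module):
          """Bidirectional LSTM over a sequence of per-frame CNN feature vectors."""

          def __init__(self, feat_dim: int = 2048, hidden: int = 512):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)

          def forward(self, frame_feats):            # (batch, frames, feat_dim)
              outputs, _ = self.lstm(frame_feats)    # (batch, frames, 2 * hidden)
              return outputs                         # would feed an attention-based decoder

      # Example: 16 frames of 2048-d features for a batch of 4 videos.
      encoded = VideoEncoder()(torch.randn(4, 16, 2048))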

  10. A Novel Quantum Video Steganography Protocol with Large Payload Based on MCQI Quantum Video

    Science.gov (United States)

    Qu, Zhiguo; Chen, Siyi; Ji, Sai

    2017-11-01

    As one of the important multimedia forms in quantum networks, quantum video attracts more and more attention of experts and scholars around the world. A secure quantum video steganography protocol with large payload based on the video strip encoding method called MCQI (Multi-Channel Quantum Images) is proposed in this paper. The new protocol randomly embeds the secret information, in the form of quantum video, into the quantum carrier video on the basis of unique features of video frames. It embeds quantum video as secret information for covert communication. As a result, its capacity is greatly expanded compared with previous quantum steganography achievements. Meanwhile, the new protocol also achieves good security and imperceptibility by virtue of the randomization of embedding positions and efficient use of redundant frames. Furthermore, the receiver is able to extract the secret information from the stego video without retaining the original carrier video, and to restore the original quantum video afterwards. The simulation and experiment results prove that the algorithm not only has good imperceptibility and high security, but also has large payload.

  11. The development of video game enjoyment in a role playing game.

    Science.gov (United States)

    Wirth, Werner; Ryffel, Fabian; von Pape, Thilo; Karnowski, Veronika

    2013-04-01

    This study examines the development of video game enjoyment over time. The results of a longitudinal study (N=62) show that enjoyment increases over several sessions. Moreover, results of a multilevel regression model indicate a causal link between the dependent variable video game enjoyment and the predictor variables exploratory behavior, spatial presence, competence, suspense and solution, and simulated experiences of life. These findings are important for video game research because they reveal the antecedents of video game enjoyment in a real-world longitudinal setting. Results are discussed in terms of the dynamics of video game enjoyment under real-world conditions.

  12. Skin Segmentation Based on Graph Cuts

    Institute of Scientific and Technical Information of China (English)

    HU Zhilan; WANG Guijin; LIN Xinggang; YAN Hong

    2009-01-01

    Skin segmentation is widely used in many computer vision tasks to improve automated visualization. This paper presents a graph cuts algorithm to segment arbitrary skin regions from images. The detected face is used to determine the foreground skin seeds and the background non-skin seeds, with the color probability distribution for the foreground represented by a single Gaussian model and for the background by a Gaussian mixture model. The probability distribution of the image is used for noise suppression to alleviate the influence of background regions having skin-like colors. Finally, the skin is segmented by graph cuts, with the regional parameter γ optimally selected to adapt to different images. Tests of the algorithm on many real world photographs show that the scheme accurately segments skin regions and is robust against illumination variations, individual skin variations, and cluttered backgrounds.
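
    A compact sketch of this style of pipeline is given below using the third-party PyMaxflow library as the min-cut/max-flow solver (an assumption; the paper does not name its solver). Per-pixel skin probabilities from the face-seeded colour models become terminal capacities, and a uniform smoothness weight links neighbouring pixels.

      import numpy as np
      import maxflow  # PyMaxflow, assumed here as the graph cut solver

      def segment_skin(p_skin: np.ndarray, smoothness: float = 2.0) -> np.ndarray:
          """Binary skin mask from a per-pixel skin probability map via graph cuts.

          p_skin would come from the single-Gaussian (skin) and GMM (non-skin)
          colour models described above; here it is simply taken as given.
          """
          eps = 1e-6
          g = maxflow.Graph[float]()
          nodes = g.add_grid_nodes(p_skin.shape)
          g.add_grid_edges(nodes, smoothness)  # pairwise smoothness between 4-neighbours
          # Unary terms: with this polarity, pixels ending on the sink side of the
          # cut are the ones the colour model favours as skin.
          g.add_grid_tedges(nodes, -np.log(p_skin + eps), -np.log(1.0 - p_skin + eps))
          g.maxflow()
          return g.get_grid_segments(nodes)  # True where the pixel is labelled skin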

  13. Social Properties of Mobile Video

    Science.gov (United States)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations in regards to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors that help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for adoption and design of mobile video technologies and services are discussed as well.

  14. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the problem that remote sensing image segmentation is slow and its real-time performance is poor, this paper studies a method of remote sensing image segmentation based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method based on the combination of OpenCV and the Hadoop cloud platform. Firstly, the MapReduce image processing model for the Hadoop cloud platform is designed, the image input and output are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, this paper performs a segmentation experiment on a remote sensing image and, for comparison, implements the same Mean Shift segmentation in MATLAB. The experimental results show that, while maintaining a good segmentation result, the segmentation speed based on the Hadoop cloud platform is greatly improved compared with single-node MATLAB segmentation, and there is a great improvement in the effectiveness of image segmentation.
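
    The Mean Shift step itself is available directly in OpenCV. A minimal single-node sketch is shown below; the Hadoop/MapReduce distribution layer described above is omitted, and the file name and window radii are assumptions.

      import cv2

      # One remote sensing tile; in the described system each map task would
      # process one such split of the full image.
      tile = cv2.imread("tile.png")  # hypothetical input file

      # Mean Shift filtering: sp = spatial window radius, sr = colour window radius.
      segmented = cv2.pyrMeanShiftFiltering(tile, sp=20, sr=40)

      cv2.imwrite("tile_segmented.png", segmented)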

  15. Video games.

    Science.gov (United States)

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  16. Video2vec Embeddings Recognize Events When Examples Are Scarce.

    Science.gov (United States)

    Habibian, Amirhossein; Mensink, Thomas; Snoek, Cees G M

    2017-10-01

    This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call Video2vec, the correlations between the words are utilized to learn a more effective representation by optimizing a joint objective balancing descriptiveness and predictability. We show how learning the Video2vec embedding using a multimodal predictability loss, including appearance, motion and audio features, results in a better predictable representation. We also propose an event specific variant of Video2vec to learn a more accurate representation for the words, which are indicative of the event, by introducing a term sensitive descriptiveness loss. Our experiments on three challenging collections of web videos from the NIST TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets demonstrate: i) the advantages of Video2vec over representations using attributes or alternative embeddings, ii) the benefit of fusing video modalities by an embedding over common strategies, iii) the complementarity of term sensitive descriptiveness and multimodal predictability for event recognition. By its ability to improve predictability of present day audio-visual video features, while at the same time maximizing their semantic descriptiveness, Video2vec leads to state-of-the-art accuracy for both few- and zero-example recognition of events in video.

  17. Digital video clips for improved pedagogy and illustration of scientific research — with illustrative video clips on atomic spectrometry

    Science.gov (United States)

    Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary

    1999-12-01

    This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.

  18. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze the content of massive amounts of public space video data and has been one of the most active areas of computer vision research in the last two decades. The current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  19. Learning literacy and content through video activities in primary education

    OpenAIRE

    Heitink, Maaike Christine; Fisser, Petra; McKenney, Susan; Resta, P.

    2012-01-01

    This case study research explored to what extent and in which ways teachers used Technological Pedagogical Content Knowledge (TPCK) and related competencies to implement video activities in primary education. Three Dutch teachers implemented video activities to improve students' content knowledge and literacy- and communication skills simultaneously. Lesson materials were provided but teachers chose the theme or subject (content) linked to the video activities themselves. Results show that ap...

  20. Student’s Video Production as Formative Assessment

    Directory of Open Access Journals (Sweden)

    Eduardo Gama

    2017-04-01

    Full Text Available Learning assessments are a subject of discussion both in their theoretical and practical approaches. The process of measuring learning in physics by high school students, either qualitatively or quantitatively, is one in which it should be possible to identify not only the concepts and contents students failed to achieve but also the reasons for the failure. We propose that students' video production offers a very effective formative assessment tool to teachers: as a formative assessment, it produces information that allows the understanding of where and when the learning process succeeded or failed, and the identification, for an individual or for a group, of the deficiencies or misunderstandings related to the theme under analysis and their interpretation by students; it also provides a different kind of assessment, related to some other life skills, such as the ability to carry a project through to its conclusion and to work cooperatively. In this paper, we describe the use of videos produced by high school students as an assessment resource. The students were asked to prepare a short video, which was then presented to the whole group and discussed. The videos reveal aspects of students' difficulties that usually do not appear in formal assessments such as tests and questionnaires. After the use of the videos as a component of classroom assessments and the use of the discussions to rethink learning activities in the group, the videos were analysed and classified into various categories. This analysis showed a strong correlation between the technical quality of the video and the content quality of the students' argumentation. Also, it was shown that the students do not base their videos on quick and easy production; they usually choose forms of video production that require careful planning and implementation, and this reflects directly on the overall quality of the video and of the learning process.

  1. Incorporation of squalene into rod outer segments

    International Nuclear Information System (INIS)

    Keller, R.K.; Fliesler, S.J.

    1990-01-01

    We have reported previously that squalene is the major radiolabeled nonsaponifiable lipid product derived from [3H]acetate in short term incubations of frog retinas. In the present study, we demonstrate that newly synthesized squalene is incorporated into rod outer segments under similar in vitro conditions. We show further that squalene is an endogenous constituent of frog rod outer segment membranes; its concentration is approximately 9.5 nmol/μmol of phospholipid, or about 9% of the level of cholesterol. Pulse-chase experiments with radiolabeled precursors revealed no metabolism of outer segment squalene to sterols in up to 20 h of chase. Taken together with our previous absolute rate studies, these results suggest that most, if not all, of the squalene synthesized by the frog retina is transported to rod outer segments. Synthesis of protein is not required for squalene transport since puromycin had no effect on squalene incorporation into outer segments. Conversely, inhibition of isoprenoid synthesis with mevinolin had no effect on the incorporation of opsin into the outer segment. These latter results support the conclusion that the de novo synthesis and subsequent intracellular trafficking of opsin and isoprenoid lipids destined for the outer segment occur via independent mechanisms.

  2. Portrayal of tobacco in Mongolian language YouTube videos: policy gaps.

    Science.gov (United States)

    Tsai, Feng-Jen; Sainbayar, Bolor

    2016-07-01

    This study examined how effectively current policy measures control depictions of tobacco in Mongolian language YouTube videos. A search of YouTube videos using the Mongolian term for 'tobacco', and employing 'relevance' and 'view count' criteria, resulted in a total sample of 120 videos, from which 38 unique videos were coded and analysed. Most videos were antismoking public service announcements; however, analyses of viewing patterns showed that pro-smoking videos accounted for about two-thirds of all views. Pro-smoking videos were also perceived more positively and had a like:dislike ratio of 4.6 compared with 3.5 and 1.5, respectively, for the magic trick and antismoking videos. Although Mongolia prohibits tobacco advertising, 3 of the pro-smoking videos were made by a tobacco company; additionally, 1 pro-smoking video promoted electronic cigarettes. Given the popularity of Mongolian YouTube videos that promote smoking, policy changes are urgently required to control this medium, and more effectively protect youth and young adults from insidious tobacco marketing.

  3. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency reflects the way humans see an image, and saliency-based segmentation can eventually be helpful in psychovisual image interpretation. Keeping this in view, a few saliency models are used along with a segmentation algorithm and only the salient segments of the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments of the segmented image whose saliency value is greater than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background, respectively, and thus the image gets separated. For carrying out this work, a dataset of terrestrial images and Worldview 2 satellite images (sample data) is used. Results show that those saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground and background separation in terrestrial images is based on the salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.

  4. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

    Full Text Available The objective of the present study was to compare single segment and double segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double ring and 15 eyes had single ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre and postoperative spherical equivalent were -3.92 and -2.29 diopter (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopter in the single-segment group and 2.56 ± 1.58 diopter in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopter, which decreased to 2.14 ± 1.1 diopter after surgery (P=0.508); there was a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, who were all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments compared to single segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  5. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high performance network requires an accurate characterization and modeling of network traffic. The modeling of video frame sizes is normally applied in simulation studies and mathematical analysis and in generating streams for testing and compliance purposes. Besides, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as the input for performance modeling of networks. The findings of this paper comprise the theoretical definition of the distribution which seems to be most relevant to the video trace in terms of its statistical properties, identified using both a graphical method and hypothesis testing. The data set used in this article consists of layered video traces generated from the Scalable Video Codec (SVC) video compression technique applied to three different movies.
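
    As an illustration of this kind of fitting, the SciPy sketch below fits a few common candidate distributions to a vector of frame sizes and ranks them with a Kolmogorov-Smirnov test. The input file and the candidate set are assumptions; the record does not state which distributions were compared for the SVC traces.

      import numpy as np
      from scipy import stats

      frame_sizes = np.loadtxt("svc_frame_sizes.txt")  # hypothetical per-frame sizes in bytes

      candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
                    "weibull_min": stats.weibull_min}

      results = {}
      for name, dist in candidates.items():
          params = dist.fit(frame_sizes)                        # maximum-likelihood fit
          ks_stat, p_value = stats.kstest(frame_sizes, dist.cdf, args=params)
          results[name] = (ks_stat, p_value)

      best = min(results, key=lambda k: results[k][0])
      print("Best-fitting candidate by KS statistic:", best, results[best])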

  6. Short segment search method for phylogenetic analysis using nested sliding windows

    Science.gov (United States)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    For phylogenetic analysis in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs a lot of time and money, so a short segment that represents the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is necessary. After the sliding window procedure is implemented, a short segment that represents the CDS better than the envelope protein and NS3 segments is found. This paper will discuss a mathematical method to analyze sequences using nested sliding windows to find a short segment which is representative of the whole genome. The results show that our method can find a short segment that is about 6.57% closer, in terms of tree topology, to the CDS segment than the envelope or NS3 segments.
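
    The record does not spell out the exact nesting scheme, so the following hypothetical Python sketch only illustrates the general idea: an outer window scans the genome and a shorter inner window slides within it, and each candidate short segment is scored against the reference built from the full CDS (the scoring function is a placeholder).

      def nested_windows(sequence, outer_len, inner_len, outer_step, inner_step):
          """Yield (outer_start, inner_start, subsequence) for every nested window."""
          for o in range(0, len(sequence) - outer_len + 1, outer_step):
              outer = sequence[o:o + outer_len]
              for i in range(0, outer_len - inner_len + 1, inner_step):
                  yield o, o + i, outer[i:i + inner_len]

      def best_segment(sequence, score, outer_len=3000, inner_len=600,
                       outer_step=300, inner_step=60):
          """Return the nested window whose segment scores best; `score` is assumed to
          measure topological agreement with the tree built from the full CDS."""
          return max(nested_windows(sequence, outer_len, inner_len, outer_step, inner_step),
                     key=lambda w: score(w[2]))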

  7. Development of P4140 video data wall projector; Video data wall projector

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, H.; Inoue, H. [Toshiba Corp., Tokyo (Japan)]

    1998-12-01

    The P4140 is a 3 cathode-ray tube (CRT) video data wall projector for super video graphics array (SVGA) signals. It is used as an image display unit, providing a large screen when several sets are put together. A high-quality picture has been realized by higher resolution and improved color uniformity technology. A new convergence adjustment system has also been developed through the optimal combination of digital and analog technologies. This video data wall installation has been greatly enhanced by the automation of cubes and cube performance settings. The P4140 video data wall projector can be used for displaying not only data but video as well. (author)

  8. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  9. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  10. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  11. The Influence of National Culture on Educational Videos: The Case of MOOCs

    Science.gov (United States)

    Bayeck, Rebecca Yvonne; Choi, Jinhee

    2018-01-01

    This paper discusses the influence of cultural dimensions on Massive Open Online Course (MOOC) introductory videos. The study examined the introductory videos produced by three universities on Coursera platforms using communication theory and Hofstede's cultural dimensions. The results show that introductory videos in MOOCs are influenced by the…

  12. Segmentation of time series with long-range fractal correlations

    Science.gov (United States)

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997

  13. Segmentation of time series with long-range fractal correlations.

    Science.gov (United States)

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.
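
    A highly simplified sketch of this style of segmentation is given below: the split point maximizing the two-sample t statistic is searched recursively, and the acceptance threshold is calibrated on surrogate long-range correlated series rather than on an i.i.d. reference. The surrogate generator (spectral synthesis with exponent beta), the significance level and all other parameters are assumptions, not the authors' choices.

      import numpy as np

      def fractional_noise(n, beta, rng):
          """Surrogate series with power spectrum ~ f**(-beta) via spectral synthesis."""
          freqs = np.fft.rfftfreq(n)
          freqs[0] = freqs[1]                       # avoid division by zero at DC
          spectrum = freqs ** (-beta / 2.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs)))
          series = np.fft.irfft(spectrum, n)
          return (series - series.mean()) / series.std()

      def max_t(x):
          """Maximum two-sample t statistic over all split points of x."""
          best, best_i = 0.0, None
          for i in range(2, len(x) - 2):
              left, right = x[:i], x[i:]
              s = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
              t = abs(left.mean() - right.mean()) / s if s > 0 else 0.0
              if t > best:
                  best, best_i = t, i
          return best, best_i

      def threshold(n, beta, n_surrogates=200, quantile=0.95, seed=0):
          """Quantile of max-t over correlated surrogates of length n (the homogeneity reference)."""
          rng = np.random.default_rng(seed)
          stats = [max_t(fractional_noise(n, beta, rng))[0] for _ in range(n_surrogates)]
          return np.quantile(stats, quantile)

      def segment(x, beta, min_len=20):
          """Recursively split x where max-t exceeds the correlation-aware threshold."""
          if len(x) < 2 * min_len:
              return [len(x)]
          t, i = max_t(x)
          if i is None or t < threshold(len(x), beta):
              return [len(x)]
          return segment(x[:i], beta, min_len) + segment(x[i:], beta, min_len)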

  14. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical...

  15. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  16. Physics Girl: Where Education meets Cat Videos

    Science.gov (United States)

    Cowern, Dianna

    YouTube is usually considered an entertainment medium for watching cats, gaming, and music videos. But educational channels have been gaining momentum on the platform, some garnering millions of subscribers and billions of views. The Physics Girl YouTube channel is an educational series with PBS Digital Studios created by Dianna Cowern. Using Physics Girl as an example, this talk will examine what it takes to start a short-form educational video series, including logistics and resources. One benefit of video is that every failure is documented on camera and can, and will, be used in this talk as a learning tool. We will look at the channel's demographic reach, discuss best practices for effective physics outreach, and survey how online media and technology can facilitate good and bad learning. The aim of this talk is to show how videos are a unique way to share science and enrich the learning experience, in and out of a classroom.

  17. Real-time video compressing under DSP/BIOS

    Science.gov (United States)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The programming framework for video compression is constructed using a TMS320C6416 microprocessor, a TDS510 simulator and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks and interrupts, realizing real-time video compression. To address the problem of data transfer within the system, and based on the architecture of the C64x DSP, double-buffer switching and the EDMA data transfer controller are used to move data from external memory to internal memory, so that data transfer and processing take place at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS to realize multi-thread scheduling. The whole system achieves high-speed transfer of a great deal of data. Experimental results show the encoder can realize real-time encoding of 768×576, 25 frame/s video images.

  18. Bandwidth Reduction via Localized Peer-to-Peer (P2P Video

    Directory of Open Access Journals (Sweden)

    Ken Kerpez

    2010-01-01

    Full Text Available This paper presents recent research into P2P distribution of video that can be highly localized, preferably sharing content among users on the same access network and Central Office (CO. Models of video demand and localized P2P serving areas are presented. Detailed simulations of passive optical networks (PON are run, and these generate statistics of P2P video localization. Next-Generation PON (NG-PON is shown to fully enable P2P video localization, but the lower rates of Gigabit-PON (GPON restrict performance. Results here show that nearly all of the traffic volume of unicast video could be delivered via localized P2P. Strong growth in video delivery via localized P2P could lower overall future aggregation and core network bandwidth of IP video traffic by 58.2%, and total consumer Internet traffic by 43.5%. This assumes aggressive adoption of technologies and business practices that enable highly localized P2P video.

  19. Single-incision video-assisted thoracoscopic surgery left-lower lobe anterior segmentectomy (S8).

    Science.gov (United States)

    Galvez, Carlos; Lirio, Francisco; Sesma, Julio; Baschwitz, Benno; Bolufer, Sergio

    2017-01-01

    Unusual anatomical segmentectomies are technically demanding procedures that require a deep knowledge of intralobar anatomy and surgical skill. On the other hand, these procedures preserve more normal lung parenchyma for lesions located in specific anatomical segments, and are indicated for benign lesions, metastases and also early stage adenocarcinomas without nodal involvement. A 32-year-old woman was diagnosed with a benign pneumocytoma in the anterior segment of the left lower lobe (S8, LLL), so we performed a single-incision video-assisted thoracoscopic surgery (SI-VATS) anatomical S8 segmentectomy in 140 minutes under intercostal block. There were no intraoperative or postoperative complications, the chest tube was removed at 24 hours and the patient was discharged on the 5th postoperative day with low pain on the visual analogue scale (VAS). The final pathologic exam reported a benign sclerosant pneumocytoma with free margins. The patient had recovered her normal activities completely at 3 months, with normal radiological controls at 1 and 3 months.

  20. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise in relation to using (digital) video for research communication, not least online. Video has long been used in research for data collection and for communicating research. With digitization and the internet ...

  1. Visualization of ground truth tracks for the video 'Tracking a "facer's" behavior in a public plaza'

    DEFF Research Database (Denmark)

    2015-01-01

    The video shows the ground truth tracks in GIS of all pedestrians in the video 'Tracking a "facer's" behavior in a public plaza'. The visualization was made using QGIS TimeManager.

  2. Event Segmentation Improves Event Memory up to One Month Later

    Science.gov (United States)

    Flores, Shaney; Bailey, Heather R.; Eisenberg, Michelle L.; Zacks, Jeffrey M.

    2017-01-01

    When people observe everyday activity, they spontaneously parse it into discrete meaningful events. Individuals who segment activity in a more normative fashion show better subsequent memory for the events. If segmenting events effectively leads to better memory, does asking people to attend to segmentation improve subsequent memory? To answer…

  3. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    Science.gov (United States)

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)

  4. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...

  5. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.
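
    As a concrete illustration of what a VBR source model looks like, the sketch below generates a synthetic per-frame bit-rate trace with a first-order autoregressive process, one of the classic model families covered by such surveys. The coefficients are illustrative assumptions, not values from the book; in practice they are fitted to measured traces.

```python
import numpy as np

# Minimal sketch of a classic VBR source model: an AR(1) process for the
# per-frame bit rate.  All parameters below are illustrative placeholders.

def ar1_vbr_trace(n_frames, mean_rate=2.0e6, a=0.9, sigma=2.0e5, seed=0):
    """Generate a synthetic per-frame bit-rate trace (bits per frame)."""
    rng = np.random.default_rng(seed)
    rates = np.empty(n_frames)
    rates[0] = mean_rate
    for n in range(1, n_frames):
        # AR(1): next rate = mean + a * (previous deviation) + white noise
        rates[n] = mean_rate + a * (rates[n - 1] - mean_rate) + rng.normal(0.0, sigma)
    return np.clip(rates, 0.0, None)    # bit rates cannot be negative

trace = ar1_vbr_trace(250)              # e.g. 10 s of video at 25 frame/s
print(trace.mean(), trace.std())
```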

  6. Impact of video games on plasticity of the hippocampus.

    Science.gov (United States)

    West, G L; Konishi, K; Diarra, M; Benady-Chorney, J; Drisdelle, B L; Dahmani, L; Sodums, D J; Lepore, F; Jolicoeur, P; Bohbot, V D

    2017-08-08

    The hippocampus is critical to healthy cognition, yet results in the current study show that action video game players have reduced grey matter within the hippocampus. A subsequent randomised longitudinal training experiment demonstrated that first-person shooting games reduce grey matter within the hippocampus in participants using non-spatial memory strategies. Conversely, participants who use hippocampus-dependent spatial strategies showed increased grey matter in the hippocampus after training. A control group that trained on 3D-platform games displayed growth in either the hippocampus or the functionally connected entorhinal cortex. A third study replicated the effect of action video game training on grey matter in the hippocampus. These results show that video games can be beneficial or detrimental to the hippocampal system depending on the navigation strategy that a person employs and the genre of the game.Molecular Psychiatry advance online publication, 8 August 2017; doi:10.1038/mp.2017.155.

  7. Video Games as a Multifaceted Medium: A Review of Quantitative Social Science Research on Video Games and a Typology of Video Game Research Approaches

    Directory of Open Access Journals (Sweden)

    James D. Ivory

    2013-01-01

    Full Text Available Although there is a vast and useful body of quantitative social science research dealing with the social role and impact of video games, it is difficult to compare studies dealing with various dimensions of video games because they are informed by different perspectives and assumptions, employ different methodologies, and address different problems. Studies focusing on different social dimensions of video games can produce varied findings about games’ social function that are often difficult to reconcile— or even contradictory. Research is also often categorized by topic area, rendering a comprehensive view of video games’ social role across topic areas difficult. This interpretive review presents a novel typology of four identified approaches that categorize much of the quantitative social science video game research conducted to date: “video games as stimulus,” “video games as avocation,” “video games as skill,” and “video games as social environment.” This typology is useful because it provides an organizational structure within which the large and growing number of studies on video games can be categorized, guiding comparisons between studies on different research topics and aiding a more comprehensive understanding of video games’ social role. Categorizing the different approaches to video game research provides a useful heuristic for those critiquing and expanding that research, as well as an understandable entry point for scholars new to video game research. Further, and perhaps more importantly, the typology indicates when topics should be explored using different approaches than usual to shed new light on the topic areas. Lastly, the typology exposes the conceptual disconnects between the different approaches to video game research, allowing researchers to consider new ways to bridge gaps between the different approaches’ strengths and limitations with novel methods.

  8. An interactive medical image segmentation framework using iterative refinement.

    Science.gov (United States)

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities. They need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
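
    A rough sketch of the two-stage idea (a morphology-derived marker feeding a mask-initialized GrabCut) is given below, using OpenCV in Python. The file name, kernel size and iteration count are assumptions for illustration; this is not the authors' implementation.

```python
import cv2
import numpy as np

# Two-stage sketch: Stage 1 builds a crude binary marker via Otsu
# thresholding plus morphological opening/closing; Stage 2 seeds GrabCut
# with that marker.  "scan.png" is a hypothetical input path.

img = cv2.imread("scan.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Stage 1: binary marker of the region of interest
_, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, kernel)
marker = cv2.morphologyEx(marker, cv2.MORPH_CLOSE, kernel)

# Stage 2: use the marker as a GrabCut mask (sure/probable labels)
mask = np.full(gray.shape, cv2.GC_PR_BGD, np.uint8)
mask[marker > 0] = cv2.GC_PR_FGD
mask[cv2.erode(marker, kernel) > 0] = cv2.GC_FGD     # eroded core = sure FG

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

# Final foreground = sure + probable foreground labels
segmented = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
```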

  9. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited and data collection is time-consuming and labor-intensive, severely influencing knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated by 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Using neutrosophic graph cut segmentation algorithm for qualified rendering image selection in thyroid elastography video.

    Science.gov (United States)

    Guo, Yanhui; Jiang, Shuang-Quan; Sun, Baiqing; Siuly, Siuly; Şengür, Abdulkadir; Tian, Jia-Wei

    2017-12-01

    Recently, elastography has become very popular in clinical investigation for thyroid cancer detection and diagnosis. In elastogram, the stress results of the thyroid are displayed using pseudo colors. Due to variation of the rendering results in different frames, it is difficult for radiologists to manually select the qualified frame image quickly and efficiently. The purpose of this study is to find the qualified rendering result in the thyroid elastogram. This paper employs an efficient thyroid ultrasound image segmentation algorithm based on neutrosophic graph cut to find the qualified rendering images. Firstly, a thyroid ultrasound image is mapped into neutrosophic set, and an indeterminacy filter is constructed to reduce the indeterminacy of the spatial and intensity information in the image. A graph is defined on the image and the weight for each pixel is represented using the value after indeterminacy filtering. The segmentation results are obtained using a maximum-flow algorithm on the graph. Then the anatomic structure is identified in thyroid ultrasound image. Finally the rendering colors on these anatomic regions are extracted and validated to find the frames which satisfy the selection criteria. To test the performance of the proposed method, a thyroid elastogram dataset is built and totally 33 cases were collected. An experienced radiologist manually evaluates the selection results of the proposed method. Experimental results demonstrate that the proposed method finds the qualified rendering frame with 100% accuracy. The proposed scheme assists the radiologists to diagnose the thyroid diseases using the qualified rendering images.

  11. QoS INVESTIGATION ON MOODLE’S VIDEO CONFERENCE

    Directory of Open Access Journals (Sweden)

    LINAWATI LINAWATI

    2011-06-01

    Full Text Available Learning Management System (LMS) supports e-learning as a form of distance learning. Moodle is one of the open source LMS applications that allow embedding multimedia into learning activity in a course, such as a video conference session. The paper investigates the quality of service (QoS) of video conference sessions embedded in Moodle, i.e. end-to-end delay, jitter, throughput, packet loss and PSNR. Three scenarios were implemented in the experiment. The scenarios were applied on both wired and wireless transmission, and on p2p and p2m connections. The investigation results show that the QoS of the video conference session meets the standards issued by ITU-T G.1010 and G.114, for a minimum bandwidth of 128 kbps. Thus video conferencing integrated in Moodle can run well with a minimum bandwidth of 128 kbps.

  12. HUBBLE VISION: A Planetarium Show About Hubble Space Telescope

    Science.gov (United States)

    Petersen, Carolyn Collins

    1995-05-01

    In 1991, a planetarium show called "Hubble: Report From Orbit" outlining the current achievements of the Hubble Space Telescope was produced by the independent planetarium production company Loch Ness Productions, for distribution to facilities around the world. The program was subsequently converted to video. In 1994, that program was updated and re-produced under the name "Hubble Vision" and offered to the planetarium community. It is periodically updated and remains a sought-after and valuable resource within the community. This paper describes the production of the program, and the role of the astronomical community in the show's production (and subsequent updates). The paper is accompanied by a video presentation of Hubble Vision.

  13. Scan-rescan reproducibility of segmental aortic wall shear stress as assessed by phase-specific segmentation with 4D flow MRI in healthy volunteers.

    Science.gov (United States)

    van der Palen, Roel L F; Roest, Arno A W; van den Boogaard, Pieter J; de Roos, Albert; Blom, Nico A; Westenberg, Jos J M

    2018-05-26

    The aim was to investigate scan-rescan reproducibility and observer variability of segmental aortic 3D systolic wall shear stress (WSS) by phase-specific segmentation with 4D flow MRI in healthy volunteers. Ten healthy volunteers (age 26.5 ± 2.6 years) underwent aortic 4D flow MRI twice. Maximum 3D systolic WSS (WSSmax) and mean 3D systolic WSS (WSSmean) for five thoracic aortic segments over five systolic cardiac phases by phase-specific segmentations were calculated. Scan-rescan analysis and observer reproducibility analysis were performed. Scan-rescan data showed overall good reproducibility for WSSmean (coefficient of variation, COV 10-15%) with moderate-to-strong intraclass correlation coefficient (ICC 0.63-0.89). The variability in WSSmax was high (COV 16-31%) with moderate-to-good ICC (0.55-0.79) for different aortic segments. Intra- and interobserver reproducibility was good-to-excellent for regional aortic WSSmax (ICC ≥ 0.78; COV ≤ 17%) and strong-to-excellent for WSSmean (ICC ≥ 0.86; COV ≤ 11%). In general, ascending aortic segments showed more WSSmax/WSSmean variability compared to aortic arch or descending aortic segments for scan-rescan, intraobserver and interobserver comparison. Scan-rescan reproducibility was good for WSSmean and moderate for WSSmax for all thoracic aortic segments over multiple systolic phases in healthy volunteers. Intra/interobserver reproducibility for segmental WSS assessment was good-to-excellent. Variability of WSSmax is higher and should be taken into account in case of individual follow-up or in comparative rest-stress studies to avoid misinterpretation.

  14. Segmentation of knee injury swelling on infrared images

    Science.gov (United States)

    Puentes, John; Langet, Hélène; Herry, Christophe; Frize, Monique

    2011-03-01

    Interpretation of medical infrared images is complex due to thermal noise, absence of texture, and small temperature differences in pathological zones. An acute inflammatory response is a characteristic symptom of some knee injuries like anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the original grey level images may allow visual assessment of the extent of inflammation in the area, automated segmentation of these images remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of inflammation after knee injury, in terms of temperature variations and surface shape. It is based on the intersection of rapid color segmentation and homogeneous region segmentation, to which a Laplacian of Gaussian filter is applied. While rapid color segmentation enables proper detection of the observed core of the swollen area, homogeneous region segmentation identifies possible inflammation zones, combining homogeneous grey level and hue area segmentation. The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to identify overlapping areas. Noise filtering and edge segmentation are then applied to the common zones in order to segment the swelling surfaces of the injury. Experimental results on images of a patient with an anterior cruciate ligament sprain show the improved performance of the hybrid algorithm with respect to its separate components. The main contribution of this work is a meaningful automatic segmentation of abnormal skin temperature variations on infrared thermography images of knee injury swelling.
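
    A minimal sketch of the hybrid idea follows: one mask from a simple temperature threshold (standing in for the rapid color segmentation), one from a coarser homogeneous-region threshold, keeping only regions that overlap, followed by a Laplacian-of-Gaussian filter. All thresholds and the random input frame are assumptions for illustration, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage as ndi

# Hybrid sketch: intersect a "core" mask (hot threshold) with candidate
# homogeneous regions, then apply a Laplacian-of-Gaussian filter to
# highlight edges inside the retained zone.  Thresholds are illustrative.

def hybrid_swelling_mask(thermal, hot_thresh=0.8, region_thresh=0.6, sigma=2.0):
    """thermal: 2-D float array of normalised temperatures in [0, 1]."""
    core = thermal >= hot_thresh                 # rapid "colour" segmentation
    candidates = thermal >= region_thresh        # coarse homogeneous regions
    labels, _ = ndi.label(candidates)
    keep = np.unique(labels[core & (labels > 0)])  # regions overlapping the core
    overlap = np.isin(labels, keep) & (labels > 0)
    log = ndi.gaussian_laplace(thermal, sigma=sigma)  # edge response
    return overlap, log

thermal = np.random.rand(240, 320)               # stand-in for an IR frame
mask, edges = hybrid_swelling_mask(thermal)
```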

  15. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect in computer-aided diagnosis and also in pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means is used to partition brain MR images into multiple segments, which employs an optimal suppression factor for the perfect clustering in the given data set. To evaluate the robustness of the proposed approach in noisy environment, we add different types of noise and different amount of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
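
    The three-stage pipeline can be sketched on a single 2-D slice as below: a median filter for impulsive noise, Otsu for the coarse split, and a fuzzy c-means refinement. The suppressed-FCM variant used in the paper is replaced here by standard FCM for brevity, and the input slice is random stand-in data, so this is an illustration of the structure rather than the authors' algorithm.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def fuzzy_cmeans(values, c=3, m=2.0, iters=50):
    """Basic FCM on a 1-D array of intensities; returns centres and memberships."""
    rng = np.random.default_rng(0)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                               # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centres = (um @ values) / um.sum(axis=1)     # weighted cluster centres
        dist = np.abs(values[None, :] - centres[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))            # standard FCM membership update
        u /= u.sum(axis=0)
    return centres, u

slice_img = np.random.rand(128, 128)                 # stand-in for an MR slice
denoised = ndi.median_filter(slice_img, size=3)      # step 1: impulsive-noise removal
coarse = denoised > threshold_otsu(denoised)         # step 2: coarse Otsu split
centres, u = fuzzy_cmeans(denoised[coarse])          # step 3: refine inside the mask
labels = np.argmax(u, axis=0)                        # hard labels for masked pixels
```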

  16. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  17. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah; Hong, Byung-Woo; Yezzi, Anthony; Sundaramoorthi, Ganesh

    2017-01-01

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  18. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification

  19. Real-time Multiple Abnormality Detection in Video Data

    DEFF Research Database (Denmark)

    Have, Simon Hartmann; Ren, Huamin; Moeslund, Thomas B.

    2013-01-01

    Automatic abnormality detection in video sequences has recently gained increasing attention within the research community. Although progress has been seen, there are still some limitations in current research. While most systems are designed to detect a specific abnormality, others that are capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieve above real-time processing speed. Experimental results on two datasets show that the proposed framework can reliably detect abnormalities in the video sequence, outperforming the current state-of-the-art methods.

  20. Teaching autistic children conversational speech using video modeling.

    Science.gov (United States)

    Charlop, M H; Milstein, J P

    1989-01-01

    We assessed the effects of video modeling on acquisition and generalization of conversational skills among autistic children. Three autistic boys observed videotaped conversations consisting of two people discussing specific toys. When criterion for learning was met, generalization of conversational skills was assessed with untrained topics of conversation; new stimuli (toys); unfamiliar persons, siblings, and autistic peers; and other settings. The results indicated that the children learned through video modeling, generalized their conversational skills, and maintained conversational speech over a 15-month period. Video modeling shows much promise as a rapid and effective procedure for teaching complex verbal skills such as conversational speech. PMID:2793634

  1. Open-source software platform for medical image segmentation applications

    Science.gov (United States)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture which consist of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework for segmenting different real-case medical image scenarios on public available datasets including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  2. Video Games Related to Young Adults: Mapping Research Interest

    Science.gov (United States)

    Piotrowski, Chris

    2015-01-01

    This study attempts to identify the typological-research domain of the extant literature on video games related to college-age samples (18-29 years-of-age). A content analysis of 264 articles, from PsycINFO for these identifiers, was performed. Findings showed that negative or pathological aspects of video gaming, i.e., violence potential,…

  3. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming “the new black” in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of “academic video” for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video... In the video, I appear (along with other researchers) together with two Danish film directors, and excerpts from their film. My challenges included how to edit the academic video and organize the collaborative effort. I consider video editing as a semiotic, transformative process of “reassembling” voices... In the discussion, I review academic video in terms of relevance and implications for research practice. The theoretical background is social constructivist, combining social semiotics (Kress, van Leeuwen, McCloud), visual anthropology (Banks, Pink) and dialogic theory (Bakhtin). The Bakhtinian notion of “voices...

  4. NEI You Tube Videos: Amblyopia

    Medline Plus


  5. Automatic data-driven real-time segmentation and recognition of surgical workflow.

    Science.gov (United States)

    Dergachyova, Olga; Bouget, David; Huaulmé, Arnaud; Morandi, Xavier; Jannin, Pierre

    2016-06-01

    With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.
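
    To make the classifier stage concrete, the hedged sketch below trains one binary AdaBoost classifier per phase (one-vs-rest), as in the third stage described above, and collects per-phase scores of the kind that would feed the Hidden semi-Markov Model. Feature extraction and the HSMM decoding are omitted, and the feature matrix is random stand-in data, so this is an outline of the structure rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_samples, n_features, n_phases = 500, 20, 7
X = rng.random((n_samples, n_features))      # stand-in for visual cues + instrument signals
y = rng.integers(0, n_phases, n_samples)     # stand-in ground-truth phase labels

# One binary AdaBoost classifier per surgical phase (one-vs-rest)
phase_clfs = []
for phase in range(n_phases):
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    clf.fit(X, (y == phase).astype(int))
    phase_clfs.append(clf)

# Per-frame phase scores; in the full pipeline these would be smoothed
# by a Hidden semi-Markov Model before the final decision.
scores = np.column_stack([clf.predict_proba(X)[:, 1] for clf in phase_clfs])
predicted_phase = scores.argmax(axis=1)
```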

  6. Security and Privacy in Video Surveillance: Requirements and Challenges

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2014-01-01

    ... observed by the system. Several techniques to protect the privacy of individuals have therefore been proposed, but very little research work has focused on the specific security requirements of video surveillance data (in transit or in storage) and on authorizing access to this data. In this paper, we present a general model of video surveillance systems that will help identify the major security and privacy requirements for a video surveillance system, and we use this model to identify practical challenges in ensuring the security of video surveillance data in all stages (in transit and at rest). Our study shows a gap between the identified security requirements and the proposed security solutions where future research efforts may focus in this domain.

  7. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention/prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We... 1) observation and instruction (directives) relayed across different spaces; 2) the use of recorded video by participants to visualise, spatialise and localise talk and action that is distant in time and/or space; 3) the translating, stretching and cutting of social experience in and through the situated use...

  8. Watch-and-Comment as an Approach to Collaboratively Annotate Points of Interest in Video and Interactive-TV Programs

    Science.gov (United States)

    Pimentel, Maria Da Graça C.; Cattelan, Renan G.; Melo, Erick L.; Freitas, Giliard B.; Teixeira, Cesar A.

    In earlier work we proposed the Watch-and-Comment (WaC) paradigm as the seamless capture of multimodal comments made by one or more users while watching a video, resulting in the automatic generation of multimedia documents specifying annotated interactive videos. The aim is to allow services to be offered by applying document engineering techniques to the multimedia document generated automatically. The WaC paradigm was demonstrated with a WaCTool prototype application which supports multimodal annotation over video frames and segments, producing a corresponding interactive video. In this chapter, we extend the WaC paradigm to consider contexts in which several viewers may use their own mobile devices while watching and commenting on an interactive-TV program. We first review our previous work. Next, we discuss scenarios in which mobile users can collaborate via the WaC paradigm. We then present a new prototype application which allows users to employ their mobile devices to collaboratively annotate points of interest in video and interactive-TV programs. We also detail the current software infrastructure which supports our new prototype; the infrastructure extends the Ginga middleware for the Brazilian Digital TV with an implementation of the UPnP protocol - the aim is to provide the seamless integration of the users' mobile devices into the TV environment. As a result, the work reported in this chapter defines the WaC paradigm for the mobile-user as an approach to allow the collaborative annotation of the points of interest in video and interactive-TV programs.

  9. Examination of YouTube videos related to synthetic cannabinoids.

    Science.gov (United States)

    Fullwood, M Dottington; Kecojevic, Aleksandar; Basch, Corey H

    2016-08-17

    The popularity of synthetic cannabinoids (SCBs) is increasing the chance of adverse health issues in the United States. Moreover, social media platforms such as YouTube that provide a platform for user-generated content can convey misinformation or glorify use of SCBs. The aim of this study was to fill this gap by describing the content of the most popular YouTube videos related to SCBs. Videos with at least 1000 views found under the search terms "K2" and "spice" were included in the analysis. The collective number of views was over 7.5 million. Nearly half of the videos were consumer produced (n=42). The most common content in the videos was a description of K2 (n=69), followed by mentioning dangers of using K2 (n=47), mentioning side effects (n=38) and showing a person using K2 (n=37). One-third of the videos (n=34) promoted use of K2, while 22 videos mentioned risk of dying as a consequence of using K2. YouTube could be used as a surveillance tool to combat this epidemic, but instead, the most widely viewed videos related to SCBs are uploaded by consumers. These consumer videos on YouTube often give the viewer access to a wide array of uploaders describing, encouraging, participating in and promoting use.

  10. Augmented Reality Video Games: New Possibilities and Implications for Children and Adolescents

    Directory of Open Access Journals (Sweden)

    Prithwijit Das

    2017-04-01

    Full Text Available In recent years, the video game market has embraced augmented reality video games, a class of video games that is set to grow as gaming technologies develop. Given the widespread use of video games among children and adolescents, the health implications of augmented reality technology must be closely examined. Augmented reality technology shows a potential for the promotion of healthy behaviors and social interaction among children. However, the full immersion and physical movement required in augmented reality video games may also put users at risk for physical and mental harm. Our review article and commentary emphasizes both the benefits and dangers of augmented reality video games for children and adolescents.

  11. Construction and validation of an educational video on foot reflexology

    Directory of Open Access Journals (Sweden)

    Natiele Favarão da Silva

    2017-12-01

    Full Text Available The aim of this study was to construct and validate an educational video about foot reflexology. A methodological study was conducted at a higher education institution in southeastern Brazil, where the video pre-production, production and post-production stages were performed, followed by an evaluation of content understanding and comprehensiveness. The duration of the final version of the educational video is 12’7” (12 minutes and 7 seconds. The experts considered it an educational resource that presents the theme in a clear and objective way. The students considered it a proper educational material and showed good acceptance. The stages adopted for video construction and validation produced a clear, objective and proper educational material. Further studies should evaluate the impact of an educational video on the construction of foot reflexology knowledge.

  12. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WIMAX) is a good candidate for delivering video signals because through WIMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user really cares about is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WIMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WIMAX networks. In addition to the QoP parameters, the video characteristics, such as the picture activity and the video mobility, also affect the QoS significantly.
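
    The mapping-table idea can be illustrated with a tiny lookup: the viewer chooses a presentation quality (resolution and frame rate) and the table returns the QoS parameters to request from the network. The bandwidth and delay figures below are assumptions for illustration, not values from the paper.

```python
# Illustrative QoP -> QoS mapping table; all figures are assumptions.
QOP_TO_QOS = {
    # (resolution, frame rate)  ->  requested network QoS
    ("CIF", 15): {"min_bandwidth_kbps": 256,  "max_delay_ms": 400},
    ("CIF", 30): {"min_bandwidth_kbps": 512,  "max_delay_ms": 300},
    ("SD",  30): {"min_bandwidth_kbps": 1500, "max_delay_ms": 200},
    ("HD",  30): {"min_bandwidth_kbps": 4000, "max_delay_ms": 150},
}

def qos_for(resolution: str, frame_rate: int) -> dict:
    """Look up the end-to-end QoS request for a desired presentation quality."""
    return QOP_TO_QOS[(resolution, frame_rate)]

print(qos_for("SD", 30))
```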

  13. Heuristically improved Bayesian segmentation of brain MR images ...

    African Journals Online (AJOL)

    Heuristically improved Bayesian segmentation of brain MR images. ... or even the most prevalent task in medical image processing is image segmentation. Among them, brain MR images suffer ... show that our algorithm performs well in comparison with the one implemented in SPM. It can be concluded that incorporating ...

  14. A low delay transmission method of multi-channel video based on FPGA

    Science.gov (United States)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed a video format conversion method based on FPGA together with a DMA scheduling method for the video data, which reduces the overall video transmission delay. To save time in the conversion process, the parallel capability of the FPGA is used for video format conversion. To improve the direct memory access (DMA) write transfer rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the low delay transmission method designed in this paper, based on FPGA, increases the DMA write transfer rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.

  15. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). This technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques, such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product turned out to be inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence were both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method to segment DTI, since TMG computation transforms tensorial images into scalar ones.
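
    A minimal sketch of the TMG computation is given below: for every voxel, take the maximum pairwise dissimilarity (here the Frobenius norm of the tensor difference) over a small 3×3 neighbourhood assumed as the structuring element; the resulting scalar map can then be thresholded or fed to a watershed. The random tensor field is stand-in data, and the choice of a 2-D slice and 3×3 window is an assumption for brevity.

```python
import numpy as np

def tmg_frobenius(tensors):
    """Tensorial morphological gradient sketch.
    tensors: array of shape (H, W, 3, 3) of diffusion tensors."""
    H, W = tensors.shape[:2]
    grad = np.zeros((H, W))
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            neigh = [tensors[y + dy, x + dx] for dy, dx in offsets]
            # maximum dissimilarity over all tensor pairs in the neighbourhood
            grad[y, x] = max(
                np.linalg.norm(a - b)        # Frobenius norm of the difference
                for i, a in enumerate(neigh)
                for b in neigh[i + 1:]
            )
    return grad

tensors = np.random.rand(32, 32, 3, 3)       # stand-in DTI slice
tmg = tmg_frobenius(tensors)
segmented = tmg > np.percentile(tmg, 90)     # simple threshold on the scalar TMG
```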

  16. NEI You Tube Videos: Amblyopia

    Medline Plus


  17. Consumer-based technology for distribution of surgical videos for objective evaluation.

    Science.gov (United States)

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills, and it has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques that enable surgeons to view them. The objective of this study is to determine the feasibility of utilizing widespread, current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, by cable into a hub, to a standard laptop computer via a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format, and depending on the size of the file, the videos were scaled down (compressed), their format changed (using a standard video editing program), or sliced into multiple videos. Standard available consumer-based programs were utilized to convert the video into a format more appropriate for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons to grade via GOALS through various methods. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.

  18. Design of batch audio/video conversion platform based on JavaEE

    Science.gov (United States)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing shows significant features such as diverse coding standards for audio and video files and massive amounts of data. Faced with massive and diverse data, converting it quickly and efficiently to a unified coding format poses great difficulties for digital publishing organizations. In view of this demand and the present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+Mybatis development architecture combined with the open-source FFMPEG format conversion tool. Based on the Java language, this paper analyzes the key technologies and strategies in the design of the platform architecture, and designs and develops an efficient audio and video format conversion system composed of a front display system, a core scheduling server and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
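
    The platform described is Java-based, but the core job of the conversion server, walking a batch of input files and shelling out to FFMPEG to transcode them to one unified target format, can be sketched briefly. The Python sketch below is only an illustration of that idea; the directory names and codec choices (H.264/AAC in MP4) are assumptions, not details from the paper.

```python
import subprocess
from pathlib import Path

def convert_batch(in_dir: str, out_dir: str) -> None:
    """Transcode every file in in_dir to a unified MP4 format via FFMPEG."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in Path(in_dir).glob("*"):
        if not src.is_file():
            continue
        dst = out / (src.stem + ".mp4")
        cmd = [
            "ffmpeg", "-y",          # overwrite existing output
            "-i", str(src),          # input file in whatever source format
            "-c:v", "libx264",       # unified video codec
            "-c:a", "aac",           # unified audio codec
            str(dst),
        ]
        subprocess.run(cmd, check=True)

# Hypothetical usage:
# convert_batch("incoming/", "converted/")
```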

  19. GPS-Aided Video Tracking

    Directory of Open Access Journals (Sweden)

    Udo Feuerhake

    2015-08-01

    Full Text Available Tracking moving objects is both challenging and important for a large variety of applications. Different technologies based on the global positioning system (GPS) and video or radio data are used to obtain the trajectories of the observed objects. However, in some use cases, they fail to provide sufficiently accurate, complete and correct data at the same time. In this work we present an approach for fusing GPS- and video-based tracking in order to exploit their individual advantages. In this way we aim to combine the reliability of GPS tracking with the high geometric accuracy of camera detection. For the fusion of the movement data provided by the different devices we use a hidden Markov model (HMM) formulation and the Viterbi algorithm to extract the most probable trajectories. In three experiments, we show that our approach is able to deal with challenging situations like occlusions or objects which are temporarily outside the monitored area. The results show the desired increase in terms of accuracy, completeness and correctness.
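
    The fusion step can be illustrated with a generic Viterbi sketch: hidden states are candidate positions (cells on a small 1-D grid here), the transition model favours small moves, and each time step's emission log-likelihood combines a noisy GPS fix with a sharper camera detection. All distributions, noise levels and the grid itself are illustrative assumptions, not the authors' model.

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """log_trans: (S, S), log_emit: (T, S), log_init: (S,).  Returns best state path."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (from state, to state)
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path

S, T = 20, 15                                        # 20 grid cells, 15 time steps
cells = np.arange(S)
# transition log-potential: prefer small moves between consecutive steps
log_trans = -0.5 * ((cells[:, None] - cells[None, :]) / 1.5) ** 2
# simulated observations: GPS is noisy, the camera detection is sharper
gps_obs = np.linspace(2, 16, T) + np.random.default_rng(0).normal(0, 2.0, T)
cam_obs = np.linspace(2, 16, T) + np.random.default_rng(1).normal(0, 0.5, T)
# emission log-likelihood fuses both sensors (product of Gaussians in log space)
log_emit = (-0.5 * ((cells[None, :] - gps_obs[:, None]) / 2.0) ** 2
            - 0.5 * ((cells[None, :] - cam_obs[:, None]) / 0.5) ** 2)
track = viterbi(log_trans, log_emit, np.zeros(S))
```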

  20. Video as a technology for interpersonal communications: a new perspective

    Science.gov (United States)

    Whittaker, Steve

    1995-03-01

    Some of the most challenging multimedia applications have involved real-time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focussing on novel technologies, we present evaluation data relevant to both the classes of real-time multimedia applications we should develop and their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore, our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally, we examine a different class of application, 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about interactants.