WorldWideScience

Sample records for video file featuring

  1. Forensic analysis of video file formats

    National Research Council Canada - National Science Library

    Gloe, Thomas; Fischer, André; Kirchner, Matthias

    2014-01-01

    .... In combination, such characteristics can help to authenticate digital video files in forensic settings by distinguishing between original and post-processed videos, verifying the purported source...

  2. Colour grading video files in Adobe Lightroom

    OpenAIRE

    Tommiska, Tarina

    2017-01-01

The purpose of this thesis was to instruct the reader in colour grading video files in a way best suited to photographers. The thesis suggests why video content should be colour graded in order to make an impact on the viewer and stand out in a meaningful way. I go through the very basics of colour theory to help the reader better understand the emotional impact of colour when observed. Colour theory sets the base for all colour grading and correction related work. ...

  3. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

Video captured by non-professionals often suffers from unanticipated effects such as image distortion and image blurring, and many researchers therefore study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos, producing a stable output video free of the jitter caused by shaking a handheld camera during recording. First, salient points are identified in each frame of the input video and processed; the video is then optimized and stabilized, where the optimization step governs the quality of the stabilization. This method has shown good results in terms of stabilization and removes distortion from output videos recorded in different circumstances.
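The smoothing step behind this kind of stabilization can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: it assumes per-frame displacements have already been estimated from matched salient points, accumulates them into a camera trajectory, and subtracts a moving average to obtain the per-frame correction.

```python
def smooth_trajectory(dx, window=3):
    """Moving-average smoothing of a 1-D camera trajectory.

    dx: per-frame displacements estimated from matched feature points.
    Returns the per-frame correction that stabilizes the video.
    """
    # Cumulative trajectory: camera position at each frame.
    traj, pos = [], 0.0
    for d in dx:
        pos += d
        traj.append(pos)
    # Moving-average smoothing with a centred window.
    smoothed = []
    for i in range(len(traj)):
        lo = max(0, i - window)
        hi = min(len(traj), i + window + 1)
        smoothed.append(sum(traj[lo:hi]) / (hi - lo))
    # Correction per frame = smoothed trajectory minus raw trajectory.
    return [s - t for s, t in zip(smoothed, traj)]

# A jittery pan: constant motion of 1 px/frame plus alternating jitter.
jitter = [1 + (1 if i % 2 == 0 else -1) for i in range(10)]
corrections = smooth_trajectory(jitter)
```

In a full pipeline the same smoothing would be applied to the x, y and rotation components of the inter-frame motion, and each frame warped by its correction.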

  4. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

similar. 1.2 Context Video has become a very popular medium for communication, entertainment, and science. Videos are widely used in educational... The same approach applied to action classification from YouTube videos of sport events shows that BoW approaches on real-world data sets need further... dog videos, where the camera also tracks the people and animals. In Figure 4.38 we compare across action classes how well each segmentation

  5. Semisupervised feature selection via spline regression for video semantic recognition.

    Science.gov (United States)

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S²FS²R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S²FS²R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S²FS²R achieves better performance compared with the state-of-the-art methods.
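The row-sparsity idea behind the l2,1-norm regularizer can be illustrated directly. The sketch below is not the authors' S²FS²R solver; it only shows how, once some transformation matrix W has been learned, the l2,1 norm is computed and features are ranked by the l2 norm of their rows, with near-zero rows corresponding to discarded features.

```python
import math

def l21_norm(W):
    """l2,1 norm: the sum over rows of each row's l2 norm.
    Row sparsity means many rows have (near-)zero norm."""
    return sum(math.sqrt(sum(w * w for w in row)) for row in W)

def select_features(W, k):
    """Rank features by the l2 norm of their row in W; keep the top k."""
    scores = [math.sqrt(sum(w * w for w in row)) for row in W]
    return sorted(range(len(W)), key=lambda i: -scores[i])[:k]

# Toy transformation matrix: rows = features, columns = projected dims.
W = [[0.9, 0.1],   # strong feature 0
     [0.0, 0.0],   # zeroed-out row -> feature 1 is discarded
     [0.2, 0.7]]   # strong feature 2
```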

  6. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

Over the past two decades, the extreme evolution of the Internet has led to a massive rise in video technology, and video consumption now constitutes the bulk of Internet data traffic. Because video accounts for so much data on the World Wide Web, reducing the bandwidth it consumes would ease the burden on the Internet and let users access video data more easily. To this end, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the question of which technology is superior in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and in video applications such as ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques through subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing a video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.
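The segmentation idea can be illustrated at the byte level. This is a simplified sketch, not the paper's pipeline: zlib stands in for a real video codec, and a production system would split on GOP/keyframe boundaries rather than raw byte offsets.

```python
import zlib

def split_bytes(data, n_segments):
    """Split a byte string into n roughly equal segments."""
    seg = -(-len(data) // n_segments)  # ceiling division
    return [data[i:i + seg] for i in range(0, len(data), seg)]

# Compress each segment independently (possibly in parallel),
# then decompress and reassemble.
data = bytes(range(256)) * 40          # stand-in for raw video bytes
segments = split_bytes(data, 4)
compressed = [zlib.compress(s) for s in segments]
restored = b''.join(zlib.decompress(c) for c in compressed)
```

Independent segments allow parallel encoding and early playback of the first segment, at the cost of losing cross-segment redundancy.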

  7. Penyembunyian Data pada File Video Menggunakan Metode LSB dan DCT

    Directory of Open Access Journals (Sweden)

    Mahmuddin Yunus

    2014-01-01

Full Text Available Abstract: Hiding data in video files is known as video steganography. Well-known steganography methods include the Least Significant Bit (LSB) and the Discrete Cosine Transform (DCT) methods. In this research, data were hidden in video files using the LSB method, the DCT method, and a combined LSB-DCT method, and the quality of the video file after insertion was measured using the Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR). Experiments were conducted varying the video file size, the size of the inserted secret file, and the video resolution. The results show a video steganography success rate of 38% for the LSB method, 90% for the DCT method, and 64% for the combined LSB-DCT method. In the MSE measurements, the DCT method gave the lowest MSE of the three methods, and the combined LSB-DCT method gave a lower MSE than the LSB method. In the PSNR tests, the DCT method achieved a higher PSNR than the LSB and combined LSB-DCT methods, and the combined LSB-DCT method achieved a higher PSNR than the LSB method. Keywords: Steganography, Video, Least Significant Bit (LSB), Discrete Cosine Transform (DCT), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR)
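The LSB embedding step, together with the PSNR quality measure used in the study, can be sketched as follows. This is a generic illustration of the technique, not the authors' implementation; the cover bytes stand in for raw video frame data.

```python
import math

def embed_lsb(cover, secret_bits):
    """Embed a bit string into the least significant bits of cover bytes."""
    if len(secret_bits) > len(cover):
        raise ValueError("secret too large for cover")
    out = bytearray(cover)
    for i, bit in enumerate(secret_bits):
        out[i] = (out[i] & 0xFE) | int(bit)   # overwrite the LSB
    return bytes(out)

def extract_lsb(stego, n_bits):
    """Read the hidden bits back from the stego bytes."""
    return ''.join(str(b & 1) for b in stego[:n_bits])

def psnr(original, modified):
    """Peak Signal to Noise Ratio between two equal-length byte strings."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    return float('inf') if mse == 0 else 10 * math.log10(255 ** 2 / mse)

cover = bytes(range(64))          # stand-in for raw video frame bytes
bits = '1011001110001111'
stego = embed_lsb(cover, bits)
```

Because each byte changes by at most 1, the PSNR of the stego data stays high, which is why LSB embedding is hard to see but also fragile to recompression.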

  8. 78 FR 34370 - Revisions to Electric Quarterly Report Filing Process; Notice of Availability of Video Showing...

    Science.gov (United States)

    2013-06-07

    ... processes for filing EQRs allows an EQR seller and its agent to file using a web interface that generally replicates the Commission-distributed software used currently. A video showing how EQRs can be filed using...

  9. Feature Weighting via Optimal Thresholding for Video Analysis (Open Access)

    Science.gov (United States)

    2014-03-03

combine multiple descriptors. For example, the STIP [10] feature combines the HOG descriptor for shape information and the HOF descriptor for motion information... The Dense Trajectories feature [23] is an integration of descriptors of trajectory, HOG, HOF and Motion Boundary Histogram (MBH). In the video action... three features provided by [5]: STIP features with 5,000-dimensional BoWs representation, SIFT features extracted every two seconds with 5,000

  10. Evaluation of Different Features for Face Recognition in Video

    Science.gov (United States)

    2014-09-01

Graph presents the performance comparison among different algorithms implemented in OpenCV (Fisherfaces, Eigenfaces and LBPH) - all use... for face recognition in video, in particular those available in the OpenCV library [13]. Comparative performance analysis of these algorithms is... videos. The first one used a generic class that exists in OpenCV (version 2.4.1), called FeatureDetector, which allowed the automatic extraction of

  11. Video Anomaly Detection with Compact Feature Sets for Online Performance.

    Science.gov (United States)

    Leyva, Roberto; Sanchez, Victor; Li, Chang-Tsun

    2017-04-18

    Over the past decade, video anomaly detection has been explored with remarkable results. However, research on methodologies suitable for online performance is still very limited. In this paper, we present an online framework for video anomaly detection. The key aspect of our framework is a compact set of highly descriptive features, which is extracted from a novel cell structure that helps to define support regions in a coarse-to-fine fashion. Based on the scene's activity, only a limited number of support regions are processed, thus limiting the size of the feature set. Specifically, we use foreground occupancy and optical flow features. The framework uses an inference mechanism that evaluates the compact feature set via Gaussian Mixture Models, Markov Chains and Bag-of-Words in order to detect abnormal events. Our framework also considers the joint response of the models in the local spatio-temporal neighborhood to increase detection accuracy. We test our framework on popular existing datasets and on a new dataset comprising a wide variety of realistic videos captured by surveillance cameras. This particular dataset includes surveillance videos depicting criminal activities, car accidents and other dangerous situations. Evaluation results show that our framework outperforms other online methods and attains a very competitive detection performance compared to state-of-the-art non-online methods.
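The inference idea, scoring a feature against a model of normal activity, can be illustrated in miniature. The paper evaluates its compact feature set with Gaussian Mixture Models, Markov Chains and Bag-of-Words; the sketch below simplifies this to a single 1-D Gaussian over a hypothetical per-region optical-flow magnitude.

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian to feature values observed during normal activity
    (e.g. per-region optical-flow magnitudes)."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, math.sqrt(var)

def is_abnormal(value, mu, sigma, k=3.0):
    """Flag a feature value more than k standard deviations from normal."""
    return abs(value - mu) > k * sigma

# Hypothetical flow magnitudes from normal scenes in one support region.
normal_flow = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05, 1.15]
mu, sigma = fit_gaussian(normal_flow)
```

A real system would fit a mixture per support region and combine scores over the local spatio-temporal neighborhood, as the paper describes.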

  12. Audio-video feature correlation: faces and speech

    Science.gov (United States)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.
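The face/voice correlation the paper measures can be illustrated on binary presence sequences. This is a hedged toy example, not the authors' method: it computes the Pearson (phi) correlation between a hypothetical face-presence track from keyframes and a speech-presence track from audio segmentation.

```python
import math

def phi_correlation(x, y):
    """Pearson correlation for two equal-length binary sequences
    (1 = face present / speech present, 0 = absent)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

face   = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1]  # face detected in each keyframe
speech = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]  # speech present in the same span
```

A correlation well above zero, as here, is the kind of relatedness between the two streams the paper reports.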

  13. Psychogenic Tremor: A Video Guide to Its Distinguishing Features

    Directory of Open Access Journals (Sweden)

    Joseph Jankovic

    2014-08-01

Full Text Available Background: Psychogenic tremor is the most common psychogenic movement disorder. It has characteristic clinical features that can help distinguish it from other tremor disorders. There is no diagnostic gold standard and the diagnosis is based primarily on clinical history and examination. Despite proposed diagnostic criteria, the diagnosis of psychogenic tremor can be challenging. While there are numerous studies evaluating psychogenic tremor in the literature, there are no publications that provide a video/visual guide that demonstrates the clinical characteristics of psychogenic tremor. Educating clinicians about psychogenic tremor will hopefully lead to earlier diagnosis and treatment. Methods: We selected videos from the database at the Parkinson's Disease Center and Movement Disorders Clinic at Baylor College of Medicine that illustrate classic findings supporting the diagnosis of psychogenic tremor. Results: We include 10 clinical vignettes with accompanying videos that highlight characteristic clinical signs of psychogenic tremor including distractibility, variability, entrainability, suggestibility, and coherence. Discussion: Psychogenic tremor should be considered in the differential diagnosis of patients presenting with tremor, particularly if it is of abrupt onset, intermittent, variable and not congruous with organic tremor. The diagnosis of psychogenic tremor, however, should not be simply based on exclusion of organic tremor, such as essential, parkinsonian, or cerebellar tremor, but on positive criteria demonstrating characteristic features. Early recognition and management are critical for good long-term outcome.

  14. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  15. Feature Extraction in IR Images Via Synchronous Video Detection

    Science.gov (United States)

    Shepard, Steven M.; Sass, David T.

    1989-03-01

IR video images acquired by scanning imaging radiometers are subject to several problems which make measurement of small temperature differences difficult. Among these problems are 1) aliasing, which occurs when events at frequencies higher than the video frame rate are observed, 2) limited temperature resolution imposed by the 3-bit digitization available in existing commercial systems, and 3) susceptibility to noise and background clutter. Bandwidth-narrowing devices (e.g. lock-in amplifiers or boxcar averagers) are routinely used to achieve a high degree of signal-to-noise improvement for time-varying 1-dimensional signals. We will describe techniques which allow similar S/N improvement for 2-dimensional imagery acquired with an off-the-shelf scanning imaging radiometer system. These techniques are implemented in near-real-time, utilizing a microcomputer and specially developed hardware and software. We will also discuss the application of the system to feature extraction in cluttered images, and to the acquisition of events which vary faster than the frame rate.
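The bandwidth-narrowing principle the authors extend to 2-D imagery can be sketched in one dimension. This is a generic lock-in (synchronous) detection example under assumed parameters, not the paper's hardware implementation: multiplying by in-phase and quadrature references at the known modulation frequency and averaging recovers a weak amplitude buried in noise.

```python
import math, random

def lock_in(signal, freq, sample_rate):
    """Synchronous (lock-in) detection: multiply by in-phase and
    quadrature references at a known frequency, then average.
    Returns the amplitude of the signal component at that frequency."""
    n = len(signal)
    i_sum = q_sum = 0.0
    for k, s in enumerate(signal):
        t = k / sample_rate
        i_sum += s * math.cos(2 * math.pi * freq * t)
        q_sum += s * math.sin(2 * math.pi * freq * t)
    # Factor 2/n converts the averaged products back to an amplitude.
    return 2 * math.hypot(i_sum, q_sum) / n

# A weak 5 Hz modulation buried in noise, sampled at 100 Hz for 10 s.
random.seed(0)
fs, f0, amp = 100.0, 5.0, 0.2
sig = [amp * math.cos(2 * math.pi * f0 * k / fs) + random.gauss(0, 0.5)
       for k in range(1000)]
recovered = lock_in(sig, f0, fs)
```

Applying the same demodulation independently to every pixel of a modulated IR image sequence is the 2-D analogue the paper describes.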

  16. Obscene Video Recognition Using Fuzzy SVM and New Sets of Features

    Directory of Open Access Journals (Sweden)

    Alireza Behrad

    2013-02-01

Full Text Available In this paper, a novel approach for identifying normal and obscene videos is proposed. In order to classify different episodes of a video independently and discard the need to process all frames, first, key frames are extracted and skin regions are detected for groups of video frames starting with key frames. In the second step, three different features, including 1- structural features based on single-frame information, 2- features based on the spatiotemporal volume and 3- motion-based features, are extracted for each episode of video. The PCA-LDA method is then applied to reduce the size of the structural features and select more distinctive features. For the final step, we use a fuzzy or Weighted Support Vector Machine (WSVM) classifier to identify video episodes. We also employ a multilayer Kohonen network as an initial clustering algorithm to increase the ability to discriminate the extracted features into two classes of videos. Features based on motion and periodicity characteristics increase the efficiency of the proposed algorithm in videos with bad illumination and skin colour variation. The proposed method is evaluated using 1100 videos in different environmental and illumination conditions. The experimental results show a correct recognition rate of 94.2% for the proposed algorithm.

  17. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for improved quality in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and the lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the advantages of high image sharpness, small electronic file capacity, and realistic lecturer motion.

  18. Real-time skin feature identification in a time-sequential video stream

    Science.gov (United States)

    Kramberger, Iztok

    2005-04-01

Skin color can be an important feature when tracking skin-colored objects. Particularly this is the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
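The HSV threshold filter can be sketched per pixel. The box of thresholds below is illustrative only; the paper uses a polyhedron of threshold values with adaptive filter management, and any real ranges would be tuned to the scene.

```python
import colorsys

# Illustrative threshold box (assumed values, not from the paper).
H_RANGE = (0.0, 0.14)   # reddish hues, roughly 0-50 degrees
S_RANGE = (0.2, 0.7)
V_RANGE = (0.35, 1.0)

def is_skin(r, g, b):
    """Classify an RGB pixel (0-255 channels) with a box of HSV
    thresholds, a simplified stand-in for the polyhedral filter model."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (H_RANGE[0] <= h <= H_RANGE[1]
            and S_RANGE[0] <= s <= S_RANGE[1]
            and V_RANGE[0] <= v <= V_RANGE[1])
```

In hardware, this per-pixel test is what makes single-pass segmentation feasible: each pixel is accepted or rejected as it streams through the filter.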

  19. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  20. A spatiotemporal feature-based approach for facial expression recognition from depth video

    Science.gov (United States)

    Uddin, Md. Zia

    2015-07-01

In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces of facial expressions are first augmented with optical flow motion features. Then, the augmented features are enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them robust. The features are then combined with Hidden Markov Models (HMMs) to model different facial expressions, which are later used to recognize the appropriate expression from a test expression depth video. The experimental results show superior performance of the proposed approach over the conventional methods.

  1. A Joint Compression Scheme of Video Feature Descriptors and Visual Content.

    Science.gov (United States)

    Zhang, Xiang; Ma, Siwei; Wang, Shiqi; Zhang, Xinfeng; Sun, Huifang; Gao, Wen

    2017-02-01

High-efficiency compression of visual feature descriptors has recently emerged as an active topic due to the rapidly increasing demand for mobile visual retrieval over bandwidth-limited networks. However, transmitting only those feature descriptors may largely restrict the application scale due to the lack of necessary visual content. To facilitate the widespread use of feature descriptors, a hybrid framework that jointly compresses the feature descriptors and the visual content is highly desirable. In this paper, such a content-plus-feature coding scheme is investigated, aiming to shape the next generation of video compression systems toward visual retrieval, where high-efficiency coding of both feature descriptors and visual content is achieved by exploiting the interactions between the two. On the one hand, visual feature descriptors can achieve compact and efficient representation by taking advantage of the structure and motion information in the compressed video stream. To optimize the retrieval performance, a novel rate-accuracy optimization technique is proposed to accurately estimate the retrieval performance degradation in feature coding. On the other hand, the already compressed feature data can be utilized to further improve the video coding efficiency by applying feature matching-based affine motion compensation. Extensive simulations have shown that the proposed joint compression framework can offer significant bitrate reduction in representing both feature descriptors and video frames, while simultaneously maintaining state-of-the-art visual retrieval performance.

  2. Forest Fire Smoke Video Detection Using Spatiotemporal and Dynamic Texture Features

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

Full Text Available Smoke detection is a very key part of fire recognition in forest fire surveillance video, since the smoke produced by forest fires is visible well before the flames. The performance of a smoke video detection algorithm is often influenced by smoke-like objects such as heavy fog. This paper presents a novel forest fire smoke video detection method based on spatiotemporal features and dynamic texture features. First, Kalman filtering is used to segment candidate smoke regions. Then, each candidate smoke region is divided into small blocks. The spatiotemporal energy feature of each block is extracted by computing the energy features of its 8-neighboring blocks in the current frame and its two adjacent frames. The flutter direction angle is computed by analyzing the centroid motion of the segmented regions in one candidate smoke video clip. The Local Binary Motion Pattern (LBMP) is used to define dynamic texture features of smoke videos. Finally, smoke video is recognized by the Adaboost algorithm. The experimental results show that the proposed method can effectively detect smoke in images recorded from different scenes.

  3. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of UAV trajectory (the sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
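The inlier/outlier mechanism can be illustrated with a simplified, non-preemptive RANSAC that estimates a pure 2-D translation between matched point sets. The paper estimates full relative pose from epipolar geometry, so this is only a sketch of the hypothesize-and-verify loop on hypothetical matches.

```python
import random

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=1):
    """Estimate a 2-D translation between matched point sets with RANSAC:
    hypothesize from one random match, count inliers, keep the best."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        i = rng.randrange(len(src))
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        inliers = [j for j in range(len(src))
                   if abs(dst[j][0] - src[j][0] - tx) < thresh
                   and abs(dst[j][1] - src[j][1] - ty) < thresh]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# 8 true matches translated by (5, -3) plus 2 gross mismatches (outliers).
src = [(0, 0), (1, 2), (3, 1), (4, 4), (6, 2), (7, 5), (8, 1), (9, 3)]
dst = [(x + 5, y - 3) for x, y in src]
src += [(2, 2), (5, 5)]
dst += [(40, 40), (-10, 7)]
t, inliers = ransac_translation(src, dst)
```

The preemptive variant used in the paper scores many hypotheses on a subset of matches first and discards weak ones early, which matters for real-time operation.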

  4. Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks.

    Science.gov (United States)

    Jiang, Yu-Gang; Wu, Zuxuan; Wang, Jun; Xue, Xiangyang; Chang, Shih-Fu

    2018-02-01

    In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.

  5. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  6. Scientists feature their work in Arctic-focused short videos by FrontierScientists

    Science.gov (United States)

    Nielsen, L.; O'Connell, E.

    2013-12-01

Whether they're guiding an unmanned aerial vehicle into a volcanic plume to sample aerosols, or documenting core drilling at a frozen lake in Siberia formed 3.6 million years ago by a massive meteorite impact, Arctic scientists are using video to enhance and expand their science and science outreach. FrontierScientists (FS), a forum for showcasing scientific work, produces and promotes radically different video blogs featuring Arctic scientists. Three- to seven-minute multimedia vlogs help deconstruct researchers' efforts and disseminate stories, communicating scientific discoveries to our increasingly connected world. The videos cover a wide range of current field work being performed in the Arctic. All videos are freely available to view or download from the FrontierScientists.com website, accessible via any internet browser or via the FrontierScientists app. FS's filming process fosters a close collaboration between the scientist and the media maker. Film creation helps scientists reach out to the public, communicate the relevance of their scientific findings, and craft a discussion. Videos keep the audience tuned in; combining field footage, pictures, audio, and graphics with a verbal explanation helps illustrate ideas, allowing one video to reach people with different learning strategies. The scientists' stories are highlighted through social media platforms online. Vlogs grant scientists a voice, letting them illustrate their own work while ensuring accuracy. Each scientific topic on FS has its own project page where easy-to-navigate videos are featured prominently. Video sets focus on different aspects of a researcher's work or follow one of their projects into the field. We help the scientist slip the answers to their five most-asked questions into the casual script in layman's terms in order to free the viewers' minds to focus on new concepts. Videos are accompanied by written blogs intended to systematically demystify related facts so the scientists can focus

  7. Identifying Key Features of Student Performance in Educational Video Games and Simulations through Cluster Analysis

    Science.gov (United States)

    Kerr, Deirdre; Chung, Gregory K. W. K.

    2012-01-01

    The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…

  8. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

Full Text Available High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra-high-definition videos. One of the important features in HEVC is the adoption of a quad-tree-based video coding structure, in which each incoming frame is represented as a set of non-overlapped coding tree blocks (CTBs) through a variable-block-size prediction and coding process. To do this, each CTB needs to be recursively partitioned into coding units (CUs), prediction units (PUs), and transform units (TUs) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of quad-tree partition for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the possible maximum depth of quad-tree partition of the CTB. With the constrained partition depth, the proposed method can substantially reduce encoding time. Experimental results with HM10.1 show an average time saving of about 13.4% with an increase in BD-Rate of only 0.02%, a smaller performance degradation than that of other similar methods.
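
The depth-capping idea described in this abstract can be sketched in a few lines: compute a Sobel-based edge-strength measure over a block of luma samples, then map it to a maximum allowed quad-tree depth. The thresholds below are illustrative assumptions, not the values from the paper:

```python
def sobel_edge_strength(block):
    # block: 2D list of luma samples; returns mean gradient magnitude
    # over the interior pixels (borders skipped for the 3x3 kernels)
    h, w = len(block), len(block[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (block[y-1][x+1] + 2*block[y][x+1] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y][x-1] - block[y+1][x-1])
            gy = (block[y+1][x-1] + 2*block[y+1][x] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y-1][x] - block[y-1][x+1])
            total += (gx * gx + gy * gy) ** 0.5
    return total / ((h - 2) * (w - 2))

def max_partition_depth(strength, thresholds=(4.0, 16.0, 48.0)):
    # hypothetical thresholds: weaker edges allow a shallower quad-tree,
    # so flat CTBs skip most of the recursive CU/PU/TU exploration
    depth = 0
    for t in thresholds:
        if strength > t:
            depth += 1
    return depth
```

A flat block yields depth 0 (no further splitting tried), while a block containing a strong edge is allowed the full partition depth.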

  9. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

Multimedia data is increasingly important in scientific discovery and people's daily lives. Content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all, still images and videos are commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are a set of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environment change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes, heterogeneous backgrounds, and in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and made new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  10. EEG-based recognition of video-induced emotions: selecting subject-independent feature set.

    Science.gov (United States)

    Kortelainen, Jukka; Seppänen, Tapio

    2013-01-01

Emotions are fundamental for everyday life, affecting our communication, learning, perception, and decision making. Incorporating emotions into human-computer interaction (HCI) could be seen as a significant step forward, offering great potential for developing advanced future technologies. Since the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rate reported in the literature. In future work, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.
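
As a rough illustration of the kind of spectral features used here, the sketch below computes power in the canonical EEG bands within 1-32 Hz via a simple FFT periodogram (NumPy assumed; the exact band edges and the absence of windowing are simplifying assumptions, not details from the paper):

```python
import numpy as np

def band_powers(signal, fs, bands=((1, 4), (4, 8), (8, 13), (13, 32))):
    # Periodogram via the real FFT, then total power integrated over
    # each band (delta, theta, alpha, beta by the default edges).
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
```

For example, a pure 10 Hz sinusoid sampled at 128 Hz concentrates its power in the third (alpha, 8-13 Hz) band.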

  11. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    Science.gov (United States)

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.

  12. TEACHING LEARNING MATERIALS: THE REVIEWS COURSEBOOKS, GAMES, WORKSHEETS, AUDIO VIDEO FILES

    Directory of Open Access Journals (Sweden)

    Anak Agung Sagung Shanti Sari Dewi

    2016-11-01

Full Text Available Teaching learning materials (TLM) have been widely recognised as one of the most important components in language teaching to support the success of language learning. TLM are essential for teachers in planning their lessons, assisting them in their professional duty, and serving as resources for delivering instruction. This writing reviews 10 (ten) teaching learning materials in the form of coursebooks, games, worksheets, and audio-video files. The materials were chosen randomly and were analysed qualitatively. The discussion of the materials is done individually by presenting their target learners, how they are applied by teachers and students, the aims of the use of the materials, and the roles of teachers and learners in the different kinds of TLM.

  13. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    National Research Council Canada - National Science Library

    Chang, Yuchou; Lee, DJ; Hong, Yi; Archibald, James

    .... In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection...

  14. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
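
Mean average precision, the metric used in this evaluation, can be computed from ranked relevance judgements as follows (a generic sketch, not the MediaEval evaluation code):

```python
def average_precision(ranked_labels):
    # ranked_labels: 1 for relevant, 0 for irrelevant, in ranked order.
    # AP averages the precision measured at each relevant position.
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def mean_average_precision(queries):
    # MAP is the mean of AP over all queries (here, one ranking per query)
    return sum(average_precision(q) for q in queries) / len(queries)
```

For a ranking [1, 0, 1], precision is 1/1 at the first hit and 2/3 at the second, giving AP = 5/6.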

  15. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  16. Use of digital video for documentation of microscopic features of tissue samples.

    Science.gov (United States)

    Melín-Aldana, Héctor; Gasilionis, Valdas; Kapur, Umesh

    2008-05-01

Digital photography is commonly used to document microscopic features of tissue samples, but it relies on the capture of arbitrarily selected representative areas. Current technologic advances permit the review of an entire sample, some even replicating the use of a microscope. To demonstrate the applicability of digital video to the documentation of histologic samples, a Canon Elura MC40 digital camcorder was mounted on a microscope, glass slide-mounted tissue sections were filmed, and the unedited movies were transferred to an Apple Mac Pro computer. Movies were edited using the software iMovie HD, including placement of a time counter and a voice recording. The finished movies can be viewed on computers, incorporated onto DVDs, or placed on a Web site after compression with Flash software. The final movies range, on average, between 2 and 8 minutes, depending on the size of the sample, and between 50 MB and 1.6 GB, depending on the intended means of distribution, with DVDs providing the best image quality. Digital video is a practical methodology for documenting entire tissue samples. We propose an affordable method that uses easily available hardware and software and does not require significant computer knowledge. Pathology education can be enhanced by the implementation of digital video technology.

  17. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  18. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
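
The second step of CGSIFT, describing global color with histograms over small image regions, can be sketched as below. For simplicity the sketch uses uniform per-channel bucketing as a stand-in for the paper's Fibonacci lattice-quantization, and the grid size and level count are arbitrary choices:

```python
def quantize_index(r, g, b, levels=4):
    # Stand-in for Fibonacci lattice-quantization: map an RGB pixel to one
    # of levels**3 color indices via uniform per-channel buckets.
    step = 256 // levels
    return (r // step) * levels * levels + (g // step) * levels + (b // step)

def region_histograms(img, grid=2, levels=4):
    # img: 2D list of (r, g, b) tuples. Split the frame into grid x grid
    # regions and histogram the quantized color indices per region,
    # yielding a coarse global color descriptor for shot comparison.
    h, w = len(img), len(img[0])
    n = levels ** 3
    hists = [[0] * n for _ in range(grid * grid)]
    for y in range(h):
        for x in range(w):
            region = (y * grid // h) * grid + (x * grid // w)
            hists[region][quantize_index(*img[y][x], levels)] += 1
    return hists
```

Comparing consecutive frames' region histograms (e.g. by histogram intersection) then gives a simple shot-boundary signal.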

  19. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    Science.gov (United States)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement, and agricultural data collection is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size, and cheaper investment cost have led to challenges in platform stability, sensor noise reduction, and increased onboard processing. Especially in small UAVs, the geo-referencing of collected data is only as good as the quality of their localization sensors. This drives a need for methods that pick up spatial features from the captured video/images and aid in geo-referencing. This paper presents one such method to identify road segments and intersections based on traffic flow; its accuracy compares well with manual observation. Two test video datasets, one each from a moving and a stationary platform, were used. The results obtained show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform shows an accuracy of 75%, whereas the stationary platform data reaches an accuracy of 100%.

  20. Investigation on effectiveness of mid-level feature representation for semantic boundary detection in news video

    Science.gov (United States)

    Radhakrishan, Regunathan; Xiong, Ziyou; Divakaran, Ajay; Raj, Bhiksha

    2003-11-01

    In our past work, we have attempted to use a mid-level feature namely the state population histogram obtained from the Hidden Markov Model (HMM) of a general sound class, for speaker change detection so as to extract semantic boundaries in broadcast news. In this paper, we compare the performance of our previous approach with another approach based on video shot detection and speaker change detection using the Bayesian Information Criterion (BIC). Our experiments show that the latter approach performs significantly better than the former. This motivated us to examine the mid-level feature closely. We found that the component population histogram enabled discovery of broad phonetic categories such as vowels, nasals, fricatives etc, regardless of the number of distinct speakers in the test utterance. In order for it to be useful for speaker change detection, the individual components should model the phonetic sounds of each speaker separately. From our experiments, we conclude that state/component population histograms can only be useful for further clustering or semantic class discovery if the features are chosen carefully so that the individual states represent the semantic categories of interest.
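
The BIC-based speaker change test mentioned above can be illustrated in one dimension: model the whole analysis window with a single Gaussian versus two Gaussians split at a candidate point, and declare a change when the penalized likelihood gain is positive. This is a 1-D sketch with a simplified penalty term; real systems use multivariate cepstral features:

```python
import math

def delta_bic(x, split, lam=1.0):
    # 1-D BIC change detection: positive delta suggests a speaker change
    # at index `split`. nloglik is n*log(var), i.e. the Gaussian negative
    # log-likelihood up to constants.
    def nloglik(seg):
        n = len(seg)
        mu = sum(seg) / n
        var = sum((v - mu) ** 2 for v in seg) / n
        return n * math.log(var + 1e-12)

    n = len(x)
    penalty = lam * math.log(n)  # (d + d(d+1)/2)/2 * log N with d = 1
    return nloglik(x) - nloglik(x[:split]) - nloglik(x[split:]) - penalty
```

A sequence whose mean jumps at the split point yields a large positive delta, while a homogeneous sequence yields a negative one.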

  1. Clinical features and axis I comorbidity of Australian adolescent pathological Internet and video game users.

    Science.gov (United States)

    King, Daniel L; Delfabbro, Paul H; Zwaans, Tara; Kaptsis, Dean

    2013-11-01

    Although there is growing international recognition of pathological technology use (PTU) in adolescence, there has been a paucity of empirical research conducted in Australia. This study was designed to assess the clinical features of pathological video gaming (PVG) and pathological Internet use (PIU) in a normative Australian adolescent population. A secondary objective was to investigate the axis I comorbidities associated with PIU and video gaming. A total of 1287 South Australian secondary school students aged 12-18 years were recruited. Participants were assessed using the PTU checklist, Revised Children's Anxiety and Depression Scale, Social Anxiety Scale for Adolescents, revised UCLA Loneliness Scale, and Teenage Inventory of Social Skills. Adolescents who met the criteria for PVG or PIU or both were compared to normal adolescents in terms of axis I comorbidity. The prevalence rates of PIU and PVG were 6.4% and 1.8%, respectively. A subgroup with co-occurring PIU and PVG was identified (3.3%). The most distinguishing clinical features of PTU were withdrawal, tolerance, lies and secrecy, and conflict. Symptoms of preoccupation, inability to self-limit, and using technology as an escape were commonly reported by adolescents without PTU, and therefore may be less useful as clinical indicators. Depression, panic disorder, and separation anxiety were most prevalent among adolescents with PIU. PTU among Australian adolescents remains an issue warranting clinical concern. These results suggest an emerging trend towards the greater uptake and use of the Internet among female adolescents, with associated PIU. Although there exists an overlap of PTU disorders, adolescents with PIU appear to be at greater risk of axis I comorbidity than adolescents with PVG alone. Further research with an emphasis on validation techniques, such as verified identification of harm, may enable an informed consensus on the definition and diagnosis of PTU.

  2. Image Segmentation and Feature Extraction for Recognizing Strokes in Tennis Game Videos

    NARCIS (Netherlands)

    Zivkovic, Z.; van der Heijden, Ferdinand; Petkovic, M.; Jonker, Willem; Langendijk, R.L.; Heijnsdijk, J.W.J.; Pimentel, A.D.; Wilkinson, M.H.F.

    This paper addresses the problem of recognizing human actions from video. Particularly, the case of recognizing events in tennis game videos is analyzed. Driven by our domain knowledge, a robust player segmentation algorithm is developed for real video data. Further, we introduce a number of novel

  3. Gaussian Process Regression-Based Video Anomaly Detection and Localization With Hierarchical Feature Representation.

    Science.gov (United States)

    Cheng, Kai-Wen; Chen, Yie-Tarng; Fang, Wen-Hsien

    2015-12-01

This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR), which is fully non-parametric, robust to noisy training data, and supports sparse features. While most research on anomaly detection has focused on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, this is the first time GPR has been employed to model the relationship of nearby STIPs for anomaly detection. Experiments on four widely used datasets show that the new method outperforms the main state-of-the-art methods with a lower computational burden.

  4. Wireless capsule endoscopy video segmentation using an unsupervised learning approach based on probabilistic latent semantic analysis with scale invariant features.

    Science.gov (United States)

    Shen, Yao; Guturu, Parthasarathy Partha; Buckles, Bill P

    2012-01-01

Since wireless capsule endoscopy (WCE) is a novel technology for recording videos of a patient's digestive tract, the problem of segmenting a WCE video into subvideos corresponding to the entrance, stomach, small intestine, and large intestine regions is not well addressed in the literature. The few papers addressing this problem follow supervised learning approaches that presume the availability of a large database of correctly labeled training samples. Considering the difficulties in procuring the sizable WCE training data sets needed for achieving high classification accuracy, we introduce in this paper an unsupervised learning approach that employs the Scale Invariant Feature Transform (SIFT) for extraction of local image features and the probabilistic latent semantic analysis (pLSA) model, used in linguistic content analysis, for data clustering. Results of experimentation indicate that this method compares well in classification accuracy with the state-of-the-art supervised classification approaches to WCE video segmentation.

  5. RST-Resilient Video Watermarking Using Scene-Based Feature Extraction

    OpenAIRE

    Jung Han-Seung; Lee Young-Yoon; Lee Sang Uk

    2004-01-01

    Watermarking for video sequences should consider additional attacks, such as frame averaging, frame-rate change, frame shuffling or collusion attacks, as well as those of still images. Also, since video is a sequence of analogous images, video watermarking is subject to interframe collusion. In order to cope with these attacks, we propose a scene-based temporal watermarking algorithm. In each scene, segmented by scene-change detection schemes, a watermark is embedded temporally to one-dimens...

  6. Tracking of Moving Objects in Video Through Invariant Features in Their Graph Representation

    Directory of Open Access Journals (Sweden)

    Averbuch A

    2008-01-01

Full Text Available The paper suggests a contour-based algorithm for tracking moving objects in video. The inputs are segmented moving objects. Each segmented frame is transformed into region adjacency graphs (RAGs). The object's contour is divided into subcurves, and the contour's junctions are derived. These junctions are the unique "signature" of the tracked object. Junctions from two consecutive frames are matched, and the junctions' motion is estimated using RAG edges in consecutive frames. Each pair of matched junctions may be connected by several paths (edges) that become candidates to represent a tracked contour. These paths are obtained by the k-shortest paths algorithm between two nodes, with the RAG transformed into a weighted directed graph. The final tracked contour is constructed by matching edges (subcurves) against the candidate path sets. The RAG-constructed tracked contour enables an accurate and unique moving object representation. The algorithm tracks multiple objects, partially covered (occluded) objects, and compound objects that merge and split, such as players in a soccer game, and supports tracking in crowded areas for surveillance applications. We assume that the features of the topologic signature of the tracked object stay invariant in two consecutive frames. The algorithm's complexity depends on the RAG's edges and not on the image's size.

  7. The Effect of Typographical Features of Subtitles on Nonnative English Viewers’ Retention and Recall of Lyrics in English Music Videos

    OpenAIRE

    Farshid Tayari Ashtiani

    2017-01-01

The goal of this study was to test the effect of typographical features of subtitles, including size, color, and position, on nonnative English viewers' retention and recall of lyrics in music videos. To do so, the researcher played a simple subtitled music video for the participants at the beginning of their classes and administered a 31-blank cloze test based on the lyrics at the end of the classes. In the second test, the control group went through the same procedure but the experimental group watch...

  8. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features.

    Science.gov (United States)

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor H.264. Encoding time complexity, however, has increased several-fold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a human visual attention modelling-based saliency feature and phase correlation-based motion features. The features are innovatively combined through a fusion process, using a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block-partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.
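
The weighted fusion of saliency and motion into a binary pattern, followed by a codebook lookup, might look like the following sketch. Per-cell scores are assumed pre-normalized to [0, 1], and the fixed weight and threshold are illustrative stand-ins for the paper's content-based adaptive weighting:

```python
def rdms_binary_pattern(saliency, motion, w=0.5, thresh=0.5):
    # Fuse per-cell saliency and motion scores into a binary map marking
    # cells with dominated motion/saliency (1) versus background (0).
    return [[1 if w * s + (1 - w) * m > thresh else 0
             for s, m in zip(srow, mrow)]
            for srow, mrow in zip(saliency, motion)]

def nearest_template(pattern, codebook):
    # Compare the flattened binary pattern against predefined templates
    # (flat 0/1 lists) by Hamming distance; return the best template index.
    flat = [b for row in pattern for b in row]
    def ham(template):
        return sum(a != b for a, b in zip(flat, template))
    return min(range(len(codebook)), key=lambda i: ham(codebook[i]))
```

The selected template index would then map to a subset of block-partitioning modes to evaluate, instead of the full exhaustive search.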

  9. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  10. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features.

    Science.gov (United States)

    Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most commonly used diagnostic modality for gastrointestinal polyps. But because it is an operator-dependent procedure, several human factors can lead to misdetection of polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support for gastrointestinal polyp detection. This system captures video streams from endoscopic video and, in the output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, gaining an accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
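
    The feature-fusion step can be sketched as follows: the CW and CNN descriptors of a frame are concatenated into one vector before classification. The descriptor dimensions are assumptions, and a nearest-centroid rule stands in for the linear SVM used in the paper:

```python
import numpy as np

# Illustrative fusion of colour-wavelet (CW) and CNN frame features.
# Dimensions (16 and 64) and the nearest-centroid classifier are
# stand-in assumptions, not the paper's actual configuration.
rng = np.random.default_rng(0)

def fuse(cw_feat, cnn_feat):
    """Concatenate the two descriptors into one fused vector."""
    return np.concatenate([cw_feat, cnn_feat])

# Toy training data: class 0 (normal frame) vs class 1 (polyp frame).
X = np.array([fuse(rng.normal(c, 0.1, 16), rng.normal(c, 0.1, 64))
              for c in (0, 0, 1, 1)])
y = np.array([0, 0, 1, 1])
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(frame_feat):
    """Assign the fused feature vector to the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - frame_feat, axis=1)))

test = fuse(rng.normal(1, 0.1, 16), rng.normal(1, 0.1, 64))
print(predict(test))  # polyp-like frame -> class 1
```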

  11. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Science.gov (United States)

    Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most commonly used diagnostic modality for gastrointestinal polyps. But because it is an operator-dependent procedure, several human factors can lead to misdetection of polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support for gastrointestinal polyp detection. This system captures video streams from endoscopic video and, in the output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, gaining an accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%. PMID:28894460

  12. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Directory of Open Access Journals (Sweden)

    Mustain Billah

    2017-01-01

    Full Text Available Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most commonly used diagnostic modality for gastrointestinal polyps. But because it is an operator-dependent procedure, several human factors can lead to misdetection of polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support for gastrointestinal polyp detection. This system captures video streams from endoscopic video and, in the output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, gaining an accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.

  13. Motion Entropy Feature and Its Applications to Event-Based Segmentation of Sports Video

    Science.gov (United States)

    Chen, Chen-Yu; Wang, Jia-Ching; Wang, Jhing-Fa; Hu, Yu-Hen

    2008-12-01

    An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed.
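
    The entropy criterion above can be sketched in a few lines: per frame, the Shannon entropy of a motion-magnitude histogram is computed, and a jump in the resulting curve marks a candidate event boundary. The synthetic motion fields and the simple threshold on the first difference are assumptions standing in for the paper's homoscedastic change-point model:

```python
import numpy as np

# Minimal sketch of the motion-entropy curve and a naive boundary
# detector. Real motion fields would come from optical flow or motion
# vectors; here they are synthesised.
def motion_entropy(magnitudes, bins=16):
    """Shannon entropy (bits) of a motion-magnitude histogram."""
    hist, _ = np.histogram(magnitudes, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
calm = [rng.uniform(0.0, 0.1, 500) for _ in range(5)]   # little motion
play = [rng.uniform(0.0, 1.0, 500) for _ in range(5)]   # diverse motion
curve = np.array([motion_entropy(m) for m in calm + play])

# The largest jump in the entropy curve marks the segment boundary.
boundary = int(np.argmax(np.abs(np.diff(curve)))) + 1
print(boundary)
```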

  14. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction

    National Research Council Canada - National Science Library

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    .... Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer...

  15. The Effect of Typographical Features of Subtitles on Nonnative English Viewers’ Retention and Recall of Lyrics in English Music Videos

    Directory of Open Access Journals (Sweden)

    Farshid Tayari Ashtiani

    2017-10-01

    Full Text Available The goal of this study was to test the effect of typographical features of subtitles, including size, color, and position, on nonnative English viewers’ retention and recall of lyrics in music videos. To do so, the researcher played a simple subtitled music video for the participants at the beginning of their classes and administered a 31-blank cloze test on the lyrics at the end of the classes. In the second test, the control group went through the same procedure, but the experimental group watched the customized subtitled version of the music video. The results demonstrated no significant difference between the two groups in the first test; in the second, however, the scores increased remarkably in the experimental group, demonstrating better retention and recall. This study has implications for English language teachers and material developers, who can benefit from customized bimodal subtitles as a mnemonic tool for better comprehension, retention, and recall of aural content in videos via the Computer Assisted Language Teaching approach.

  16. Word2VisualVec: Image and Video to Sentence Matching by Visual Feature Prediction

    OpenAIRE

    Dong, Jianfeng; Li, Xirong; Snoek, Cees G. M.

    2016-01-01

    This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence...

  17. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Full Text Available Today’s H.264/AVC coded videos have high quality and a high data-compression ratio. They also have strong fault tolerance and good network adaptability, and have been widely applied on the Internet. With the popularity of powerful and easy-to-use video editing software, digital videos can be tampered with in various ways. Therefore, detecting double compression in H.264/AVC video can serve as a first step in the study of video-tampering forensics. This paper proposes a simple but effective double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then incorporated to generate one enhanced feature that represents the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the features. The primary Group of Pictures (GOP) size is estimated based on an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
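
    The exhaustive GOP-size search can be sketched as follows; the synthetic per-frame feature with a spike every 12 frames and the comb-scoring rule are illustrative assumptions, not the paper's actual SODB/S-MB feature:

```python
import numpy as np

# Hedged sketch of estimating the primary GOP size from a per-frame
# feature sequence by exhaustively scoring candidate periods.
def estimate_gop(feature, max_gop=30):
    """Return the candidate period whose comb of samples has the largest
    mean feature value; ties resolve to the smallest period because the
    candidates are scanned in increasing order."""
    scores = {p: feature[::p].mean() for p in range(2, max_gop + 1)}
    return max(scores, key=scores.get)

# Synthetic sequence: a recompression artifact spike every 12 frames.
feat = np.zeros(240)
feat[::12] = 1.0
print(estimate_gop(feat))
```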

  18. Repurposing Video Documentaries as Features of a Flipped-Classroom Approach to Community-Centered Development

    Science.gov (United States)

    Arbogast, Douglas; Eades, Daniel; Plein, L. Christopher

    2017-01-01

    Online and off-site educational programming is increasingly incorporated by Extension educators to reach their clientele. Models such as the flipped classroom combine online content and in-person learning, allowing clients to both gain information and build peer learning communities. We demonstrate how video documentaries used in traditional…

  19. Proliferative and necrotising otitis externa in a cat without pinnal involvement: video-otoscopic features.

    Science.gov (United States)

    Borio, Stefano; Massari, Federico; Abramo, Francesca; Colombo, Silvia

    2013-04-01

    Proliferative and necrotising otitis externa is a rare and recently described disease affecting the ear canals and concave pinnae of kittens. This article describes a case of proliferative and necrotising otitis externa in a young adult cat. In this case, the lesions did not affect the pinnae, but both ear canals were severely involved. Video-otoscopy revealed a digitally proliferative lesion growing at 360° all around the ear canals for their entire length, without involvement of the middle ear. Histopathological examination confirmed the diagnosis, and the cat responded completely to once-daily application of 0.1% tacrolimus ointment diluted in mineral oil in the ear canals. The video-otoscopy findings, not described previously, were very peculiar and may help clinicians to diagnose this rare disease.

  20. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    Science.gov (United States)

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  1. Fusion of visual and audio features for person identification in real video

    Science.gov (United States)

    Li, Dongge; Wei, Gang; Sethi, Ishwar K.; Dimitrova, Nevenka

    2001-01-01

    In this research, we studied the joint use of visual and audio information for the problem of identifying persons in real video. A person identification system, which is able to identify characters in TV shows by the fusion of audio and visual information, was constructed based on two different fusion strategies. In the first strategy, speaker identification is used to verify the face recognition result. The second strategy consists of using face recognition and tracking to supplement speaker identification results. To evaluate our system's performance, an information database was generated by manually labeling the speaker and the main person's face in every I-frame of a video segment of the TV show 'Seinfeld'. By comparing the output from our system with our information database, we evaluated the performance of each of the analysis channels and their fusion. The results show that the first fusion strategy is suitable for applications where precision is much more critical than recall. The second fusion strategy, on the other hand, generates the best overall identification performance; it greatly outperforms either of the analysis channels in both precision and recall and is applicable to more general applications, such as, in our case, identifying persons in TV programs.
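
    The two fusion strategies reduce to simple decision rules, which can be sketched as follows; the string labels and the use of None for an abstaining channel are assumptions for illustration:

```python
# Toy sketch of the two fusion strategies from the abstract above.
# Strategy 1 keeps a face label only when the speaker channel agrees
# (high precision); strategy 2 falls back to the face channel when the
# speaker channel abstains (better recall).

def fuse_verify(face_id, speaker_id):
    """Strategy 1: speaker identification verifies face recognition."""
    return face_id if face_id == speaker_id else None

def fuse_supplement(face_id, speaker_id):
    """Strategy 2: face recognition supplements speaker identification."""
    return speaker_id if speaker_id is not None else face_id

print(fuse_verify("jerry", "kramer"))    # channels disagree -> rejected
print(fuse_supplement(None, "elaine"))   # speaker channel decides
print(fuse_supplement("jerry", None))    # face channel fills the gap
```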

  2. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonably priced, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  3. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
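
    The intra-/inter-frame coding choice for binary descriptors can be sketched as follows; the bit-count cost model and the mode-selection rule are simplifying assumptions, not the paper's actual entropy coder:

```python
import numpy as np

# Sketch of choosing between intra coding (descriptor on its own) and
# inter coding (XOR residual against the matched descriptor from the
# previous frame). The cost proxy — number of set bits — is an
# assumption standing in for a real run-length/entropy cost.
def code_cost(bits):
    return int(bits.sum())

def choose_mode(curr, prev):
    """Return the cheaper coding mode and its cost for one descriptor."""
    intra = code_cost(curr)
    inter = code_cost(curr ^ prev)
    return ("inter", inter) if inter < intra else ("intra", intra)

prev = np.array([1, 1, 0, 1, 0, 0, 1, 1], dtype=np.uint8)
curr = prev.copy()
curr[2] = 1                       # descriptor barely changed between frames
print(choose_mode(curr, prev))    # temporal redundancy makes inter cheaper
```

    This is the redundancy the ATC paradigm exploits: in bandwidth-limited visual sensor networks, sending small inter-frame residuals of features costs far less than sending the compressed frames themselves.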

  4. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available In this paper, a search method based on figurative indexation of the folksonomic characteristics of graphical files is described. The method takes extralinguistic information into account and is based on a model of figurative human thinking. The paper describes the creation of a method for searching image files based on their formal, including folksonomic, clues.

  5. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus, semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Video features can then be extracted from the key frames. However, most such research performs off-line video processing, in which the whole video scope is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration and news broadcasts. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies on the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
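
    An on-line scene-cut detector in this spirit can be sketched as a streaming loop: each arriving frame's colour histogram is compared with the previous one, and a frame is flagged as a key frame when the histogram distance exceeds a threshold. The threshold value and synthetic frame source are assumptions:

```python
import numpy as np

# Streaming key-frame extraction: frames are processed one at a time,
# as required for on-line multicast video, with no second pass.
def stream_keyframes(frames, threshold=0.5, bins=8):
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        # L1 histogram distance; the first frame is always a key frame.
        if prev is None or 0.5 * np.abs(hist - prev).sum() > threshold:
            yield i
        prev = hist

rng = np.random.default_rng(2)
scene_a = [rng.integers(0, 64, (32, 32)) for _ in range(4)]     # dark scene
scene_b = [rng.integers(192, 256, (32, 32)) for _ in range(4)]  # bright scene
print(list(stream_keyframes(scene_a + scene_b)))  # [0, 4]
```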

  6. The 15 March 2007 paroxysm of Stromboli: video-image analysis, and textural and compositional features of the erupted deposit

    Science.gov (United States)

    Andronico, Daniele; Taddeucci, Jacopo; Cristaldi, Antonio; Miraglia, Lucia; Scarlato, Piergiorgio; Gaeta, Mario

    2013-07-01

    On 15 March 2007, a paroxysmal event occurred within the crater terrace of Stromboli, in the Aeolian Islands (Italy). Infrared and visible video recordings from the monitoring network reveal that there was a succession of highly explosive pulses, lasting about 5 min, from at least four eruptive vents. Initially, brief jets with low apparent temperature were simultaneously erupted from the three main vent regions, becoming hotter and transitioning to bomb-rich fountaining that lasted for 14 s. Field surveys estimate the corresponding fallout deposit to have a mass of ˜1.9 × 107 kg that, coupled with the video information on eruption duration, provides a mean mass eruption rate of ˜5.4 × 105 kg/s. Textural and chemical analyses of the erupted tephra reveal unexpected complexity, with grain-size bimodality in the samples associated with the different percentages of ash types (juvenile, lithics, and crystals) that reflects almost simultaneous deposition from multiple and evolving plumes. Juvenile glass chemistry ranges from a gas-rich, low porphyricity end member (typical of other paroxysmal events) to a gas-poor high porphyricity one usually associated with low-intensity Strombolian explosions. Integration of our diverse data sets reveals that (1) the 2007 event was a paroxysmal explosion driven by a magma sharing common features with large-scale paroxysms as well as with "ordinary" Strombolian explosions; (2) initial vent opening by the release of a pressurized gas slug and subsequent rapid magma vesiculation and ejection, which were recorded both by the infrared camera and in the texture of fallout products; and (3) lesser paroxysmal events can be highly dynamic and produce surprisingly complex fallout deposits, which would be difficult to interpret from the geological record alone.

  7. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    Full Text Available The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semisupervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.

  8. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2017-08-01

    Full Text Available The increasing number of elderly people living independently creates a need for special care in the form of healthcare monitoring systems. Recent advancements in depth video technologies have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a novel depth video-based method for HAR is presented, using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize the daily life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are first analyzed by a temporal motion identification method to segment human silhouettes from the noisy background and compute the depth silhouette area for each activity in order to track human movements in a scene. Several representative features, including invariant, multi-view differentiation, and spatiotemporal body-joint features, are fused together to capture gradient orientation change, intensity differentiation, temporal variation, and local motion of specific body parts. These features are then processed by the dynamics of their respective class and learned, modeled, trained, and recognized with a specific embedded HMM having active feature values. Furthermore, we construct a new online human activity dataset with a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrate that the proposed multi-features are efficient and robust compared with state-of-the-art features for human action and activity recognition.
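
    The per-activity HMM classification idea can be sketched as follows: each activity has its own small model, and a new feature sequence is assigned to the activity whose model gives the highest forward log-likelihood. The two-state models and binary symbols (0 = still, 1 = moving) are illustrative stand-ins for the fused multi-features and embedded HMMs above:

```python
import numpy as np

# Scaled forward algorithm for a discrete HMM, plus per-activity
# maximum-likelihood classification. Model parameters are toy values.
def forward_loglik(obs, pi, A, B):
    """Return log P(obs | model) using incremental rescaling."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

A = np.array([[0.9, 0.1], [0.1, 0.9]])  # shared state transitions
models = {
    "walking": (np.array([0.5, 0.5]), A, np.array([[0.1, 0.9], [0.2, 0.8]])),
    "sitting": (np.array([0.5, 0.5]), A, np.array([[0.9, 0.1], [0.8, 0.2]])),
}

def classify(obs):
    """Pick the activity whose HMM best explains the observed symbols."""
    return max(models, key=lambda m: forward_loglik(obs, *models[m]))

print(classify([1, 1, 0, 1, 1, 1]))  # mostly-moving sequence -> 'walking'
```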

  9. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  10. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  11. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  12. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    Science.gov (United States)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data that has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General-Purpose Computation on a Graphics Processing Unit (GPGPU) and also for efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high

  13. Port Video and Logo

    OpenAIRE

    Whitehead, Stuart; Rush, Joshua

    2013-01-01

    Logo PDF files should be accessible by any PDF reader, such as Adobe Reader. SVG files of the logo are vector graphics accessible by programs such as Inkscape or Adobe Illustrator. PNG files are image files of the logo that can be opened by any operating system's default image viewer. The final report is submitted in both .doc (Microsoft Word) and .pdf formats. The video is submitted in .avi format and can be viewed with Windows Media Player or VLC. Audio .wav files are also ...

  14. The Effect of Theme Preference on Academic Word List Use: A Case for Smartphone Video Recording Feature

    Science.gov (United States)

    Gromik, Nicolas A.

    2017-01-01

    Sixty-seven Japanese English as a Second Language undergraduate learners completed one smartphone video production per week for 12 weeks, based on a teacher-selected theme. Designed as a case study for this specific context, data from students' oral performances was analyzed on a weekly basis for their use of the Academic Word List (AWL). A…

  15. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field in the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including the camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller, and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. Our proposed, designed, and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.

  16. Video display terminal workstation improvement program: I. Baseline associations between musculoskeletal discomfort and ergonomic features of workstations.

    Science.gov (United States)

    Demure, B; Luippold, R S; Bigelow, C; Ali, D; Mundt, K A; Liese, B

    2000-08-01

    Associations between selected sites of musculoskeletal discomfort and ergonomic characteristics of the video display terminal (VDT) workstation were assessed in analyses controlling for demographic, psychosocial stress, and VDT use factors in 273 VDT users from a large administrative department. Significant associations with wrist/hand discomfort were seen for female gender; working 7+ hours at a VDT; low job satisfaction; poor keyboard position; use of new, adjustable furniture; and layout of the workstation. Significantly increased odds ratios for neck/shoulder discomfort were observed for 7+ hours at a VDT, less than complete job control, older age (40 to 49 years), and never/infrequent breaks. Lower back discomfort was related marginally to working 7+ hours at a VDT. These results demonstrate that some characteristics of VDT workstations, after accounting for psychosocial stress, can be correlated with musculoskeletal discomfort.

  17. Automatic Synthesis of Background Music Track Data by Analysis of Video Contents

    Science.gov (United States)

    Modegi, Toshio

    This paper describes a technique for automatically creating background music track data for a given video file. Our proposed system is based on a novel BGM synthesizer, called the “Matrix Music Player”, which can produce 3125 kinds of high-quality BGM content by dynamically mixing 5 audio files freely selected from a total of 25 audio waveform files. In order to retrieve appropriate BGM mixing patterns, we have constructed an acoustic analysis database that records the acoustic features of all 3125 synthesized patterns. By developing a video analyzer that generates image parameters for given video data and converts them to acoustic parameters, we can query the acoustic analysis database and retrieve an appropriate synthesized BGM signal to include in the audio track of the source video file. Based on our proposed method, we have carried out BGM synthesis experiments using several video clips of around 20 seconds each. The automatically inserted BGM audio streams for all of the given video clips were objectively acceptable. In this paper, we briefly describe our proposed BGM synthesis method and its experimental results.
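
    The figure of 3125 patterns is consistent with five mixing slots each choosing one of five candidate files (5^5 = 3125); note that this grouping of the 25 files into five slots of five is our inference, not stated in the abstract:

    ```python
    from itertools import product

    slots = 5       # audio layers mixed simultaneously
    variants = 5    # candidate files per layer (5 * 5 = 25 files in total)

    # Every possible BGM mixing pattern: one file choice per layer
    patterns = list(product(range(variants), repeat=slots))
    print(len(patterns))  # -> 3125
    ```

    Selecting a BGM then reduces to picking the pattern whose stored acoustic features best match the parameters derived from the video.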

  18. Algorithm combination of deblurring and denoising on video frames using the method search of local features on image

    Directory of Open Access Journals (Sweden)

    Semenishchev Evgeny

    2017-01-01

    Full Text Available In this paper, we propose an approach that reduces errors in the form of noise and blur. To improve processing speed and enable parallelization of the process, we use an approach based on the search for local features in the image.

  19. A Video Streaming Application Using Mobile Media Application Programming Interface

    Directory of Open Access Journals (Sweden)

    Henning Titi Ciptaningtyas

    2010-12-01

    Full Text Available Recently, mobile phone technology has been developing rapidly. These developments have led to the emergence of multimedia mobile phones that support Wireless Local Area Network (WLAN) connectivity. However, the use of WLAN technology on mobile phones to access streaming video is still rarely encountered, even though the current Symbian S60 operating system for multimedia mobile phones is very reliable in handling a variety of media such as video. This study presents the making of a video streaming application for mobile phones over a WLAN connection using JSR 135, better known as the Mobile Media API (MMAPI). MMAPI is used to control the process of video streaming and its supporting features. The application uses two protocols, namely RTSP and HTTP. Experiment results show that using MMAPI on Symbian S60-based mobile phones for video streaming is feasible and has good reliability. This is indicated by 0% packet loss on a reliable connection. In addition, the time required to play multimedia files is not affected by the size of the streamed video files.

  20. Accelerating video carving from unallocated space

    Science.gov (United States)

    Kalva, Hari; Parikh, Anish; Srinivasan, Avinash

    2013-03-01

    Video carving has become an essential tool in digital forensics. Video carving enables recovery of deleted video files from hard disks. Processing data to extract videos is a computationally intensive task. In this paper we present two methods to accelerate video carving: a method to accelerate fragment extraction, and a method to accelerate combining of these fragments into video segments. Simulation results show that complexity of video fragment extraction can be reduced by as much as 75% with minimal impact on the videos recovered.
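
    The abstract does not detail the acceleration methods themselves; the fragment-extraction step that carvers accelerate starts from a signature scan over the raw data. A minimal sketch for the AVI container, whose files start with `RIFF`, a 4-byte little-endian size, then `AVI ` (the signature is from the AVI format specification; everything else is illustrative):

    ```python
    def find_avi_headers(data: bytes):
        """Scan a raw byte buffer (e.g. an image of unallocated space)
        for AVI file headers and return their byte offsets."""
        offsets = []
        pos = data.find(b"RIFF")
        while pos != -1:
            # Bytes 8..11 after 'RIFF' carry the form type 'AVI '
            if data[pos + 8:pos + 12] == b"AVI ":
                offsets.append(pos)
            pos = data.find(b"RIFF", pos + 1)
        return offsets

    # Toy buffer: junk, one AVI header, more junk
    buf = b"\x00" * 10 + b"RIFF" + b"\x10\x00\x00\x00" + b"AVI " + b"\xff" * 20
    print(find_avi_headers(buf))  # -> [10]
    ```

    Real carvers must additionally validate candidate headers and reassemble fragmented files, which is where the acceleration methods in the paper apply.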

  1. Preparation Strategies for Video-based Introductory Physics

    Science.gov (United States)

    DeMuth, David M., Jr.; Schwalm, M.

    2006-12-01

    A video capture and Macromedia Flash-based video analysis system is used in an implementation of the problem solving and collaborative methodologies (UMinn, Heller/Heller) at the University of Minnesota, Crookston, where an open source polling system has been developed to verify preparation. Its features include a platform independent web interface, assignable and automatic polling intervals, and graphical reporting. Question types: Multiple Choice (optionally randomly presented), random variable, ranking, and survey. An HTML editor with image/video/file uploading allows for high quality presentation of questions. An optional collaborative feature forces agreement among students in small groups. An overview of the system and its impact on preparation will be presented. http://ray.crk.umn.edu/aapt/ Funded by NSF-CCLI 102280781

  2. Automatic Story Segmentation for TV News Video Using Multiple Modalities

    Directory of Open Access Journals (Sweden)

    Émilie Dumont

    2012-01-01

    Full Text Available While video content is often stored in rather large files or broadcasted in continuous streams, users are often interested in retrieving only a particular passage on a topic of interest to them. It is, therefore, necessary to split video documents or streams into shorter segments corresponding to appropriate retrieval units. We propose here a method for the automatic segmentation of TV news videos into stories. A multiple-descriptor-based segmentation approach is proposed. The selected multimodal features are complementary and give good insights about story boundaries. Once extracted, these features are expanded with a local temporal context and combined by an early fusion process. The story boundaries are then predicted using machine learning techniques. We investigate the system by experiments conducted using TRECVID 2003 data and protocol of the story boundary detection task, and we show that the proposed approach outperforms the state-of-the-art methods while requiring a very small amount of manual annotation.
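
    A minimal sketch of the "local temporal context plus early fusion" step described above, assuming per-shot feature vectors of equal length; the window size and the toy descriptors are our own choices, not the paper's:

    ```python
    def add_temporal_context(features, w=1):
        """Early fusion with local temporal context: each shot's feature
        vector is concatenated with those of its w neighbours on each
        side (zero-padded at the boundaries). 'features' is a list of
        equal-length lists, one per shot."""
        dim = len(features[0])
        pad = [[0.0] * dim] * w
        padded = pad + features + pad
        return [sum(padded[i:i + 2 * w + 1], []) for i in range(len(features))]

    # Three shots, 2-D descriptors per shot
    shots = [[1, 0], [0, 1], [1, 1]]
    fused = add_temporal_context(shots, w=1)
    print(fused[0])  # -> [0.0, 0.0, 1, 0, 0, 1]
    ```

    The fused vectors would then be fed to a classifier that predicts, per shot boundary, whether it is also a story boundary.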

  3. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  4. Videos & Tools: MedlinePlus

    Science.gov (United States)

    ... of this page: https://medlineplus.gov/videosandcooltools.html Videos & Tools To use the sharing features on this page, please enable JavaScript. Watch health videos on topics such as anatomy, body systems, and ...

  5. Health Videos: MedlinePlus

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/anatomyvideos.html.htm Health Videos To use the sharing features on this page, please enable JavaScript. These animated videos show the anatomy of body parts and organ ...

  6. An unsupervised meta-graph clustering based prototype-specific feature quantification for human re-identification in video surveillance

    Directory of Open Access Journals (Sweden)

    Aparajita Nanda

    2017-06-01

    Full Text Available Human re-identification is an emerging research area in the field of visual surveillance. It refers to the task of associating the images of persons captured by one camera (probe set) with the images captured by another camera (gallery set) at different locations and time instances. The performance of these systems is often challenged by several factors: variation in articulated human pose and clothing, frequent occlusion with various objects, changes in light illumination, and cluttered backgrounds, to name a few. Moreover, the ambiguity in recognition increases between individuals with similar appearance. In this paper, we present a novel framework for human re-identification that finds the correspondence image pair across non-overlapping camera views in the presence of the above challenging scenarios. The proposed framework handles the visual ambiguity among persons of similar appearance by first segmenting the gallery instances into disjoint prototypes (groups), where each prototype represents images with high commonality. Then, a weighting scheme is formulated that quantifies the selective and distinctive information of each feature according to its level of contribution to each prototype. Finally, the prototype-specific weights are utilized in the similarity measure and fused with the existing generic weighting to facilitate improvement in re-identification. Exhaustive simulations on three benchmark datasets, along with CMC (Cumulative Matching Characteristics) plots, demonstrate the efficacy of our proposed framework over its counterparts.

  7. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data is collected from a real 4G SCM testbed to investigate factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is channel attenuation, and that video quality can be well estimated by our models with small errors.
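
    The abstract does not specify the regression model used; a minimal single-feature sketch, fitting quality against channel attenuation with ordinary least squares on made-up numbers, illustrates the approach:

    ```python
    def fit_line(xs, ys):
        """Ordinary least-squares fit of y = a*x + b, e.g. predicted
        video quality (y) against channel attenuation (x)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    # Synthetic data: quality drops as attenuation grows (illustrative only)
    atten = [0, 10, 20, 30, 40]
    quality = [95, 85, 75, 65, 55]
    a, b = fit_line(atten, quality)
    print(a, b)  # -> -1.0 95.0
    ```

    With more than one predictor (as the paper's feature selection over video and network parameters implies), the same idea becomes multivariate regression.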

  8. Video recording in movement disorders: practical issues.

    Science.gov (United States)

    Duker, Andrew P

    2013-10-01

    Video recording can provide a valuable and unique record of the physical examinations of patients with a movement disorder, capturing nuances of movement and supplementing the written medical record. In addition, video is an indispensable tool for education and research in movement disorders. Digital file recording and storage has largely replaced analog tape recording, increasing the ease of editing and storing video records. Practical issues to consider include hardware and software configurations, video format, the security and longevity of file storage, patient consent, and video protocols.

  9. Forensic analysis of video steganography tools

    Directory of Open Access Journals (Sweden)

    Thomas Sloan

    2015-05-01

    Full Text Available Steganography is the art and science of concealing information in such a way that only the sender and intended recipient of a message should be aware of its presence. Digital steganography has been used in the past on a variety of media including executable files, audio, text, games and, notably, images. Additionally, there is increasing research interest towards the use of video as a medium for steganography, due to its pervasive nature and diverse embedding capabilities. In this work, we examine the embedding algorithms and other security characteristics of several video steganography tools. We show that all of them feature basic and severe security weaknesses. This is potentially a very serious threat to the security, privacy and anonymity of their users. It is important to highlight that most steganography users have perfectly legal and ethical reasons to employ it. Some common scenarios would include citizens in oppressive regimes whose freedom of speech is compromised, people trying to avoid massive surveillance or censorship, political activists, whistle blowers, journalists, etc. As a result of our findings, we strongly recommend ceasing any use of these tools and removing any contents that may have been hidden, along with any carriers stored, exchanged and/or uploaded online. For many of these tools, carrier files will be trivial to detect, potentially compromising any hidden data and the parties involved in the communication. We finish this work by presenting our steganalytic results, which highlight a very poor current state of the art in practical video steganography tools. There is unfortunately a complete lack of secure and publicly available tools, and even commercial tools offer very poor security. We therefore encourage the steganography community to work towards the development of more secure and accessible video steganography tools, and to make them available for the general public.
The results presented in this work can also be seen as a useful

  10. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  11. Personal Digital Video Stories

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Henningsen, Birgitte Sølbeck; Louw, Arnt Vestergaard

    2016-01-01

    agenda focusing on video productions in combination with digital storytelling, followed by a presentation of the digital storytelling features. The paper concludes with a suggestion to initiate research in what is identified as Personal Digital Video (PDV) Stories within longitudinal settings, while...

  12. Use of an iPhone 4 with Video Features to Assist Location of Students with Moderate Intellectual Disability When Lost in Community Settings

    Science.gov (United States)

    Purrazzella, Kaitlin; Mechling, Linda C.

    2013-01-01

    This study evaluated the acquisition of use of an iPhone 4 by adults with moderate intellectual disability to take and send video captions of their location when lost in the community. A multiple probe across participants design was used to evaluate the effectiveness of the intervention which used video modeling, picture prompts, and instructor…

  13. Video Streaming in the Wild West

    OpenAIRE

    Helen Gail Prosser

    2006-01-01

    Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high capacity broadband network that connects schools, hospitals, libraries and government offices thr...

  14. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  15. MPEG-4 video compression optimization research

    Science.gov (United States)

    Wei, Xianmin

    2011-10-01

    In order to compress large amounts of video data effectively and transfer them smoothly over limited network bandwidth, this article uses MPEG-4 compression technology to compress the video stream. For network transmission, the transmission technique is fully analyzed and optimized according to the characteristics of the video stream, and, taking the current network bandwidth status and protocol into account, a network model is established for transferring and playing back video streams effectively. Through the combination of these two areas, the compression and storage of video files and the efficiency of network transmission are significantly improved, and video processing power is increased.

  16. Caching Eliminates the Wireless Bottleneck in Video Aware Wireless Networks

    Directory of Open Access Journals (Sweden)

    Andreas F. Molisch

    2014-01-01

    Full Text Available Wireless video is the main driver for rapid growth in cellular data traffic. Traditional methods for network capacity increase are very costly and do not exploit the unique features of video, especially asynchronous content reuse. In this paper we give an overview of our work that proposed and detailed a new transmission paradigm exploiting content reuse and the widespread availability of low-cost storage. Our network structure uses caching in helper stations (femtocaching) and/or devices, combined with highly spectrally efficient short-range communications to deliver video files. For femtocaching, we develop optimum storage schemes and dynamic streaming policies that optimize video quality. For caching on devices, combined with device-to-device (D2D) communications, we show that communications within clusters of mobile stations should be used; the cluster size can be adjusted to optimize the tradeoff between frequency reuse and the probability that a device finds a desired file cached by another device in the same cluster. In many situations the network throughput increases linearly with the number of users, and the tradeoff between throughput and outage is better than in traditional base-station centric systems. Simulation results with realistic numbers of users and channel conditions show that network throughput can be increased by two orders of magnitude compared to conventional schemes.
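
    The cluster-size tradeoff described above can be illustrated with a toy model: larger clusters make a local cache hit more likely, but leave fewer clusters, and hence fewer concurrent short-range links, in the cell. The per-device caching probability and the one-link-per-cluster rule are assumptions for illustration, not the paper's model:

    ```python
    def hit_probability(cluster_size, p_cached=0.1):
        """Probability that at least one other device in the cluster
        caches the requested file, assuming each device caches it
        independently with probability p_cached."""
        return 1 - (1 - p_cached) ** (cluster_size - 1)

    def expected_d2d_links(n_users, cluster_size, p_cached=0.1):
        """Toy throughput proxy: one concurrent D2D link per cluster,
        active only when the cluster has a cache hit."""
        n_clusters = n_users // cluster_size
        return n_clusters * hit_probability(cluster_size, p_cached)

    for size in (2, 5, 10, 25):
        print(size, round(expected_d2d_links(100, size), 2))
    ```

    With these toy numbers the proxy peaks at an intermediate cluster size, mirroring the frequency-reuse versus hit-probability tradeoff the paper optimizes.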

  17. PROTOTIPE VIDEO EDITOR DENGAN MENGGUNAKAN DIRECT X DAN DIRECT SHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Full Text Available Technological development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn gives rise to the need for an editor application. To address this need, this paper describes the process of building a simple application for video editing. The application development uses programming techniques often applied in multimedia applications, especially video. The application begins with the compression and decompression of the video file, and then moves on to the editing of the digital video file. Furthermore, the application is equipped with the facilities needed for the editing process. The application is built with Microsoft Visual C++ using DirectX technology, particularly DirectShow. It provides basic facilities that help the editing of a digital video file and produces an AVI-format file once the editing process is finished. Testing shows the ability of this application to 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats; the 'cut' and 'insert' processes can only be done in static order. The application also provides transition effects for each clip, and finally saves the newly edited video file in AVI format.

  18. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
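
    The scheme itself is not given in this abstract; as a hedged illustration of the general principle, a key-seeded additive (spread-spectrum-style) watermark can be added to pixel values and recovered by correlating against the same key-seeded pattern. All parameters and the flat toy frame are illustrative:

    ```python
    import random

    def embed(frame, bit, key, strength=2):
        """Additive watermark: a key-seeded +/-1 pattern is added to
        (bit=1) or subtracted from (bit=0) the pixel values."""
        rng = random.Random(key)
        pattern = [rng.choice((-1, 1)) for _ in frame]
        sign = 1 if bit else -1
        return [p + sign * strength * w for p, w in zip(frame, pattern)]

    def detect(frame, key):
        """Correlate the mean-removed frame with the same key-seeded
        pattern; the sign of the correlation recovers the bit."""
        rng = random.Random(key)
        pattern = [rng.choice((-1, 1)) for _ in frame]
        mean = sum(frame) / len(frame)
        corr = sum((p - mean) * w for p, w in zip(frame, pattern))
        return 1 if corr > 0 else 0

    frame = [128] * 1000                  # flat toy "frame"
    marked = embed(frame, bit=1, key=42)
    print(detect(marked, key=42))         # -> 1
    ```

    A robust scheme of the kind the paper proposes would embed in a transform domain and survive compression and other distortions; this pixel-domain toy only shows the embed-and-correlate principle.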

  19. In search of video event semantics

    NARCIS (Netherlands)

    Mazloom, M.

    2016-01-01

    In this thesis we aim to represent an event in a video using semantic features. We start from a bank of concept detectors for representing events in video. At first we considered the relevance of concepts to the event inside the video representation. We address the problem of video event

  20. z206sc_video_observations

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents video observations from cruise Z206SC for the Santa Barbara Channel region and beyond in southern California. The vector data file is...

  1. Video-OCS Floating Wind Farm Site

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data release contains digital video files from the USGS field activity 2014-607-FA, a survey of the Oregon Outer Continental Shelf (OCS) Floating Wind Farm Site...

  2. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  3. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    OpenAIRE

    Dat Tien Nguyen; Ki Wan Kim; Hyung Gil Hong; Ja Hyung Koo; Min Cheol Kim; Kang Ryoung Park

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has ...

  4. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration, including the SGI VideoLab Integrator, VideoMedia VLAN animation controller, and the Pioneer rewritable laserdisc recorder.

  5. Digital Video Teach Yourself VISUALLY

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    Tips and techniques for shooting and sharing superb digital videos. Never before has video been more popular-or more accessible to the home photographer. Now you can create YouTube-worthy, professional-looking video, with the help of this richly illustrated guide. In a straightforward, simple, highly visual format, Teach Yourself VISUALLY Digital Video demystifies the secrets of great video. With colorful screenshots and illustrations plus step-by-step instructions, the book explains the features of your camera and their capabilities, and shows you how to go beyond "auto" to manually

  6. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  7. Video streaming in the Wild West

    Directory of Open Access Journals (Sweden)

    Helen Gail Prosser

    2006-11-01

    Full Text Available Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high capacity broadband network that connects schools, hospitals, libraries and government offices throughout the province of Alberta (Alberta SuperNet). This article describes the technical process of implementing video streaming at Northern Lakes College from March 2005 until March 2006.

  8. Illustrating Geology With Customized Video in Introductory Geoscience Courses

    Science.gov (United States)

    Magloughlin, J. F.

    2008-12-01

    For the past several years, I have been creating short videos for use in large-enrollment introductory physical geology classes. The motivation for this project included 1) lack of appropriate depth in existing videos, 2) engagement of non-science students, 3) student indifference to traditional textbooks, 4) a desire to share the visual splendor of geology through virtual field trips, and 5) a desire to meld photography, animation, narration, and videography in self-contained experiences. These (HD) videos are information-intensive but short, allowing a focus on relatively narrow topics from numerous subdisciplines, incorporation into lectures to help create variety while minimally interrupting flow and holding students' attention, and manageable file sizes. Nearly all involve one or more field locations, including sites throughout the western and central continental U.S., as well as Hawaii, Italy, New Zealand, and Scotland. The limited scope of the project and motivations mentioned preclude a comprehensive treatment of geology. Instead, videos address geologic processes, locations, features, and interactions with humans. The videos have been made available via DVD and on-line streaming. Such a project requires an array of video and audio equipment and software, a broad knowledge of geology, very good computing power, adequate time, creativity, a substantial travel budget, liability insurance, elucidation of the separation (or non-separation) between such a project and other responsibilities, and, preferably but not essentially, the support of one's supervisor or academic unit. Involving students in such projects entails risks, but involving necessary technical expertise is virtually unavoidable. In my own courses, some videos are used in class and/or made available on-line as simply another aspect of the educational experience. 
Student response has been overwhelmingly positive, particularly when expectations of students regarding the content of the videos is made

  9. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote NASA-wide educational outreach program to educate and inform the public of space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  10. Camcorder 101: Buying and Using Video Cameras.

    Science.gov (United States)

    Catron, Louis E.

    1991-01-01

    Lists nine practical applications of camcorders to theater companies and programs. Discusses the purchase of video gear, camcorder features, accessories, the use of the camcorder in the classroom, theater management, student uses, and video production. (PRA)

  11. Video Player Keyboard Shortcuts: MedlinePlus

    Science.gov (United States)

    ... of this page: https://medlineplus.gov/hotkeys.html Video Player Keyboard Shortcuts To use the sharing features ... of accessible keyboard shortcuts for our latest Health videos on the MedlinePlus site. These shortcuts allow you ...

  12. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security, and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  13. Children's Video Games as Interactive Racialization

    OpenAIRE

    Martin, Cathlena

    2008-01-01

    Cathlena Martin explores in her paper "Children's Video Games as Interactive Racialization" selected children's video games. Martin argues that children's video games often act as reinforcement for the games' television and film counterparts and their racializing characteristics and features. In Martin's analysis the video games discussed represent media through which to analyze racial identities and ideologies. In making the case for positive female minority leads in children's video games, ...

  14. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications.This vol

  15. Using the multi-bit feature of memristors for register files in signed-digit arithmetic units

    Science.gov (United States)

    Fey, Dietmar

    2014-10-01

    One of the outstanding features of memristors is their capability, in principle, to store more than one binary value in a single memory cell. Due to their further benefits of non-volatility, fast access times, low energy consumption, compactness and compatibility with CMOS logic, memristors are excellent devices for storing register values close to arithmetic units. In particular, the capability to store multi-bit values allows one to realise procedures for high-speed arithmetic circuits which are based not on the usual binary values but on ternary ones. Arithmetic units based on a three-state number representation allow an addition to be carried out in two steps, i.e., in O(1), independent of the operands' word length n. They have been well known in the literature for a long time but have not been brought into practice because of the lack of appropriate devices to store more than two states in one elementary register or main memory cell. The disadvantage of this number representation is that a corresponding arithmetic unit would require a doubling of the memory capacity. Using memristors for the registers can avoid this drawback. Therefore, this paper presents a conceptual solution for a three-state adder based on tri-stable memristive devices. The principal feasibility of such a unit is demonstrated by SPICE simulations and the performance increase is evaluated in comparison with a ripple-carry and a carry-look-ahead adder.
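
    The two-step, carry-free addition that such ternary registers enable can be sketched in software. Below is a minimal illustration of radix-2 signed-digit addition (digits in {-1, 0, 1}); the digit-selection rule shown is one standard textbook variant, not necessarily the one realised in the paper's adder:

```python
def sd_add(a, b):
    """Carry-free addition of two radix-2 signed-digit numbers.

    a, b: digit lists in {-1, 0, 1}, least-significant digit first.
    Step 1 computes a transfer digit t and an interim sum w per
    position; step 2 forms r_i = w_i + t_i. Neither step propagates
    a carry, so both take constant depth regardless of word length n.
    """
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    s = [a[i] + b[i] for i in range(n)]   # position sums, each in -2..2
    t = [0] * (n + 1)                     # transfer digits
    w = [0] * n                           # interim sums
    for i in range(n):
        # The selection rule peeks one position down so that
        # w_i + t_i can never leave {-1, 0, 1}.
        lower_nonneg = i == 0 or s[i - 1] >= 0
        if s[i] == 2:
            t[i + 1], w[i] = 1, 0
        elif s[i] == 1:
            t[i + 1], w[i] = (1, -1) if lower_nonneg else (0, 1)
        elif s[i] == -1:
            t[i + 1], w[i] = (0, -1) if lower_nonneg else (-1, 1)
        elif s[i] == -2:
            t[i + 1], w[i] = -1, 0
    return [w[i] + t[i] for i in range(n)] + [t[n]]

def to_int(digits):
    """Value of a signed-digit number (least-significant digit first)."""
    return sum(d * 2 ** i for i, d in enumerate(digits))
```

    Note that the loop body at every position is independent of all higher positions, which is what lets a hardware realisation evaluate all digits in parallel in constant time.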

  16. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-01-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is the difficulty of accessing the content of interest hidden within a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character…

  17. Using Learning Styles and Viewing Styles in Streaming Video

    Science.gov (United States)

    de Boer, Jelle; Kommers, Piet A. M.; de Brock, Bert

    2011-01-01

    Improving the effectiveness of learning when students observe video lectures becomes urgent with the rising advent of (web-based) video materials. Vital questions are how students differ in their learning preferences and what patterns in viewing video can be detected in log files. Our experiments inventory students' viewing patterns while watching…

  18. Using learning styles and viewing styles in streaming video

    NARCIS (Netherlands)

    de Boer, Jelle; Kommers, Petrus A.M.; de Brock, Bert

    2011-01-01

    Improving the effectiveness of learning when students observe video lectures becomes urgent with the rising advent of (web-based) video materials. Vital questions are how students differ in their learning preferences and what patterns in viewing video can be detected in log files. Our experiments

  19. How to improve learning from video, using an eye tracker

    NARCIS (Netherlands)

    Jelle de Boer

    2014-01-01

    The initial trigger of this research about learning from video was the availability of log files from users of video material. Video modality is seen as attractive as it is associated with the relaxed mood of watching TV. The experiments in this research have the goal to gain more insight in

  20. Using learning styles and viewing styles in streaming video

    NARCIS (Netherlands)

    de Boer, Jelle; Kommers, Piet A. M.; de Brock, Bert

    Improving the effectiveness of learning when students observe video lectures becomes urgent with the rising advent of (web-based) video materials. Vital questions are how students differ in their learning preferences and what patterns in viewing video can be detected in log files. Our experiments

  1. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

    "What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.

  2. Video Malware - Behavioral Analysis

    Directory of Open Access Journals (Sweden)

    Rajdeepsinh Dodia

    2015-04-01

    Full Text Available Abstract: The number of malware attacks exploiting the internet is increasing day by day and has become a serious threat. The latest malware spreads through media players, embedded in funny video clips to lure end users. Once it is executed and installed, the malware's behavior is in the malware author's hands. The malware spreads through the internet, USB drives, and the sharing of files and folders, keeping its presence concealed. The funny video in question, named after a film celebrity, was a malware variant collected from the laptop of a terror outfit. It runs in the background and contains malicious code that steals sensitive user information, such as banking credentials, usernames, and passwords, and sends it to a remote command-and-control host. The stolen data is directed to an email address encapsulated in the malicious code. The malware can also spread through USB and other devices. In summary, the analysis reveals the presence of malicious code in an executable video file and characterizes its behavior.

  3. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system which has a unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after a careful comparison of the existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatial and temporal coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key frame-based stitching framework is used to reduce the accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitched image for aerial video stitching tasks.
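
    The binary-descriptor matching such systems build on can be illustrated with a brute-force Hamming-distance matcher plus a ratio test. This sketch omits the paper's spatial and temporal coherence filter, and the descriptor values and thresholds below are made-up examples:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(desc_a, desc_b, max_dist=30, ratio=0.8):
    """Brute-force matching of binary descriptors (BRIEF-like bit strings)
    with a Lowe-style ratio test to reject ambiguous correspondences.
    Returns a list of (i, j) index pairs into desc_a and desc_b."""
    matches = []
    for i, da in enumerate(desc_a):
        # Distances to every candidate, cheapest first.
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        if not dists:
            continue
        best, j = dists[0]
        second = dists[1][0] if len(dists) > 1 else float("inf")
        # Accept only close matches that clearly beat the runner-up.
        if best <= max_dist and best < ratio * second:
            matches.append((i, j))
    return matches
```

    Because the distance is a single XOR plus a popcount, this is the operation that makes binary descriptors so much cheaper to match than floating-point ones such as SIFT.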

  4. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and 'walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  5. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  6. Unusual features of negative leaders' development in natural lightning, according to simultaneous records of current, electric field, luminosity, and high-speed video

    Science.gov (United States)

    Guimaraes, Miguel; Arcanjo, Marcelo; Murta Vale, Maria Helena; Visacro, Silverio

    2017-02-01

    The development of downward and upward leaders that formed two negative cloud-to-ground return strokes in natural lightning, spaced only about 200 µs apart and terminating on ground only a few hundred meters away, was monitored at Morro do Cachimbo Station, Brazil. The simultaneous records of current, close electric field, relative luminosity, and corresponding high-speed video frames (sampling rate of 20,000 frames per second) reveal that the initiation of the first return stroke interfered in the development of the second negative leader, leading it to an apparent continuous development before the attachment, without stepping, and at a regular two-dimensional speed. Based on the experimental data, the formation processes of the two return strokes are discussed, and plausible interpretations for their development are provided.

  7. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination across video frames containing text, text on complex backgrounds, and differing font sizes of the text. Using various image processing algorithms, such as morphological operations, blob detection, and histograms of oriented gradients, the character recognition of video subtitles is implemented. Segmentation, feature extraction, and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.

  8. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sounds and sounds in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that consider video game scoring as a contemporary creative practice.

  9. LBA-ECO LC-07 Validation Overflight for Amazon Mosaics, Video, 1999

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set presents georeferenced digital video files from Validation Overflight for Amazon Mosaics (VOAM) aerial video surveys as part of the Large-Scale...

  10. Bridging analog and digital video in the surgical setting.

    Science.gov (United States)

    Miron, Hagai; Blumenthal, Eytan Z

    2003-10-01

    Editing surgical videos requires a basic understanding of key technical issues, especially when transforming from analog to digital media. These issues include an understanding of compression-decompression (eg, MPEGs), generation quality loss, video formats, and compression ratios. We introduce basic terminology and concepts related to analog and digital video, emphasizing the process of converting analog video to digital files. The choice of hardware, software, and formats is discussed, including advantages and drawbacks. Last, we provide an inexpensive hardware-software solution.

  11. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
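
    The pulse-matching idea can be sketched directly: find local maxima of the user-activity time series, then score the signal around each against a rectangular reference pulse of fixed width using the Pearson correlation coefficient. The window shape, width, and threshold below are illustrative guesses, not the paper's published values:

```python
def local_maxima(signal):
    """Indices where the user-activity signal peaks."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] >= signal[i + 1]]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def interesting_segments(activity, width=4, threshold=0.6):
    """Return time indices where replay activity around a local maximum
    correlates with a rectangular pulse of fixed width."""
    ref = [0.0] * width + [1.0] * width + [0.0] * width  # reference pulse
    half = len(ref) // 2
    hits = []
    for i in local_maxima(activity):
        # Window centred on the peak, zero-padded at the edges.
        win = [activity[i - half + j] if 0 <= i - half + j < len(activity)
               else 0.0 for j in range(len(ref))]
        if pearson(win, ref) >= threshold:
            hits.append(i)
    return hits
```

    On a replay-count series with a single burst, only the burst survives the correlation threshold, which is the "pulse" the title refers to.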

  12. Computational Thinking in Constructionist Video Games

    Science.gov (United States)

    Weintrop, David; Holbert, Nathan; Horn, Michael S.; Wilensky, Uri

    2016-01-01

    Video games offer an exciting opportunity for learners to engage in computational thinking in informal contexts. This paper describes a genre of learning environments called constructionist video games that are especially well suited for developing learners' computational thinking skills. These games blend features of conventional video games with…

  13. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling drastic appearance, motion-pattern, and pose variations of foreground objects, as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  14. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancements in computer vision technology and the availability of video capturing devices, such as surveillance cameras, have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.
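
    Of the four stages listed, the Kalman tracking step is the easiest to sketch in isolation. Below is a minimal constant-velocity Kalman filter for one tracked coordinate (e.g. the x-position of a detected face box); the state model and noise values are illustrative assumptions, not the paper's parameters:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for smoothing one tracked
    coordinate between detections. State: [position, velocity]."""

    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                 # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        # x <- F x with F = [[1, dt], [0, 1]]
        pos, vel = self.x
        self.x = [pos + dt * vel, vel]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P <- F P F^T + Q
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + self.q,
                   p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]

    def update(self, z):
        # Measurement of position only: H = [1, 0].
        s = self.P[0][0] + self.r            # innovation covariance
        k0 = self.P[0][0] / s                # Kalman gain
        k1 = self.P[1][0] / s
        y = z - self.x[0]                    # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

    In a tracker, `predict` carries the face position across frames where detection fails, and `update` folds in each new detection.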

  15. Apples to Oranges: Comparing Streaming Video Platforms

    OpenAIRE

    Milewski, Steven; Threatt, Monique

    2017-01-01

    Librarians rely on an ever-increasing variety of platforms to deliver streaming video content to our patrons. These two presentations will examine different aspects of video streaming platforms to gain guidance from the comparison of platforms. The first will examine the accessibility compliance of the various video streaming platforms for users with disabilities by examining accessibility features of the platforms. The second will be a comparison of subject usage of two of the larger video s...

  16. Recognizing problem video game use.

    Science.gov (United States)

    Porter, Guy; Starcevic, Vladan; Berle, David; Fenech, Pauline

    2010-02-01

    It has been increasingly recognized that some people develop problem video game use, defined here as excessive use of video games resulting in various negative psychosocial and/or physical consequences. The main objectives of the present study were to identify individuals with problem video game use and compare them with those without problem video game use on several variables. An international, anonymous online survey was conducted, using a questionnaire with provisional criteria for problem video game use, which the authors have developed. These criteria reflect the crucial features of problem video game use: preoccupation with and loss of control over playing video games and multiple adverse consequences of this activity. A total of 1945 survey participants completed the survey. Respondents who were identified as problem video game users (n = 156, 8.0%) differed significantly from others (n = 1789) on variables that provided independent, preliminary validation of the provisional criteria for problem video game use. They played longer than planned and with greater frequency, and more often played even though they did not want to and despite believing that they should not do it. Problem video game users were more likely to play certain online role-playing games, found it easier to meet people online, had fewer friends in real life, and more often reported excessive caffeine consumption. People with problem video game use can be identified by means of a questionnaire and on the basis of the present provisional criteria, which require further validation. These findings have implications for recognition of problem video game users among individuals, especially adolescents, who present to mental health services. Mental health professionals need to acknowledge the public health significance of the multiple negative consequences of problem video game use.

  17. A Video Tour through ViSta 6.4

    Directory of Open Access Journals (Sweden)

    J. Gabriel Molina

    2004-12-01

    Full Text Available This paper offers a visual tour through ViSta 6.4, a freeware statistical program based on Lisp-Stat and focused on techniques for statistical visualization (Young 2004). The tour of ViSta is based on screen recordings that illustrate the main features of the program in action. The following aspects of ViSta 6.4 are displayed: the program's interface (ViSta's desktop, menu bar and pop-up menus, help system); its data management capabilities (data input and editing, data transformations); features associated with data analysis (data description, statistical modeling); and the options for Lisp-Stat development in ViSta. The video recordings associated with this tour (.wmv files) can be viewed at http://www.jstatsoft.org/v13/i08/ using the Internet Explorer browser, or by clicking on the figures in the paper.

  18. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  19. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.
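
    The dictionary step described above, mapping every clip to a sequence of "videography words", is at its core nearest-codeword vector quantization. A minimal sketch, assuming Euclidean distance and an already-built codebook (the paper's dictionary construction and feature design are not reproduced here):

```python
def quantize(features, codebook):
    """Map each per-frame feature vector to the index of its nearest
    codeword, turning a clip into a sequence of 'videography words'.
    features: list of feature vectors; codebook: list of codewords."""
    def dist2(a, b):
        # Squared Euclidean distance (square root not needed for argmin).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: dist2(f, codebook[k]))
            for f in features]
```

    The resulting word sequence is what statistical analysis then operates on, e.g. word histograms per event class.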

  20. Video Playback Modifications for a DSpace Repository

    Directory of Open Access Journals (Sweden)

    Keith Gilbertson

    2016-01-01

    Full Text Available This paper focuses on modifications to an institutional repository system using the open source DSpace software to support playback of digital videos embedded within item pages. The changes were made in response to the formation and quick startup of an event capture group within the library that was charged with creating and editing video recordings of library events and speakers. This paper specifically discusses the selection of video formats, changes to the visual theme of the repository to allow embedded playback and captioning support, and modifications and bug fixes to the file downloading subsystem to enable skip-ahead playback of videos via byte-range requests. This paper also describes workflows for transcoding videos in the required formats, creating captions, and depositing videos into the repository.
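
    The skip-ahead playback described above rests on HTTP byte-range requests: the player sends a Range header and the server answers 206 Partial Content with only the requested bytes. A hedged sketch of the single-range header parsing a server must perform (RFC 7233 semantics; this is not DSpace's actual implementation):

```python
def parse_range(header, file_size):
    """Parse a single HTTP 'Range: bytes=start-end' value into an
    inclusive (start, end) byte pair, or None if the range is invalid
    or unsatisfiable (the 416 case). Multipart ranges are omitted."""
    unit, _, spec = header.partition("=")
    if unit.strip() != "bytes":
        return None
    start_s, _, end_s = spec.partition("-")
    if start_s:                      # "bytes=500-999" or "bytes=500-"
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1
    elif end_s:                      # "bytes=-500" -> final 500 bytes
        start, end = max(0, file_size - int(end_s)), file_size - 1
    else:
        return None
    if start > end or start >= file_size:
        return None                  # unsatisfiable range
    return start, min(end, file_size - 1)
```

    A player seeking into a video simply issues, say, `Range: bytes=1048576-` and decodes from that offset, which is why the downloading subsystem had to honour these requests.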

  1. Video Primal Sketch: A Unified Middle-Level Representation for Video

    OpenAIRE

    Han, Zhi; Xu, Zongben; Zhu, Song-Chun

    2015-01-01

    This paper presents a middle-level video representation named Video Primal Sketch (VPS), which integrates two regimes of models: i) sparse coding model using static or moving primitives to explicitly represent moving corners, lines, feature points, etc., ii) FRAME /MRF model reproducing feature statistics extracted from input video to implicitly represent textured motion, such as water and fire. The feature statistics include histograms of spatio-temporal filters and velocity distributions. T...

  2. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  3. Video Game Playing and Gambling in Adolescents: Common Risk Factors

    Science.gov (United States)

    Wood, Richard T. A.; Gupta, Rina; Griffiths, Mark

    2004-01-01

    Video games and gambling often contain very similar elements with both providing intermittent rewards and elements of randomness. Furthermore, at a psychological and behavioral level, slot machine gambling, video lottery terminal (VLT) gambling and video game playing share many of the same features. Despite the similarities between video game…

  4. Using MPEG DASH SRD for zoomable and navigable video

    NARCIS (Netherlands)

    D'Acunto, L.; Berg, J. van den; Thomas, E.; Niamut, O.A.

    2016-01-01

    This paper presents a video streaming client implementation that makes use of the Spatial Relationship Description (SRD) feature of the MPEG-DASH standard, to provide a zoomable and navigable video to an end user. SRD allows a video streaming client to request spatial subparts of a particular video

  5. Satisfaction with Online Teaching Videos: A Quantitative Approach

    Science.gov (United States)

    Meseguer-Martinez, Angel; Ros-Galvez, Alejandro; Rosa-Garcia, Alfonso

    2017-01-01

    We analyse the factors that determine the number of clicks on the "Like" button in online teaching videos, with a sample of teaching videos in the area of Microeconomics across Spanish-speaking countries. The results show that users prefer short online teaching videos. Moreover, some features of the videos have a significant impact on…

  6. Video Game Structural Characteristics: A New Psychological Taxonomy

    Science.gov (United States)

    King, Daniel; Delfabbro, Paul; Griffiths, Mark

    2010-01-01

    Excessive video game playing behaviour may be influenced by a variety of factors including the structural characteristics of video games. Structural characteristics refer to those features inherent within the video game itself that may facilitate initiation, development and maintenance of video game playing over time. Numerous structural…

  7. Role of intraoperative indocyanine green video-angiography to identify small, posterior fossa arteriovenous malformations mimicking cavernous angiomas. Technical report and review of the literature on common features of these cerebral vascular malformations.

    Science.gov (United States)

    Barbagallo, Giuseppe M V; Certo, Francesco; Caltabiano, Rosario; Chiaramonte, Ignazio; Albanese, Vincenzo; Visocchi, Massimiliano

    2015-11-01

    To illustrate the usefulness of intraoperative indocyanine green videoangiography (ICG-VA) to identify the nidus and feeders of a small cerebellar AVM resembling a cavernous hemangioma. To review the unique features regarding the overlap between these two vascular malformations and to highlight the importance of identifying with ICG-VA, and treating accordingly, the arterial and venous vessels of the AVM. A 36-year-old man presented with bilateral cerebellar hemorrhage. MRI was equivocal in showing an underlying vascular malformation, but angiography demonstrated a small, Spetzler-Martin grade I AVM. Surgical resection of the AVM with the aid of intraoperative ICG-VA was performed. After hematoma evacuation, pre-resection ICG-VA did not reveal the tortuous arterial and venous vessels typical of an AVM but rather an unusual blackberry-like image resembling a cavernous hemangioma, with tiny surrounding vessels. Such an intraoperative appearance, which could also be the consequence of a "leakage" of fluorescent dye from the nidal pathological vessels, with absent blood-brain barrier, into the surrounding parenchymal pathological capillary network, is important to recognize as an unusual AVM appearance. Post-resection ICG-VA confirmed the AVM removal, as also shown by postoperative and 3-month follow-up DSAs. Despite technical limitations associated with ICG-VA in post-hemorrhage AVMs, this case, together with the intraoperative video, demonstrates the useful role of ICG-VA in identifying small AVMs with peculiar features. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitisation and the internet, however,...... new opportunities and challenges have emerged for conveying and distributing research results to different target groups via video. At the same time, classic methodological problems, such as the researcher's positioning in relation to what is being studied, remain current. Both classic and new...... issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented, with tools for planning......

  9. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Full Text Available. Video sharing on social clouds is popular among users around the world. High-Definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them with high quality from the cloud to the client is a big problem for service providers. Social clouds compress the videos to save storage and to stream them over slow networks while providing quality of service (QoS). Compression decreases the quality compared to the original video, and parameters are changed during online play as well as after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload and play videos online for users. The QoE was recorded using a questionnaire in which users reported their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds, although Facebook delivered better quality of compressed videos than Twitter. Accordingly, users assigned low ratings to Twitter for online video quality, compared to Tumblr, which provided high-quality online playback with less compression.

  10. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  11. MedlinePlus FAQ: Is audio description available for videos on MedlinePlus?

    Science.gov (United States)

    ... audiodescription.html Question: Is audio description available for videos on MedlinePlus? Answer: Audio description of videos helps make the content of videos accessible to ...

  12. Random Numbers Generated from Audio and Video Sources

    Directory of Open Access Journals (Sweden)

    I-Te Chen

    2013-01-01

    Full Text Available. Random numbers are very useful in simulation, chaos theory, game theory, information theory, pattern recognition, probability theory, quantum mechanics, statistics, and statistical mechanics. Random numbers are especially helpful in cryptography. In this work, the proposed random number generators draw on white noise from audio and video (A/V) sources, extracted from a high-resolution IPCAM, a WEBCAM, and MPEG-1 video files. The proposed generator acts as a true random number generator when applied to video sources from an IPCAM or a WEBCAM with microphone, and as a pseudorandom number generator when applied to video sources from an MPEG-1 video file. In addition, when the 15 statistical tests of NIST SP 800-22 Rev. 1a are applied to the numbers produced by the proposed generator, around 98% of them pass all 15 tests. Furthermore, audio and video sources are easy to find; hence, the proposed generator is a qualified, convenient, and efficient random number generator.
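A minimal sketch of the general approach, assuming raw noise bytes have already been captured from a camera or microphone: hash-based conditioning turns biased physical noise into uniformly distributed output bytes. The function and its construction are illustrative, not the paper's exact generator.

```python
import hashlib

def random_bytes_from_noise(noise_samples: bytes, n_bytes: int) -> bytes:
    """Condition raw A/V noise into uniform random bytes.

    noise_samples would come from e.g. webcam sensor noise or microphone
    hiss; hashing with SHA-256 (plus a counter to extend the output)
    means even biased raw samples yield uniformly distributed bytes.
    This is a generic conditioning sketch, not the paper's construction.
    """
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        # Domain-separate each hash call with a counter block.
        h = hashlib.sha256(noise_samples + counter.to_bytes(8, "big"))
        out.extend(h.digest())
        counter += 1
    return bytes(out[:n_bytes])
```

Note that the output is only as unpredictable as the noise fed in: an MPEG-1 file gives the same bytes every run (a pseudorandom source), while live sensor noise does not, which mirrors the paper's true/pseudo distinction.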

  13. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  14. Accessing files in an internet - The Jade file system

    Science.gov (United States)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  15. Accessing files in an Internet: The Jade file system

    Science.gov (United States)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  16. Educational Video Recording and Editing for The Hand Surgeon

    OpenAIRE

    Rehim, Shady A.; Chung, Kevin C.

    2015-01-01

    Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high quality surgical video footage requires b...

  17. Video consultation use by Australian general practitioners: video vignette study.

    Science.gov (United States)

    Jiwa, Moyez; Meng, Xingqiong

    2013-06-19

    There is unequal access to health care in Australia, particularly for the one-third of the population living in remote and rural areas. Video consultations delivered via the Internet present an opportunity to provide medical services to those who are underserviced, but this is not currently routine practice in Australia. There are advantages and shortcomings to using video consultations for diagnosis, and general practitioners (GPs) have varying opinions regarding their efficacy. The aim of this Internet-based study was to explore the attitudes of Australian GPs toward video consultation by using a range of patient scenarios presenting different clinical problems. Overall, 102 GPs were invited to view 6 video vignettes featuring patients presenting with acute and chronic illnesses. For each vignette, they were asked to offer a differential diagnosis and to complete a survey based on the theory of planned behavior documenting their views on the value of a video consultation. A total of 47 GPs participated in the study. The participants were younger than Australian GPs based on national data, and more likely to be working in a larger practice. Most participants (72%-100%) agreed on the differential diagnosis in all video scenarios. Approximately one-third of the study participants were positive about video consultations, one-third were ambivalent, and one-third were against them. In all, 91% opposed conducting a video consultation for the patient with symptoms of an acute myocardial infarction. Inability to examine the patient was most frequently cited as the reason for not conducting a video consultation. Australian GPs who were favorably inclined toward video consultations were more likely to work in larger practices, and were more established GPs, especially in rural areas. The survey results also suggest that the deployment of video technology will need to focus on follow-up consultations. Patients with minor self-limiting illnesses and those with medical

  18. Telemetry and Communication IP Video Player

    Science.gov (United States)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  19. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  20. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......

  1. Mining Videos for Features that Drive Attention

    Science.gov (United States)

    2015-04-01

    F. Baluch and L. Itti. ...known as a saccade, to bring the area of interest into alignment with the fovea. Within the fovea too, attention can...infer attentional allocation. The eye traces recorded during the viewing of the stimuli by the subjects were parsed into saccades based on a threshold...of velocity as described before [1]. A total of 11,430 saccades were extracted and analyzed. Using the saliency model, we were able to extract

  2. STS-69 Flight Day 5 Video File

    Science.gov (United States)

    1995-01-01

    Awakening to the theme song of the television show 'Rin Tin Tin', the astronauts, Cmdr. Dave Walker, Pilot Ken Cockrell, and Mission Specialists Jim Voss, Jim Newman, and Mike Gernhardt, of the STS-69 mission began their fifth day in orbit. The deployment of the Wake Shield Facility (WSF) was accomplished successfully, although it was delayed several hours due to communication problems between the satellite and its carrier platform located in the shuttle's cargo bay. The WSF satellite's main purpose was to grow up to seven layers of semiconductor films in a vacuum-like state while orbiting behind the space shuttle. The shuttle's Global Positioning System and Satellite Tracking System were both given checkout tests.

  3. STS-69 Flight Day 8 Video File

    Science.gov (United States)

    1995-01-01

    The astronauts, Cmdr. Dave Walker, Pilot Ken Cockrell, and Mission Specialists Jim Voss, Jim Newman, and Mike Gernhardt were awakened by the theme song of the television cartoon show 'Underdog' on this eighth day of the STS-69 mission. The retrieval of the Wake Shield Facility (WSF) occurred without any major problems. The WSF was unable to grow all seven layers of films before its retrieval. Only four were grown due to thermal problems.

  4. User-oriented summary extraction for soccer video based on multimodal analysis

    Science.gov (United States)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced: a novel approach that integrates multimodal analysis, i.e., extraction and analysis of stadium features, moving-object features, audio features, and text features. From these features, the semantics of the soccer video and its highlight model are obtained. The highlight positions can then be located and assembled according to their highlight degree to form the video summary. Experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.
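The fusion step described in this record can be sketched as a weighted combination of per-modality scores, from which the top-scoring segments are picked as highlights. The modality names, weights, and scoring scheme below are hypothetical stand-ins for the paper's stadium/object/audio/text features.

```python
def extract_highlights(segment_scores, weights, top_k=3):
    """Rank video segments by a weighted sum of per-modality scores.

    segment_scores: one dict per segment, mapping a modality name
    (e.g. 'audio', 'motion', 'text') to a normalised score in [0, 1].
    weights: per-modality weights (hypothetical values).
    Returns the indices of the top_k segments in temporal order, so the
    selected clips can be concatenated into a summary.
    """
    # Highlight degree of each segment = weighted sum over modalities.
    degree = [
        sum(weights.get(m, 0.0) * s for m, s in seg.items())
        for seg in segment_scores
    ]
    ranked = sorted(range(len(degree)), key=lambda i: degree[i], reverse=True)
    return sorted(ranked[:top_k])  # back to temporal order
```

For example, with weights {'audio': 0.6, 'motion': 0.4} the segments with loud commentary and fast motion dominate the ranking, which matches the intuition that goals and near-misses are both noisy and busy.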

  5. Production of 360° video : Introduction to 360° video and production guidelines

    OpenAIRE

    Ghimire, Sujan

    2016-01-01

    The main goal of this thesis project is to introduce the latest media technology and provide a complete guideline. This project is based on the production of 360° video using multiple GoPro cameras, and it was the first 360° video project at Helsinki Metropolia University of Applied Sciences. 360° video offers a totally different viewing experience, with features conventional video cannot match. 360° x 180° video coverage and active participation from viewers are the best part of this vid...

  6. Using Video Clips To Teach Social Psychology.

    Science.gov (United States)

    Roskos-Ewoldsen, David R.; Roskos-Ewoldsen, Beverly

    2001-01-01

    Explores the effectiveness of using short video clips from feature films to highlight theoretical concepts when teaching social psychology. Reveals that short video clips have many of the same advantages as showing full-length films and demonstrates that students saw the use of these clips as an effective tool. (CMK)

  7. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.
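One plausible way to derive topical segments from OCR'd frame text, as the indexing stage of such a framework might: start a new segment whenever the text similarity between consecutive frames drops. The threshold and the similarity measure are illustrative assumptions, not the paper's actual algorithm.

```python
from difflib import SequenceMatcher

def segment_by_ocr_text(frame_texts, threshold=0.5):
    """Split a lecture video into topical segments from per-frame OCR text.

    frame_texts: OCR'd text of successive slide frames. A new segment
    starts whenever similarity to the previous frame's text falls below
    `threshold` (a hypothetical cut-off). Returns a list of
    (start_frame, end_frame) index pairs.
    """
    if not frame_texts:
        return []
    segments = []
    start = 0
    for i in range(1, len(frame_texts)):
        # SequenceMatcher.ratio() is 1.0 for identical text, 0.0 for disjoint.
        sim = SequenceMatcher(None, frame_texts[i - 1], frame_texts[i]).ratio()
        if sim < threshold:
            segments.append((start, i - 1))
            start = i
    segments.append((start, len(frame_texts) - 1))
    return segments
```

Each resulting (start, end) range can then be labelled with its dominant OCR text to build the topical index the player exposes.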

  8. An Evaluation of Video-to-Video Face Verification

    NARCIS (Netherlands)

    Poh, N.; Chan, C.H.; Kittler, J.; Marcel, S.; Mc Cool, C.; Argones Rúa, E.; Alba Castro, J.L.; Villegas, M.; Paredes, R.; Štruc, V.; Pavešić, N.; Salah, A.A.; Fang, H.; Costen, N.

    2010-01-01

    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In

  9. Objective video presentation QoE predictor for smart adaptive video streaming

    Science.gov (United States)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, regardless of the large volume of videos being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign the video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolution and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE, impossible to achieve using existing adaptive-bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocations.
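A toy sketch of quality-driven (rather than bitrate-driven) rendition selection, assuming each rendition carries a per-title perceptual quality prediction such as an SSIMplus-style 0-100 score would supply. The numbers and the policy are hypothetical; they only illustrate the contrast with picking the highest bitrate that fits.

```python
def pick_rendition(renditions, bandwidth_kbps, min_quality=80):
    """Quality-driven rendition selection sketch.

    renditions: list of (bitrate_kbps, predicted_quality) pairs, where
    predicted_quality is a 0-100 per-title, per-display score (values
    here are hypothetical). Instead of taking the highest bitrate that
    fits the bandwidth, pick the LOWEST bitrate whose predicted quality
    already meets the target; same perceived quality, fewer bits.
    """
    feasible = [r for r in renditions if r[0] <= bandwidth_kbps]
    if not feasible:
        return min(renditions)                     # degrade gracefully
    good_enough = [r for r in feasible if r[1] >= min_quality]
    if good_enough:
        return min(good_enough)                    # cheapest rendition meeting target
    return max(feasible, key=lambda r: r[1])       # else best quality that fits
```

With a ladder of [(1000, 70), (2500, 85), (5000, 92), (8000, 95)] and 6000 kbps available, a bitrate-driven client would fetch the 5000 kbps rung, while this policy settles on 2500 kbps because its predicted quality already exceeds the target.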

  10. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  11. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available. Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework that can provide cloud-based platforms, and it presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS that provides a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resources into a cloud computing environment. The design was tested by building a simulation system prototype.

  12. XML Files: MedlinePlus

    Science.gov (United States)

    ... this page: https://medlineplus.gov/xml.html MedlinePlus XML Files MedlinePlus produces XML data sets that you are welcome to download ...

  13. Mengolah Data Video Analog menjadi Video Digital Sederhana

    Directory of Open Access Journals (Sweden)

    Nick Soedarso

    2010-10-01

    Full Text Available. Nowadays, editing technology has entered the digital age, and converting analog data to digital has become simpler as editing technology has been integrated into all aspects of society. Understanding the technique of converting analog data to digital is important in producing a video. To utilize this technology, an introduction to the equipment is fundamental to understanding its features. The next phase is the capturing process, which supports the preparation of the editing process from scene to scene; the result is a watchable video.

  14. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates...... the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show......

  15. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available. Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera, and the different lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities, such as fog, low and high density, in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour, and texture patterns of the smoke. We validated our algorithm by experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by labeling every frame as a smoke or non-smoke frame. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, while the sensitivity (i.e. correctly detected smoke frames) and the specificity (i.e. correctly detected non-smoke frames) are 89% and 80%, respectively.
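To make the feature idea concrete, here is a toy sketch computing two per-frame cues in the spirit of the paper's motion and colour features, then applying a fixed linear decision as a stand-in for the trained SVM. The weights, bias, and feature definitions are made up for illustration; a real system would train an SVM on annotated laparoscopic frames and add texture descriptors.

```python
def smoke_features(frame, prev_frame):
    """Toy per-frame features: mean absolute inter-frame difference
    (smoke moves and diffuses) and colour desaturation (smoke is
    greyish). Frames are lists of (r, g, b) pixel tuples.
    """
    motion = sum(
        abs(a - b)
        for p, q in zip(frame, prev_frame)
        for a, b in zip(p, q)
    ) / (3 * len(frame))
    desaturation = sum(1 - (max(p) - min(p)) / 255 for p in frame) / len(frame)
    return motion, desaturation

def is_smoke(frame, prev_frame, w=(0.02, 1.0), bias=-1.0):
    """Linear stand-in for the SVM decision: w . x + bias > 0.
    The weights are invented for this sketch, not learned.
    """
    m, d = smoke_features(frame, prev_frame)
    return w[0] * m + w[1] * d + bias > 0
```

A flickering grey region scores high on both cues and is flagged as smoke, while a static, strongly coloured tissue region is not, which is the qualitative behaviour the classifier is trained to capture.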

  16. The role of structural characteristics in problem video game playing: a review

    OpenAIRE

    King, DL; Delfabbro, PH; Griffiths, M.

    2010-01-01

    The structural characteristics of video games may play an important role in explaining why some people play video games to excess. This paper provides a review of the literature on structural features of video games and the psychological experience of playing video games. The dominant view of the appeal of video games is based on operant conditioning theory and the notion that video games satisfy various needs for social interaction and belonging. However, there is a lack of experimental and ...

  17. Artificial Intelligence in Video Games: Towards a Unified Framework

    OpenAIRE

    Safadi, Firas; Fonteneau, Raphael; Ernst, Damien

    2015-01-01

    With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of complex environments is pressing. Since video game AI is often specifically designed for each game, video game AI tools currently focus on allowing video game developers to quickly and efficiently create specific AI. One issue with this approach is that it does not efficiently exploit the numerous similarities that exist betw...

  18. Web-Mediated Augmentation and Interactivity Enhancement of Omni-Directional Video in Both 2D and 3D

    OpenAIRE

    Wijnants, Maarten; Van Erum, Kris; QUAX, Peter; Lamotte, Wim

    2015-01-01

    Video consumption has since the emergence of the medium largely been a passive affair. This paper proposes augmented Omni-Directional Video (ODV) as a novel format to engage viewers and to open up new ways of interacting with video content. Augmented ODV blends two important contemporary technologies: Augmented Video Viewing and 360 degree video. The former allows for the addition of interactive features to Web-based video playback, while the latter unlocks spatial video navigation opportunit...

  19. Video signals integrator (VSI) system architecture

    Science.gov (United States)

    Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2016-09-01

    The purpose of the project is the development of a platform that integrates video signals from many sources. The signals can come from existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras, or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission, and archiving. The sharing subsystem will use a distributed file system and a user console that provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and the software side. Due to the standard modular technology used, partial technology modernization is also possible over a long exploitation period.

  20. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    ‘File sharing’ has become generally accepted on the Internet. Users share files for downloading music, films, games, software, etc. In this note, we take a closer look at the definition of file sharing, the legal and policy-based context, as well as enforcement issues. The economic and cultural

  1. Multi-Modal Surrogates for Retrieving and Making Sense of Videos: Is Synchronization between the Multiple Modalities Optimal?

    Science.gov (United States)

    Song, Yaxiao

    2010-01-01

    Video surrogates can help people quickly make sense of the content of a video before downloading or seeking more detailed information. Visual and audio features of a video are primary information carriers and might become important components of video retrieval and video sense-making. In the past decades, most research and development efforts on…

  2. NEI You Tube Videos: Amblyopia

    Medline Plus


  3. NEI You Tube Videos: Amblyopia

    Medline Plus


  4. NEI You Tube Videos: Amblyopia

    Science.gov (United States)


  5. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed ...

  6. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right image was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips by a file type that does not depend on a television signal such as PAL or NTSC. 25 4th year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth clues within the six video clips plus cochlear implantation clips. Another 25 4th year students who were shown the material monoscopically on a conventional laptop served as a control group. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  7. Jailed - Video

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual, and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  8. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  9. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  10. Exchanging digital video of laryngeal examinations.

    Science.gov (United States)

    Crump, John M; Deutsch, Thomas

    2004-03-01

    Laryngeal examinations, especially stroboscopic examinations, are increasingly recorded using digital video formats on computer media, rather than using analog formats on videotape. It would be useful to share these examinations with other medical professionals in formats that would facilitate reliable and high-quality playback on a personal computer by the recipients. Unfortunately, a personal computer is not well designed for reliable presentation of artifact-free video. It is particularly important that laryngeal video play without artifacts of motion or color because these are often the characteristics of greatest clinical interest. With proper tools and procedures, and with reasonable compromises in image resolution and the duration of the examination, digital video of laryngeal examinations can be reliably exchanged. However, the tools, procedures, and formats for recording, converting to another digital format ("transcoding"), communicating, copying, and playing digital video with a personal computer are not familiar to most medical professionals. Some understanding of digital video and the tools available is required of those wanting to exchange digital video. Best results are achieved by recording to a digital format best suited for recording (such as MJPEG or DV), judiciously selecting a segment of the recording for sharing, and converting to a format suited to distribution (such as MPEG1 or MPEG2) using a medium suited to the situation (such as e-mail attachment, CD-ROM, a "clip" within a Microsoft PowerPoint presentation, or DVD-Video). If digital video is sent to a colleague, some guidance on playing files and using a PC media player is helpful.
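    The workflow the abstract describes (select a segment of the recording, then transcode it to a distribution format such as MPEG-2) can be sketched as a command built for a tool like ffmpeg. This is a hedged illustration: the file names, segment boundaries and bitrate below are made-up assumptions, and the command is constructed but not executed.

    ```python
    # Sketch: build an ffmpeg command that trims a segment from a digital
    # recording and transcodes it to MPEG-2 for distribution, mirroring the
    # "select a segment, convert for sharing" workflow in the abstract.
    # File names and parameter values are illustrative assumptions.

    def build_transcode_cmd(src, dst, start_s, duration_s, bitrate="4M"):
        """Return an ffmpeg argument list (not executed here)."""
        return [
            "ffmpeg",
            "-ss", str(start_s),        # seek to the chosen segment
            "-i", src,                  # source recording (e.g. DV or MJPEG)
            "-t", str(duration_s),      # keep only this many seconds
            "-c:v", "mpeg2video",       # distribution codec (MPEG-2)
            "-b:v", bitrate,            # target video bitrate
            dst,
        ]

    cmd = build_transcode_cmd("exam.dv", "exam_clip.mpg", 12, 15)
    print(" ".join(cmd))
    ```

    Keeping the clip short and the bitrate moderate is what makes the resulting file practical to send as an e-mail attachment or embed in a presentation.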

  11. Handling of Multimedia Files in the Invenio Software

    CERN Document Server

    Oltmanns, Björn; Schiefer, Bernhard

    ‘Handling of multimedia files in the Invenio Software’ is motivated by the need for integration of multimedia files into the open-source, large-scale digital library software Invenio, developed and used at CERN, the European Organisation for Nuclear Research. In recent years, digital assets like pictures, presentations, podcasts and videos have become abundant in these systems, and digital libraries have grown out of their classic role of only storing bibliographical metadata. The thesis focuses on digital video as a type of multimedia and covers the complete workflow of handling video material in the Invenio software: from the ingestion of digital video material to its processing, on to storage and preservation, and finally the streaming and presentation of videos to the user. The potential technologies to realise a video submission workflow are discussed in depth and evaluated towards system integration with Invenio. The focus is set on open and free technologies, which can be redistributed with the Inve...

  12. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors proposed a high-quality and small-capacity lecture-video-file creating system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment having complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment, and by integrating them with image processing, we can produce course materials with a greatly reduced file capacity: the course materials satisfy the requirements both for the temporal resolution needed to see the lecturer's point-indicating actions and for the high spatial resolution needed to read the small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.
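    One step such a dual-camera pipeline needs is pairing every low-resolution, high-frame-rate video frame with the most recent high-resolution still so the two sources can be composited. The sketch below shows that pairing step only; the timestamps and capture rates are illustrative assumptions, not the authors' actual system.

    ```python
    import bisect

    # Sketch of the frame-pairing step in a dual-camera lecture-capture
    # pipeline: each video frame timestamp is matched with the latest
    # high-resolution still taken at or before it.

    def pair_frames(video_ts, still_ts):
        """For each video timestamp, return the index of the latest still
        captured at or before it (falling back to the first still)."""
        pairs = []
        for t in video_ts:
            i = bisect.bisect_right(still_ts, t) - 1  # latest still <= t
            pairs.append(max(i, 0))                   # fall back to first still
        return pairs

    video_ts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]   # high-frame-rate camera (illustrative)
    still_ts = [0.0, 2.0]                        # one still every 2 s (illustrative)
    print(pair_frames(video_ts, still_ts))       # → [0, 0, 0, 0, 1, 1]
    ```

    Because the stills are sorted by time, the binary search keeps the pairing cheap even for long lectures.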

  13. 17 CFR 232.304 - Graphic, image, audio and video material.

    Science.gov (United States)

    2010-04-01

    ... delivered to investors and others is deemed part of the electronic filing and subject to the civil liability..., image, audio or video material, they are not subject to the civil liability and anti-fraud provisions of...

  14. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is tremendously growing and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...

  15. Video Game Genre Affordances for Physics Education

    Science.gov (United States)

    Anagnostou, Kostas; Pappa, Anastasia

    2011-01-01

    In this work, the authors analyze the video game genres' features and investigate potential mappings to specific didactic approaches in the context of Physics education. To guide the analysis, the authors briefly review the main didactic approaches for Physics and identify qualities that can be projected into game features. Based on the…

  16. Web Based Video Educational Resources for Surgeons

    Directory of Open Access Journals (Sweden)

    Petre Vlah-Horea BOŢIANU

    2015-12-01

    Full Text Available During the last years, video files showing different surgical procedures have become widely available and popular on the internet. They can be found both on free and unrestricted sites and on dedicated sites which control the medical quality of the information. Honest presentation and minimal video editing to include information about the procedure are mandatory to achieve a product with a true educational value. The integration of web-based video educational resources into the continuing medical education system seems to be limited, and the true educational impact is very difficult to assess. A review of the available literature dedicated to this subject shows that the main challenge is related to the human factor and not to the available technology.

  17. A Novel Quantum Video Steganography Protocol with Large Payload Based on MCQI Quantum Video

    Science.gov (United States)

    Qu, Zhiguo; Chen, Siyi; Ji, Sai

    2017-11-01

    As one of the important multimedia forms in quantum networks, quantum video attracts more and more attention from experts and scholars around the world. A secure quantum video steganography protocol with large payload, based on the video strip encoding method called MCQI (Multi-Channel Quantum Images), is proposed in this paper. The new protocol randomly embeds the secret information, in the form of quantum video, into a quantum carrier video on the basis of unique features of the video frames. It embeds quantum video as the secret information for covert communication, so its capacity is greatly expanded compared with previous quantum steganography achievements. Meanwhile, the new protocol also achieves good security and imperceptibility by virtue of the randomization of embedding positions and efficient use of redundant frames. Furthermore, the receiver is able to extract the secret information from the stego video without retaining the original carrier video, and to restore the original quantum video afterwards. The simulation and experiment results prove that the algorithm not only has good imperceptibility and high security, but also a large payload.

  18. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
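    The core idea of the abstract (collapse the video into a one-dimensional frame-difference curve, then flag scene changes where the curve spikes) can be sketched as follows. In the actual algorithm the per-frame features come from MPEG macroblock statistics; here a synthetic feature sequence stands in for them, and the threshold rule is a simple assumption rather than the paper's method.

    ```python
    import numpy as np

    # Sketch: turn a sequence of per-frame feature vectors into a 1-D
    # frame-difference curve and flag frames where the curve spikes,
    # i.e. candidate scene changes.

    def scene_changes(features, k=3.0):
        """Return frame indices where the frame difference exceeds mean + k*std."""
        diffs = np.abs(np.diff(features, axis=0)).sum(axis=1)  # 1-D difference curve
        thresh = diffs.mean() + k * diffs.std()
        return [i + 1 for i, d in enumerate(diffs) if d > thresh]

    rng = np.random.default_rng(0)
    feats = np.vstack([rng.normal(0, 0.1, (50, 8)),    # scene 1: stable features
                       rng.normal(5, 0.1, (50, 8))])   # scene 2: abrupt shift
    print(scene_changes(feats))  # → [50]
    ```

    A global mean-plus-k-sigma threshold is the simplest choice; an adaptive, windowed threshold would be more robust for long sequences with gradual content drift.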

  19. PNW River Reach Files -- 1:100k Watercourses (arcs)

    Data.gov (United States)

    Pacific States Marine Fisheries Commission — This feature class includes the ARC features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes are also...

  20. PNW River Reach Files -- 1:100k Waterbodies (polygons)

    Data.gov (United States)

    Pacific States Marine Fisheries Commission — This feature class includes the POLYGON waterbody features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes...

  1. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that evaluate the quality of video. The combination of perceptual and compressive sensing approaches is outlined from recent investigations. The performance and the complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and their application to the appropriate coding schemes is reviewed.

  2. What do home videos tell us about early motor and socio-communicative behaviours in children with autistic features during the second year of life--An exploratory study.

    Science.gov (United States)

    Zappella, Michele; Einspieler, Christa; Bartl-Pokorny, Katrin D; Krieber, Magdalena; Coleman, Mary; Bölte, Sven; Marschik, Peter B

    2015-10-01

    Little is known about the first half year of life of individuals later diagnosed with autism spectrum disorders (ASD). There is even a complete lack of observations on the first 6 months of life of individuals with transient autistic behaviours who improved in their socio-communicative functions in the pre-school age. To compare early development of individuals with transient autistic behaviours and those later diagnosed with ASD. Exploratory study; retrospective home video analysis. 18 males, videoed between birth and the age of 6 months (ten individuals later diagnosed with ASD; eight individuals who lost their autistic behaviours after the age of 3 and achieved age-adequate communicative abilities, albeit often accompanied by tics and attention deficit). The detailed video analysis focused on general movements (GMs), the concurrent motor repertoire, eye contact, responsive smiling, and pre-speech vocalisations. Abnormal GMs were observed more frequently in infants later diagnosed with ASD, whereas all but one infant with transient autistic behaviours had normal GMs (p<0.05). Eye contact and responsive smiling were inconspicuous for all individuals. Cooing was not observable in six individuals across both groups. GMs might be one of the markers which could assist the earlier identification of ASD. We recommend implementing the GM assessment in prospective studies on ASD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...
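    The frame-by-frame extraction and pooling step described here can be sketched in a few lines. Note the per-frame scorer below (a mean-gradient sharpness proxy) and the pooling weights are placeholder assumptions for illustration; the paper's actual NR image-quality features and MPEG-analysis steps are not reproduced.

    ```python
    import numpy as np

    # Sketch of frame-level feature pooling for no-reference VQA: a NR
    # image-quality score is computed frame by frame, then pooled into a
    # single video-level score. Pooling emphasizes the worst frames, since
    # quality drops dominate perceived quality.

    def frame_score(frame):
        """Crude NR proxy: mean absolute horizontal gradient (sharpness)."""
        return np.abs(np.diff(frame, axis=1)).mean()

    def video_score(frames, low_pct=10):
        """Pool per-frame scores: average blended with a low percentile."""
        scores = np.array([frame_score(f) for f in frames])
        return 0.5 * scores.mean() + 0.5 * np.percentile(scores, low_pct)

    rng = np.random.default_rng(1)
    sharp = [rng.random((32, 32)) for _ in range(10)]     # textured frames
    blurry = [np.full((32, 32), 0.5) for _ in range(10)]  # flat, detail-free frames
    print(video_score(sharp) > video_score(blurry))  # → True
    ```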

  4. Max Weber Visits America: A Review of the Video

    Directory of Open Access Journals (Sweden)

    Michael Wise

    2006-11-01

    Full Text Available The North Carolina Sociological Society is proud to announce the long-awaited video of Max Weber's trip to North Carolina as retold by two of his cousins. Max Weber made a trip to visit relatives in Mount Airy, North Carolina, in 1904. This 2004 narrative by Larry Keeter and Stephen Hall is the story of locating and interviewing (in 1976) two living eyewitnesses to Max Weber's trip. The video includes information about Weber's contributions to modern sociology. Downloadable files are provided using the .mp4 format. The video should appeal to students and professors interested in Max Weber. It can be included in courses ranging from introductory sociology to theory.

  5. Acoustic Neuroma Educational Video

    Medline Plus


  6. Video Games and Citizenship

    National Research Council Canada - National Science Library

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    ... by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new social spaces which emerge in video game culture and how these spaces relate to community building and citizenship...

  7. Videos, Podcasts and Livechats

    Medline Plus


  8. Videos, Podcasts and Livechats

    Medline Plus


  9. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming "the new black" in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well... This raises questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia. The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts......

  10. Acoustic Neuroma Educational Video

    Medline Plus


  11. Acoustic Neuroma Educational Video

    Medline Plus


  12. Videos, Podcasts and Livechats

    Medline Plus


  13. Videos, Podcasts and Livechats

    Science.gov (United States)


  14. Videos, Podcasts and Livechats

    Medline Plus


  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Video Screen Capture Basics

    Science.gov (United States)

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  17. Videos, Podcasts and Livechats

    Medline Plus


  18. SVC VIDEO STREAM ALLOCATION AND ADAPTATION IN HETEROGENEOUS NETWORK

    Directory of Open Access Journals (Sweden)

    E. A. Pakulova

    2016-07-01

    Full Text Available The paper deals with video data transmission in the H.264/SVC standard format with satisfaction of QoS requirements. The Sender-Side Path Scheduling (SSPS) algorithm and the Sender-Side Video Adaptation (SSVA) algorithm were developed. The SSPS algorithm makes it possible to allocate video traffic among several interfaces, while the SSVA algorithm dynamically changes the quality of the video sequence in relation to QoS requirements. It was shown that the combined usage of the two developed algorithms enables aggregation of the throughput of access networks, increases Quality of Experience parameters and decreases losses in comparison with a Round Robin algorithm. For evaluation of the proposed solution, a test set-up was made. Trace files with the throughput of existing public networks were used in the experiments. Based on this information, the throughputs of the networks were limited and the losses for the paths were set. The results of the research may be used for the study and transmission of video data in heterogeneous wireless networks.
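    The two ideas described (scheduling SVC traffic across several interfaces, and dropping quality when it does not fit) can be sketched together with a greedy layer-to-link assignment. This is a simplified stand-in, not the actual SSPS/SSVA algorithms, and the layer rates and link capacities are made-up numbers.

    ```python
    # Sketch in the spirit of SSPS/SSVA: each SVC layer (base layer first)
    # is greedily assigned to the access link with the most spare
    # throughput; layers that fit nowhere are dropped (the adaptation step).

    def schedule_layers(layer_rates, link_caps):
        """Return a link index per layer, or None if the layer is dropped."""
        spare = list(link_caps)
        assignment = []
        for rate in layer_rates:
            best = max(range(len(spare)), key=lambda i: spare[i])
            if spare[best] >= rate:
                spare[best] -= rate          # reserve capacity on that link
                assignment.append(best)
            else:
                assignment.append(None)      # adapt: drop this enhancement layer
        return assignment

    # base layer 2 Mbit/s plus two 1.5 Mbit/s enhancement layers,
    # over links of 3 and 2 Mbit/s (illustrative numbers)
    print(schedule_layers([2.0, 1.5, 1.5], [3.0, 2.0]))  # → [0, 1, None]
    ```

    Processing the base layer first matters: SVC enhancement layers are useless without it, so it must claim capacity before any enhancement layer does.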

  19. Intelligent Analysis for Georeferenced Video Using Context-Based Random Graphs

    OpenAIRE

    Jiangfan Feng; Hu Song

    2013-01-01

    Video sensor networks are formed by the joining of heterogeneous sensor nodes, and the video they communicate is frequently bound to geographical locations. Decomposition of a georeferenced video stream presents an expression of the video in terms of a spatial feature set. Although it has been studied extensively, the spatial relations underlying the scenario are not well understood; these are important for understanding the semantics of georeferenced video and the behavior of its elements. Here we ...

  20. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant, but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  1. Making good physics videos

    Science.gov (United States)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators to post video pre-labs or to flip our classrooms. In this article, I share my advice on creating engaging physics videos.

  2. Desktop video conferencing

    OpenAIRE

    Potter, Ray; Roberts, Deborah

    2007-01-01

    This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...

  3. 47 CFR 79.3 - Video description of video programming.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply: (1...

  4. An analysis of technology usage for streaming digital video in support of a preclinical curriculum.

    Science.gov (United States)

    Dev, P; Rindfleisch, T C; Kush, S J; Stringer, J R

    2000-01-01

    Usage of streaming digital video of lectures in preclinical courses was measured by analysis of the data in the log file maintained on the web server. We observed that students use the video when it is available. They do not use it to replace classroom attendance but rather for review before examinations or when a class has been missed. Usage of video has not increased significantly for any course within the 18 month duration of this project.
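    The measurement approach described (counting streaming-video requests in the web server's access log) can be sketched as a small log parser. The log lines below follow the common Apache format; the IP addresses, file names and dates are made-up examples, and filtering on a file suffix is a simplifying assumption.

    ```python
    import re
    from collections import Counter

    # Sketch: count streaming-video requests per day from a web-server
    # access log, the kind of analysis used to see when students watch
    # lecture video (e.g. review spikes before examinations).

    LOG = """\
    10.0.0.1 - - [12/May/2000:09:01:11 -0800] "GET /lectures/anatomy01.rm HTTP/1.0" 200 10240
    10.0.0.2 - - [12/May/2000:21:40:03 -0800] "GET /lectures/anatomy01.rm HTTP/1.0" 200 10240
    10.0.0.3 - - [13/May/2000:08:15:42 -0800] "GET /index.html HTTP/1.0" 200 512
    """

    pattern = re.compile(r'\[(\d+/\w+/\d+):')   # capture the date part of the timestamp

    def views_per_day(log_text, suffix=".rm"):
        counts = Counter()
        for line in log_text.splitlines():
            if suffix in line:                  # keep only video requests
                m = pattern.search(line)
                if m:
                    counts[m.group(1)] += 1
        return dict(counts)

    print(views_per_day(LOG))  # → {'12/May/2000': 2}
    ```

    Grouping the same counts by week relative to the examination date is what reveals the review-before-exams pattern the study reports.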

  5. Learnable pooling with Context Gating for video classification

    OpenAIRE

    Miech, Antoine; Laptev, Ivan; Sivic, Josef

    2017-01-01

    Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors, NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at mode...
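    The Context Gating unit mentioned here is, in essence, an element-wise sigmoid re-weighting of a feature vector: y = x * sigmoid(Wx + b). The numpy sketch below illustrates that form only; the random weights are stand-ins (in practice W and b are trained with the rest of the network), and the exact formulation in the paper may differ in details.

    ```python
    import numpy as np

    # Minimal sketch of a context-gating unit: the input feature vector is
    # re-weighted element-wise by a gate computed from the vector itself,
    # y = x * sigmoid(W @ x + b).

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def context_gating(x, W, b):
        """Element-wise gating of x; each gate value lies in (0, 1)."""
        return x * sigmoid(W @ x + b)

    rng = np.random.default_rng(0)
    d = 8
    x = rng.normal(size=d)
    W = rng.normal(scale=0.1, size=(d, d))   # untrained stand-in weights
    b = np.zeros(d)
    y = context_gating(x, W, b)
    print(y.shape, bool(np.all(np.abs(y) <= np.abs(x))))  # → (8,) True
    ```

    Because every gate value is strictly between 0 and 1, the unit can only attenuate feature dimensions, which is how it suppresses components that are irrelevant in the current context.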

  6. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  7. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students from a class of 5th semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist...... video, nature of the interactional space, and material and spatial semiotics....

  8. Developing a Promotional Video

    Science.gov (United States)

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  9. Secured web-based video repository for multicenter studies

    Science.gov (United States)

    Yan, Ling; Hicks, Matt; Winslow, Korey; Comella, Cynthia; Ludlow, Christy; Jinnah, H. A; Rosen, Ami R; Wright, Laura; Galpern, Wendy R; Perlmutter, Joel S

    2015-01-01

    Background We developed a novel secured web-based dystonia video repository for the Dystonia Coalition, part of the Rare Disease Clinical Research network funded by the Office of Rare Diseases Research and the National Institute of Neurological Disorders and Stroke. A critical component of phenotypic data collection for all projects of the Dystonia Coalition includes a standardized video of each participant. We now describe our method for collecting, serving and securing these videos that is widely applicable to other studies. Methods Each recruiting site uploads standardized videos to a centralized secured server for processing to permit website posting. The streaming technology used to view the videos from the website does not allow downloading of video files. With appropriate institutional review board approval and agreement with the hosting institution, users can search and view selected videos on the website using customizable, permissions-based access that maintains security yet facilitates research and quality control. Results This approach provides a convenient platform for researchers across institutions to evaluate and analyze shared video data. We have applied this methodology for quality control, confirmation of diagnoses, validation of rating scales, and implementation of new research projects. Conclusions We believe our system can be a model for similar projects that require access to common video resources. PMID:25630890

  11. Struggles and Solutions for Streaming Video in the Online Classroom

    Science.gov (United States)

    Fruin, Christine

    2012-01-01

    The upcoming round of exemptions to the Digital Millennium Copyright Act of 1998 anticircumvention provision and the questions raised by the copyright infringement lawsuit filed against the University of California, Los Angeles (UCLA) for its streaming video practices illustrate the problematic state of the law concerning the digitization…

  12. Simulation and video software development for soil consolidation testing

    NARCIS (Netherlands)

    Karim, Usama F.A.

    2003-01-01

    The development techniques and file structures of CTM, a novel multi-media (computer simulation and video) package on consolidation and laboratory consolidation testing, are presented in this paper. A courseware tool called Authorware proved to be versatile for building the package and the paper

  13. Music, videos and the risk for CERN

    CERN Multimedia

    IT Department

    2010-01-01

    Do you like listening to music while working? What about watching videos during leisure time? Sure this is fun. Having your colleagues participating in this is even more fun. However, this fun is usually not free. There are music and film companies that earn their living from music and videos. Thus, if you want to listen to music or watch films at CERN, make sure that you own the proper rights to do so (and you have the agreement of your supervisor to do this during working hours). Note that these rights are personal: You usually do not have the right to share this music or these videos with third parties without violating copyrights. Therefore, making copyrighted music and videos public, or sharing music and video files as well as other copyrighted material, is forbidden at CERN --- and also outside CERN. It violates the CERN Computing Rules (http://cern.ch/ComputingRules) and it contradicts CERN's Code of Conduct (https://cern.ch/hr-info/codeofconduct.asp) which expects each of us to behave ethically and be ...

  14. A Multi-view Approach for Detecting Non-Cooperative Users in Online Video Sharing Systems

    OpenAIRE

    Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício

    2010-01-01

    Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce "content pollution" into the system, thus causing loss of service effectiveness and credibility as w...

  15. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  16. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  17. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available The paper presents an NP-video rendering system based on natural phenomena. It provides a simple non-photorealistic video synthesis system in which the user can obtain a flow-like stylized painting and an infinite video scene. First, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene is rendered as a flow-like stylized painting. Second, frame division and patch synthesis are used to synthesize an infinitely playing video. Given selected examples of different natural video textures, the system can generate flow-like stylized, infinite video scenes. Visual discontinuities between neighboring frames are decreased, while the features and details of the frames are preserved. The rendering system is easy and simple to implement.
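The first stage builds on the anisotropic Kuwahara filter. The anisotropic variant additionally orients its sectors along the local image structure, which is beyond a short sketch, but the classic isotropic Kuwahara filter below shows the core edge-preserving idea: each pixel takes the mean of whichever surrounding sub-window has the lowest variance. This is a sketch of the general technique, not the paper's implementation:

```python
import numpy as np

def kuwahara(img, r=1):
    """Classic (isotropic) Kuwahara filter on a grayscale image: per pixel,
    average the lowest-variance of four overlapping quadrants around it."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    p = np.pad(img.astype(float), r, mode="edge")
    for y in range(h):
        for x in range(w):
            # four (r+1)x(r+1) quadrants, each including the center pixel
            quads = [p[y:y + r + 1, x:x + r + 1],
                     p[y:y + r + 1, x + r:x + 2 * r + 1],
                     p[y + r:y + 2 * r + 1, x:x + r + 1],
                     p[y + r:y + 2 * r + 1, x + r:x + 2 * r + 1]]
            best = min(quads, key=lambda q: q.var())  # most homogeneous quadrant
            out[y, x] = best.mean()
    return out

step = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), (4, 1))
smoothed = kuwahara(step)   # edge-preserving: the hard edge survives
```

On the step image above, every pixel finds a homogeneous quadrant entirely on its own side of the edge, so the edge is reproduced exactly while noise within a region would be averaged away.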

  18. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  19. Mediating Tourist Experiences. Access to Places via Shared Videos

    DEFF Research Database (Denmark)

    Tussyadiah, Iis; Fesenmaier, D.R.

    2009-01-01

    The emergence of new media using multimedia features has generated a new set of mediators for tourists' experiences. This study examines two hypotheses regarding the roles that online travel videos play as mediators of tourist experiences. The results confirm that online shared videos can provide...... mental pleasure to viewers by stimulating fantasies and daydreams, as well as bringing back past travel memories. In addition, the videos act as a narrative transportation, providing access to foreign landscapes and socioscapes....

  20. Artificial Intelligence in Video Games: Towards a Unified Framework

    OpenAIRE

    Safadi, Firas

    2015-01-01

    The work presented in this dissertation revolves around the problem of designing artificial intelligence (AI) for video games. This problem becomes increasingly challenging as video games grow in complexity. With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of these environments is pressing. Although machine learning techniques are being successfully applied in a multitude of d...

  1. Obtaining video descriptors for a content-based video information system

    Science.gov (United States)

    Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo

    1998-09-01

    This paper describes the first stages of a research project that is currently being developed in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we will focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, which precede the content extraction task, and we discuss them to select the most suitable ones. We then outline a block design of a temporal segmentation module, and present guidelines for the design of the semantic segmentation one. All these operations tend to facilitate automation in the extraction of the low-level features and semantic features that will finally form part of the video descriptors.

  2. Defocus cue and saliency preserving video compression

    Science.gov (United States)

    Khanna, Meera Thapar; Chaudhury, Santanu; Lall, Brejesh

    2016-11-01

    There are monocular depth cues present in images or videos that aid in depth perception in two-dimensional images or videos. Our objective is to preserve the defocus depth cue present in the videos along with the salient regions during compression application. A method is provided for opportunistic bit allocation during the video compression using visual saliency information comprising both the image features, such as color and contrast, and the defocus-based depth cue. The method is divided into two steps: saliency computation followed by compression. A nonlinear method is used to combine pure and defocus saliency maps to form the final saliency map. Then quantization values are assigned on the basis of these saliency values over a frame. The experimental results show that the proposed scheme yields good results over standard H.264 compression as well as pure and defocus saliency methods.
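The abstract describes fusing a pure saliency map with a defocus-based one and assigning quantization values per block from the result. A hedged numpy sketch; the fusion rule (a weighted geometric mean), the QP range, and all names here are illustrative choices, not the paper's exact method:

```python
import numpy as np

def combine_saliency(pure, defocus, alpha=0.5):
    """One possible nonlinear fusion of the two maps: a weighted geometric
    mean (the paper's exact combination rule may differ)."""
    return (pure ** alpha) * (defocus ** (1.0 - alpha))

def saliency_to_qp(saliency, qp_min=22, qp_max=38):
    """Opportunistic bit allocation: salient blocks get a lower quantization
    parameter (finer quantization, more bits), the rest a higher one."""
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)
    return np.rint(qp_max - s * (qp_max - qp_min)).astype(int)

pure = np.array([[0.9, 0.1], [0.2, 0.8]])     # toy per-block saliency maps
defocus = np.array([[0.8, 0.2], [0.1, 0.9]])
qp = saliency_to_qp(combine_saliency(pure, defocus))
```

The geometric mean keeps a block's fused saliency high only when both cues agree, which is one way to make the fusion nonlinear rather than a simple average.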

  3. 831 Files

    Data.gov (United States)

    Social Security Administration — SSA-831 file is a collection of initial and reconsideration adjudicative level DDS disability determinations. (A few hearing level cases are also present, but the...

  4. ACONC Files

    Data.gov (United States)

    U.S. Environmental Protection Agency — ACONC files containing simulated ozone and PM2.5 fields that were used to create the model difference plots shown in the journal article. This dataset is associated...

  5. Video game characteristics, happiness and flow as predictors of addiction among video game players: a pilot study

    OpenAIRE

    Hull, DC; Williams, GA; Griffiths, MD

    2013-01-01

    Aims:\\ud Video games provide opportunities for positive psychological experiences such as flow-like phenomena during play and general happiness that could be associated with gaming achievements. However, research has shown that specific features of game play may be associated with problematic behaviour associated with addiction-like experiences. The study was aimed at analysing whether certain structural characteristics of video games, flow, and global happiness could be predictive of video g...

  6. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  7. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  8. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from year 1999 (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  10. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP):Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  14. The Effect of Video Context on Foreign Language Learning.

    Science.gov (United States)

    Secules, Teresa; And Others

    1992-01-01

    Two experiments are reported that compare teacher-managed videotaped instructional materials featuring native speakers in everyday situations (using the "French in Action" video-based curriculum) to more traditional pedagogical methods involving a variety of classroom exercises and drills. The benefits of video use are discussed. (27…

  15. An Overview of Structural Characteristics in Problematic Video Game Playing.

    Science.gov (United States)

    Griffiths, Mark D; Nuyens, Filip

    2017-01-01

    There are many different factors involved in how and why people develop problems with video game playing. One such set of factors concerns the structural characteristics of video games (i.e., the structure, elements, and components of the video games themselves). Much of the research examining the structural characteristics of video games was initially based on research and theorizing from the gambling studies field. The present review briefly overviews the key papers in the field to date. The paper examines a number of areas including (i) similarities in structural characteristics of gambling and video gaming, (ii) structural characteristics in video games, (iii) narrative and flow in video games, (iv) structural characteristic taxonomies for video games, and (v) video game structural characteristics and game design ethics. Many of the studies carried out to date are small-scale, and comprise self-selected convenience samples (typically using self-report surveys or non-ecologically valid laboratory experiments). Based on the small amount of empirical data, it appears that structural features that take a long time to achieve in-game are the ones most associated with problematic video game play (e.g., earning experience points, managing in-game resources, mastering the video game, getting 100% in-game). The study of video games from a structural characteristic perspective is of benefit to many different stakeholders including academic researchers, video game players, and video game designers, as well as those interested in prevention and policymaking by making the games more socially responsible. It is important that researchers understand and recognize the psycho-social effects and impacts that the structural characteristics of video games can have on players, both positive and negative.

  16. Identifiable Data Files - Denominator File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Denominator File combines Medicare beneficiary entitlement status information from administrative enrollment records with third-party payer information and GHP...

  17. Highlight detection for video content analysis through double filters

    Science.gov (United States)

    Sun, Zhonghua; Chen, Hexin; Chen, Mianshu

    2005-07-01

    Highlight detection is a form of video summarization techniques aiming at including the most expressive or attracting parts in the video. Most video highlights selection research work has been performed on sports video, detecting certain objects or events such as goals in soccer video, touch down in football and others. In this paper, we present a highlight detection method for film video. Highlight section in a film video is not like that in sports video that usually has certain objects or events. The methods to determine a highlight part in a film video can exhibit as three aspects: (a) locating obvious audio event, (b) detecting expressive visual content around the obvious audio location, (c) selecting the preferred portion of the extracted audio-visual highlight segments. We define a double filters model to detect the potential highlights in video. First obvious audio location is determined through filtering the obvious audio features, and then we perform the potential visual salience detection around the potential audio highlight location. Finally the production from the audio-visual double filters is compared with a preference threshold to determine the final highlights. The user study results indicate that the double filters detection approach is an effective method for highlight detection for video content analysis.
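The double-filter pipeline in the abstract (obvious audio event first, visual salience around it second, preference threshold last) can be sketched in a few lines. The product score, thresholds, and toy data below are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def detect_highlights(audio_energy, visual_salience, audio_thr, pref_thr):
    """Two-stage 'double filter' sketch: filter 1 keeps segments with an
    obvious audio event; filter 2 scores visual salience only at those
    locations; the preference threshold picks the final highlights."""
    audio_hits = np.flatnonzero(audio_energy > audio_thr)            # filter 1
    scores = audio_energy[audio_hits] * visual_salience[audio_hits]  # filter 2
    return audio_hits[scores > pref_thr]                             # preference

audio = np.array([0.1, 0.9, 0.2, 0.8, 0.05])   # per-segment audio energy (toy)
visual = np.array([0.3, 0.7, 0.9, 0.2, 0.6])   # per-segment visual salience (toy)
highlights = detect_highlights(audio, visual, audio_thr=0.5, pref_thr=0.3)
```

Note how segment 3 passes the audio filter but is rejected by the visual stage, while segment 2's high visual salience never matters because no obvious audio event occurs there; this is exactly the cascade the double-filter model describes.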

  18. Understanding Video Games

    DEFF Research Database (Denmark)

    Heide Smith, Jonas; Tosca, Susana Pajares; Egenfeldt-Nielsen, Simon

    From Pong to PlayStation 3 and beyond, Understanding Video Games is the first general introduction to the exciting new field of video game studies. This textbook traces the history of video games, introduces the major theories used to analyze games such as ludology and narratology, reviews...... the economics of the game industry, examines the aesthetics of game design, surveys the broad range of game genres, explores player culture, and addresses the major debates surrounding the medium, from educational benefits to the effects of violence. Throughout the book, the authors ask readers to consider...... larger questions about the medium: * What defines a video game? * Who plays games? * Why do we play games? * How do games affect the player? Extensively illustrated, Understanding Video Games is an indispensable and comprehensive resource for those interested in the ways video games are reshaping...

  19. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    Full Text Available As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  20. Video game use and cognitive performance: does it vary with the presence of problematic video game use?

    Science.gov (United States)

    Collins, Emily; Freeman, Jonathan

    2014-03-01

    Action video game players have been found to outperform nonplayers on a variety of cognitive tasks. However, several failures to replicate these video game player advantages have indicated that this relationship may not be straightforward. Moreover, despite the discovery that problematic video game players do not appear to demonstrate the same superior performance as nonproblematic video game players in relation to multiple object tracking paradigms, this has not been investigated for other tasks. Consequently, this study compared gamers and nongamers in task switching ability, visual short-term memory, mental rotation, enumeration, and flanker interference, as well as investigated the influence of self-reported problematic video game use. A total of 66 participants completed the experiment, 26 of whom played action video games, including 20 problematic players. The results revealed no significant effect of playing action video games, nor any influence of problematic video game play. This indicates that the previously reported cognitive advantages in video game players may be restricted to specific task features or samples. Furthermore, problematic video game play may not have a detrimental effect on cognitive performance, although this is difficult to ascertain considering the lack of video game player advantage. More research is therefore sorely needed.

  1. Video Shot Boundary Detection based on Multifractal Analysis

    Directory of Open Access Journals (Sweden)

    B. D. Reljin

    2011-11-01

    Full Text Available Extracting video shots is an essential preprocessing step for almost all video analysis, indexing, and other content-based operations. This process is equivalent to detecting the shot boundaries in a video. In this paper we present video Shot Boundary Detection (SBD) based on Multifractal Analysis (MA). Low-level features (color and texture) are extracted from each frame in the video sequence. Features are concatenated into feature vectors (FVs) and stored in a feature matrix. Matrix rows correspond to the FVs of frames from the video sequence, while columns are time series of a particular FV component. Multifractal analysis is applied to the FV component time series, and shot boundaries are detected as high singularities of the time series above a predefined threshold. The proposed SBD method is tested on a real video sequence with 64 shots, with manually labeled shot boundaries. Detection accuracy depends on the number of FV components used. With only one FV component, detection accuracy lies in the range 76-92% (depending on the selected threshold), while by combining two FV components all shots are detected completely (accuracy of 100%).
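The multifractal singularity analysis itself is beyond a short sketch, but the feature-matrix layout the abstract describes (rows are per-frame feature vectors) can be illustrated with a simplified boundary detector that thresholds the jump between consecutive feature vectors. This is a stand-in for the paper's MA detector, with invented toy data:

```python
import numpy as np

def shot_boundaries(feature_matrix, threshold):
    """Simplified stand-in for the multifractal detector: declare a shot
    boundary wherever consecutive per-frame feature vectors (the matrix
    rows) differ sharply."""
    jumps = np.linalg.norm(np.diff(feature_matrix, axis=0), axis=1)
    return np.flatnonzero(jumps > threshold) + 1  # first frame of the new shot

# two synthetic "shots" with a hard cut at frame index 3
frames = np.vstack([np.tile([1.0, 0.0], (3, 1)),
                    np.tile([0.0, 1.0], (4, 1))])
cuts = shot_boundaries(frames, threshold=0.5)
```

The multifractal approach replaces this naive frame-to-frame distance with a per-component time-series analysis, which is what lets it flag boundaries as singularities rather than raw jumps.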

  2. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all new, Hollywood style audio techniques to bring your independent film and video productions to the next level.In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chockfull of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  3. Green Power Partnership Videos

    Science.gov (United States)

    The Green Power Partnership develops videos on a regular basis that explore a variety of topics including the Green Power Partnership, green power purchasing, and renewable energy certificates, among others.

  4. Patent landscape for royalty-free video coding

    Science.gov (United States)

    Reader, Cliff

    2016-09-01

    Digital video coding is over 60 years old and the first major video coding standard - H.261 - is over 25 years old, yet today there are more patents than ever related to, or evaluated as essential to video coding standards. This paper examines the historical development of video coding standards, from the perspective of when the significant contributions for video coding technology were made, what performance can be attributed to those contributions and when original patents were filed for those contributions. These patents have now expired, so the main video coding tools, which provide the significant majority of coding performance, are now royalty-free. The deployment of video coding tools in a standard involves several related developments. The tools themselves have evolved over time to become more adaptive, taking advantage of the increased complexity afforded by advances in semiconductor technology. In most cases, the improvement in performance for any given tool has been incremental, although significant improvement has occurred in aggregate across all tools. The adaptivity must be mirrored by the encoder and decoder, and advances have been made in reducing the overhead of signaling adaptive modes and parameters. Efficient syntax has been developed to provide such signaling. Furthermore, efficient ways of implementing the tools with limited precision, simple mathematical operators have been developed. Correspondingly, categories of patents related to video coding can be defined. Without discussing active patents, this paper provides the timeline of the developments of video coding and lays out the landscape of patents related to video coding. This provides a foundation on which royalty free video codec design can take place.

  5. Automatic topics segmentation for TV news video

    Science.gov (United States)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream in two main steps. First, a reference catalogue of video features for visual jingles is built. We exploit the features that characterize instances of the same program type to identify the different types of programs in the television stream; the role of the video features is to represent the visual invariants of each jingle using appropriate automatic descriptors for each television program. Second, programs in television streams are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is to compare the visual similarity of the video signal features in the television stream to the catalogue. After presenting the proposed approach, the paper overviews encouraging experimental results on several streams extracted from different channels and composed of several programs.
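The matching step (compare stream features against a catalogue of jingle descriptors) can be sketched with cosine similarity. The descriptor form, threshold, and names below are illustrative assumptions, not the paper's actual descriptors:

```python
import numpy as np

def identify_programs(stream, catalogue, sim_thr=0.95):
    """Compare each stream frame's descriptor against the jingle catalogue
    by cosine similarity; a hit labels that point of the stream with the
    matching program."""
    hits = []
    for t, frame in enumerate(stream):
        for name, jingle in catalogue.items():
            sim = frame @ jingle / (np.linalg.norm(frame) * np.linalg.norm(jingle))
            if sim >= sim_thr:
                hits.append((t, name))
    return hits

catalogue = {"news": np.array([1.0, 0.0]),      # toy 2-dim jingle descriptors
             "weather": np.array([0.0, 1.0])}
stream = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
hits = identify_programs(stream, catalogue)
```

The third frame matches neither jingle, which is the desired behavior for in-between program content that should stay unlabeled.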

  6. The Effects of Reviews in Video Tutorials

    Science.gov (United States)

    van der Meij, H.; van der Meij, J.

    2016-01-01

    This study investigates how well a video tutorial for software training that is based on Demonstration-Based Teaching supports user motivation and performance. In addition, it is studied whether reviews significantly contribute to these measures. The Control condition employs a tutorial with instructional features added to a dynamic task…

  7. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...

  8. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video at high quality and at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, however, much of this high bandwidth and the computational power used to decode the video is wasted, because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video
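The dual-buffer idea described in the abstract can be sketched in a few lines: a deep base-layer buffer keeps low-quality full-sphere segments, while a deliberately shallow viewport buffer forces late, orientation-aware requests. All class names, thresholds, and return conventions below are invented for illustration; they are not the authors' actual algorithm.

```python
# Hypothetical sketch of a dual-buffer segment scheduler for
# viewport-adaptive 360-degree streaming. Buffer depths are in seconds.

class DualBufferScheduler:
    def __init__(self, base_target=10.0, viewport_target=2.0):
        # The viewport buffer is kept shallow so each high-quality request
        # can track the most recent head orientation (low switching latency).
        self.base_target = base_target
        self.viewport_target = viewport_target
        self.base_level = 0.0        # buffered base-layer seconds
        self.viewport_level = 0.0    # buffered viewport seconds

    def next_request(self, head_orientation_deg):
        """Decide the next segment request as a (kind, quality, viewport) tuple.

        'base' segments cover the full sphere at low quality; 'viewport'
        segments cover only the tile facing the current head orientation.
        """
        if self.base_level < self.base_target:
            return ("base", "low", None)
        if self.viewport_level < self.viewport_target:
            # Requesting late against the shallow buffer means the segment
            # matches the latest head orientation.
            return ("viewport", "high", round(head_orientation_deg) % 360)
        return (None, None, None)  # both buffers full: idle

    def on_segment_buffered(self, kind, duration):
        if kind == "base":
            self.base_level += duration
        elif kind == "viewport":
            self.viewport_level += duration
```

With this policy, startup fills the robust base layer first; only once playback is protected does the scheduler spend bandwidth on orientation-sensitive high-quality tiles.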

  9. Educational video recording and editing for the hand surgeon.

    Science.gov (United States)

    Rehim, Shady A; Chung, Kevin C

    2015-05-01

    Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations, and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high-quality surgical video footage requires a basic understanding of key technical considerations, together with creativity and sound aesthetic judgment of the videographer. In this article we outline the practical steps involved in equipment preparation, video recording, editing, and archiving, as well as guidance for the choice of suitable hardware and software equipment. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  10. Image processing tool for automatic feature recognition and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.

  11. Videos Bridging Asia and Africa: Overcoming Cultural and Institutional Barriers in Technology-Mediated Rural Learning

    Science.gov (United States)

    Van Mele, Paul; Wanvoeke, Jonas; Akakpo, Cyriaque; Dacko, Rosaline Maiga; Ceesay, Mustapha; Beavogui, Louis; Soumah, Malick; Anyang, Robert

    2010-01-01

    Will African farmers watch and learn from videos featuring farmers in Bangladesh? Learning videos on rice seed management were made with rural women in Bangladesh. By using a new approach, called zooming-in, zooming-out, the videos were of regional relevance and locally appropriate. When the Africa Rice Center (AfricaRice) introduced them to…

  12. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery

    Science.gov (United States)

    B. Cooke; A. Saucier

    1995-01-01

    Scientists with the USDA Forest Service are currently assessing the usefulness of aerial video imagery for various purposes including midcycle inventory updates. The potential of video image data for these purposes may be compromised by scan line interleaving displacement problems. Interleaving displacement problems cause features in video raster datasets to have...

  13. Real-time Classification of Gorilla Video Segments in Affective Categories Using Crowd-Sourced Annotations

    NARCIS (Netherlands)

    Schavemaker, J.G.M.; Thomas, E.D.R.; Havekes, A.

    2014-01-01

In this contribution we present a method to classify segments of gorilla videos into different affective categories. The classification method is trained using crowd-sourced affective annotations. The trained classifier then uses video features (computed from the video segments) to classify a new

  14. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Mario Vento

    2010-01-01

Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.
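The second approach described above maps a scene feature (here, SURF interest-point statistics) to a people count with an ϵ-SVR regressor. As a self-contained stand-in, the sketch below swaps the ϵ-SVR for a plain ordinary-least-squares line from a per-frame feature count to a people count; the training numbers are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (stand-in for the e-SVR)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Training data: interest points detected per frame vs. the annotated
# ground-truth people count for that frame (made-up numbers).
feature_counts = [20, 40, 60, 80, 100]
people_counts = [1, 2, 3, 4, 5]

a, b = fit_line(feature_counts, people_counts)

def estimate_people(n_features):
    """Estimate the people count for a new frame from its feature count."""
    return a * n_features + b
```

A real system would additionally weight features by perspective and discount occluded regions, as the paper does, but the count-from-features regression is the core of the detection-free approach.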

  15. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Conte Donatello

    2010-01-01

Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.

  16. A Method for Counting Moving People in Video Surveillance Videos

    Science.gov (United States)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.

  17. Long-term video surveillance and automated analyses of hibernating bats in Virginia and Indiana, winters 2011-2014.

    Science.gov (United States)

    Hayman, David T.S.; Cryan, Paul; Fricker, Paul D.; Dannemiller, Nicholas G.

    2017-01-01

This data release includes video files and image-processing results used to conduct the analyses of hibernation patterns in groups of bats reported by Hayman et al. (2017), "Long-term video surveillance and automated analyses reveal arousal patterns in groups of hibernating bats." Thermal-imaging surveillance video cameras were used to observe little brown bats (Myotis lucifugus) in a cave in Virginia and Indiana bats (M. sodalis) in a cave in Indiana during three winters between 2011 and 2014. There are 740 video files used for analysis ('Analysis videos'), organized into 7 folders by state/site and winter. Total size of the video data set is 14.1 gigabytes. Each video file in this analysis set represents one 24-hour period of observation, time-lapsed at a rate of one frame per 30 seconds of real time (video plays at 30 frames per second). A folder of illustrative videos is also included, which shows all of the analysis days for one winter of monitoring merged into a single video clip, time-lapsed at a rate of one frame per two hours of real time. The associated image-processing results are included in 7 data files, each representing computer-derived values of mean pixel intensity in every 10th frame of the 740 time-lapsed video files, concatenated by site and winter of observation. Details on the format of these data, as well as how they were processed and derived, are included in Hayman et al. (2017) and with the project metadata on ScienceBase. Hayman DTS, Cryan PM, Fricker PD, Dannemiller NG. 2017. Long-term video surveillance and automated analyses reveal arousal patterns in groups of hibernating bats. Methods Ecol Evol. 2017;00:1-9. https://doi.org/10.1111/2041-210X.12823
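The derived measurement in this release (mean pixel intensity of every 10th frame) is simple to reproduce in outline. Frames are modeled below as plain 2-D lists of grayscale values; decoding real video files would need a media library and is out of scope for this sketch:

```python
def mean_intensity(frame):
    """Mean grayscale value of one frame (a 2-D list of pixel values)."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def intensity_series(frames, step=10):
    """Mean pixel intensity of every `step`-th frame (frames 0, 10, 20, ...)."""
    return [mean_intensity(f) for f in frames[::step]]
```

Tracking such a series over a winter of time-lapsed footage is what lets warm (active) bats stand out against the cold cave background in thermal imagery.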

  18. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  19. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D geographic information system (GIS requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
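The Levenberg–Marquardt step described above can be illustrated on a deliberately simplified analogue: estimating a camera's 2-D position and heading from bearings to landmarks with known ground coordinates, rather than the full 3-D position/orientation/field-of-view problem. The fixed damping factor, landmark layout, and iteration count below are illustrative assumptions:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    while a <= -math.pi:
        a += 2 * math.pi
    while a > math.pi:
        a -= 2 * math.pi
    return a

def residuals(pose, landmarks, observed):
    """Predicted-minus-observed bearing for each landmark match."""
    x, y, theta = pose
    return [wrap(math.atan2(ly - y, lx - x) - theta - b)
            for (lx, ly), b in zip(landmarks, observed)]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    sol = [0.0] * 3
    for i in range(2, -1, -1):
        sol[i] = (M[i][3] - sum(M[i][c] * sol[c] for c in range(i + 1, 3))) / M[i][i]
    return sol

def levenberg_marquardt(pose, landmarks, observed, lam=1e-3, iters=50):
    """Damped Gauss-Newton with a fixed damping factor (a simplified LM)."""
    eps = 1e-6
    for _ in range(iters):
        r = residuals(pose, landmarks, observed)
        # Numeric (forward-difference) Jacobian of the residuals.
        J = []
        for i in range(len(r)):
            row = []
            for j in range(3):
                p2 = list(pose)
                p2[j] += eps
                row.append((residuals(p2, landmarks, observed)[i] - r[i]) / eps)
            J.append(row)
        # Solve (J^T J + lam * I) delta = -J^T r and update the pose.
        A = [[sum(J[k][i] * J[k][j] for k in range(len(J))) +
              (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
        g = [-sum(J[k][i] * r[k] for k in range(len(J))) for i in range(3)]
        pose = [p + d for p, d in zip(pose, solve3(A, g))]
    return pose
```

The paper's method works the same way in spirit: point correspondences define residuals between observed frame coordinates and those predicted from a candidate camera view, and iterative damped least squares drives the residuals toward zero.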

  20. Segmentation Based Video Steganalysis to Detect Motion Vector Modification

    Directory of Open Access Journals (Sweden)

    Peipei Wang

    2017-01-01

Full Text Available This paper presents a steganalytic approach against video steganography which modifies motion vectors (MVs) in a content-adaptive manner. Current video steganalytic schemes extract features from fixed-length frames of the whole video and do not take advantage of content diversity. Consequently, the effectiveness of the steganalytic features is influenced by video content, and the problem of cover-source mismatch also affects steganalytic performance. The goal of this paper is to propose a steganalytic method which can suppress the differences in statistical characteristics caused by video content. The given video is segmented into subsequences according to the motion of blocks in every frame. The steganalytic features extracted from each category of subsequences with similar motion intensity are used to build one classifier. The final steganalytic result is obtained by fusing the results of the weighted classifiers. The experimental results demonstrate that our method can effectively improve the performance of video steganalysis, especially for videos of low bitrate and low embedding ratio.

  1. Electronic evaluation for video commercials by impression index.

    Science.gov (United States)

    Kong, Wanzeng; Zhao, Xinxin; Hu, Sanqing; Vecchiato, Giovanni; Babiloni, Fabio

    2013-12-01

How to evaluate the effect of commercials is significantly important in neuromarketing. In this paper, we propose an electronic way to evaluate the influence of video commercials on consumers by an impression index. The impression index combines both the memorization and attention indices obtained by tracking EEG activity while consumers observe video commercials. It extracts features from scalp EEG to evaluate the effectiveness of video commercials in the time-frequency-space domain. The general global field power was used as an impression index for the evaluation of video commercial scenes as time series. Results of the experiment demonstrate that the proposed approach is able to track variations of cerebral activity related to cognitive tasks such as observing video commercials, and helps to judge from EEG signals whether a scene in a video commercial is impressive or not.
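The global field power mentioned above is, in standard EEG practice, the standard deviation of the potential across all electrodes at each time sample, giving one curve of overall brain response over time. A minimal computation over a channels-by-samples matrix (the exact "general" variant used by the paper may differ, so treat this as the textbook definition):

```python
import math

def global_field_power(eeg):
    """eeg: list of channels, each a list of samples. Returns GFP per sample."""
    n_channels = len(eeg)
    n_samples = len(eeg[0])
    gfp = []
    for t in range(n_samples):
        v = [eeg[ch][t] for ch in range(n_channels)]
        mean = sum(v) / n_channels
        # Population standard deviation across electrodes at this instant.
        gfp.append(math.sqrt(sum((x - mean) ** 2 for x in v) / n_channels))
    return gfp
```

Peaks in the resulting series mark moments of strong, spatially structured scalp activity, which is why it can serve as a per-scene impression signal.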

  2. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT-coefficients after intra-prediction and deblocking are modeled. To obtain VQA features for H.264/AVC, we propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signal-to-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...

  3. Acoustic Neuroma Educational Video

    Medline Plus


  4. Digital Video Editing

    Science.gov (United States)

    McConnell, Terry

    2004-01-01

Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing is described, along with the required cables, storage considerations, and computer system and software.

  5. AudioMove Video

    DEFF Research Database (Denmark)

    2012-01-01

Live drawing video experimenting with low tech techniques in the field of sketching and visual sense making. In collaboration with Rune Wehner and Teater Katapult.

  6. Making Good Physics Videos

    Science.gov (United States)

    Lincoln, James

    2017-01-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators…

  7. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen-image creation depending on the musical form and the lyrics of a song, in connection with relevant principles of accent-based and phraseological video editing, filming techniques, and additional frames and sound elements.

  8. Acoustic Neuroma Educational Video

    Medline Plus


  9. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  10. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five videos ... member of our patient care team. Managing Your Arthritis Managing Your Arthritis Managing Chronic Pain and Depression ...

  11. Rheumatoid Arthritis Educational Video Series

    Medline Plus

Full Text Available This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  12. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  13. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... questions Clinical Studies Publications Catalog Photos and Images Spanish Language Information Grants and Funding Extramural Research Division ... Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video ...

  14. Social video content delivery

    CERN Document Server

    Wang, Zhi; Zhu, Wenwu

    2016-01-01

    This brief presents new architecture and strategies for distribution of social video content. A primary framework for socially-aware video delivery and a thorough overview of the possible approaches is provided. The book identifies the unique characteristics of socially-aware video access and social content propagation, revealing the design and integration of individual modules that are aimed at enhancing user experience in the social network context. The change in video content generation, propagation, and consumption for online social networks, has significantly challenged the traditional video delivery paradigm. Given the massive amount of user-generated content shared in online social networks, users are now engaged as active participants in the social ecosystem rather than as passive receivers of media content. This revolution is being driven further by the deep penetration of 3G/4G wireless networks and smart mobile devices that are seamlessly integrated with online social networking and media-sharing s...

  15. Automatic inpainting scheme for video text detection and removal.

    Science.gov (United States)

    Mosleh, Ali; Bouguila, Nizar; Ben Hamza, Abdessamad

    2013-11-01

We present a two-stage framework for automatic video text removal, which detects and removes embedded video texts and fills in the remaining regions with appropriate data. In the video text detection stage, text locations in each frame are found via unsupervised clustering performed on the connected components produced by the stroke width transform (SWT). Since SWT needs an accurate edge map, we develop a novel edge detector which benefits from the geometric features revealed by the bandlet transform. Next, the motion patterns of the text objects in each frame are analyzed to localize video texts. The detected video text regions are removed, and the video is then restored by an inpainting scheme. The proposed video inpainting approach applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. A 3D volume regularization algorithm, which takes advantage of bandlet bases in exploiting anisotropic regularities, is introduced to carry out the inpainting task. The method does not need extra processing to satisfy visual consistency. The experimental results demonstrate the effectiveness of both the proposed video text detection approach and the video completion technique, and consequently the entire automatic video text removal and restoration process.

  16. Neural Basis of Video Gaming: A Systematic Review

    Science.gov (United States)

    Palaus, Marc; Marron, Elena M.; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies. PMID:28588464

  17. Neural Basis of Video Gaming: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Marc Palaus

    2017-05-01

Full Text Available Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies.

  18. Neural Basis of Video Gaming: A Systematic Review.

    Science.gov (United States)

    Palaus, Marc; Marron, Elena M; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies.

  19. Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos.

    Science.gov (United States)

    Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming

    2015-09-01

Current activity-recognition-based assistive living solutions have adopted relatively rigid models of inhabitant activities. These solutions have some deficiencies associated with the use of these models. To address this, a goal-oriented solution has been proposed. In a goal-oriented solution, goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing the actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation, metadata was automatically generated with a high level of accuracy.

  20. Automatic processing of CERN video, audio and photo archives

    Science.gov (United States)

    Kwiatek, M.

    2008-07-01

The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-term coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services.
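
    The change-detection step described above can be sketched as a manifest of file modification times. This is a hedged illustration only, not CERN's actual DFS/CDS integration; the function name `find_files_to_encode` and the JSON manifest layout are hypothetical.

```python
import json
import os

def find_files_to_encode(master_dir, manifest_path):
    """Compare master files against a stored mtime manifest and return
    the paths that are new or have changed since the last run, so that
    only those need to be queued for re-encoding."""
    try:
        with open(manifest_path) as f:
            manifest = json.load(f)
    except FileNotFoundError:
        manifest = {}  # first run: everything counts as changed

    to_encode = []
    for root, _dirs, files in os.walk(master_dir):
        for name in files:
            path = os.path.join(root, name)
            mtime = os.path.getmtime(path)
            if manifest.get(path) != mtime:
                to_encode.append(path)
                manifest[path] = mtime

    with open(manifest_path, "w") as f:
        json.dump(manifest, f)
    return to_encode
```

    A real pipeline would also watch the conversion profiles and the metadata database, as the abstract notes; this sketch covers only the master-file side.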

  1. Selectively De-animating and Stabilizing Videos

    Science.gov (United States)

    2014-12-11

welcoming me into their classes. I have learnt so much from them. My graduate life has been wonderful due to the great friends and lab mates I have...techniques for interactive control of video stabilization. The first step in current video stabilization methods is to track feature points that estimate...Transactions on 18.11 (2012), pp. 1868–1879. [103] Gregory P. Sutton and Malcolm Burrows. “Biomechanics of jumping in the flea”. In: J Exp Biol 214.5

  2. Video-based convolutional neural networks for activity recognition from robot-centric videos

    Science.gov (United States)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
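
    One of the families mentioned, pooling operations on top of per-frame CNN descriptors, can be sketched as follows. This is an illustrative NumPy fragment with a hypothetical `pool_frame_descriptors` helper; in practice the T x D descriptor matrix would come from running an image CNN over each frame.

```python
import numpy as np

def pool_frame_descriptors(descriptors, mode="max"):
    """Collapse per-frame CNN descriptors (shape T x D, one row per
    frame) into a single clip-level descriptor (shape D) by temporal
    max- or average-pooling."""
    descriptors = np.asarray(descriptors, dtype=float)
    if mode == "max":
        return descriptors.max(axis=0)
    if mode == "avg":
        return descriptors.mean(axis=0)
    raise ValueError(f"unknown pooling mode: {mode}")
```

    Max-pooling keeps the strongest activation of each feature across time, while average-pooling summarizes its typical level; both discard frame ordering, which is why the paper also considers 3-D convolutions and recurrent networks.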

  3. On the Fly Porn Video Blocking Using Distributed Multi-Gpu and Data Mining Approach

    OpenAIRE

    Urvesh Devani; Valmik B Nikam; Meshram, B B

    2014-01-01

Preventing users from accessing adult videos while still allowing them to access good educational videos and other materials through a campus-wide network is a big challenge for organizations. Most existing web filtering systems are based on textual content or link analysis. As a result, potential users cannot access qualitative and informative video content which is available online. Adult content detection in video based on motion features or skin detection requires sig...

  4. Short-Term Effects of Prosocial Video Games on Aggression: An Event-Related Potential Study

    OpenAIRE

Yanling Liu; Zhaojun Teng; Haiying Lan; Xin Zhang; Dezhong Yao

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 minutes...

  5. Short-term effects of prosocial video games on aggression: an event-related potential study

    OpenAIRE

    Liu, Yanling; Teng, Zhaojun; Lan, Haiying; Zhang, Xin; Yao, Dezhong

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 min, th...

  6. Video Analysis in Multi-Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Key, Everett Kiusan [Univ. of Washington, Seattle, WA (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Van Buren, Kendra Lu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warren, Will [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-27

This project was performed by a recent high school graduate at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.

  7. Consumer-based technology for distribution of surgical videos for objective evaluation.

    Science.gov (United States)

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills and has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques to enable surgeons to view these videos. The objective of this study is to determine the feasibility of utilizing widespread current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, cabled into a hub connected to a standard laptop computer via a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format, and depending on the size of the file, the videos were scaled down (compressed), their format changed (using a standard video editing program), or sliced into multiple videos. Standard available consumer-based programs were utilized to convert the video into a format more appropriate for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated appropriate quality for grading in these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons for grading via GOALS by various methods. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.

  8. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on the one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  9. FileMaker Pro 9

    CERN Document Server

    Coffey, Geoff

    2007-01-01

FileMaker Pro 9: The Missing Manual is the clear, thorough and accessible guide to the latest version of this popular desktop database program. FileMaker Pro lets you do almost anything with the information you give it. You can print corporate reports, plan your retirement, or run a small country -- if you know what you're doing. This book helps non-technical folks like you get in, get your database built, and get the results you need. Pronto. The new edition gives novices and experienced users the scoop on versions 8.5 and 9. It offers complete coverage of timesaving new features such as the Q

  10. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data management, scientists and software engineers in video processing and computer vision, coaches and instructors who use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of the video stream by extraction of highlights, events, and meaningf...

  11. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how...... they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio....

  12. Brains on video games

    OpenAIRE

    Bavelier, Daphne; Green, C. Shawn; Han, Doug Hyun; Renshaw, Perry F.; Merzenich, Michael M.; Gentile, Douglas A.

    2011-01-01

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games ‘damage the brain’ or ‘boost brain power’ do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affe...

  13. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel value calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  14. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  15. Understanding Legacy Features with Featureous

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

Feature-centric comprehension of source code is essential during software evolution. However, such comprehension is oftentimes difficult to achieve due to the discrepancies between structural and functional units of object-oriented programs. We present a tool for feature-centric analysis of legacy...... Java programs called Featureous that addresses this issue. Featureous allows a programmer to easily establish feature-code traceability links and to analyze their characteristics using a number of visualizations. Featureous is an extension to the NetBeans IDE, and can itself be extended by third...

  16. An analysis of lecture video utilization in undergraduate medical education: associations with performance in the courses

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Arcot

    2009-01-01

    Full Text Available Abstract Background Increasing numbers of medical schools are providing videos of lectures to their students. This study sought to analyze utilization of lecture videos by medical students in their basic science courses and to determine if student utilization was associated with performance on exams. Methods Streaming videos of lectures (n = 149 to first year and second year medical students (n = 284 were made available through a password-protected server. Server logs were analyzed over a 10-week period for both classes. For each lecture, the logs recorded time and location from which students accessed the file. A survey was administered at the end of the courses to obtain additional information about student use of the videos. Results There was a wide disparity in the level of use of lecture videos by medical students with the majority of students accessing the lecture videos sparingly (60% of the students viewed less than 10% of the available videos. The anonymous student survey revealed that students tended to view the videos by themselves from home during weekends and prior to exams. Students who accessed lecture videos more frequently had significantly (p Conclusion We conclude that videos of lectures are used by relatively few medical students and that individual use of videos is associated with the degree to which students are having difficulty with the subject matter.

  17. Videos, Podcasts and Livechats

    Medline Plus


  18. Videos, Podcasts and Livechats

    Medline Plus


  19. Acoustic Neuroma Educational Video

    Medline Plus


  20. Videos, Podcasts and Livechats

    Medline Plus


  1. Acoustic Neuroma Educational Video

    Medline Plus


  2. Acoustic Neuroma Educational Video

    Medline Plus


  3. Videos, Podcasts and Livechats

    Medline Plus


  4. Videos, Podcasts and Livechats

    Medline Plus


  5. Acoustic Neuroma Educational Video

    Medline Plus


  6. The video violence debate.

    Science.gov (United States)

    Lande, R G

    1993-04-01

    Some researchers and theorists are convinced that graphic scenes of violence on television and in movies are inextricably linked to human aggression. Others insist that a link has not been conclusively established. This paper summarizes scientific studies that have informed these two perspectives. Although many instances of children and adults imitating video violence have been documented, no court has imposed liability for harm allegedly resulting from a video program, an indication that considerable doubt still exists about the role of video violence in stimulating human aggression. The author suggests that a small group of vulnerable viewers are probably more impressionable and therefore more likely to suffer deleterious effects from violent programming. He proposes that research on video violence be narrowed to identifying and describing the vulnerable viewer.

  7. Acoustic Neuroma Educational Video

    Medline Plus


  8. Acoustic Neuroma Educational Video

    Medline Plus


  9. Video i VIA

    DEFF Research Database (Denmark)

    2012-01-01

The article describes a development project in which 13 groups of teachers, across subjects and programmes, produced video for teaching purposes. Different approaches and applications are described, as well as the learning gained in the project...

  10. Videos, Podcasts and Livechats

    Medline Plus


  11. Acoustic Neuroma Educational Video

    Medline Plus


  12. Videos, Podcasts and Livechats

    Medline Plus


  13. Acoustic Neuroma Educational Video

    Medline Plus


  14. Acoustic Neuroma Educational Video

    Medline Plus


  15. Photos and Videos

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Observers are required to take photos and/or videos of all incidentally caught sea turtles, marine mammals, seabirds and unusual or rare fish. On the first 3...

  16. Videos, Podcasts and Livechats

    Medline Plus


  17. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  18. HUD GIS Boundary Files

    Data.gov (United States)

    Department of Housing and Urban Development — The HUD GIS Boundary Files are intended to supplement boundary files available from the U.S. Census Bureau. The files are for community planners interested in...

  19. NEI You Tube Videos: Amblyopia

    Medline Plus


  20. NEI You Tube Videos: Amblyopia

    Medline Plus


  1. Studenterproduceret video til eksamen

    DEFF Research Database (Denmark)

    Jensen, Kristian Nøhr; Hansen, Kenneth

    2016-01-01

The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for exams in higher education. The article takes its starting point in a problem where the educational institutions must handle and coordinate...... gives the subject-specialist and media-specialist teachers a tool to focus and coordinate the effort towards the goal of the students producing and using video for exams....

  2. Video Games and Citizenship

    OpenAIRE

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    In their article "Video Games and Citizenship" Jeroen Bourgonjon and Ronald Soetaert argue that digitization problematizes and broadens our perspective on culture and popular media, and that this has important ramifications for our understanding of citizenship. Bourgonjon and Soetaert respond to the call of Gert Biesta for the contextualized study of young people's practices by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new so...

  3. Android Video Streaming

    Science.gov (United States)

    2014-05-01

be processed by a nearby high-performance computing asset and returned to a squad of Soldiers with annotations indicating the location of friendly and...is to change the resolution, bitrate, and/or framerate of the video being transmitted to the client, reducing the bandwidth requirements of the...video. This solution is typically not viable because a progressive download is required to have a constant resolution, bitrate, and framerate because

  4. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

Full Text Available Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating effective policy and applying useful methods to the retrieval of additional evidence is becoming increasingly important. However, surveillance video has had its failings, namely, video footage being captured in low resolution (LR) and with bad visual quality. In this paper, we discuss the characteristics of surveillance video and describe a super-resolution reconstruction method based on manual feature registration, maximum a posteriori estimation, and projection onto convex sets, which improves the quality of surveillance video. This method makes optimal use of the information contained in the LR video images while keeping image edges clear and ensuring convergence of the algorithm. Finally, we make a suggestion on how to adjust the adaptability of the algorithm by analyzing the prior information of the target image.
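
    The reconstruction idea behind such methods, enforcing consistency between the high-resolution estimate and the low-resolution observation, can be illustrated with a much simpler 1D iterative back-projection sketch. This is not the paper's MAP/POCS method with feature registration; the function names and the block-average observation model are assumptions for illustration.

```python
import numpy as np

def downsample(x, factor):
    # Assumed observation model: block-average the high-res signal.
    return x.reshape(-1, factor).mean(axis=1)

def upsample(e, factor):
    # Spread each low-res residual back over its high-res block.
    return np.repeat(e, factor)

def iterative_backprojection(y, factor, n_iter=20, step=1.0):
    """Estimate a high-res signal from a low-res observation y by
    repeatedly projecting the observation residual back onto the
    current estimate."""
    x = np.repeat(y, factor)  # initial guess: nearest-neighbour upsample
    for _ in range(n_iter):
        residual = y - downsample(x, factor)
        x = x + step * upsample(residual, factor)
    return x
```

    MAP and POCS formulations add prior information and constraint sets on top of this basic data-consistency loop, which is how the paper controls edges and convergence.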

  5. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja

    2006-01-01

Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like earlier compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit inherent long-range dependency, that is, a fractal property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. From the multifractal spectra of the frame-size video traces it was shown that a higher compression ratio produces broader and less regular MF spectra, indicating a more pronounced multifractal nature and the existence of additive components in the video traces. Considering individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular contribution of these frame types to the whole MF spectrum. Since compressed video occupies a major part of transmission bandwidth, results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible; that is, from a previously derived MF spectrum of the observed signal it is possible to recognize and extract parts of the signal which are characterized by particular values of multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.
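
    The long-range dependency mentioned above is commonly summarized by a Hurst exponent H (H > 0.5 indicates persistence). As an illustrative sketch, not the estimator used in the paper, the aggregated-variance method fits the scaling of block-mean variance, which for a self-similar process behaves as Var ~ m^(2H-2):

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent H of a series (e.g. frame sizes of
    a video trace) with the aggregated-variance method: regress
    log Var(block means) on log(block size); slope = 2H - 2."""
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0
```

    For an uncorrelated series the estimate is near 0.5; frame-size traces of compressed video typically yield noticeably higher values, which is the fractal property the authors report.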

  6. A new video studio for CERN

    CERN Multimedia

    Anaïs Vernede

    2011-01-01

    On Monday, 14 February 2011 CERN's new video studio was inaugurated with a recording of "Spotlight on CERN", featuring an interview with the DG, Rolf Heuer.   CERN's new video studio. Almost all international organisations have a studio for their audiovisual communications, and now it's CERN’s turn to acquire such a facility. “In the past, we've made videos using the Globe audiovisual facilities and sometimes using the small photographic studio, which is equipped with simple temporary sets that aren’t really suitable for video,” explains Jacques Fichet, head of CERN‘s audiovisual service. Once the decision had been taken to create the new 100 square-metre video studio, the work took only five months to complete. The studio, located in Building 510, is equipped with a cyclorama (a continuous smooth white wall used as a background) measuring 3 m in height and 16 m in length, as well as a teleprompter, a rail-mounted camera dolly fo...

  7. CPAP compliance: video education may help!

    Science.gov (United States)

    Jean Wiese, H; Boethel, Carl; Phillips, Barbara; Wilson, John F; Peters, Jane; Viggiano, Theresa

    2005-03-01

CPAP remains the treatment of choice for Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS), but compliance with CPAP is poor. Of the many interventions tried to improve CPAP compliance, only education and humidification have been shown to be of benefit. Our purpose was to develop and pilot test a video to enhance patient understanding of obstructive sleep apnea and of the purpose, logistics, and benefits of CPAP use in patients newly diagnosed with OSAHS. A patient's CPAP compliance in the first few weeks after starting its use is predictive of long-term compliance with CPAP treatment. It is imperative that patients grasp at the outset both the severity of OSAHS and the effectiveness of CPAP therapy. An educational video script was written based on recommendations for patient educational video materials, covering identified misconceptions about OSAHS and perceived barriers to CPAP use. The videotape is 15 min in length and features two middle-aged males, one African-American and one Euro-American, discussing OSAHS and CPAP in a factory break room. In a randomized two-group design with a control group, patients newly diagnosed with OSAHS who viewed the CPAP educational video at their first clinic visit were significantly more likely to use their machine and to return for a 1-month clinic visit than were those in the control group. Viewing a patient education video at the initial visit was found to significantly improve the rate of return for the follow-up visit.

  8. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  9. Proposed patient motion monitoring system using feature point tracking with a web camera.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all frames. The software generates a text file that contains the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system, built around a web camera, is simple and convenient to set up and increases the safety of treatment delivery.
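The thresholding step described above (points shown blue when stable, red on large movement) can be sketched as follows. The function name, threshold value and coordinates are illustrative assumptions, not taken from the paper; the actual system obtains the point positions with OpenCV's pyramidal Lucas-Kanade tracker.

```python
import numpy as np

def classify_feature_points(prev_pts, curr_pts, threshold_px=2.0):
    """Label each tracked feature point: 'blue' when stable, 'red' when its
    displacement between two consecutive frames exceeds the threshold."""
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    displacement = np.linalg.norm(curr_pts - prev_pts, axis=1)
    return ["red" if d > threshold_px else "blue" for d in displacement]

# Three tracked points between two frames; only the second moves noticeably.
prev = [(100, 100), (200, 150), (300, 220)]
curr = [(100.5, 100.2), (210, 150), (300, 221)]
print(classify_feature_points(prev, curr))  # → ['blue', 'red', 'blue']
```

A real monitor would run this on every frame pair and sound the beeping alarm whenever any point turns "red".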

  10. Big Davids – Real-Time Sensor Classification. Real-time classification of gorilla video segments in affective categories using crowd-sourced annotations

    NARCIS (Netherlands)

    Havekes, A.; Thomas, E.D.R.; Schavemaker, J.G.M.

    2013-01-01

    In this slide set we present a method to classify segments of gorilla videos into different affective categories. The classification method is trained using crowd-sourced annotations. The trained classifier then uses video features (computed from the video segments) to classify a new video segment.

  11. Using video in childbirth research.

    Science.gov (United States)

    Harte, J Davis; Homer, Caroline Se; Sheehan, Athena; Leap, Nicky; Foureur, Maralyn

    2017-03-01

    Conducting video-research in birth settings raises challenges for ethics review boards to view birthing women and research-midwives as capable, autonomous decision-makers. This study aimed to gain an understanding of how the ethical approval process was experienced and to chronicle the perceived risks and benefits. The Birth Unit Design project was a 2012 Australian ethnographic study that used video recording to investigate the physical design features in the hospital birthing space that might influence both verbal and non-verbal communication and the experiences of childbearing women, midwives and supporters. Participants and research context: Six women, 11 midwives and 11 childbirth supporters were filmed during the women's labours in hospital birth units and interviewed 6 weeks later. Ethical considerations: The study was approved by an Australian Health Research Ethics Committee after a protracted process of negotiation. The ethics committee was influenced by a traditional view of research as based on scientific experiments resulting in a poor understanding of video-ethnographic research, a paradigmatic view of the politics and practicalities of modern childbirth processes, a desire to protect institutions from litigation, and what we perceived as a paternalistic approach towards protecting participants, one that was at odds with our aim to facilitate situations in which women could make flexible, autonomous decisions about how they might engage with the research process. The perceived need for protection was overly burdensome and against the wishes of the participants themselves; ultimately, this limited the capacity of the study to improve care for women and babies. Recommendations are offered for those involved in ethical approval processes for qualitative research in childbirth settings. The complexity of issues within childbirth settings, as in most modern healthcare settings, should be analysed using a variety of research approaches, beyond efficacy

  12. PNW River Reach Files -- 1:100k LLID Routed Streams (routes)

    Data.gov (United States)

    Pacific States Marine Fisheries Commission — This feature class includes the ROUTE features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes are also...

  13. Video Inter-frame Forgery Identification Based on Optical Flow Consistency

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2014-03-01

    Full Text Available Identifying inter-frame forgery is a hot topic in video forensics. In this paper, we propose a method based on the assumption that optical flow is consistent in an original video, while in forgeries this consistency is destroyed. We first extract optical flow from the frames of a video and then calculate the optical flow consistency, after normalization and quantization, as a distinguishing feature to identify inter-frame forgeries. We train a Support Vector Machine to classify original videos and video forgeries using the optical flow consistency feature of sample videos, and test the classification accuracy on a large database. Experimental results show that the proposed method is efficient in classifying original videos and forgeries. Furthermore, the proposed method also performs well in classifying frame-insertion and frame-deletion forgeries.
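A minimal sketch of the consistency idea: in an untampered clip the per-frame optical-flow magnitude varies smoothly, while frame insertion or deletion produces a spike relative to neighbouring frames. The score below is a simplified stand-in for the paper's normalized and quantized feature; the function name and sample values are assumptions.

```python
import numpy as np

def flow_consistency(flow_mags):
    """Deviation of each frame's total optical-flow magnitude from the
    mean of its two neighbours; large values suggest a tampering point."""
    f = np.asarray(flow_mags, dtype=float)
    neigh = (f[:-2] + f[2:]) / 2.0  # mean of left/right neighbours
    return np.abs(f[1:-1] - neigh) / (neigh + 1e-8)

# A steady clip vs. one with a frame-deletion spike at index 3.
steady = flow_consistency([1.0, 1.1, 0.9, 1.0, 1.05, 0.95])
spiky = flow_consistency([1.0, 1.1, 0.9, 5.0, 1.05, 0.95])
print(steady.max() < 1.0, spiky.max() > 1.0)  # → True True
```

In the paper this feature is fed to an SVM rather than compared against a fixed threshold; the threshold here is only for illustration.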

  14. Runway Detection From Map, Video and Aircraft Navigational Data

    Science.gov (United States)

    2016-03-01

    are corrected using image-processing techniques, such as the Hough transform for linear features. Subject terms: runway, map, aircraft, video, detection, rotation matrix, Hough transform.

  15. Two Video Analysis Applications Using Foreground/Background Segmentation

    NARCIS (Netherlands)

    Zivkovic, Z.; Petkovic, M.; van Mierlo, R.; van Keulen, Maurice; van der Heijden, Ferdinand; Jonker, Willem; Rijnierse, E.

    Probably the most frequently solved problem when videos are analyzed is segmenting a foreground object from its background in an image. After some regions in an image are detected as the foreground objects, some features are extracted that describe the segmented regions. These features together with

  16. Teacher Explanation of Physics Concepts: A Video Study

    Science.gov (United States)

    Geelan, David

    2013-01-01

    Video recordings of Year 11 physics lessons were analyzed to identify key features of teacher explanations. Important features of the explanations used included teachers' ability to move between qualitative and quantitative modes of discussion, attention to what students require to succeed in high stakes examinations, thoughtful use of…

  17. Image and video search engine for the World Wide Web

    Science.gov (United States)

    Smith, John R.; Chang, Shih-Fu

    1997-01-01

    We describe a visual information system prototype for searching for images and videos on the World-Wide Web. New visual information in the form of images, graphics, animations and videos is being published on the Web at an incredible rate. However, cataloging this visual data is beyond the capabilities of current text-based Web search engines. In this paper, we describe a complete system by which visual information on the Web is (1) collected by automated agents, (2) processed in both text and visual feature domains, (3) catalogued and (4) indexed for fast search and retrieval. We introduce an image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos. Finally, we provide an initial evaluation based upon the cataloging of over one half million images and videos collected from the Web.
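As an illustration of the content-based side of such an engine, a global colour histogram is one of the simplest visual features that can be catalogued and then searched by similarity. The function names and the toy 4×4 images are assumptions for the sketch, not the system's actual feature set.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized 3-D colour histogram of an RGB image, flattened to a vector."""
    hist, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                             bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def nearest(query_hist, catalog):
    """Index of the catalogued histogram closest to the query (L1 distance)."""
    dists = [np.abs(query_hist - h).sum() for h in catalog]
    return int(np.argmin(dists))

red = np.zeros((4, 4, 3), dtype=np.uint8); red[..., 0] = 200
blue = np.zeros((4, 4, 3), dtype=np.uint8); blue[..., 2] = 200
catalog = [color_histogram(red), color_histogram(blue)]
query = red.copy(); query[0, 0] = (10, 10, 10)  # slightly perturbed red image
print(nearest(color_histogram(query), catalog))  # → 0 (matches the red image)
```

A production engine indexes millions of such vectors for fast approximate search rather than scanning the catalogue linearly.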

  18. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and compression standards. Accurate knowledge about the characteristics of the display and the video signal can be utilized to develop advanced algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems: first, designing algorithms that improve the visual quality of the perceived image and video and reduce power consumption on local LED-LCD backlight; second, removing digital video codec artifacts such as blocking and ringing by post-processing algorithms. A novel algorithm based on image features with an optimal balance between visual quality and power consumption was developed. In addition, to remove flickering...

  19. Eye-Movement Tracking Using Compressed Video Images

    Science.gov (United States)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of realtime performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.

  20. Characteristics of file sharing and peer to peer networking | Opara ...

    African Journals Online (AJOL)

    A peer-to-peer (p2p) network allows computer hardware and software to function without the need for special server devices. While file sharing is the practice of distributing or providing access to digitally stored information, such as computer programs, multi-media (audio, video) resources, documents, or electronic books.

  1. Seafloor video footage and still-frame grabs from U.S. Geological Survey cruises in Hawaiian nearshore waters

    Science.gov (United States)

    Gibbs, Ann E.; Cochran, Susan A.; Tierney, Peter W.

    2013-01-01

    Underwater video footage was collected in nearshore waters during four USGS cruises. More than 10,200 still images were extracted from the videos, including still frames from every 10 seconds along transect lines, and still frames showing both an overview and a near-bottom view from fixed stations. Environmental Systems Research Institute (ESRI) shapefiles of individual video and still-image locations, and Google Earth kml files with explanatory text and links to the video and still images, are included. This report documents the various camera systems and methods used to collect the videos, and the techniques and software used to convert the analog video tapes into digital data in order to process the images for optimum viewing and to extract the still images, along with a brief summary of each survey cruise.
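The 10-second sampling along transect lines reduces to simple frame-index arithmetic. The function and the 65-second example clip are illustrative assumptions, not values from the report.

```python
def still_frame_indices(duration_s, fps, interval_s=10):
    """Frame indices at which to grab one still every `interval_s` seconds."""
    total_frames = int(duration_s * fps)
    step = int(interval_s * fps)
    return list(range(0, total_frames, step))

# A 65-second clip at 30 fps yields stills at 0, 10, 20, 30, 40, 50 and 60 s.
print(still_frame_indices(65, 30))  # → [0, 300, 600, 900, 1200, 1500, 1800]
```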

  2. Video Shot Boundary Recognition Based on Adaptive Locality Preserving Projections

    Directory of Open Access Journals (Sweden)

    Yongliang Xiao

    2013-01-01

    Full Text Available A novel video shot boundary recognition method is proposed, which includes two stages of video feature extraction and shot boundary recognition. Firstly, we use adaptive locality preserving projections (ALPP to extract video feature. Unlike locality preserving projections, we define the discriminating similarity with mode prior probabilities and adaptive neighborhood selection strategy which make ALPP more suitable to preserve the local structure and label information of the original data. Secondly, we use an optimized multiple kernel support vector machine to classify video frames into boundary and nonboundary frames, in which the weights of different types of kernels are optimized with an ant colony optimization method. Experimental results show the effectiveness of our method.

  3. Overview - Be Smart. Be Well. STD Videos

    Centers for Disease Control (CDC) Podcasts

    2010-03-15

    This video, produced by Be Smart. Be Well., raises awareness of Sexually Transmitted Diseases (STDs): 1) What are they? 2) Why do they matter? 3) What can I do about them? Footage courtesy of Be Smart. Be Well., featuring CDC's Dr. John Douglas, Division of Sexually Transmitted Disease Prevention.  Created: 3/15/2010 by National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention (NCHHSTP).   Date Released: 3/15/2010.

  4. Web life: The Periodic Table of Videos

    Science.gov (United States)

    2009-01-01

    Eagle-eyed readers may spot a change in this column. Previously known as Blog life, it highlighted top picks from the physics blogosphere, and was itself an outgrowth of an earlier column on physics books, Shelf life. The new Web life column will continue to feature the best of physics blogging, but it will also include other types of Web content of interest to Physics World readers. First up is a periodic table of videos from Nottingham University in the UK.

  5. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention/prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We're Killing the Kids, and Driving Mum and Dad Mad all use video as a prominent element of not only the audiovisual spectacle of reality television but also the interactional therapy, counselling, coaching and/or instruction intrinsic to these programmes. Thus, talk-on-video is used to intervene...

  6. Tuning HDF5 subfiling performance on parallel file systems

    Energy Technology Data Exchange (ETDEWEB)

    Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chaarawi, Mohamad [Intel Corp. (United States); Koziol, Quincey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mainzer, John [The HDF Group (United States); Willmore, Frank [The HDF Group (United States)

    2017-05-12

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of recently implemented subfiling feature in HDF5. In specific, we explain the implementation strategy of subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori) that include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show performance benefits of 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets to storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations with using the subfiling feature.
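The core idea of subfiling, spreading many writers over a moderate number of files instead of one shared file or one file per process, can be illustrated with a round-robin rank-to-subfile mapping. This is a sketch of the concept only, not HDF5's actual assignment policy.

```python
def subfile_assignment(n_ranks, n_subfiles):
    """Round-robin mapping of MPI ranks to subfiles, spreading writers
    evenly so no single storage target sees all the lock traffic."""
    return {rank: rank % n_subfiles for rank in range(n_ranks)}

# Eight writer ranks sharing three subfiles.
assignment = subfile_assignment(8, 3)
print(assignment)  # → {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2, 6: 0, 7: 1}
```

With `n_subfiles = 1` this degenerates to the single-shared-file case, and with `n_subfiles = n_ranks` to file-per-process, so the subfile count is exactly the tuning knob the paper explores.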

  7. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity in TV images has augmented with the increased application of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on runlength and arithmetical coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to the lossy coding of video. An overall bit rate control completes the system. Computer simulations show a very high quality with a compression factor between 2-3.

  8. FPGA Implementation of Video Transmission System Based on LTE

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2015-01-01

    Full Text Available In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution is designed. This system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. According to tests, this system consumes few hardware resources and is able to transmit bidirectional video clearly and stably.

  9. Transition logo detection for sports videos highlight extraction

    Science.gov (United States)

    Su, Po-Chyi; Wang, Yu-Wei; Chen, Chien-Chang

    2006-10-01

    This paper presents a highlight extraction scheme for sports videos. The approach makes use of the transition logos inserted preceding and following the slow motion replays by the broadcaster, which demonstrate highlights of the game. First, the features of a MPEG compressed video are retrieved for subsequent processing. After the shot boundary detection procedure, the processing units are formed and the units with fast moving scenes are then selected. Finally, the detection of overlaying objects is performed to signal the appearance of a transition logo. Experimental results show the feasibility of this promising method for sports videos highlight extraction.

  10. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper researches an LSB-matching video steganalysis algorithm (VSA) based on the H265 protocol, using 26 original video sequences as the research background. It first extracts classification features from the training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, then tests whether there is suspicious information in a video sample. The experimental results show that the VSA based on LSB matching can practically detect secret information embedded across all frames of a carrier video as well as information embedded in individual frames. In addition, the VSA works frame by frame, giving strong robustness against attacks in the corresponding time domain.

  11. Video y desarrollo rural

    Directory of Open Access Journals (Sweden)

    Fraser Colin

    2015-01-01

    Full Text Available The first experiences with rural video took place in Peru and Mexico. The Peruvian project is known as CESPAC (Centro de Servicios de Pedagogía Audiovisual para la Capacitación). It was launched in the 1970s with external funding from FAO. The Mexican project was named PRODERITH (Programa de Desarrollo Rural Integrado del Trópico Húmedo). Its rural video component was particularly successful at the grassroots level. The evaluation concluded that rural video, as a social communication system for development, is excellent and low-cost.

  12. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative inquiry, such as video ethnography, ethnovideo, performance documentation, anthropology and multimodal interaction analysis. That is why we put forward, half-jokingly at first, a Big Video manifesto to spur innovation in the Digital Humanities.

  13. Online video examination

    DEFF Research Database (Denmark)

    Qvist, Palle

    The Master programme in Problem-Based Learning in Engineering and Science, MPBL (www.mpbl.aau.dk), at Aalborg University, is an international programme offering formalized staff development. The programme is also offered in smaller parts as single subject courses (SSC); passed single subject courses are accredited to the master programme. The programme is online, worldwide and on demand, and it recruits students from all over the world. It is organized in accordance with the principles of the problem-based and project-based learning method used at Aalborg University, where students have a large influence on their own teaching, learning and curriculum. The programme offers streamed videos in combination with other learning resources. It is a concept which offers video as pure presentation - video lectures - but also as an instructional tool which gives the students the possibility...

  14. Brains on video games.

    Science.gov (United States)

    Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A

    2011-11-18

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward.

  15. Surgical videos online: a survey of prominent sources and future trends.

    Science.gov (United States)

    Dinscore, Amanda; Andres, Amy

    2010-01-01

    This article determines the extent of the online availability and quality of surgical videos for the educational benefit of the surgical community. A comprehensive survey was performed that compared a number of online sites providing surgical videos according to their content, production quality, authority, audience, navigability, and other features. Methods for evaluating video content are discussed as well as possible future directions and emerging trends. Surgical videos are a valuable tool for demonstrating and teaching surgical technique and, despite room for growth in this area, advances in streaming video technology have made providing and accessing these resources easier than ever before.

  16. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  17. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...

  18. Data Management Rubric for Video Data in Organismal Biology.

    Science.gov (United States)

    Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K

    2017-07-01

    Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata

  19. Solar Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of solar feature datasets contributed by a number of national and private solar observatories located worldwide.

  20. Site Features

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset consists of various site features from multiple Superfund sites in U.S. EPA Region 8. These data were acquired from multiple sources at different times...

  1. Feature Extraction

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  2. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  3. Online coupled camera pose estimation and dense reconstruction from video

    Science.gov (United States)

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.

  4. User aware video streaming

    Science.gov (United States)

    Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy

    2015-03-01

We describe the design of a video streaming system using adaptation to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the sufficient resolution needed under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming or MPEG-DASH. The client rate selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods that do not exploit viewing conditions.
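A visual model of the kind described, sufficient resolution as a function of viewing distance, can be sketched as follows. The 60 pixels-per-degree acuity constant and the rendition ladder are illustrative assumptions, not the paper's values.

```python
import math

ACUITY_PX_PER_DEG = 60.0   # assumed: ~20/20 vision resolves about 60 pixels per degree

def sufficient_width_px(screen_width_m, viewing_distance_m):
    """Smallest horizontal resolution at which extra pixels are invisible
    to a viewer at the given distance (simple visual-acuity model)."""
    fov_deg = math.degrees(2 * math.atan(screen_width_m / (2 * viewing_distance_m)))
    return int(round(fov_deg * ACUITY_PX_PER_DEG))

def pick_rendition(screen_width_m, distance_m, ladder=(426, 640, 854, 1280, 1920)):
    """Choose the lowest rendition width on the ladder that meets the sufficient resolution."""
    need = sufficient_width_px(screen_width_m, distance_m)
    return next((w for w in ladder if w >= need), ladder[-1])

# A 0.11 m wide phone screen: moving it from 30 cm to 60 cm roughly halves
# the resolution the viewer can perceive, so a lower-bitrate rendition suffices.
print(pick_rendition(0.11, 0.30), pick_rendition(0.11, 0.60))
```

In a real client this choice would feed into the existing HLS/DASH rate selection logic alongside the bandwidth estimate.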

  5. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding behavior of a single target and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms to analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting number of people in the scene 2) tracking individuals in a crowd and 3) understanding behavior of a single target or multiple targets in the scene.

  6. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  7. CERN Video News

    CERN Multimedia

    2003-01-01

    From Monday you can see on the web the new edition of CERN's Video News. Thanks to a collaboration between the audiovisual teams at CERN and Fermilab, you can see a report made by the American laboratory. The clip concerns the LHC magnets that are being constructed at Fermilab. Also in the programme: the spectacular rotation of one of the ATLAS coils, the arrival at CERN of the first American magnet made at Brookhaven, the story of the discovery 20 years ago of the W and Z bosons at CERN. http://www.cern.ch/video or Bulletin web page.

  8. Video mining using combinations of unsupervised and supervised learning techniques

    Science.gov (United States)

    Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou

    2003-12-01

We discuss the meaning and significance of the video mining problem, and present our work on some aspects of video mining. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to be implemented on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns that are useful to the end-user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles, etc.
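The combination the authors describe, unsupervised discovery of unusual segments confirmed by a supervised classifier, can be caricatured in a few lines. The energy feature, the z-score outlier rule, and the threshold "classifier" below are toy stand-ins for real audio features and a trained class such as cheering.

```python
import numpy as np

def unusual_segments(energy, z_thresh=3.0):
    """Unsupervised stage: flag frames whose audio energy is an outlier
    relative to the programme as a whole (a stand-in for event discovery)."""
    z = (energy - energy.mean()) / energy.std()
    return np.where(z > z_thresh)[0]

def classify_frames(energy, cheer_level=5.0):
    """Supervised stage stand-in: a pre-trained classifier would label each
    frame; here a fixed threshold plays the role of the 'cheering' class."""
    return energy > cheer_level

rng = np.random.default_rng(1)
energy = rng.normal(1.0, 0.2, 1000)   # ordinary programme audio
energy[500:505] = 8.0                 # a burst: crowd cheering at a highlight
candidates = unusual_segments(energy)
is_cheer = classify_frames(energy)
highlights = [int(i) for i in candidates if is_cheer[i]]
print(highlights)
```

The unsupervised stage keeps the pipeline content-adaptive; the supervised stage filters its candidates down to events the end-user actually wants, which is the division of labour the abstract describes.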

  9. Video game characteristics, happiness and flow as predictors of addiction among video game players: A pilot study.

    Science.gov (United States)

    Hull, Damien C; Williams, Glenn A; Griffiths, Mark D

    2013-09-01

    Video games provide opportunities for positive psychological experiences such as flow-like phenomena during play and general happiness that could be associated with gaming achievements. However, research has shown that specific features of game play may be associated with problematic behaviour associated with addiction-like experiences. The study was aimed at analysing whether certain structural characteristics of video games, flow, and global happiness could be predictive of video game addiction. A total of 110 video game players were surveyed about a game they had recently played by using a 24-item checklist of structural characteristics, an adapted Flow State Scale, the Oxford Happiness Questionnaire, and the Game Addiction Scale. The study revealed decreases in general happiness had the strongest role in predicting increases in gaming addiction. One of the nine factors of the flow experience was a significant predictor of gaming addiction - perceptions of time being altered during play. The structural characteristic that significantly predicted addiction was its social element with increased sociability being associated with higher levels of addictive-like experiences. Overall, the structural characteristics of video games, elements of the flow experience, and general happiness accounted for 49.2% of the total variance in Game Addiction Scale levels. Implications for interventions are discussed, particularly with regard to making players more aware of time passing and in capitalising on benefits of social features of video game play to guard against addictive-like tendencies among video game players.

  10. Video special effects editing in MPEG-2 compressed video

    OpenAIRE

    Fernando, WAC; Canagarajah, CN; Bull, David

    2000-01-01

    With the increase of digital technology in video production, several types of complex video special effects editing have begun to appear in video clips. In this paper we consider fade-out and fade-in special effects editing in MPEG-2 compressed video without full frame decompression and motion estimation. We estimated the DCT coefficients and use these coefficients together with the existing motion vectors to produce these special effects editing in compressed domain. Results show that both o...
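The compressed-domain trick rests on the linearity of the DCT: scaling every pixel by a fade factor alpha is equivalent to scaling the block's DCT coefficients by alpha, so no full decompression is needed. A minimal NumPy demonstration on a synthetic 8x8 block follows; the paper's actual method additionally handles motion-compensated blocks via the existing motion vectors.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used for 8x8 blocks in MPEG-2."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block, C=dct_matrix()):
    return C @ block @ C.T

def idct2(coeffs, C=dct_matrix()):
    return C.T @ coeffs @ C

# Because the DCT is linear, a fade (scaling every pixel by alpha) can be
# applied by scaling the DCT coefficients directly, without decompressing.
rng = np.random.default_rng(2)
block = rng.uniform(0, 255, (8, 8))
alpha = 0.5                                # halfway through a fade-out
faded_pixels = idct2(alpha * dct2(block))  # fade applied in the compressed domain
print(np.allclose(faded_pixels, alpha * block))
```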

  11. GIFT-Grab: Real-time C++ and Python multi-channel video capture, processing and encoding API

    Directory of Open Access Journals (Sweden)

    Dzhoshkun Ismail Shakir

    2017-10-01

Full Text Available GIFT-Grab is an open-source API for acquiring, processing and encoding video streams in real time. GIFT-Grab supports video acquisition using various frame-grabber hardware as well as from standard-compliant network streams and video files. The current GIFT-Grab release allows for multi-channel video acquisition and encoding at the maximum frame rate of supported hardware – 60 frames per second (fps). GIFT-Grab builds on well-established, highly configurable multimedia libraries including FFmpeg and OpenCV. GIFT-Grab exposes a simplified high-level API, aimed at facilitating integration into client applications with minimal coding effort. The core implementation of GIFT-Grab is in C++11. GIFT-Grab also features a Python API compatible with the widely used scientific computing packages NumPy and SciPy. GIFT-Grab was developed for capturing multiple simultaneous intra-operative video streams from medical imaging devices. Yet due to the ubiquity of video processing in research, GIFT-Grab can be used in many other areas. GIFT-Grab is hosted and managed on the software repository of the Centre for Medical Image Computing (CMIC) at University College London, and is also mirrored on GitHub. In addition it is available for installation from the Python Package Index (PyPI) via the pip installation tool. Funding statement: This work was supported through an Innovative Engineering for Health award by the Wellcome Trust [WT101957], the Engineering and Physical Sciences Research Council (EPSRC) [NS/A000027/1] and a National Institute for Health Research Biomedical Research Centre UCLH/UCL High Impact Initiative. Sébastien Ourselin receives funding from the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278) and the MRC (MR/J01107X/1). Luis C. García-Peraza-Herrera is supported by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1).

  12. A video annotation methodology for interactive video sequence generation

    NARCIS (Netherlands)

    C.A. Lindley; R.A. Earnshaw; J.A. Vince

    2001-01-01

The FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) has developed an experimental environment for dynamic virtual video sequence synthesis from databases of video data. A major issue for the development of dynamic interactive video applications

  13. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  14. Analisis Kualitas Layanan Video Live Streaming pada Jaringan Lokal Universitas Telkom

    Directory of Open Access Journals (Sweden)

    Anggelina I Diwi

    2014-09-01

Full Text Available Streaming is a technology that allows a file to be used immediately, without waiting for the download to complete, and to play continuously without interruption. To deploy video streaming on a network, the available bandwidth must first be calculated to support the data transmission. Bandwidth is an important parameter for streaming on a network: the greater the available bandwidth, the better the quality of the displayed video. This study aims to determine the bandwidth requirements of a live video streaming service. The method used in this research was direct measurement of network performance in the field, on the LAN of Universitas Telkom. The streaming media server-client implementation in this study used different video files, distinguished by the number of frames sent per second (fps). The streaming scenarios were run against varying background traffic to observe its effect on the network's QoS parameters. Quality of Service (QoS) performance in this live video streaming implementation was tested with the network analyzer software Wireshark. The results show that video with a frame rate greater than 15 fps also produces correspondingly high jitter and throughput.
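The two QoS quantities measured in the study, jitter and throughput, can be estimated directly from packet-capture data. A minimal sketch with hypothetical arrival times follows; the jitter measure here is a simplified deviation-from-nominal average, not Wireshark's exact RTP jitter estimator.

```python
def throughput_bps(packets, duration_s):
    """Mean throughput over the capture, from (arrival_time_s, size_bytes) pairs."""
    return 8 * sum(size for _, size in packets) / duration_s

def mean_jitter_ms(arrivals, interval_ms):
    """Mean absolute deviation of inter-arrival times from the nominal
    frame interval (a simplified stand-in for an RTP jitter estimator)."""
    gaps = [(b - a) * 1000 for a, b in zip(arrivals, arrivals[1:])]
    return sum(abs(g - interval_ms) for g in gaps) / len(gaps)

# A 15 fps stream has a nominal 66.7 ms between frames; one packet is delayed.
arrivals = [0.000, 0.067, 0.134, 0.230, 0.268]
packets = [(t, 1400) for t in arrivals]
print(round(mean_jitter_ms(arrivals, 1000 / 15), 1),
      round(throughput_bps(packets, 0.268)))
```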

  15. Analysis of simulated angiographic procedures: part 1--capture and presentation of audio and video recordings.

    Science.gov (United States)

    Duncan, James R; Glaiberman, Craig B

    2006-12-01

This study assessed different methods of recording angiographic simulations and determined how such recordings might be used for training and research. Two commercially available high-fidelity angiography simulations, the Mentice Vascular Interventional Simulation Trainer and the Simbionix AngioMentor, were used for data collection. Video and audio records of simulated procedures were created by different methods, including software-based screen capture, video splitters and converters, and external cameras. Recording parameters were varied, and the recordings were transferred to computer workstations for postprocessing and presentation. The information displayed on the simulators' computer screens could be captured by each method. Although screen-capture software provided the highest resolution, workflow considerations favored a hardware-based solution that duplicated the video signal and recorded the data stream(s) at lower resolutions. Additional video and audio recording devices were used to monitor the angiographer's actions during the simulated procedures. The multiple audio and video files were synchronized and composited with personal computers equipped with commercially available video editing software. Depending on the needs of the intended audience, the resulting files could be distributed and displayed at full or reduced resolutions. The capture, editing, presentation, and distribution of synchronized multichannel audio and video recordings holds great promise for angiography training and simulation research. To achieve this potential, technical challenges will need to be met, and content will need to be tailored to suit the needs of trainees and researchers.

  16. 78 FR 73214 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Science.gov (United States)

    2013-12-05

    ..., video display, signs or billboards, motion pictures, or telephone directories (other than routine... organization's material relating to specific types or classes of securities or services, with the Department at... a television or video retail communication pursuant to a filing requirement, then the member...

  17. 78 FR 73223 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Science.gov (United States)

    2013-12-05

    ... periodical, radio, television, telephone or audio recording, video display, signs or billboards, motion... to specific types or classes of securities or services, with the Department at least 10 business days... board'' of a television or video retail communication pursuant to a filing requirement, then the member...

  18. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... Grants and Funding Extramural Research Division of Extramural Science Programs Division of Extramural Activities Extramural Contacts NEI ... Amaurosis Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded ...

  19. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five ... was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis ...

  20. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Our Staff Rheumatology Specialty Centers You are here: Home / Patient Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video ... to take a more active role in your care. The information in these videos should not take ...

  1. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... will allow you to take a more active role in your care. The information in these videos ... Stategies to Increase your Level of Physical Activity Role of Body Weight in Osteoarthritis Educational Videos for ...

  2. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... here. Will You Support the Education of Arthritis Patients? Each year, over 1 million people visit this ... of Body Weight in Osteoarthritis Educational Videos for Patients Rheumatoid Arthritis Educational Video Series Psoriatic Arthritis 101 ...

  3. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  4. Astronomy Video Contest

    Science.gov (United States)

    McFarland, John

    2008-05-01

During Galileo's lifetime his staunchest supporter was Johannes Kepler, Imperial Mathematician to the Holy Roman Emperor. Johannes Kepler will be in St. Louis to personally offer a tribute to Galileo. Set Galileo's astronomy discoveries to music and you get the newest song by the well-known a cappella group THE CHROMATICS. The song, entitled "Shoulders of Giants", was written specifically for IYA-2009 and will be debuted at this conference. The song will also be used as a base to create a music video by synchronizing a person's own images to the song's lyrics and tempo. Thousands of people already do this for fun and post their videos on YouTube and other sites. The ASTRONOMY VIDEO CONTEST will be launched as a vehicle to excite, enthuse and educate people about astronomy and science. It will be an annual event administered by the Johannes Kepler Project and will continue to foster the goals of IYA-2009 for years to come. During this presentation the basic categories, rules, and prizes for the Astronomy Video Contest will be covered, and finally the new song "Shoulders of Giants" by THE CHROMATICS will be unveiled.

  5. Provocative Video Scenarios

    DEFF Research Database (Denmark)

    Caglio, Agnese

    This paper presents the use of ”provocative videos”, as a tool to support and deepen findings from ethnographic investigation on the theme of remote videocommunication. The videos acted as a resource to also investigate potential for novel technologies supporting continuous connection between...

  6. Video Content Foraging

    NARCIS (Netherlands)

    van Houten, Ynze; Schuurman, Jan Gerrit; Verhagen, Pleunes Willem; Enser, Peter; Kompatsiaris, Yiannis; O’Connor, Noel E.; Smeaton, Alan F.; Smeulders, Arnold W.M.

    2004-01-01

    With information systems, the real design problem is not increased access to information, but greater efficiency in finding useful information. In our approach to video content browsing, we try to match the browsing environment with human information processing structures by applying ideas from

  7. Internet video search

    NARCIS (Netherlands)

    Snoek, C.G.M.; Smeulders, A.W.M.

    2011-01-01

    In this tutorial, we focus on the challenges in internet video search, present methods how to achieve state-of-the-art performance while maintaining efficient execution, and indicate how to obtain improvements in the near future. Moreover, we give an overview of the latest developments and future

  8. Videos, Podcasts and Livechats

    Medline Plus


  9. Scalable Video Coding

    NARCIS (Netherlands)

    Choupani, R.

    2017-01-01

    With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when

  10. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  11. Video narrativer i sygeplejerskeuddannelsen

    DEFF Research Database (Denmark)

    Jensen, Inger

    2009-01-01

The article offers some suggestions on how video narratives can be used in nursing education as triggers that open up discussion and the development of meaningful attitudes towards fellow human beings. It also examines how teachers, in their didactic considerations, can draw on elements from theory about...

  12. Streaming-video produktion

    DEFF Research Database (Denmark)

    Grønkjær, Poul

    2004-01-01

In connection with the research project Virtual Learning Forms and Learning Environments, the E-learning Lab at Aalborg University has carried out a series of practical experiments with streaming-video productions. The purpose of this article is to pass on these experiences. The article describes the entire production process, from idea to finished product, different types of presentation, dramaturgical considerations, and a concept sketch. Streaming-video technology is now so well developed, with such a satisfactory audiovisual expression, that we can begin to focus on which content is well suited to being made available independently of time and place. The article concludes with a number of references, including an overview of the streaming-video productions on which it is based.

  13. Characteristics of Instructional Videos

    Science.gov (United States)

    Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih

    2018-01-01

    Nowadays, video plays a significant role in education in terms of its integration into traditional classes, the principal delivery system of information in classes particularly in online courses as well as serving as a foundation of many blended classes. Hence, education is adopting a modern approach of instruction with the target of moving away…

  14. Videos, Podcasts and Livechats

    Medline Plus


  15. Mobiele video voor bedrijfscommunicatie

    NARCIS (Netherlands)

    Niamut, O.A.; Weerdt, C.A. van der; Havekes, A.

    2009-01-01

The Penta Mobilé project ran from June to November 2009 and aimed to map out the possibilities of mobile video for business communication applications. The research was carried out together with five ('Penta') parties: Business Tales, Condor Digital, European Communication Projects

  16. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Surgery What is acoustic neuroma Diagnosing Symptoms Side effects ... Groups Is a support group for me? Find a Group Upcoming Events Video Library Photo Gallery One-on-One Support ANetwork Peer Support Program Community Connections Overview Find a Meeting ...

  17. FILE WINISIS

    Directory of Open Access Journals (Sweden)

    Apallidya Sitepu

    2012-07-01

Full Text Available Winisis (CDS/ISIS for Windows) provides facilities that make creating a database much easier than in the DOS version. Nevertheless, filling in the field definition table still requires care. The easiest way to create a database is to copy another party's database and place it on your own computer. The need to copy files from another database usually arises when the file would be too complicated to create and type by hand, for example a long print format or a field selection table. A duplicate database is generally needed to store backup copies of database records, to provide a computer to serve as a public catalogue, or to produce publications. This article is intended to help librarians, documentalists and archivists copy Winisis files or databases. To copy correctly, the user must know the directories or folders and the files used by Winisis. The article also discusses the procedure for installing Winisis on a network (LAN).

  18. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM

  19. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with a different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
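The layer-to-stream mapping that makes SCTP multi-streaming attractive for SVC can be sketched as a pure function. The (D, Q, T) triple is the scalability identifier carried in SVC NAL unit headers; the numbering scheme below is an illustrative choice, not the paper's.

```python
def sctp_stream_for_layer(dependency_id, quality_id, temporal_id,
                          max_quality=4, max_temporal=4):
    """Assign each SVC layer (D, Q, T) its own SCTP stream, so that a loss or
    stall on an enhancement layer never blocks delivery of the base layer,
    which always maps to stream 0."""
    return (dependency_id * max_quality + quality_id) * max_temporal + temporal_id

# The base layer gets stream 0; every distinct layer gets a distinct stream.
streams = {(d, q, t): sctp_stream_for_layer(d, q, t)
           for d in range(4) for q in range(4) for t in range(4)}
print(streams[(0, 0, 0)], len(set(streams.values())))
```

Because SCTP preserves ordering only within a stream, this mapping lets less important enhancement-layer packets be lost or reordered without delaying base-layer decoding, which is the robustness property the study evaluates.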

  20. Gaze location prediction for broadcast football video.

    Science.gov (United States)

    Cheng, Qin; Agrafiotis, Dimitris; Achim, Alin M; Bull, David R

    2013-12-01

    The sensitivity of the human visual system decreases dramatically with increasing distance from the fixation location in a video frame. Accurate prediction of a viewer's gaze location has the potential to improve bit allocation, rate control, error resilience, and quality evaluation in video compression. Commercially, delivery of football video content is of great interest because of the very high number of consumers. In this paper, we propose a gaze location prediction system for high definition broadcast football video. The proposed system uses knowledge about the context, extracted through analysis of a gaze tracking study that we performed, to build a suitable prior map. We further classify the complex context into different categories through shot classification thus allowing our model to prelearn the task pertinence of each object category and build the prior map automatically. We thus avoid the limitation of assigning the viewers a specific task, allowing our gaze prediction system to work under free-viewing conditions. Bayesian integration of bottom-up features and top-down priors is finally applied to predict the gaze locations. Results show that the prediction performance of the proposed model is better than that of other top-down models that we adapted to this context.
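The final fusion step, Bayesian integration of bottom-up saliency with a top-down prior, reduces per pixel to multiplying the two maps and renormalising. A toy NumPy sketch follows; the 4x4 map sizes and the prior values are invented for illustration.

```python
import numpy as np

def combine(saliency, prior):
    """Pointwise Bayesian fusion: posterior gaze density is proportional to
    the bottom-up saliency map times the top-down prior map."""
    post = saliency * prior
    return post / post.sum()

# A context prior (e.g. learned from a gaze study for one shot class) pulls
# the prediction towards the centre region even when saliency is flat.
saliency = np.ones((4, 4)) / 16            # uninformative bottom-up evidence
prior = np.full((4, 4), 0.01)
prior[1:3, 1:3] = 0.2275                   # mass concentrated on the centre
posterior = combine(saliency, prior)
pred = np.unravel_index(posterior.argmax(), posterior.shape)
print(pred)
```

With informative saliency (e.g. around the ball and players), the same product lets strong bottom-up evidence override a weak prior, which is the behaviour the model relies on under free viewing.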

  1. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  2. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia Animations Blindness Cataract Convergence Insufficiency Diabetic Eye Disease Dilated Eye Exam Dry Eye For Kids Glaucoma ...

  3. SPECIAL REPORT: Creating Conference Video

    Directory of Open Access Journals (Sweden)

    Noel F. Peden

    2008-12-01

Full Text Available Capturing video at a conference is easy. Doing it so that the product is useful is another matter. Many subtle problems come into play before the video and audio obtained can be used to create a final product. This article discusses what the author learned over two years of shooting and editing video for the Code4Lib conference.

  4. CERN Video News on line

    CERN Multimedia

    2003-01-01

    The latest CERN video news is on line. In this issue : an interview with the Director General and reports on the new home for the DELPHI barrel and the CERN firemen's spectacular training programme. There's also a vintage video news clip from 1954. See: www.cern.ch/video or Bulletin web page

  5. We All Stream for Video

    Science.gov (United States)

    Technology & Learning, 2008

    2008-01-01

    More than ever, teachers are using digital video to enhance their lessons. In fact, the number of schools using video streaming increased from 30 percent to 45 percent between 2004 and 2006, according to Market Data Retrieval. Why the popularity? For starters, video-streaming products are easy to use. They allow teachers to punctuate lessons with…

  6. Social Properties of Mobile Video

    Science.gov (United States)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations in regards to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors that help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for adoption and design of mobile video technologies and services are discussed as well.

  7. Video Analysis of Rolling Cylinders

    Science.gov (United States)

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s[superscript -1], and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…

  8. Video Games and Digital Literacies

    Science.gov (United States)

    Steinkuehler, Constance

    2010-01-01

    Today's youth are situated in a complex information ecology that includes video games and print texts. At the basic level, video game play itself is a form of digital literacy practice. If we widen our focus from the "individual player + technology" to the online communities that play them, we find that video games also lie at the nexus of a…

  9. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five videos ... Your Arthritis Managing Chronic Pain and Depression in Arthritis Nutrition & Rheumatoid Arthritis Arthritis and Health-related Quality of Life ...

  10. 2011 Joint Service Power Expo. Volume 2. Video Files

    Science.gov (United States)

    2011-05-05

    BA-XX90 Batteries 12102 - “Next Generation 5X90 Battery”, Mr. Carlos Negrete, New Technologies Engineering Manager, SAFT ... “Pulse Energy Management System – Real Time Optimization of Remote Deployment Energy Systems”, Mr. Bruce Cullen, Manager - Remote Communities

  11. Enhanced video display and navigation for networked streaming video and networked video playlists

    Science.gov (United States)

    Deshpande, Sachin

    2006-01-01

    In this paper we present an automatic enhanced video display and navigation capability for networked streaming video and networked video playlists. Our proposed method uses Synchronized Multimedia Integration Language (SMIL) as the presentation language and Real Time Streaming Protocol (RTSP) as the network remote-control protocol to automatically generate an "enhanced video strip" display for easy navigation. We propose and describe two approaches - a smart client approach and a smart server approach. We also describe a prototype system implementation of our proposed approach.
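    The record above describes generating SMIL to lay out an "enhanced video strip". As a rough illustration of the idea (the paper's actual markup and attribute set are not given here, so the element layout below is an assumption), a playlist of clip URLs can be turned into a minimal SMIL document:

```python
import xml.etree.ElementTree as ET

def build_video_strip_smil(clip_urls):
    """Build a minimal SMIL document that plays one <video> clip after
    another. The element layout is an illustrative assumption, not the
    exact markup produced by the system described in the paper."""
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    seq = ET.SubElement(body, "seq")  # <seq>: children play sequentially
    for url in clip_urls:
        ET.SubElement(seq, "video", {"src": url})
    return ET.tostring(smil, encoding="unicode")

doc = build_video_strip_smil(["rtsp://server/clip1", "rtsp://server/clip2"])
```

Parsing the string back with ElementTree is an easy way to confirm the generated structure is well-formed before handing it to a SMIL player.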

  12. Decay data file based on the ENSDF file

    Energy Technology Data Exchange (ETDEWEB)

    Katakura, J. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    A decay data file in the JENDL (Japanese Evaluated Nuclear Data Library) format, based on the ENSDF (Evaluated Nuclear Structure Data File) file, was produced as a tentative special-purpose file of JENDL. Problems in using the ENSDF file as the primary data source for the JENDL decay data file are presented. (author)

  13. Mobile, portable lightweight wireless video recording solutions for homeland security, defense, and law enforcement applications

    Science.gov (United States)

    Sandy, Matt; Goldburt, Tim; Carapezza, Edward M.

    2015-05-01

    It is desirable for executive officers of law enforcement agencies and other executive officers in homeland security and defense, as well as first responders, to have some basic information about the latest trend on mobile, portable lightweight wireless video recording solutions available on the market. This paper reviews and discusses a number of studies on the use and effectiveness of wireless video recording solutions. It provides insights into the features of wearable video recording devices that offer excellent applications for the category of security agencies listed in this paper. It also provides answers to key questions such as: how to determine the type of video recording solutions most suitable for the needs of your agency, the essential features to look for when selecting a device for your video needs, and the privacy issues involved with wearable video recording devices.

  14. Technical Evaluation Report 13: Online Video Conferencing Products

    Directory of Open Access Journals (Sweden)

    Pam Craven

    2002-10-01

    Full Text Available This is the first in Athabasca University’s series of evaluation reports to feature online Webcam and videoconferencing products. While Webcam software generates a simple visual presentation from a live online camera, videoconferencing products contain a wider range of interactive features serving multi-point interactions between participants. In many online situations, the addition of video images to a live presentation can add substantially to its educational effectiveness. Ten products/online services are reviewed, supporting a wide range of video-based activities.

  15. Health Topic XML File Description: MedlinePlus

    Science.gov (United States)

    ... page: https://medlineplus.gov/xmldescription.html Health Topic XML File Description: MedlinePlus To use the sharing features on this page, please enable JavaScript. Description of XML Tags Definitions of every possible tag in the ...

  16. The reliability of national videos related to the kidney stones on YouTube.

    Science.gov (United States)

    Serinken, Mustafa; Eken, Cenker; Erdemir, Fikret; Eliçabuk, Hayri; Başer, Aykut

    2016-03-01

    Kidney stones are one of the most common disorders of the urinary tract. With increasing awareness, a larger proportion of patients are seeking medical knowledge from the Internet. In the present study, the features, reliability and efficacy of YouTube videos related to the treatment of kidney stones were evaluated. In December 2014, YouTube was searched using the keywords "nephrolithiasis", "renal calculi", "renal stones" and "kidney stones" for uploaded videos containing relevant information about the disease. Only videos in Turkish were included in the study. Two physician viewers watched each video and classified them as useful, partially useful or useless according to European Association of Urology (EAU) Guidelines. The source, length, number of views, number of favourable opinions, and days since upload date of all the videos were evaluated. A total of 600 videos were analysed. The mean length of the videos was 6.7±10.4 minutes (median: 3, IQR: 0.03-58). Each video was viewed an average of 2368 times (min: 11, max: 97133). Most of the videos (32.8%) were created by academicians and physicians. Nearly half (47.4%) of the videos were uploaded in 2014. The majority of the videos (62.5%) contained information about treatment. Percutaneous nephrolithotomy and ureterorenoscopy were the most common treatment modalities (32.8% and 28.0%, respectively) in these videos. A statistically significant difference was not detected between view numbers and source of videos (p=0.87). However, there was a statistically significant difference between usefulness to the viewers and source of videos: hospital-based videos were found to be more useful (p=0.000). As a result, videos prepared for the Internet by professional individuals or organizations, in a way that attracts attention and is easily comprehended by the public, could contribute to the knowledge and education of our society about stone disease, which is commonly seen in our country.

  17. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP blocks, memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level-language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.

  18. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras and DSLRs as well as smartphones, and increasingly with so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and relevant pictures out of the video stream via a software implementation, was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted out of the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  19. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    OpenAIRE

    Tao Yang; Xiwen Wang; Bowei Yao; Jing Li; Yanning Zhang; Zhannan He; Wencheng Duan

    2016-01-01

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects, based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels ...

  20. Video Inter-frame Forgery Identification Based on Optical Flow Consistency

    OpenAIRE

    Qi Wang; Zhaohong Li; Zhenzhen Zhang; Qinglong Ma

    2014-01-01

    Identifying inter-frame forgery is a hot topic in video forensics. In this paper, we propose a method based on the assumption that optical flows are consistent in an original video, while in forgeries this consistency is destroyed. We first extract optical flow from the frames of videos and then calculate optical flow consistency, after normalization and quantization, as a distinguishing feature to identify inter-frame forgeries. We train a Support Vector Machine to classify original vi...
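    The abstract's exact normalization and quantization steps are not reproduced here, and the flow extraction and SVM stages are omitted, but the core consistency idea can be sketched: given a per-frame optical-flow magnitude (assumed precomputed), an inserted or deleted frame shows up as an abrupt deviation from its neighbours.

```python
def flow_consistency(flow_mags, window=1):
    """For each interior frame, compare its optical-flow magnitude with the
    mean of its neighbours; a large relative jump suggests inserted or
    deleted frames. A simplified stand-in for the paper's feature."""
    scores = []
    for i in range(window, len(flow_mags) - window):
        neighbours = flow_mags[i - window:i] + flow_mags[i + 1:i + 1 + window]
        local_mean = sum(neighbours) / len(neighbours)
        scores.append(abs(flow_mags[i] - local_mean) / (local_mean + 1e-9))
    return scores

def flag_forgery(flow_mags, threshold=1.0):
    """Indices of frames whose consistency score exceeds the threshold."""
    return [i + 1 for i, s in enumerate(flow_consistency(flow_mags)) if s > threshold]

# A smooth magnitude sequence with one abrupt jump at index 4:
mags = [1.0, 1.1, 1.0, 1.05, 5.0, 1.0, 0.95]
```

The paper instead feeds such consistency features to a trained SVM rather than using a fixed threshold; the threshold here only makes the sketch self-contained.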

  1. Geotail Video News Release

    Science.gov (United States)

    1992-01-01

    The Geotail mission, part of the International Solar Terrestrial Physics (ISTP) program, measures global energy flow and transformation in the magnetotail to increase understanding of fundamental magnetospheric processes. The satellite was launched on July 24, 1992 onboard a Delta II rocket. This video shows with animation the solar wind, and its effect on the Earth. The narrator explains that the Geotail spacecraft was designed and built by the Institute of Space and Astronautical Science (ISAS), the Japanese Space Agency. The mission objectives are reviewed by one of the scientists in a live segment. The video also shows an animation of the orbit, while the narrator explains the orbit and the reason for the small launch window.

  2. VOICE AND VIDEO CHARACTERIZATION (CARACTERIZACION VOZ Y VIDEO)

    Directory of Open Access Journals (Sweden)

    Octavio José Salcedo Parra

    2011-11-01

    Full Text Available The motivation for characterizing voice and video traffic lies in the need of service-provider companies to maintain information transport networks with capacities that match user requirements, and to determine in a timely manner how the technical elements that make up the networks affect their performance, bearing in mind that each type of service is affected to a greater or lesser degree by such elements, among them jitter, delay, and packet loss. This paper presents several cases of traffic characterization, for both voice and video, in which a variety of techniques are used for different types of service.

  3. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves the transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information required by a physician. As compared to standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow us to perform demanding medical image processing on a smartphone. In this paper we have developed an application (APP) for a smartphone which can automatically detect the valid frames (with clear organ visibility) in an ultrasound video, ignore the invalid frames (with no organ visibility), and produce a video of compressed size. This is done by extracting GIST features from the Region of Interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
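    The GIST extraction and the SVM training pipeline from the paper are not reproduced here. As a minimal illustration of why the quadratic (degree-2 polynomial) kernel they chose matters, the sketch below uses a kernel perceptron, a simpler relative of the SVM, with the same kernel: it learns an XOR-style labelling that no linear rule can fit. The data and all parameters are illustrative.

```python
def quad_kernel(x, y):
    """Quadratic (degree-2 polynomial) kernel, as used by the paper's SVM."""
    return (1 + sum(a * b for a, b in zip(x, y))) ** 2

def train_kernel_perceptron(X, y, epochs=20):
    """Kernel perceptron: increment a point's weight whenever it is
    misclassified by the current kernel expansion."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            pred = sum(a * yj * quad_kernel(xj, xi)
                       for a, yj, xj in zip(alpha, y, X))
            if y[i] * pred <= 0:
                alpha[i] += 1
    return alpha

def predict(alpha, X, y, x):
    s = sum(a * yj * quad_kernel(xj, x) for a, yj, xj in zip(alpha, y, X))
    return 1 if s > 0 else -1

# XOR-like data: not linearly separable, but separable with a quadratic kernel.
X = [(0, 0), (1, 1), (0, 1), (1, 0)]
y = [1, 1, -1, -1]
alpha = train_kernel_perceptron(X, y)
```

A full SVM adds margin maximization on top of the same kernel trick; the kernel function itself is the part shared with the paper's classifier.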

  4. Star Wars in psychotherapy: video games in the office.

    Science.gov (United States)

    Ceranoglu, Tolga Atilla

    2010-01-01

    Video games are used in medical practice during psycho-education in chronic disease management, physical therapy, rehabilitation following traumatic brain injury, and as an adjunct in pain management during medical procedures or cancer chemotherapy. In psychiatric practice, video games aid in social skills training of children with developmental delays and in cognitive behavioral therapy (CBT). This most popular children's toy may prove a useful tool in dynamic psychotherapy of youth. The author provides a framework for using video games in psychotherapy by considering the characteristics of video games and describes the ways their use has facilitated various stages of therapeutic process. Just as other play techniques build a relationship and encourage sharing of emotional themes, sitting together in front of a console and screen facilitates a relationship and allows a safe path for the patient's conflict to emerge. During video game play, the therapist may observe thought processes, impulsivity, temperament, decision-making, and sharing, among other aspects of a child's clinical presentation. Several features inherent to video games require a thoughtful approach as resistance and transference in therapy may be elaborated differently in comparison to more traditional toys. Familiarity with the video game content and its dynamics benefits child mental health clinicians in their efforts to help children and their families.

  5. Video summarization using line segments, angles and conic parts.

    Science.gov (United States)

    Salehin, Md Musfequs; Paul, Manoranjan; Kabir, Muhammad Ashad

    2017-01-01

    Video summarization is a process to extract objects and their activities from a video and represent them in a condensed form. Existing methods for video summarization fail to detect moving (dynamic) objects in the low-color-contrast areas of a video frame because the pixel intensities of objects and non-objects are almost identical there. However, the edges of objects remain prominent in low-contrast regions. Moreover, for representing objects, geometric primitives (such as lines and arcs) are more distinguishable, higher-level shape descriptors than edges. In this paper, a novel method is proposed for video summarization using geometric primitives such as conic parts, line segments and angles. Using these features, objects are extracted from each video frame. A cost function is applied to measure the dissimilarity of the locations of geometric primitives, to detect the movement of objects between consecutive frames. The total distance of object movement is calculated and each video frame is assigned a probability score. Finally, a set of key frames is selected based on the probability scores, according to a user-provided or system-default skimming ratio. The proposed approach is evaluated using three benchmark datasets - BL-7F, Office, and Lobby. The experimental results show that our approach outperforms the state-of-the-art method in terms of accuracy.
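    The final selection step, picking key frames from per-frame scores under a skimming ratio, can be sketched independently of the geometric-primitive features. The scoring itself is assumed done; this only shows the budgeted selection, as a simplification of the paper's procedure:

```python
def select_key_frames(scores, skim_ratio=0.2):
    """Pick the highest-scoring frames as key frames.

    `scores` holds one per-frame probability score; `skim_ratio` is the
    fraction of frames to keep, mirroring the user-provided or default
    skimming ratio in the paper. Selection details are a simplification.
    """
    k = max(1, round(len(scores) * skim_ratio))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])  # chronological order for playback

scores = [0.01, 0.40, 0.05, 0.90, 0.10, 0.70, 0.02, 0.03, 0.60, 0.04]
keys = select_key_frames(scores, skim_ratio=0.3)  # -> [3, 5, 8]
```

Re-sorting the chosen indices keeps the summary in playback order, which matters for the condensed video the method produces.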

  6. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  7. The video lecture

    OpenAIRE

    Crook, Charles; Schofield, Louise

    2017-01-01

    Vocabulary for describing the structures, roles, and relationships characteristic of traditional, or ‘offline’, education has been seamlessly applied to the designs of ‘online’ education. One example is the lecture, delivered as a video recording. The purpose of this research is to consider the concept of ‘lecture’ as realised in both offline and online contexts. We explore how media differences entail different student experiences and how these differences relate to design decisions associat...

  8. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  9. Video time encoding machines.

    Science.gov (United States)

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
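    The integrate-and-fire mechanism at the core of the abstract's encoder can be sketched with a toy discrete-time version: the signal is integrated, and each threshold crossing emits a spike time followed by a reset. The bias, feedback, and filter bank of the full architecture are omitted, and all parameters are illustrative.

```python
def iaf_encode(signal, dt, threshold):
    """Integrate-and-fire time encoding: accumulate the sampled signal;
    each time the integral reaches the threshold, record a spike time and
    subtract the threshold (reset). A simplified sketch of the spiking
    mechanism described in the paper, not its full architecture."""
    spikes, acc = [], 0.0
    for n, s in enumerate(signal):
        acc += s * dt
        if acc >= threshold:
            spikes.append(n * dt)
            acc -= threshold
    return spikes

# A constant signal of amplitude 1 sampled at dt = 0.1 with threshold 0.5
# fires a spike roughly every 0.5 s:
spikes = iaf_encode([1.0] * 50, dt=0.1, threshold=0.5)
```

The inter-spike intervals carry the signal's amplitude information: a stronger input reaches the threshold sooner, so decoding amounts to inverting that timing relationship, which is what the paper's recovery algorithm does under Nyquist-type conditions.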

  10. Robotic video photogrammetry system

    Science.gov (United States)

    Gustafson, Peter C.

    1997-07-01

    For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film- based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis involved was also brought `on-board' to the RVPS, allowing shop floor acquisition and delivery of results. The RVPS has also been applied in other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.

  11. Utilizing Video Games

    Science.gov (United States)

    Blaize, L.

    Almost from its birth, the computer and video gaming industry has done an admirable job of communicating the vision and attempting to convey the experience of traveling through space to millions of gamers from all cultures and demographics. This paper will propose several approaches the 100 Year Starship Study can take to use the power of interactive media to stir interest in the Starship and related projects among a global population. It will examine successful gaming franchises from the past that are relevant to the mission and consider ways in which the Starship Study could cooperate with game development studios to bring the Starship vision to those franchises and thereby to the public. The paper will examine ways in which video games can be used to crowd-source research aspects for the Study, and how video games are already considering many of the same topics that will be examined by this Study. Finally, the paper will propose some mechanisms by which the 100 Year Starship Study can establish very close ties with the gaming industry and foster cooperation in pursuit of the Study's goals.

  12. Scorebox extraction from mobile sports videos using Support Vector Machines

    Science.gov (United States)

    Kim, Wonjun; Park, Jimin; Kim, Changick

    2008-08-01

    The scorebox plays an important role in understanding the contents of sports videos. However, the tiny scorebox may give small-display viewers an uncomfortable experience in grasping the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since there are various types of scoreboxes inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the information gain is computed and the top three ranked attributes are selected as a three-dimensional feature vector for Support Vector Machines (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on various videos of sports games, and experimental results show the efficiency and robustness of our proposed method.
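    The attribute-selection step, ranking discrete attributes by information gain and keeping the top three, is standard and can be sketched as follows. The attribute names and toy data are invented for illustration; the paper's actual attributes (intensity, edge statistics, etc.) are continuous and would need discretization first.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(labels) minus the weighted entropy after splitting on one
    discrete attribute's values."""
    n = len(labels)
    split = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        split += len(subset) / n * entropy(subset)
    return entropy(labels) - split

def top_k_attributes(features, labels, k=3):
    """Rank attributes by information gain, keep the top k - mirroring the
    paper's selection of a three-dimensional feature vector."""
    gains = {name: information_gain(vals, labels) for name, vals in features.items()}
    return sorted(gains, key=gains.get, reverse=True)[:k]

labels = [1, 1, 0, 0]          # hypothetical scorebox / non-scorebox labels
features = {
    "a": [1, 1, 0, 0],         # perfectly predictive -> gain 1.0
    "b": [1, 0, 1, 0],         # uninformative        -> gain 0.0
    "c": [1, 1, 1, 0],         # partially predictive
}
```

The selected attributes then feed the SVM stage; any kernel classifier could consume the resulting three-dimensional vectors.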

  13. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, mainly by exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented, based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% in bits compared to JPEG-LS and H.264 Intra-frame lossless coding, and does so as a scalable-to-lossless coding.
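    The key enabler for "scalable-to-lossless" is a transform that maps integers to integers with an exact inverse. The paper uses a reversible integer DCT; as a much simpler member of the same family, the lifting-based integer Haar (S-) transform below round-trips integer pairs bit for bit, illustrating the reversibility property (not the paper's actual transform):

```python
def s_transform(pairs):
    """Forward integer Haar (S-) transform: each integer pair becomes a
    floor-average and an exact difference. Lossless by construction."""
    out = []
    for a, b in pairs:
        s = (a + b) >> 1   # floor of the average -- stays an integer
        d = a - b          # exact difference
        out.append((s, d))
    return out

def inverse_s_transform(coeffs):
    """Exact inverse: floor((a+b)/2) = b + floor((a-b)/2), so b is
    recoverable from (s, d) with integer arithmetic alone."""
    out = []
    for s, d in coeffs:
        b = s - (d >> 1)   # >> rounds toward -inf, matching the forward pass
        a = b + d
        out.append((a, b))
    return out

data = [(7, 3), (2, 9), (-4, 5)]
assert inverse_s_transform(s_transform(data)) == data  # perfect reconstruction
```

A lossy decoder can use only the average terms, while keeping the difference terms upgrades the stream to lossless, which is the scalability idea in miniature.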

  14. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in the semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured from a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing motion features extracted from camera video with information from motion sensor platforms, such as smartphones, carried on human bodies. More specifically, a sequence of motion features extracted from camera video is compared with each of those collected from the accelerometers of smartphones. When a strong correlation is detected, identity information transmitted from the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, achieving impressive performance.
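    The matching step, comparing a video-derived motion trace against each phone's accelerometer trace and keeping the best correlation, can be sketched with plain Pearson correlation. The feature extraction, alignment, and threshold value are all simplifications of whatever the paper actually uses:

```python
def pearson(x, y):
    """Pearson correlation between two equally long motion-feature traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def identify_person(video_motion, phone_traces, min_corr=0.8):
    """Return the ID of the phone whose accelerometer trace correlates most
    strongly with the motion extracted from video, or None if no trace
    correlates strongly enough. Threshold and features are illustrative."""
    best_id, best_corr = None, min_corr
    for phone_id, trace in phone_traces.items():
        c = pearson(video_motion, trace)
        if c > best_corr:
            best_id, best_corr = phone_id, c
    return best_id

video_motion = [0.1, 0.9, 0.2, 0.8, 0.1, 0.7]
phones = {
    "alice": [0.2, 1.0, 0.1, 0.9, 0.2, 0.8],   # moves in step with the video track
    "bob":   [0.5, 0.5, 0.6, 0.4, 0.5, 0.5],   # different activity
}
```

In practice the two traces come from different sensors at different rates, so resampling and time alignment would precede the correlation.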

  15. Soft-assignment random-forest with an application to discriminative representation of human actions in videos

    NARCIS (Netherlands)

    Burghouts, G.J.

    2013-01-01

    The bag-of-features model is a distinctive and robust approach to detect human actions in videos. The discriminative power of this model relies heavily on the quantization of the video features into visual words. The quantization determines how well the visual words describe the human action. Random

  16. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audio-visual approach to distinguishing laughter from speech based on temporal features, and we show that integrating the information from audio and video channels leads

  17. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    Science.gov (United States)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; L Hansen, John H.

    2013-12-01

    The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure as well as audio/speech properties. Processing begins where the video is partitioned into small segments and several multi-modal features are extracted from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity as features. Detailed analysis on correlation between user excitability and various speech production parameters is conducted and an effective scheme is designed to estimate the excitement level of commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques indicating the effectiveness of the overall approach.
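    The excitability measure rewards segments whose features fall in regions of the joint density that are both exciting and rare. As a drastic one-dimensional simplification (the paper models the joint density of several multi-modal features), the sketch below fits a single Gaussian to one segment feature and ranks segments by how unlikely they are under it:

```python
from math import log, pi

def rank_segments_by_rarity(features):
    """Fit a Gaussian to a 1-D per-segment feature (e.g. audio energy) and
    rank segments from least likely (rarest) to most likely. A simplified
    stand-in for the paper's joint-pdf excitability measure."""
    n = len(features)
    mean = sum(features) / n
    var = sum((f - mean) ** 2 for f in features) / n or 1e-12
    # negative log-likelihood under N(mean, var): larger = rarer
    nll = [0.5 * ((f - mean) ** 2 / var + log(2 * pi * var)) for f in features]
    return sorted(range(n), key=nll.__getitem__, reverse=True)

energies = [1.0, 1.1, 0.9, 1.0, 4.5, 1.2, 1.05, 0.95]
ranking = rank_segments_by_rarity(energies)  # index 4 (the spike) ranks first
```

Taking the top-ranked segments in chronological order then yields a contiguous highlight reel, matching the compression step the abstract describes.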

  18. PCF File Format.

    Energy Technology Data Exchange (ETDEWEB)

    Thoreson, Gregory G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.
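    The abstract notes that the documented layout lets one write a parser for such binary records. The field layout below is hypothetical, invented purely to show the parsing pattern with `struct`; the real PCF layout is defined by the GADRAS documentation and is not reproduced here.

```python
import struct

HEADER = "<I2d"  # hypothetical: uint32 channel count, two float64 calibration terms

def write_spectrum(counts, cal_offset, cal_gain):
    """Pack one spectrum record (hypothetical layout, little-endian)."""
    header = struct.pack(HEADER, len(counts), cal_offset, cal_gain)
    body = struct.pack(f"<{len(counts)}f", *counts)  # float32 channel counts
    return header + body

def read_spectrum(blob):
    """Unpack a record written by write_spectrum."""
    n, cal_offset, cal_gain = struct.unpack_from(HEADER, blob)
    counts = list(struct.unpack_from(f"<{n}f", blob, struct.calcsize(HEADER)))
    return counts, cal_offset, cal_gain

blob = write_spectrum([0.0, 5.0, 2.0], cal_offset=0.0, cal_gain=3.0)
```

A real parser would follow the published field order and sizes exactly; the `<` prefix matters because it fixes byte order and disables padding, both of which a binary format specifies.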

  19. Standard interface file handbook

    Energy Technology Data Exchange (ETDEWEB)

    Shapiro, A.; Huria, H.C. (Cincinnati Univ., OH (United States))

    1992-10-01

    This handbook documents many of the standard interface file formats that have been adopted by the US Department of Energy to facilitate communication between, and portability of, various large reactor physics and radiation transport software packages. The emphasis is on those files needed for use of the VENTURE/PC diffusion-depletion code system. File structures, contents and some practical advice on use of the various files are provided.

  20. Video Analysis: Lessons from Professional Video Editing Practice

    Directory of Open Access Journals (Sweden)

    Eric Laurier

    2008-09-01

    Full Text Available In this paper we join a growing body of studies that learn from vernacular video analysts quite what video analysis as an intelligible course of action might be. Rather than pursuing epistemic questions regarding video as a number of other studies of video analysis have done, our concern here is with the crafts of producing the filmic. As such we examine how audio and video clips are indexed and brought to hand during the logging process, how a first assembly of the film is built at the editing bench and how logics of shot sequencing relate to wider concerns of plotting, genre and so on. In its conclusion we make a number of suggestions about the future directions of studying video and film editors at work. URN: urn:nbn:de:0114-fqs0803378

  1. Surveillance Video Synopsis in GIS

    Directory of Open Access Journals (Sweden)

    Yujia Xie

    2017-10-01

    Full Text Available Surveillance videos contain a considerable amount of data in which the information of interest to the user is sparsely distributed. Researchers construct video synopses that contain key information extracted from a surveillance video for efficient browsing and analysis. The geospatial–temporal information of a surveillance video plays an important role in the efficient description of video content, yet current approaches to video synopsis lack the introduction and analysis of geospatial–temporal information. Owing to the preceding problems, this paper proposes an approach called "surveillance video synopsis in GIS". Based on an integration model of video moving objects and GIS, the virtual visual field and the expression model of the moving object are constructed by spatially locating and clustering the trajectory of the moving object. The subgraphs of the moving object are reconstructed frame by frame in a virtual scene. Results show that the described approach comprehensively analyzed and created fusion expression patterns between video dynamic information and geospatial–temporal information in GIS and reduced the playback time of the video content.

  2. Video Compression Schemes Using Edge Feature on Wireless Video Sensor Networks

    National Research Council Canada - National Science Library

    Nguyen Huu, Phat; Tran-Quang, Vinh; Miyoshi, Takumi

    2012-01-01

    .... In these schemes, we divide the compression process into several small processing components, which are then distributed to multiple nodes along a path from a source node to a cluster head in a cluster...

  3. Enkripsi dan Dekripsi File dengan Algoritma Blowfish pada Perangkat Mobile Berbasis Android

    Directory of Open Access Journals (Sweden)

    Siswo Wardoyo

    2016-03-01

    Full Text Available Cryptography is one of the ways used to secure data in the form of files: a file is encrypted so that others who are not entitled to it cannot read what is private and confidential. One such method is the Blowfish algorithm, a symmetric-key cipher that uses the same key to perform encryption and decryption. The application built here can encrypt files in the form of images, videos, and documents, and can run on mobile phones with at least Android version 2.3. The software used to build the application is Eclipse. The results of this research indicate that the application is capable of performing encryption and decryption, and that an encrypted file becomes unintelligible. With a 72-bit (9-character) key, it would take about 1.49 × 10^8 years to break the cipher by brute force at a computation speed of 10^6 keys/s.
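As a quick check on the abstract's brute-force figure, the arithmetic below reproduces the order of magnitude from the stated assumptions (a 72-bit keyspace searched exhaustively at 10^6 keys per second); it is a back-of-the-envelope sketch, not a statement about Blowfish's real-world security margin.

```python
# Back-of-the-envelope check of the abstract's brute-force estimate:
# a 72-bit (9-character) key searched exhaustively at an assumed rate
# of 10^6 keys per second. All figures come from the abstract itself.

KEYSPACE = 2 ** 72                      # possible 72-bit keys
RATE = 10 ** 6                          # assumed keys tested per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = KEYSPACE / RATE / SECONDS_PER_YEAR
print(f"{years:.2e} years")  # on the order of 1.5e8 years, as claimed
```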

  4. High Definition Video Streaming Using H.264 Video Compression

    OpenAIRE

    Bechqito, Yassine

    2009-01-01

    This thesis presents high definition video streaming using H.264 codec implementation. The experiment carried out in this study was done for an offline streaming video but a model for live high definition streaming is introduced, as well. Prior to the actual experiment, this study describes digital media streaming. Also, the different technologies involved in video streaming are covered. These include streaming architecture and a brief overview on H.264 codec as well as high definition t...

  5. Look at That! Video Chat and Joint Visual Attention Development among Babies and Toddlers

    Science.gov (United States)

    McClure, Elisabeth R.; Chentsova-Dutton, Yulia E.; Holochwost, Steven J.; Parrott, W. G.; Barr, Rachel

    2018-01-01

    Although many relatives use video chat to keep in touch with toddlers, key features of adult-toddler interaction like joint visual attention (JVA) may be compromised in this context. In this study, 25 families with a child between 6 and 24 months were observed using video chat at home with geographically separated grandparents. We define two types…

  6. Using Video as a Stimulus to Reveal Elementary Teachers' Mathematical Knowledge for Teaching

    Science.gov (United States)

    Barlow, Angela T.; Gaddy, Angeline K.; Baxter, Wesley A.

    2017-01-01

    The purpose of this article is to explore the usefulness of a video-based tool for measuring teachers' mathematical knowledge for teaching. Unique to this tool is the use of a video featuring a mathematical disagreement that occurred in an elementary classroom. The authors define mathematical disagreements as instances in which students challenge…

  7. Mapping Self-Guided Learners' Searches for Video Tutorials on YouTube

    Science.gov (United States)

    Garrett, Nathan

    2016-01-01

    While YouTube has a wealth of educational videos, how self-guided learners use these resources has not been fully described. An analysis of search engine queries for help with the use of Microsoft Excel shows that few users search for specific features or functions but instead use very general terms. Because the same videos are returned in…

  8. Contributions of Music Video Exposure to Black Adolescents' Gender and Sexual Schemas

    Science.gov (United States)

    Ward, L. Monique; Hansbrough, Edwina; Walker, Eboni

    2005-01-01

    Although music videos feature prominently in the media diets of many adolescents, little is known of their impact on viewers' conceptions of femininity and masculinity. Accordingly, this study examines the impact of both regular and experimental music video exposure on adolescent viewers' conceptions about gender. Across two testing sessions, 152…

  9. Preparing Preservice Teachers for Instruction on English-Language Development with Video Lesson Modules

    Science.gov (United States)

    Liu, Ping

    2011-01-01

    This study explored the use of video lesson modules in a teaching methodology course to prepare preservice teachers for supporting the English-language development of pupils at K-8 schools. The basic material of a lesson module is a video lesson featuring instruction of an experienced classroom teacher in an English-language development setting of…

  10. Using video in teacher education

    Directory of Open Access Journals (Sweden)

    Jo Towers

    2007-06-01

    Full Text Available This paper draws on a research study of elementary- and secondary-route preservice teachers in a two-year, after-degree teacher preparation programme. The paper includes excerpts of classroom data, taken from the author’s own university classroom, demonstrating preservice teachers’ responses to carefully selected video extracts of children learning mathematics in a high-school class also taught by the author. The paper includes commentary on some of the advantages and limitations of video as a teaching tool, develops an argument for the increased use, in both preservice teacher education and inservice teacher professional development, of videotaped episodes that focus on the learners rather than on the classroom teacher, and explores the value of having the teacher whose classroom is featured on the videos present for the discussion of the episodes. The paper explores the potential offered by video material to foster the belief that teaching is a learning activity by (i) refocusing attention on the learner rather than the teacher in the analysis of classroom practices, (ii) raising awareness of the importance of reflective practice, and (iii) providing a prompt for the imaginative rehearsal of action. Résumé: This article is based on a research study of elementary- and secondary-level student teachers in a two-year, after-degree teacher preparation programme. The article includes excerpts of classroom data from the author’s own university classroom, illustrating student teachers’ responses to carefully selected video extracts of children learning mathematics in a secondary class taught by the author. The article comments on some of the advantages and limitations of video as a teaching tool and presents an argument for the increased use, in…

  11. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, aka MPEG-4 Part 10, aka Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters on the fly, without stopping video transmission, thereby trading video bandwidth against video quality along four dimensions: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/s to 5 frames/s; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) length, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow rate adaptation at any point in the communication chain by throwing away preselected packets.
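To give a feel for the range these quality knobs cover, the sketch below compares the raw pixel throughput of the largest and smallest spatial/temporal settings quoted in the abstract. The linear pixels-times-frames model is an illustrative assumption; actual H.264 bitrates also depend on content and on the transform-quality and GOP settings.

```python
# Illustrative comparison of two of the four quality dimensions quoted
# in the abstract: spatial resolution and frame rate. Raw (uncompressed)
# pixel throughput scales with pixels/frame x frames/second; H.264
# compresses this further, so the numbers only show the relative range
# the two knobs span, not actual bitrates.

def pixel_throughput(width, height, fps):
    return width * height * fps

full = pixel_throughput(720, 480, 30)  # largest spatial/temporal setting
low = pixel_throughput(160, 180, 5)    # smallest spatial/temporal setting
print(full // low)  # the two knobs alone span a 72x range
```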

  12. Video Tracking dalam Digital Compositing untuk Paska Produksi Video

    Directory of Open Access Journals (Sweden)

    Ardiyan Ardiyan

    2012-04-01

    Full Text Available Video tracking is one of the processes in digital post-production for video and motion pictures. The video tracking method helps realize the visual concept of a production and is an integral part of visual effects work. This paper presents how the tracking process works and its benefits for visual needs, especially in video and motion picture production. Some of the issues involved in the tracking process, such as cases where tracking fails, are also made clear in this discussion.

  13. Deriving video content type from HEVC bitstream semantics

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference, and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted average of the depth at which the coding unit quadtree is split to estimate spatial characteristics, and the prediction mode decisions made by the encoder to estimate temporal characteristics. Since the video content type of a sequence is determined using high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding.
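A minimal sketch of the spatial half of such a heuristic might look as follows. The area-weighted averaging and the toy CU lists are assumptions for illustration; the paper's exact weighting function is not reproduced here.

```python
# Illustrative sketch of the spatial half of the paper's heuristic:
# approximate spatial complexity by a weighted average of the depths at
# which the coding-unit quadtree was split, parsed from the bitstream.
# Weighting each depth by CU area is an assumption for illustration.

def weighted_avg_depth(cu_list):
    """cu_list: (quadtree_depth, cu_area_in_pixels) pairs for one frame."""
    total_area = sum(area for _, area in cu_list)
    return sum(depth * area for depth, area in cu_list) / total_area

# A frame split into many small (deep) CUs scores as more spatially
# detailed than one covered by a few large, undivided CUs.
detailed = weighted_avg_depth([(3, 8 * 8)] * 64)  # 64 CUs of 8x8
flat = weighted_avg_depth([(0, 64 * 64)])         # one undivided 64x64 CU
print(detailed, flat)
```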

  14. A low-light-level video recursive filtering technology based on the three-dimensional coefficients

    Science.gov (United States)

    Fu, Rongguo; Feng, Shu; Shen, Tianyu; Luo, Hao; Wei, Yifang; Yang, Qi

    2017-08-01

    Low-light-level video is an important means of observation under low illumination, but its SNR is low and the resulting imagery is poor, so noise reduction must be carried out. Low-light-level video noise mainly includes Gaussian noise, Poisson noise, impulse noise, fixed-pattern noise, and dark-current noise. In order to remove this noise effectively and improve video quality, this paper presents an improved time-domain recursive filtering algorithm with three-dimensional filter coefficients. The algorithm exploits the temporal correlation of the video sequence: using motion estimation, it adaptively adjusts the local window filter coefficients in space and time, applying different weighting coefficients to different pixels of the same frame. This reduces image trailing while preserving the noise-reduction effect. Before noise reduction, a pretreatment based on a box filter is used to reduce the complexity of the algorithm and improve its speed. To enhance the visual effect of low-light-level video, an image enhancement algorithm based on the guided image filter is used to sharpen edge details. Experimental results show that the hybrid algorithm removes the noise of low-light-level video effectively, enhances edge features, and improves the visual quality of the video.
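The core temporal recursion described above can be sketched as a per-pixel blend y_t = a*x_t + (1-a)*y_{t-1} whose coefficient a is switched by a crude motion test (absolute frame difference against a threshold) standing in for the paper's motion estimation; the thresholds and coefficients are illustrative assumptions.

```python
# Minimal sketch of a motion-adaptive temporal recursive filter, the
# core idea described above. The blending coefficient is raised where
# motion is detected (to avoid trailing artifacts) and lowered in static
# regions (for stronger noise averaging).

def recursive_filter(frames, alpha_static=0.2, alpha_motion=0.9, thresh=20):
    """frames: list of 2-D pixel grids (lists of lists of numbers)."""
    out = [row[:] for row in frames[0]]
    results = [[row[:] for row in out]]
    for frame in frames[1:]:
        for i, row in enumerate(frame):
            for j, x in enumerate(row):
                a = alpha_motion if abs(x - out[i][j]) > thresh else alpha_static
                out[i][j] = a * x + (1 - a) * out[i][j]
        results.append([row[:] for row in out])
    return results

# A noisy but static pixel is averaged down; a genuine jump is tracked fast.
frames = [[[100]], [[110]], [[90]], [[200]]]
filtered = recursive_filter(frames)
print([f[0][0] for f in filtered])  # flicker smoothed, final jump followed
```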

  15. Utilizing Implicit User Feedback to Improve Interactive Video Retrieval

    Directory of Open Access Journals (Sweden)

    Stefanos Vrochidis

    2011-01-01

    Full Text Available This paper describes an approach to exploit the implicit user feedback gathered during interactive video retrieval tasks. We propose a framework, where the video is first indexed according to temporal, textual, and visual features and then implicit user feedback analysis is realized using a graph-based methodology. The generated graph encodes the semantic relations between video segments based on past user interaction and is subsequently used to generate recommendations. Moreover, we combine the visual features and implicit feedback information by training a support vector machine classifier with examples generated from the aforementioned graph in order to optimize the query by visual example search. The proposed framework is evaluated by conducting real-user experiments. The results demonstrate that significant improvement in terms of precision and recall is reported after the exploitation of implicit user feedback, while an improved ranking is presented in most of the evaluated queries by visual example.
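A toy version of the implicit-feedback graph might be built as below: segments co-viewed in past sessions are linked, accumulated link weights encode semantic relatedness, and a segment's strongest neighbours become its recommendations. The session-co-occurrence weighting is an assumption for illustration, not the paper's exact graph construction.

```python
# Toy sketch of the implicit-feedback graph described above: video
# segments viewed in the same session are linked, link weights
# accumulate over sessions, and a segment's strongest neighbours
# become its recommendations.

from collections import defaultdict
from itertools import combinations

def build_graph(sessions):
    weights = defaultdict(int)
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            weights[(a, b)] += 1
    return weights

def recommend(graph, segment, top_n=2):
    scores = defaultdict(int)
    for (a, b), w in graph.items():
        if a == segment:
            scores[b] += w
        elif b == segment:
            scores[a] += w
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [seg for seg, _ in ranked[:top_n]]

g = build_graph([["s1", "s2", "s3"], ["s1", "s2"], ["s2", "s4"]])
print(recommend(g, "s2"))  # s1 co-occurs most often with s2
```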

  16. Features of MCNP6

    Science.gov (United States)

    Goorley, T.; James, M.; Booth, T.; Brown, F.; Bull, J.; Cox, L. J.; Durkee, J.; Elson, J.; Fensin, M.; Forster, R. A.; Hendricks, J.; Hughes, H. G.; Johns, R.; Kiedrowski, B.; Martz, R.; Mashnik, S.; McKinney, G.; Pelowitz, D.; Prael, R.; Sweezy, J.; Waters, L.; Wilcox, T.; Zukaitis, T.

    2014-06-01

    MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, but it is much more than the sum of these two computer codes. MCNP6 is the result of six years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in Los Alamos National Laboratory's X Computational Physics Division, Monte Carlo Codes Group (XCP-3) and Nuclear Engineering and Nonproliferation Division, Radiation Transport Modeling Team (NEN-5) respectively, have combined their code development efforts to produce the next evolution of MCNP. While maintenance and major bug fixes will continue for MCNP5 1.60 and MCNPX 2.7.0 for upcoming years, new code development capabilities only will be developed and released in MCNP6. In fact, the initial release of MCNP6 contains numerous new features not previously found in either code. These new features are summarized in this document. Packaged with MCNP6 is also the new production release of the ENDF/B-VII.1 nuclear data files usable by MCNP. The high quality of the overall merged code, usefulness of these new features, along with the desire in the user community to start using the merged code, have led us to make the first MCNP6 production release: MCNP6 version 1. High confidence in the MCNP6 code is based on its performance with the verification and validation test suites, comparisons to its predecessor codes, our automated nightly software debugger tests, the underlying high quality nuclear and atomic databases, and significant testing by many beta testers.

  17. Video – ned med overliggeren

    DEFF Research Database (Denmark)

    Langebæk, Rikke

    2010-01-01

    Århus, Nov 2010, ’Podcast og Video i Undervisningen’ (Podcast and Video in Teaching). Video: down to earth. Rikke Langebæk, DVM, PhD student, senior veterinarian, Institut for Mindre Husdyrs Sygdomme, LIFE, KU. The use of video in teaching has many obvious advantages, and there are probably many who dream of implementing it…

  18. Video Games and Adolescent Fighting

    OpenAIRE

    Ward, Michael R.

    2010-01-01

    Psychologists have found positive correlations between playing violent video games and violent and antisocial attitudes. However, these studies typically do not control for other covariates, particularly sex, that are known to be associated with both video game play and aggression. This study exploits the Youth Risk Behavior Survey, which includes questions on video game play and fighting as well as basic demographic information. With both parametric and nonparametric estimators, as there is ...

  19. Women as Video Game Consumers

    OpenAIRE

    Kiviranta, Hanna

    2017-01-01

    The purpose of this Thesis is to study women as video game consumers through the games that they play. This was done by case studies on the content of five video games from genres that statistically are popular amongst women. To introduce the topic and to build the theoretical framework, the key terms and the video game industry are introduced. The reader is acquainted with theories on consumer behaviour, buying processes and factors that influence our consuming habits. These aspects are...

  20. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  1. Video, videoarte, iconoclasmo

    OpenAIRE

    Roncallo Dow, Sergio; Universidad de la Sabana

    2013-01-01

    The purpose of this article is to approach the video form and video art from an aesthetic perspective. To that end, it first reflects on the status of the image in the West, seeking to bring out its dark character and the fear it seems to have always aroused. This point is developed from certain Platonic postulates that lead us to consider a possible path toward overcoming iconoclasm through surrealism, cinema, and photography...

  2. Intellectual Video Filming

    DEFF Research Database (Denmark)

    Juel, Henrik

    Like everyone else, university students of the humanities are quite used to watching Hollywood productions and professional TV. It requires some didactic effort to redirect their eyes and ears away from the conventional mainstream style and on to new and challenging ways of using the film media ... in favour of worthy causes. However, it is also very rewarding to draw on the creativity, enthusiasm and rapidly improving technical skills of young students, and to guide them to use video equipment themselves for documentary, for philosophical film essays and intellectual debate. In the digital era...

  3. Video fingerprinting for live events

    Science.gov (United States)

    Celik, Mehmet; Haitsma, Jaap; Barvinko, Pavlo; Langelaar, Gerhard; Maas, Martijn

    2009-02-01

    Multimedia fingerprinting (robust hashing) as a content identification technology is emerging as an effective tool for preventing unauthorized distribution of commercial content through user generated content (UGC) sites. Research in the field has mainly considered content types with slow distribution cycles, e.g. feature films, for which reference fingerprint ingestion and database indexing can be performed offline. As a result, research focus has been on improving the robustness and search speed. Live events, such as live sports broadcasts, impose new challenges on a fingerprinting system. For instance, highlights from a soccer match are often available, and viewed, on UGC sites well before the end of the match. In this scenario, the fingerprinting system should be able to ingest and index live content online and offer continuous search capability, where new material is identifiable within minutes of broadcast. In this paper, we concentrate on algorithmic and architectural challenges we faced when developing a video fingerprinting solution for live events. In particular, we discuss how to effectively utilize fast sorting algorithms and a master-slave architecture for fast and continuous ingestion of live broadcasts.
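The continuous-ingestion idea can be sketched with a sorted index that accepts new reference fingerprints during the broadcast, here using Python's bisect on integer stand-ins for real robust hashes; a production system would add near-match search and batch the inserts, since per-item list insertion is O(n).

```python
# Sketch of the continuous-ingestion idea: keep the reference index
# sorted so material broadcast moments ago is immediately searchable.
# Integer fingerprints stand in for real robust hashes.

import bisect

class LiveIndex:
    def __init__(self):
        self._keys = []   # fingerprints, kept sorted
        self._meta = []   # (source, timestamp) aligned with _keys

    def ingest(self, fingerprint, meta):
        i = bisect.bisect_left(self._keys, fingerprint)
        self._keys.insert(i, fingerprint)
        self._meta.insert(i, meta)

    def lookup(self, fingerprint):
        i = bisect.bisect_left(self._keys, fingerprint)
        if i < len(self._keys) and self._keys[i] == fingerprint:
            return self._meta[i]
        return None

idx = LiveIndex()
idx.ingest(0xBEEF, ("match_cam1", 12.0))  # ingested mid-broadcast
idx.ingest(0x00AA, ("match_cam2", 3.5))
print(idx.lookup(0xBEEF))
```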

  4. Featuring animacy

    Directory of Open Access Journals (Sweden)

    Elizabeth Ritter

    2015-01-01

    Full Text Available Algonquian languages are famous for their animacy-based grammatical properties: an animacy-based noun classification system and a direct/inverse system which gives rise to animacy hierarchy effects in the determination of verb agreement. In this paper I provide new evidence for the proposal that the distinctive properties of these languages are due to the use of participant-based features, rather than spatio-temporal ones, for both nominal and verbal functional categories (Ritter & Wiltschko 2009, 2014). Building on Wiltschko (2012), I develop a formal treatment of the Blackfoot aspectual system that assumes a category Inner Aspect (cf. MacDonald 2008, Travis 1991, 2010). Focusing on lexical aspect in Blackfoot, I demonstrate that the classification of both nouns (Seinsarten) and verbs (Aktionsarten) is based on animacy, rather than boundedness, resulting in a strikingly different aspectual system for both categories.

  5. Perceptual-components architecture for digital video

    Science.gov (United States)

    Watson, Andrew B.

    1990-01-01

    A perceptual-components architecture for digital video partitions the image stream into signal components in a manner analogous to that used in the human visual system. These components consist of achromatic and opponent color channels, divided into static and motion channels, further divided into bands of particular spatial frequency and orientation. Bits are allocated to an individual band in accord with visual sensitivity to that band and in accord with the properties of visual masking. This architecture is argued to have desirable features such as efficiency, error tolerance, scalability, device independence, and extensibility.
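A hedged sketch of the band-wise bit allocation idea follows: each perceptual channel gets a share of the bit budget proportional to an assumed sensitivity weight. The band names and weights are invented for illustration; a real coder would derive them from a contrast sensitivity model plus masking measurements.

```python
# Hedged sketch of band-wise bit allocation: each perceptual channel
# receives a share of the budget proportional to an assumed visual
# sensitivity weight. Band names and integer weights are invented
# for illustration.

def allocate_bits(total_bits, sensitivities):
    s = sum(sensitivities.values())
    return {band: total_bits * w // s for band, w in sensitivities.items()}

bands = {"static_low": 5, "static_high": 2, "motion_low": 2, "motion_high": 1}
print(allocate_bits(1000, bands))
```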

  6. What's New in Computers Video-On-Demand

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 6. What's New in Computers Video-On-Demand. M B Karthikeyan. Feature Article Volume 1 Issue 6 June 1996 pp 69-76. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/06/0069-0076 ...

  7. Charging Neutral Cues with Aggressive Meaning through Violent Video Game Play

    National Research Council Canada - National Science Library

    Robert Busching; Barbara Krahe

    2013-01-01

      When playing violent video games, aggressive actions are performed against the background of an originally neutral environment, and associations are formed between cues related to violence and contextual features...

  8. Teacher Explanation of Physics Concepts: a Video Study

    Science.gov (United States)

    Geelan, David

    2012-11-01

    Video recordings of Year 11 physics lessons were analyzed to identify key features of teacher explanations. Important features of the explanations used included teachers' ability to move between qualitative and quantitative modes of discussion, attention to what students require to succeed in high stakes examinations, thoughtful use of analogies, storytelling and references to the history of science, the use of educational technology, and the use of humor. Considerable scope remains for further research into teacher explanations in physics.

  9. Target detection and tracking in infrared video

    Science.gov (United States)

    Deng, Zhihui; Zhu, Jihong

    2017-07-01

    In this paper, we propose a method for target detection and tracking in infrared video. The target is defined by its location and extent in a single frame. In the initialization process, we use an adaptive threshold to segment the target, then extract the fern feature and normalize it as a template. The detector uses a random forest and ferns to detect the target in the infrared video; the random forest and ferns are built from random combinations of 2-bit binary patterns, which are robust for infrared targets with blurred and unknown contours. The tracker uses the gray-value weighted mean-shift algorithm to track the infrared target, which is always brighter than the background, and can track a deformed target efficiently and quickly. When the target disappears, the detector redetects it in the incoming infrared images. Finally, we verify the algorithm on a real-time infrared target detection and tracking platform. The results show that our algorithm performs better than TLD in terms of recall and runtime on infrared video.
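The gray-value weighted mean-shift step can be sketched as an intensity-weighted centroid update over the search window, which drifts toward the bright infrared target; window handling and iteration-to-convergence are simplified here for illustration.

```python
# Minimal sketch of a gray-value weighted mean-shift step: within a
# search window, the new target position is the intensity-weighted
# centroid, which pulls the window toward the bright (hot) infrared
# target. Boundary handling is simplified for illustration.

def mean_shift_step(image, cx, cy, radius):
    """image: 2-D list of gray values; (cx, cy): current window centre."""
    sw = sx = sy = 0.0
    for y in range(max(0, cy - radius), min(len(image), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(image[0]), cx + radius + 1)):
            w = image[y][x]
            sw += w
            sx += w * x
            sy += w * y
    return (round(sx / sw), round(sy / sw)) if sw else (cx, cy)

# A bright 'target' at (4, 3) pulls a window centred at (2, 2) toward it.
img = [[1] * 8 for _ in range(6)]
img[3][4] = 255
print(mean_shift_step(img, 2, 2, 2))  # the centre moves to (4, 3)
```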

  10. Video Automatic Target Tracking System (VATTS) Operating Procedure,

    Science.gov (United States)

    1980-08-15

    Video Automatic Target Tracking System (VATTS) Operating Procedure, BDM Corp., McLean, VA; C. Stamm, J. P. Forrester, J. ...; Aug 1980. Glossary fragments: ... Tape Transport Number Two; TKI: Tektronix I/O Terminal; DS1: Removable Disk Storage Unit; DS0: Fixed Disk Storage Unit; CRT: Cathode Ray Tube ... file (mark on mag tape); AZEL: Quick look at Trial Information Program; DUPTAPE: Allows for duplication of magnetic tapes; CA: Cancel (terminates program on ...

  11. Austin Community College Video Game Development Certificate

    Science.gov (United States)

    McGoldrick, Robert

    2008-01-01

    The Video Game Development program is designed and developed by leaders in the Austin video game development industry, under the direction of the ACC Video Game Advisory Board. Courses are taught by industry video game developers for those who want to become video game developers. The program offers a comprehensive approach towards learning what's…

  12. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shao, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for some auxiliary related videos from YouTube 1 according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip. 1 https://www.youtube.com/.

  13. Studenterproduceret video til eksamen

    Directory of Open Access Journals (Sweden)

    Kenneth Hansen

    2016-05-01

    Full Text Available The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for examinations in higher education. The article takes as its starting point the problem that educational institutions must handle and coordinate teaching within both the subject domain and the media domain, and must ensure a balance between subject-matter competence and media competence. By distributing the task across several teaching resources, more coordination is needed, but the problem of demanding dual competence of teachers in media productions is avoided. Based on the Larnaca Declaration's perspectives on learning design and mainly on Jerome Bruner's principles of scaffolding, a model is assembled for supporting video production by students in higher education. By applying this model to teaching sessions and courses, subject teachers and media teachers gain a tool for focusing and coordinating their efforts towards the goal of students producing and using video for examinations.

  14. Charging Neutral Cues with Aggressive Meaning through Violent Video Game Play

    OpenAIRE

    Robert Busching; Barbara Krahé

    2013-01-01

    When playing violent video games, aggressive actions are performed against the background of an originally neutral environment, and associations are formed between cues related to violence and contextual features. This experiment examined the hypothesis that neutral contextual features of a virtual environment become associated with aggressive meaning and acquire the function of primes for aggressive cognitions. Seventy-six participants were assigned to one of two violent video game condition...

  15. Video game use in boys with autism spectrum disorder, ADHD, or typical development.

    Science.gov (United States)

    Mazurek, Micah O; Engelhardt, Christopher R

    2013-08-01

    The study objectives were to examine video game use in boys with autism spectrum disorder (ASD) compared with those with ADHD or typical development (TD) and to examine how specific symptoms and game features relate to problematic video game use across groups. Participants included parents of boys (aged 8-18) with ASD (n = 56), ADHD (n = 44), or TD (n = 41). Questionnaires assessed daily hours of video game use, in-room video game access, video game genres, problematic video game use, ASD symptoms, and ADHD symptoms. Boys with ASD spent more time than did boys with TD playing video games (2.1 vs 1.2 h/d). Both the ASD and ADHD groups had greater in-room video game access and greater problematic video game use than the TD group. Multivariate models showed that inattentive symptoms predicted problematic game use for both the ASD and ADHD groups; and preferences for role-playing games predicted problematic game use in the ASD group only. Boys with ASD spend much more time playing video games than do boys with TD, and boys with ASD and ADHD are at greater risk for problematic video game use than are boys with TD. Inattentive symptoms, in particular, were strongly associated with problematic video game use for both groups, and role-playing game preferences may be an additional risk factor for problematic video game use among children with ASD. These findings suggest a need for longitudinal research to better understand predictors and outcomes of video game use in children with ASD and ADHD.

  16. Fostering science communication and outreach through video production in Dartmouth's IGERT Polar Environmental Change graduate program

    Science.gov (United States)

    Hammond Wagner, C. R.; McDavid, L. A.; Virginia, R. A.

    2013-12-01

    Dartmouth's NSF-supported IGERT Polar Environmental Change graduate program has focused on using video media to foster interdisciplinary thinking and to improve student skills in science communication and public outreach. Researchers, educators, and funding organizations alike recognize the value of video media for making research results more accessible and relevant to diverse audiences and across cultures. We present an affordable equipment set, the basic video training needed, and the Dartmouth institutional support systems available for students to produce outreach videos on climate change and its associated impacts on people. We highlight and discuss the successes and challenges of producing three types of video products created by graduate and undergraduate students affiliated with the Dartmouth IGERT. The video projects created include 1) graduate student profile videos, 2) a series of short student-created educational videos for Greenlandic high school students, and 3) an outreach video about women in science based on the experiences of women students conducting research during the IGERT field seminar at Summit Station and Kangerlussuaq, Greenland. The 'Science in Greenland--It's a Girl Thing' video was featured on The New York Times Dot Earth blog and the Huffington Post Green blog, among others, and received international recognition. While producing these videos, students 1) identified an audience and created story lines, 2) worked in front of and behind the camera, 3) utilized low-cost digital editing applications, and 4) shared the videos on multiple platforms, from social media to live presentations. The three video projects were designed to reach different audiences and presented unique challenges for content presentation and dissemination. Based on student and faculty assessment, we conclude that the video projects improved student science communication skills and increased public knowledge of polar science and the effects of climate change.

  17. A systematic review of serious video games used for vaccination.

    Science.gov (United States)

    Ohannessian, Robin; Yaghobian, Sarina; Verger, Pierre; Vanhems, Philippe

    2016-08-31

    Vaccination is an effective and proven method of preventing infectious diseases. However, uptake has not been optimal with available vaccines, partly due to vaccination hesitancy. Various public health approaches have addressed vaccination hesitancy. Serious video games involving vaccination may represent an innovative public health approach. The aim of this study was to identify, describe, and review existing serious video games on vaccination. A systematic review was performed. Various databases were used to find data on vaccination-related serious video games published from January 1st 2000 to May 15th 2015. Data including featured medical and vaccination content, publication characteristics, and game classification were collected for each identified serious game. Sixteen serious video games involving vaccination were identified. All games were developed in high-income countries between 2003 and 2014. The majority of games were available online and were sponsored by educational/health institutions. All games were free of charge to users. Edugame was the most prevalent serious game subcategory. Twelve games were infectious disease-specific, and the majority concerned influenza. The main objective of the games was disease control with a collective perspective. Utilization data were available for two games. Two games were formally evaluated. The use of serious video games for vaccination is an innovative tool for public health. Evaluation of vaccination-related serious video games should be encouraged to demonstrate their efficacy and utility. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Student-Built Underwater Video and Data Capturing Device

    Science.gov (United States)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. The system can shoot time-lapse photography and/or video for up to 3 days at a time, and it can be used in remote locations without changing batteries or adding external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easier for users to recharge after use. The data capturing device has the same base and mounting system as the underwater camera. It consists of an Arduino with an SD card shield that collects continuous temperature and pH readings underwater; the data are then logged onto the SD card for easy access and review. The low-cost underwater video and data capturing device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage, and it features infrared night-vision capability. The cost to build the invention is $500. The goal was to provide a device that marine biologists, teachers, researchers, and citizen scientists can easily use to capture photographic and water-quality data in marine environments over extended periods of time.
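    The storage figures quoted above (36 hours of video on 1 terabyte) imply an average recording bitrate that can be checked with simple arithmetic; the function name and units below are illustrative, not part of the students' design:

```python
def implied_bitrate_mbps(storage_bytes: float, hours: float) -> float:
    """Average video bitrate (in Mbit/s) that would fill the given
    storage in the given recording time."""
    bits = storage_bytes * 8              # bytes -> bits
    seconds = hours * 3600                # hours -> seconds
    return bits / seconds / 1e6           # bits/s -> Mbit/s

# 36 hours of footage filling 1 TB implies roughly a 62 Mbit/s stream
rate = implied_bitrate_mbps(1e12, 36)    # ~61.7
```

    A figure in this range suggests the device records high-quality or lightly compressed footage rather than a heavily compressed low-bitrate stream.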

  19. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    With the development of wireless networks and the improvement of mobile device capabilities, video streaming has become more and more widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion-vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistical model, attractive regions of video scenes are derived. The information-object-weighted (IOB-weighted) rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and the object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
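    The abstract's bit-allocation step favors attractive regions over the rest of the frame. The authors' IOB-weighted rate-distortion model is more elaborate, but the core idea of attention-weighted bit allocation can be sketched as a simple proportional split (the function and its weights are hypothetical simplifications, not the paper's actual model):

```python
def allocate_bits(frame_budget: int, region_weights: list[float]) -> list[int]:
    """Split one frame's bit budget across regions in proportion to
    their attention weights, so salient regions get more bits."""
    total = sum(region_weights)
    bits = [int(frame_budget * w / total) for w in region_weights]
    # Integer truncation can leave a few bits unspent; give the
    # remainder to the most salient region.
    top = max(range(len(bits)), key=lambda i: region_weights[i])
    bits[top] += frame_budget - sum(bits)
    return bits

# A region judged 3x as salient as each background region gets 3x the bits
allocate_bits(1000, [3.0, 1.0, 1.0])  # -> [600, 200, 200]
```

    In a real encoder the weights would come from the derived attention model and the split would feed a rate-distortion optimizer rather than a straight proportion, but the invariant is the same: the per-frame budget is conserved while salient regions receive a larger share.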

  20. The Future of the Andrew File System

    CERN Multimedia

    CERN. Geneva; Altman, Jeffrey

    2011-01-01

    The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations under development by the OpenAFS community and by Your File System, Inc., funded by a U.S. Department of Energy Small Business Innovative Research grant. The talk will end with a comparison of AFS to its modern-day competitors.