WorldWideScience

Sample records for video sequences obtained

  1. A video annotation methodology for interactive video sequence generation

    NARCIS (Netherlands)

    C.A. Lindley; R.A. Earnshaw; J.A. Vince

    2001-01-01

    The FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) has developed an experimental environment for dynamic virtual video sequence synthesis from databases of video data. A major issue for the development of dynamic interactive video applications...

  2. Adaptive deblocking and deringing of H.264/AVC video sequences

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Forchhammer, Søren

    2013-01-01

    We present a method to reduce blocking and ringing artifacts in H.264/AVC video sequences. For deblocking, the proposed method uses a quality measure of a block-based coded image to find filtering modes. Based on the filtering modes, the images are segmented into three classes and a specific deblocking filter is applied to each class. Deringing is obtained by an adaptive bilateral filter; spatial and intensity spread parameters are selected adaptively using texture and edge mapping. The analysis of objective and subjective experimental results shows that the proposed algorithm is effective in deblocking and deringing low bit-rate H.264 video sequences.
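    As a companion to the deringing idea above, the following is a minimal, illustrative sketch of edge-adaptive bilateral filtering in Python (OpenCV/NumPy). The mapping from local edge strength to the spatial and intensity spread parameters is an assumption for illustration, not the authors' exact selection rule, and the helper name `adaptive_dering` is hypothetical.

    ```python
    # Illustrative sketch of edge-adaptive bilateral deringing (not the authors' exact method).
    # Assumes an 8-bit grayscale decoded frame; parameter choices are placeholders.
    import cv2
    import numpy as np

    def adaptive_dering(frame: np.ndarray) -> np.ndarray:
        # Estimate local edge strength with a smoothed Sobel magnitude map.
        gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1, ksize=3)
        edge = cv2.GaussianBlur(cv2.magnitude(gx, gy), (7, 7), 0)

        # Two candidate bilateral filters: mild near edges, strong in flat/textured areas.
        mild = cv2.bilateralFilter(frame, d=5, sigmaColor=15, sigmaSpace=3)
        strong = cv2.bilateralFilter(frame, d=9, sigmaColor=40, sigmaSpace=7)

        # Blend per pixel: the stronger the edge response, the milder the filtering.
        w = np.clip(edge / (edge.max() + 1e-6), 0.0, 1.0)
        out = w * mild.astype(np.float32) + (1.0 - w) * strong.astype(np.float32)
        return np.clip(out, 0, 255).astype(np.uint8)
    ```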

  3. ALGORITHMS FOR AUTOMATIC RUNWAY DETECTION ON VIDEO SEQUENCES

    Directory of Open Access Journals (Sweden)

    A. I. Logvin

    2015-01-01

    The article discusses an algorithm for automatic runway detection on video sequences. The main stages of the algorithm are presented. Some methods to increase the reliability of recognition are described.

  4. Gait Analysis by Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    2009-01-01

    The project presented in this article aims to develop software so that close-range photogrammetry with sufficient accuracy can be used to point out the most frequent foot malpositions and monitor the effect of the traditional treatment. The project is carried out as a cooperation between the Orthopaedic Surgery in Northern Jutland and the Laboratory for Geoinformatics, Aalborg University. The overall requirements on the system are that it shall be inexpensive, easy to install and easy to operate. A first version of the system is designed to measure the navicula height and the calcaneus angle during gait. In the introductory phase of the project the task has been to select, purchase and draw up hardware, select and purchase software concerning video streaming and to develop special software concerning automated registration of the position of the foot during gait by Multi Video...

  5. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. Video sequences were registered off-line to compensate for eye movements. From registered video sequences dynamic parameters like cardiac cycle induced reflection changes and eye movements can be calculated and compared between eyes.
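    The off-line registration step can be illustrated with a generic translation-only phase-correlation alignment; this is a sketch of the principle, not necessarily the registration method used by the authors.

    ```python
    # Minimal translation-only frame registration by phase correlation (NumPy).
    import numpy as np

    def phase_correlation_shift(ref: np.ndarray, frame: np.ndarray):
        """Return (dy, dx) shift that aligns `frame` to `ref` (integer precision)."""
        F1 = np.fft.fft2(ref.astype(np.float64))
        F2 = np.fft.fft2(frame.astype(np.float64))
        cross = F1 * np.conj(F2)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map shifts larger than half the image size to negative displacements.
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return dy, dx

    def register_sequence(frames):
        """Shift every frame onto the first one; `frames` is a list of 2-D arrays."""
        ref = frames[0]
        aligned = [ref]
        for f in frames[1:]:
            dy, dx = phase_correlation_shift(ref, f)
            aligned.append(np.roll(np.roll(f, dy, axis=0), dx, axis=1))
        return aligned
    ```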

  6. Obtaining video descriptors for a content-based video information system

    Science.gov (United States)

    Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo

    1998-09-01

    This paper describes the first stages of a research project that is currently being developed in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, the steps prior to the content extraction task, and we discuss them to select the most suitable ones. We then outline a block design of a temporal segmentation module, and present guidelines for the design of the semantic segmentation one. All these operations tend to facilitate automation in the extraction of the low-level and semantic features that will finally form part of the video descriptors.

  7. MAP Estimation of Chin and Cheek Contours in Video Sequences

    Directory of Open Access Journals (Sweden)

    Kampmann Markus

    2004-01-01

    An algorithm for the estimation of chin and cheek contours in video sequences is proposed. This algorithm exploits a priori knowledge about the shape and position of chin and cheek contours in images. Exploiting knowledge about the shape, a parametric 2D model representing chin and cheek contours is introduced. Exploiting knowledge about the position, a MAP estimator is developed taking into account the observed luminance gradient as well as a priori probabilities of chin and cheek contour positions. The proposed algorithm was tested with head-and-shoulder video sequences (CIF image resolution). In nearly 70% of all investigated video frames, a subjectively error-free estimation could be achieved. The average 2D estimation error is between 2.4 and ...

  8. Tracking of Individuals in Very Long Video Sequences

    DEFF Research Database (Denmark)

    Fihl, Preben; Corlin, Rasmus; Park, Sangho

    2006-01-01

    In this paper we present an approach for automatically detecting and tracking humans in very long video sequences. The detection is based on background subtraction using a multi-mode Codeword method. We enhance this method both in terms of representation and in terms of automatically updating the...

  9. A novel visual saliency detection method for infrared video sequences

    Science.gov (United States)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit a lot from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
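    A minimal sketch of the temporal cue (multi-frame symmetric difference) and a simple saliency fusion follows; the spatial term is reduced to local luminance contrast and the fusion weight is an illustrative energy ratio rather than the paper's adaptive strategy.

    ```python
    # Sketch of the two saliency cues and a simple fusion, under simplifying assumptions.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_saliency(frame):
        # Local luminance contrast: deviation from a box-filtered local mean.
        local_mean = uniform_filter(frame.astype(np.float64), size=15)
        return np.abs(frame - local_mean)

    def temporal_saliency(prev, cur, nxt):
        # Multi-frame symmetric difference: motion must appear against both neighbors.
        return np.minimum(np.abs(cur.astype(np.float64) - prev),
                          np.abs(cur.astype(np.float64) - nxt))

    def fuse(s_spatial, s_temporal, eps=1e-9):
        # Weight from the relative energy of the two maps (illustrative rule).
        e_s, e_t = s_spatial.sum(), s_temporal.sum()
        w = e_t / (e_s + e_t + eps)
        fused = (1 - w) * s_spatial / (s_spatial.max() + eps) \
                + w * s_temporal / (s_temporal.max() + eps)
        return fused / (fused.max() + eps)
    ```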

  10. Comparison of sequence reads obtained from three next-generation sequencing platforms.

    Directory of Open Access Journals (Sweden)

    Shingo Suzuki

    Next-generation sequencing technologies enable the rapid, cost-effective production of sequence data. To evaluate the performance of these sequencing technologies, investigation of the quality of sequence reads obtained from these methods is important. In this study, we analyzed the quality of sequence reads and SNP detection performance using three commercially available next-generation sequencers, i.e., the Roche Genome Sequencer FLX System (FLX), the Illumina Genome Analyzer (GA), and the Applied Biosystems SOLiD system (SOLiD). A common genomic DNA sample obtained from Escherichia coli strain DH1 was applied to these sequencers. The obtained sequence reads were aligned to the complete genome sequence of E. coli DH1 to evaluate the accuracy and sequence bias of these sequencing methods. We found that the fraction of "junk" data, which could not be aligned to the reference genome, was largest in the SOLiD data set, in which about half of the reads could not be aligned. Among the data sets after alignment to the reference, sequence accuracy was poorest in the GA data sets, suggesting relatively low fidelity of the elongation reaction in the GA method. Furthermore, by aligning the sequence reads to E. coli strain W3110, we screened sequence differences between the two E. coli strains using the data sets of the three next-generation platforms. The results revealed that the detected sequence differences were similar among the three methods, while the sequence coverage required for the detection was significantly smaller in the FLX data set. These results provide valuable information on the quality of short sequence reads and the performance of SNP detection on three next-generation sequencing platforms.

  11. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition is one of the most accurate personal identification methods. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve the existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilations than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a fast new criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  12. Three-dimensional fuzzy filter in color video sequence denoising implemented on DSP

    Science.gov (United States)

    Ponomaryov, Volodymyr I.; Montenegro, Hector; Peralta-Fabi, Ricardo

    2013-02-01

    In this paper, we present a fuzzy 3D filter for suppressing impulsive noise in color video sequences. The designed algorithm differs from other state-of-the-art algorithms in that it employs the three RGB bands of the video sequence data, analyzes the fuzzy gradient values obtained in eight directions, and finally processes two temporally neighboring frames together. The simulation results confirm the better performance of the novel 3D filter both in terms of objective metrics (PSNR, MAE, NCD, SSIM) and in terms of subjective perception by human vision in the color sequences. An efficiency analysis of the designed filter and other promising filters has been performed on the DSP TMS320DM642 by Texas Instruments through MATLAB's Simulink module, showing that the 3D filter can be used in real-time processing applications.
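    The following sketch illustrates the fuzzy-gradient idea on a single channel of a single frame: differences to the eight neighbors are mapped through a piecewise-linear membership function and strongly deviating pixels are replaced by a local median. The thresholds and the temporal (two-frame) part of the published 3D filter are omitted, and all parameter values are assumptions.

    ```python
    # Simplified fuzzy-gradient impulse detection on one channel of one frame (NumPy).
    import numpy as np

    OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    def fuzzy_membership(d, a=10.0, b=60.0):
        """Degree to which a difference d is 'large' (0 below a, 1 above b)."""
        return np.clip((d - a) / (b - a), 0.0, 1.0)

    def detect_and_filter(channel, decision=0.6):
        ch = channel.astype(np.float64)
        pad = np.pad(ch, 1, mode='edge')
        memberships, neighbors = [], []
        for dy, dx in OFFSETS:
            nb = pad[1 + dy:1 + dy + ch.shape[0], 1 + dx:1 + dx + ch.shape[1]]
            neighbors.append(nb)
            memberships.append(fuzzy_membership(np.abs(ch - nb)))
        # A pixel is declared impulsive when it differs strongly from most neighbors.
        impulse = np.mean(memberships, axis=0) > decision
        median_nb = np.median(np.stack(neighbors), axis=0)
        out = ch.copy()
        out[impulse] = median_nb[impulse]
        return out.astype(channel.dtype)
    ```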

  13. Fall Detection for Elderly from Partially Observed Depth-Map Video Sequences Based on View-Invariant Human Activity Representation

    Directory of Open Access Journals (Sweden)

    Rami Alazrai

    2017-03-01

    This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using the constructed HBR, we formulate the fall detection problem as a posterior-maximization problem in which the posterior probability for each observed video subsequence is estimated using a multi-class SVM (support vector machine) classifier. Then, we combine the computed posterior probabilities from all of the observed subsequences to obtain an overall class posterior probability for the entire partially-observed depth-map video sequence. To evaluate the performance of the proposed approach, we have utilized the Kinect sensor to record a dataset of depth-map video sequences that simulates four fall-related activities of elderly people, including: walking, sitting, falling from standing and falling from sitting. Then, using the collected dataset, we have developed three evaluation scenarios based on the number of unobserved video subsequences in the testing videos, including: a fully-observed video sequence scenario, a single unobserved video subsequence of random length scenario, and a two unobserved video subsequences of random lengths scenario. Experimental results show that the proposed approach achieved average recognition accuracies of 93.6%, 77.6% and 65.1% in the first, second and third evaluation scenarios, respectively. These results demonstrate the feasibility of the proposed approach to detect falls from partially-observed videos.

  14. Motion-compensated scan conversion of interlaced video sequences

    Science.gov (United States)

    Schultz, Richard R.; Stevenson, Robert L.

    1996-03-01

    When an interlaced image sequence is viewed at the rate of sixty frames per second, the human visual system interpolates the data so that the missing fields are not noticeable. However, if frames are viewed individually, interlacing artifacts are quite prominent. This paper addresses the problem of deinterlacing image sequences for the purposes of analyzing video stills and generating high-resolution hardcopy of individual frames. Multiple interlaced frames are temporally integrated to estimate a single progressively-scanned still image, with motion compensation used between frames. A video observation model is defined which incorporates temporal information via estimated interframe motion vectors. The resulting ill-posed inverse problem is regularized through Bayesian maximum a posteriori (MAP) estimation, utilizing a discontinuity-preserving prior model for the spatial data. Progressively-scanned estimates computed from interlaced image sequences are shown at several spatial interpolation factors, since the multiframe Bayesian scan conversion algorithm is capable of simultaneously deinterlacing the data and enhancing spatial resolution. Problems encountered in the estimation of motion vectors from interlaced frames are addressed.

  15. Recognizing surgeon's actions during suture operations from video sequences

    Science.gov (United States)

    Li, Ye; Ohya, Jun; Chiba, Toshio; Xu, Rong; Yamashita, Hiromasa

    2014-03-01

    Because of the shortage of nurses in the world, the realization of a robotic nurse that can support surgeries autonomously is very important. More specifically, the robotic nurse should be able to autonomously recognize different situations during surgeries so that it can pass the necessary surgical tools to the medical doctors in a timely manner. This paper proposes and explores methods that can classify suture and tying actions during suture operations from the video sequence that observes the surgery scene, including the surgeon's hands. First, the proposed method uses skin pixel detection and foreground extraction to detect the hand area. Then, interest points are randomly chosen from the hand area and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical K-means to these descriptors, and the words' frequency histogram, which corresponds to the feature space, is computed. Finally, to classify the actions, either an SVM (Support Vector Machine), the Nearest Neighbor rule (NN) on the feature space, or a method that combines a "sliding window" with NN is applied. We collect 53 suture videos and 53 tying videos to build the training set and to test the proposed method experimentally. It turns out that NN gives accuracies higher than 90%, which is better recognition than SVM. Negative actions, which are different from either the suture or the tying action, are recognized with quite good accuracy, while the "sliding window" did not show significant improvements for suture and tying and cannot recognize negative actions.
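    The bag-of-words pipeline described above can be sketched as follows, with flat k-means standing in for hierarchical k-means and with 3D SIFT extraction assumed to be provided elsewhere (the descriptor arrays are hypothetical inputs).

    ```python
    # Bag-of-visual-words action classification sketch: vocabulary, histogram, NN rule.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(all_descriptors, k=200, seed=0):
        """all_descriptors: (N, D) array pooled over the training videos."""
        return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors)

    def bow_histogram(descriptors, vocab):
        """Quantize one video's descriptors and return a normalized word histogram."""
        words = vocab.predict(descriptors)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float64)
        return hist / (hist.sum() + 1e-12)

    def nn_classify(test_hist, train_hists, train_labels):
        """Nearest-neighbour rule in histogram space."""
        d = np.linalg.norm(train_hists - test_hist, axis=1)
        return train_labels[int(np.argmin(d))]
    ```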

  16. Heart rate measurement based on face video sequence

    Science.gov (United States)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
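    A minimal remote-PPG sketch along these lines averages the green channel over a face region and picks the dominant spectral peak in a plausible heart-rate band; it illustrates the general principle only and is neither the BSST nor the CSPT method of the paper.

    ```python
    # Generic remote-PPG heart-rate sketch (NumPy); roi and band limits are assumptions.
    import numpy as np

    def estimate_heart_rate(frames, fps, roi):
        """frames: iterable of HxWx3 RGB arrays; roi: (y0, y1, x0, x1) face box."""
        y0, y1, x0, x1 = roi
        signal = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames], dtype=np.float64)
        signal -= signal.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 3.0)   # roughly 42-180 beats per minute
        peak = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * peak                        # beats per minute
    ```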

  17. On the relationship between perceptual impact of source and channel distortions in video sequences

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong

    2010-01-01

    It is known that peak signal-to-noise ratio (PSNR) can be used for assessing the relative qualities of distorted video sequences meaningfully only if the compared sequences contain similar types of distortions. In this paper, we propose a model for rough assessment of the bias in PSNR results when video sequences with both channel and source distortion are compared against video sequences with source distortion only. The proposed method can be used to compare the relative perceptual quality levels of video sequences with different distortion types more reliably than using plain PSNR.
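    For reference, PSNR as used above can be computed as follows for 8-bit frames; averaging per-frame PSNR over the sequence is one common convention (assumed here).

    ```python
    # PSNR helper for 8-bit video frames (NumPy).
    import numpy as np

    def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def sequence_psnr(ref_frames, test_frames) -> float:
        # One convention: average the per-frame PSNR values over the whole sequence.
        return float(np.mean([psnr(r, t) for r, t in zip(ref_frames, test_frames)]))
    ```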

  18. Insertion of impairments in test video sequences for quality assessment based on psychovisual characteristics

    OpenAIRE

    López Velasco, Juan Pedro; Rodrigo Ferrán, Juan Antonio; Jiménez Bermejo, David; Menendez Garcia, Jose Manuel

    2014-01-01

    Assessing video quality is a complex task. While most pixel-based metrics do not present enough correlation between objective and subjective results, algorithms need to correspond to human perception when analyzing quality in a video sequence. For analyzing the perceived quality derived from concrete video artifacts in a determined region of interest, we present a novel methodology for generating test sequences which allows the analysis of the impact of each individual distortion. Through results obt...

  19. Sub-band/transform compression of video sequences

    Science.gov (United States)

    Sauer, Ken; Bauer, Peter

    1992-01-01

    The progress on compression of video sequences is discussed. The overall goal of the research was the development of data compression algorithms for high-definition television (HDTV) sequences, but most of our research is general enough to be applicable to much more general problems. We have concentrated on coding algorithms based on both sub-band and transform approaches. Two very fundamental issues arise in designing a sub-band coder. First, the form of the signal decomposition must be chosen to yield band-pass images with characteristics favorable to efficient coding. A second basic consideration, whether coding is to be done in two or three dimensions, is the form of the coders to be applied to each sub-band. Computational simplicity is of the essence. We review the first portion of the year, during which we improved and extended some of the previous grant period's results. The pyramid nonrectangular sub-band coder limited to intra-frame application is discussed. Perhaps the most critical component of the sub-band structure is the design of bandsplitting filters. We apply very simple recursive filters, which operate at alternating levels on rectangularly sampled and quincunx sampled images. We will also cover the techniques we have studied for the coding of the resulting bandpass signals. We discuss adaptive three-dimensional coding which takes advantage of the detection algorithm developed last year. To this point, all the work on this project has been done without the benefit of motion compensation (MC). Motion compensation is included in many proposed codecs, but adds significant computational burden and hardware expense. We have sought to find a lower-cost alternative featuring a simple adaptation to motion in the form of the codec. In sequences of high spatial detail and zooming or panning, it appears that MC will likely be necessary for the proposed quality and bit rates.

  20. Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment for Video Sequences

    Directory of Open Access Journals (Sweden)

    Meiyu Liang

    2013-01-01

    In order to improve the spatiotemporal resolution of video sequences, a novel spatiotemporal super-resolution reconstruction model (STSR) based on robust optical flow and Zernike moment is proposed in this paper, which integrates the spatial resolution reconstruction and temporal resolution reconstruction into a unified framework. The model does not rely on accurate estimation of subpixel motion and is robust to noise and rotation. Moreover, it can effectively overcome the problems of hole and block artifacts. First, we propose an efficient robust optical flow motion estimation model based on motion details preserving, then we introduce the biweighted fusion strategy to implement the spatiotemporal motion compensation. Next, combining the self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moment for better STSR with higher efficiency, and then the final video sequences with high spatiotemporal resolution can be obtained by fusion of the complementary and redundant information with nonlocal self-similarity between the adjacent video frames. Experimental results demonstrate that the proposed method outperforms the existing methods in terms of both subjective visual and objective quantitative evaluations.

  1. Fuzzy Logic-Based Scenario Recognition from Video Sequences

    Directory of Open Access Journals (Sweden)

    E. Elbaşi

    2013-10-01

    In recent years, video surveillance and monitoring have gained importance because of security and safety concerns. Banks, borders, airports, stores, and parking areas are the important application areas. There are two main parts in scenario recognition. The first is low-level processing, including moving object detection, object tracking, and feature extraction. We have developed new features through this work, namely RUD (relative upper density), RMD (relative middle density) and RLD (relative lower density), and we have used other features such as aspect ratio, width, height, and color of the object. The second is high-level processing, including event start-end point detection, activity detection for each frame and scenario recognition for a sequence of images. This part is the focus of our research, and different pattern recognition and classification methods are implemented and their experimental results are analyzed. We looked into several methods of classification: decision tree, frequency domain classification, neural network-based classification, Bayes classifier, and pattern recognition methods such as control charts and hidden Markov models. The control chart approach, which is a decision methodology, gives more promising results than the other methodologies. Overlapping between events is one of the problems, hence we applied a fuzzy logic technique to solve this problem. After using this method the total accuracy increased from 95.6 to 97.2.
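    One plausible reading of the RUD/RMD/RLD features is the share of foreground pixels falling in the upper, middle and lower thirds of the tracked object's bounding box; the sketch below follows that reading, which may differ from the paper's exact definition.

    ```python
    # Illustrative relative-density features from a binary object mask (NumPy).
    import numpy as np

    def relative_densities(mask: np.ndarray):
        """mask: boolean foreground mask of a single tracked object."""
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            return 0.0, 0.0, 0.0
        box = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        h = box.shape[0]
        thirds = [box[:h // 3], box[h // 3:2 * h // 3], box[2 * h // 3:]]
        total = float(box.sum())
        # Fraction of the object's pixels in the upper, middle and lower thirds.
        rud, rmd, rld = [float(p.sum()) / total for p in thirds]
        return rud, rmd, rld
    ```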

  2. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
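    The matching-and-relative-pose step can be sketched with OpenCV as below. ORB is used instead of SURF (SURF requires OpenCV's non-free contrib build), and RANSAC-based outlier rejection is delegated to findEssentialMat rather than a Preemptive RANSAC implementation, so this is an approximation of the described pipeline.

    ```python
    # Feature matching and relative pose between two consecutive frames (OpenCV).
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """img1, img2: consecutive grayscale frames; K: 3x3 camera intrinsic matrix."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC inside findEssentialMat separates inliers from mismatches.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        return R, t   # rotation and (unit-scale) translation between the two frames
    ```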

  3. Chroma Subsampling Influence on the Perceived Video Quality for Compressed Sequences in High Resolutions

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2017-01-01

    This paper deals with the influence of chroma subsampling on perceived video quality measured by subjective metrics. The evaluation was done for the two most used video codecs, H.264/AVC and H.265/HEVC. Eight types of video sequences with Full HD and Ultra HD resolutions, differing in content, were tested. The experimental results showed that observers did not see the difference between unsubsampled and subsampled sequences, so using subsampled videos is preferable, as up to 50% of the amount of data can be saved. Also, the minimum bitrates needed to achieve good and fair quality with each codec and resolution were determined.

  4. Spatial-temporal forensic analysis of mass casualty incidents using video sequences.

    Science.gov (United States)

    Hao Dong; Juechen Yin; Schafer, James; Ganz, Aura

    2016-08-01

    In this paper we introduce DIORAMA-based forensic analysis of mass casualty incidents (MCI) using video sequences. The video sequences captured on site are automatically annotated with metadata, which includes the capture time and the camera location and viewing direction. Using a visual interface the MCI investigators can easily understand the availability of video clips in specific areas of interest, and efficiently review them. The video-based forensic analysis system will enable the MCI investigators to better understand the rescue operations and subsequently improve training procedures.

  5. Detection and Localization of Anomalous Motion in Video Sequences from Local Histograms of Labeled Affine Flows

    Directory of Open Access Journals (Sweden)

    Juan-Manuel Pérez-Rúa

    2017-05-01

    We propose an original method for detecting and localizing anomalous motion patterns in videos from a camera view-based motion representation perspective. Anomalous motion should be taken in a broad sense, i.e., unexpected, abnormal, singular, irregular, or unusual motion. Identifying distinctive dynamic information at any time point and at any image location in a sequence of images is a key requirement in many situations and applications. The proposed method relies on so-called labeled affine flows (LAF), involving both affine velocity vectors and affine motion classes. At every pixel, a motion class is inferred from the affine motion model selected in a set of candidate models estimated over a collection of windows. Then, the image is subdivided in blocks where motion class histograms weighted by the affine motion vector magnitudes are computed. They are compared blockwise to histograms of normal behaviors with a dedicated distance. More specifically, we introduce the local outlier factor (LOF) to detect anomalous blocks. LOF is a local flexible measure of the relative density of data points in a feature space, here the space of LAF histograms. By thresholding the LOF value, we can detect an anomalous motion pattern in any block at any time instant of the video sequence. The threshold value is automatically set in each block by means of statistical arguments. We report comparative experiments on several real video datasets, demonstrating that our method is highly competitive for the intricate task of detecting different types of anomalous motion in videos. Specifically, we obtain very competitive results on all the tested datasets: 99.2% AUC for UMN, 82.8% AUC for UCSD, and 95.73% accuracy for PETS 2009, at the frame level.
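    A sketch of the block-wise LOF test is given below using scikit-learn's LocalOutlierFactor in novelty mode; the construction of the labeled-affine-flow histograms and the per-block automatic threshold are assumed to be handled upstream.

    ```python
    # Block-wise anomaly test on motion-class histograms using LOF (scikit-learn).
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    def fit_block_model(normal_histograms, n_neighbors=20):
        """normal_histograms: (N, B) array of histograms of normal behaviour for one block."""
        lof = LocalOutlierFactor(n_neighbors=n_neighbors, novelty=True)
        return lof.fit(normal_histograms)

    def is_anomalous(model, histogram, threshold=0.0):
        # decision_function is positive for inliers and negative for outliers.
        score = model.decision_function(histogram.reshape(1, -1))[0]
        return score < threshold
    ```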

  6. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constantly operating surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...

  7. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    Science.gov (United States)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores each gesture of the alphabet by its feature description, generated frame by frame. The recognition algorithm takes as input a video sequence (a sequence of frames) to be labeled, puts each frame in correspondence with a gesture from the database, or decides that there is no suitable gesture in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of successive frames labeled with the same gesture is grouped into a single static gesture. We propose a method for combined segmentation of a frame from the depth map and the RGB image. The primary segmentation is based on the depth map: it gives information about the position of the hands and provides a rough hand border. Then, the border is refined based on the color image and the shape of the hand is analyzed. The continuous skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the position of the fingers relative to the wrist. Experiments were carried out with the developed algorithm on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.

  8. Quality-Aware Estimation of Facial Landmarks in Video Sequences

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Face alignment in video is a primitive step for facial image analysis. The accuracy of the alignment greatly depends on the quality of the face image in the video frames, and low quality faces are proven to cause erroneous alignment. Thus, this paper proposes a system for quality aware face alignment by using a Supervised Descent Method (SDM) along with a motion based forward extrapolation method. The proposed system first extracts faces from video frames. Then, it employs a face quality assessment technique to measure the face quality. If the face quality is high, the proposed system uses SDM for facial landmark detection. If the face quality is low the proposed system corrects the facial landmarks that are detected by SDM. Depending upon the face velocity in consecutive video frames and the face quality measure, two algorithms are proposed for correction of landmarks in low quality faces by using...

  9. Finding and Improving the Key-Frames of Long Video Sequences for Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2010-01-01

    Face recognition systems are very sensitive to the quality and resolution of their input face images. This makes such systems unreliable when working with long surveillance video sequences without employing some selection and enhancement algorithms. On the other hand, processing all the frames of such video sequences by any enhancement or even face recognition algorithm is demanding. Thus, there is a need for a mechanism to summarize the input video sequence to a set of key-frames and then to apply an enhancement algorithm to this subset. This paper presents a system doing exactly this. The system uses face quality assessment to select the key-frames and a hybrid super-resolution to enhance the face image quality. The suggested system, which employs a linear associator face recognizer to evaluate the enhanced results, has been tested on real surveillance video sequences and the experimental results...

  10. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    Science.gov (United States)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

    This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural network based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work done on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.
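    The second approach (non-linear regression) is commonly realized in VQA work as a 4-parameter logistic fit from an objective metric to subjective scores; the sketch below uses that convention, which is an assumption rather than the authors' stated model.

    ```python
    # Fit a 4-parameter logistic mapping from an objective metric to subjective scores.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, b1, b2, b3, b4):
        return b1 / (1.0 + np.exp(-b2 * (x - b3))) + b4

    def fit_metric_to_mos(metric_values, mos_values):
        # Rough initial guess: full score range, gentle slope, centered on the metric mean.
        p0 = [mos_values.max() - mos_values.min(), 0.1,
              metric_values.mean(), mos_values.min()]
        params, _ = curve_fit(logistic, metric_values, mos_values, p0=p0, maxfev=10000)
        return lambda x: logistic(x, *params)
    ```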

  11. STUDY OF BLOCKING EFFECT ELIMINATION METHODS BY MEANS OF INTRAFRAME VIDEO SEQUENCE INTERPOLATION

    Directory of Open Access Journals (Sweden)

    I. S. Rubina

    2015-01-01

    The paper deals with image interpolation methods and their applicability to the elimination of artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the successive encoding steps. The main drawback of existing methods is their high computational complexity, which is unacceptable in video processing. Interpolation of signal samples for blocking-effect elimination at the output of the conversion encoding is proposed as part of the study. The goal was to develop methods that improve the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on segment borders through intraframe interpolation of video sequence segments. The core of the developed methods is the application of an adaptive recursive algorithm with an adaptively sized interpolation kernel, both with and without consideration of the brightness gradient at the boundaries of objects and video sequence blocks. In the theoretical part of the research, methods of information theory (rate-distortion theory and data redundancy elimination), pattern recognition, digital signal processing and probability theory are used. In the experimental part of the research, the compression algorithms were implemented in software and compared with existing ones. The proposed methods were compared with a simple averaging algorithm and the adaptive algorithm of central counting interpolation. The algorithm based on adaptive selection of the interpolation kernel size increases the compression ratio by 30%, and its modified version increases the compression ratio by 35% in comparison with existing algorithms, while improving the quality of the reconstructed video sequence by 3% compared to compression without interpolation. The findings will be...

  12. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    The "Atmosphere-Space Interactions Monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed...

  13. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

    The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame and, in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast and suitable for real-time video sequences. The algorithm is invariant to large scale and pose variations. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 second on a 3.2 GHz P4 machine.

  14. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion (translation and rotation in 3D) and tightening/releasing the clamp. A gesture dictionary was defined and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for those activities where the movements of the robotic arm were not previously scheduled, for training the robot more easily than with a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  15. A DNA sequence obtained by replacement of the dopamine RNA aptamer bases is not an aptamer

    DEFF Research Database (Denmark)

    Álvarez-Martos, Isabel; Ferapontova, Elena

    2017-01-01

    The aptamer sequence capable of specific binding of dopamine is a 57 nucleotides long RNA sequence reported in 1997 (Biochemistry, 1997, 36, 9726). Later, it was suggested that the DNA homologue of the RNA aptamer retains the specificity of dopamine binding (Biochem. Biophys. Res. Commun., 2009, 388, 732). Here, we show that the DNA sequence obtained by the replacement of the RNA aptamer bases for their DNA analogues is not capable of specific biorecognition of dopamine, in contrast to the original RNA aptamer sequence. This DNA sequence binds dopamine and structurally related catecholamine neurotransmitters non-specifically, as any DNA sequence, and, thus...

  16. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

    CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can vary strongly, going from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, and thus a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. However, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm for both reducing the dynamic range of video sequences and enhancing their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.

  17. Using Grounded Theory to Analyze Qualitative Observational Data that is Obtained by Video Recording

    Directory of Open Access Journals (Sweden)

    Colin Griffiths

    2013-06-01

    Full Text Available This paper presents a method for the collection and analysis of qualitative data that is derived by observation and that may be used to generate a grounded theory. Video recordings were made of the verbal and non-verbal interactions of people with severe and complex disabilities and the staff who work with them. Three dyads composed of a student/teacher or carer and a person with a severe or profound intellectual disability were observed in a variety of different activities that took place in a school. Two of these recordings yielded 25 minutes of video, which was transcribed into narrative format. The nature of the qualitative micro data that was captured is described and the fit between such data and classic grounded theory is discussed. The strengths and weaknesses of the use of video as a tool to collect data that is amenable to analysis using grounded theory are considered. The paper concludes by suggesting that using classic grounded theory to analyze qualitative data that is collected using video offers a method that has the potential to uncover and explain patterns of non-verbal interactions that were not previously evident.

  18. Secondary structure-based analysis of mouse brain small RNA sequences obtained by using next-generation sequencing.

    Science.gov (United States)

    Kiyosawa, Hidenori; Okumura, Akio; Okui, Saya; Ushida, Chisato; Kawai, Gota

    2015-08-01

    In order to find novel structured small RNAs, next-generation sequencing was applied to small RNA fractions with lengths ranging from 40 to 140 nt and secondary structure-based clustering was performed. Sequences of structured RNAs were effectively clustered and analyzed by secondary structure. Although more than 99% of the obtained sequences were known RNAs, 16 candidate mouse structured small non-coding RNAs (MsncRs) were isolated. Based on these results, the merits of secondary structure-based analysis are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Hybridized genetic-immune based strategy to obtain optimal feasible assembly sequences

    Directory of Open Access Journals (Sweden)

    Bala Murali Gunji

    2017-06-01

    An appropriate sequence of assembly operations increases productivity and enhances product quality, thereby decreasing the overall cost and manufacturing lead time. Achieving such an assembly sequence is a complex combinatorial optimization problem with a huge search space and multiple assembly qualifying criteria. The purpose of the current research work is to develop an intelligent strategy to obtain an optimal assembly sequence subject to the assembly predicates. This paper presents a novel hybrid artificial intelligence technique, which executes an Artificial Immune System (AIS) in combination with a Genetic Algorithm (GA) to find an optimal feasible assembly sequence from the possible assembly sequences. Two immune models are introduced in the current research work: (1) a bone marrow model for generating possible assembly sequences and reducing system redundancy, and (2) a negative selection model for obtaining feasible assembly sequences. Later, these two models are integrated with the GA in order to obtain an optimal assembly sequence. The proposed AIS-GA algorithm aims at enhancing the performance of the AIS by incorporating the GA as a local search strategy to achieve a globally optimal solution for assemblies with a large number of parts. The proposed algorithm is implemented on a mechanical assembly composed of eleven parts joined by several connectors. The method is found to be successful in achieving the globally optimal solution with less computational time compared to traditional artificial intelligence techniques.

  20. GrabCut-Based Human Segmentation in Video Sequences

    Science.gov (United States)

    Hernández-Vela, Antonio; Reyes, Miguel; Ponce, Víctor; Escalera, Sergio

    2012-01-01

    In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by a HOG-based subject detection, face detection, and skin color model. Spatial information is included by Mean Shift clustering whereas temporal coherence is considered by the history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results over public datasets and in a new Human Limb dataset show a robust segmentation and recovery of both face and pose using the presented methodology. PMID:23202215
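    A minimal GrabCut segmentation seeded by a person bounding box (e.g. from a HOG detector) looks as follows; the Mean Shift clustering, GMM history and AAM/CRF refinements of the full methodology are not reproduced here.

    ```python
    # GrabCut person segmentation seeded with a detection rectangle (OpenCV).
    import cv2
    import numpy as np

    def segment_person(frame_bgr, rect):
        """rect = (x, y, w, h) bounding box of the detected person."""
        mask = np.zeros(frame_bgr.shape[:2], np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        cv2.grabCut(frame_bgr, mask, rect, bgd_model, fgd_model, 5,
                    cv2.GC_INIT_WITH_RECT)
        # Pixels marked as definite or probable foreground form the person mask.
        person = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
        return person.astype(np.uint8)
    ```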

  1. Automatic Representation and Segmentation of Video Sequences via a Novel Framework Based on the nD-EVM and Kohonen Networks

    Directory of Open Access Journals (Sweden)

    José-Yovany Luis-García

    2016-01-01

    Recently in the Computer Vision field, a subject of interest, at least in almost every video application based on scene content, is video segmentation. Some of these applications are indexing, surveillance, medical imaging, event analysis, and computer-guided surgery, to name a few. To achieve their goals, these applications need meaningful information about a video sequence, in order to understand the events in its corresponding scene. Therefore, we need semantic information which can be obtained from objects of interest that are present in the scene. In order to recognize objects we need to compute features which aid the finding of similarities and dissimilarities, among other characteristics. For this reason, one of the most important tasks for video and image processing is segmentation. The segmentation process consists in separating data into groups that share similar features. Based on this, in this work we propose a novel framework for video representation and segmentation. The main workflow of this framework is the processing of an input frame sequence in order to obtain, as output, a segmented version. For video representation we use the Extreme Vertices Model in the n-Dimensional Space, while we use the Discrete Compactness descriptor as feature and Kohonen Self-Organizing Maps for segmentation purposes.

  2. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm is offered based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) and hierarchical search MP (hierarchical search or mean pyramid). All motion estimation algorithms have been implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were: speed, peak signal to noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal to noise ratio in different video sequences shows both better and worse results than the characteristics of known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it possible to recommend the algorithm for telecommunication systems for multimedia data storage, transmission and processing.
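    For context, the cost function that ARPS/HARPS accelerate is the one minimized by plain block matching; the baseline full-search sketch below makes it explicit, while ARPS would evaluate only a small rood-shaped candidate set around a predicted vector.

    ```python
    # Baseline full-search block matching for one block (NumPy), SAD cost.
    import numpy as np

    def match_block(ref, cur, y, x, block=16, search=7):
        """Return the motion vector (dy, dx) minimising SAD for the block at (y, x)."""
        h, w = cur.shape
        target = cur[y:y + block, x:x + block].astype(np.int32)
        best, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                    continue   # candidate block would leave the reference frame
                cand = ref[yy:yy + block, xx:xx + block].astype(np.int32)
                sad = np.abs(target - cand).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dy, dx)
        return best_mv
    ```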

  3. A DNA sequence obtained by replacement of the dopamine RNA aptamer bases is not an aptamer.

    Science.gov (United States)

    Álvarez-Martos, Isabel; Ferapontova, Elena E

    2017-08-05

    A unique specificity of the aptamer-ligand biorecognition and binding facilitates bioanalysis and biosensor development, contributing to the discrimination of structurally related molecules, such as dopamine and other catecholamine neurotransmitters. The aptamer sequence capable of specific binding of dopamine is a 57 nucleotides long RNA sequence reported in 1997 (Biochemistry, 1997, 36, 9726). Later, it was suggested that the DNA homologue of the RNA aptamer retains the specificity of dopamine binding (Biochem. Biophys. Res. Commun., 2009, 388, 732). Here, we show that the DNA sequence obtained by the replacement of the RNA aptamer bases for their DNA analogues is not capable of specific biorecognition of dopamine, in contrast to the original RNA aptamer sequence. This DNA sequence binds dopamine and structurally related catecholamine neurotransmitters non-specifically, as any DNA sequence, and, thus, is not an aptamer and cannot be used for either in vivo or in situ analysis of dopamine in the presence of structurally related neurotransmitters. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Classification of video sequences into chosen generalized use classes of target size and lighting level.

    Science.gov (United States)

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance for the ability to achieve a recognition task. They are: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance to the end-users' opinion. Lighting levels of the entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.

  5. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    Science.gov (United States)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard is used to code the first, i.e. intra, frame of a video and obtains good coding efficiency compared to earlier video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity and provides better rate-distortion performance. To code the intra frame, the existing rate-distortion optimization (RDO) method is used. This method increases computational complexity, increases the bit rate and reduces picture quality, so it is difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in the H.264 standard using fast mode decision intra prediction algorithms based on different techniques resulted in increased bit rate and degradation of picture quality (PSNR) for different quantization parameters. Many previous fast mode decision approaches to intra frame coding only reduced computational complexity or saved encoding time, and their limitation was an increase in bit rate with loss of picture quality. In order to avoid the increase in bit rate and the loss of picture quality, a better approach was developed. In this paper we develop such an approach, i.e. a Gaussian pulse for intra frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with the 4x4 frequency-domain coefficients of each 4x4 sub-macroblock of the current frame before the quantization process. Multiplying each 4x4 block of integer-transformed coefficients by the Gaussian pulse at the macroblock level scales the information of the coefficients in a reversible manner. Frequency samples are scaled in a known and controllable manner without intermixing of coefficients, which avoids...
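    The core operation as described, i.e. reversibly scaling each 4x4 block of transform coefficients by a Gaussian-shaped window before quantization, can be sketched as follows; a separable 4x4 DCT stands in for the H.264 integer transform and the window parameters are illustrative, not the paper's exact pulse.

    ```python
    # Reversible Gaussian weighting of 4x4 transform coefficients (NumPy sketch).
    import numpy as np

    def dct_matrix(n=4):
        # Orthonormal DCT-II basis; stands in for the H.264 4x4 integer transform.
        k = np.arange(n)
        D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        D[0, :] *= 1 / np.sqrt(2)
        return D * np.sqrt(2.0 / n)

    def gaussian_window(n=4, sigma=1.5):
        y, x = np.mgrid[0:n, 0:n]
        return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))   # peak at the DC coefficient

    def weight_block(block4x4, sigma=1.5):
        D = dct_matrix()
        coef = D @ block4x4.astype(np.float64) @ D.T    # forward transform
        return coef * gaussian_window(sigma=sigma)      # reversible elementwise scaling

    def unweight_block(weighted, sigma=1.5):
        D = dct_matrix()
        coef = weighted / gaussian_window(sigma=sigma)  # invert the scaling
        return D.T @ coef @ D                           # inverse transform
    ```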

  6. Anticipatory Eye Movements While Watching Continuous Action Across Shots in Video Sequences: A Developmental Study.

    Science.gov (United States)

    Kirkorian, Heather L; Anderson, Daniel R

    2017-07-01

    Eye movements were recorded as 12-month-olds (n = 15), 4-year-olds (n = 17), and adults (n = 19) watched a 15-min video with sequences of shots conveying continuous motion. The central question was whether, and at what age, viewers anticipate the reappearance of objects following cuts to new shots. Adults were more likely than younger viewers to make anticipatory eye movements. Four-year-olds responded to transitions more slowly and tended to fixate the center of the screen. Infants' eye movement patterns reflected a tendency to react rather than anticipate. Findings are consistent with the hypothesis that adults integrate content across shots and understand how space is represented in edited video. Results are interpreted with respect to a developing understanding of film editing due to experience and cognitive maturation. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  7. Sequence Capture and Phylogenetic Utility of Genomic Ultraconserved Elements Obtained from Pinned Insect Specimens.

    Directory of Open Access Journals (Sweden)

    Bonnie B Blaimer

    Full Text Available Obtaining sequence data from historical museum specimens has been a growing research interest, invigorated by next-generation sequencing methods that allow inputs of highly degraded DNA. We applied a target enrichment and next-generation sequencing protocol to generate ultraconserved elements (UCEs) from 51 large carpenter bee specimens (genus Xylocopa), representing 25 species with specimen ages ranging from 2-121 years. We measured the correlation between specimen age and DNA yield (pre- and post-library preparation DNA concentration) and several UCE sequence capture statistics (raw read count, UCE reads on target, UCE mean contig length and UCE locus count) with linear regression models. We performed piecewise regression to test for specific breakpoints in the relationship of specimen age and DNA yield and sequence capture variables. Additionally, we compared UCE data from newer and older specimens of the same species and reconstructed their phylogeny in order to confirm the validity of our data. We recovered 6-972 UCE loci from samples with pre-library DNA concentrations ranging from 0.06-9.8 ng/μL. All investigated DNA yield and sequence capture variables were significantly but only moderately negatively correlated with specimen age. Specimens of age 20 years or less had significantly higher pre- and post-library concentrations, UCE contig lengths, and locus counts compared to specimens older than 20 years. We found breakpoints in our data indicating a decrease of the initial detrimental effect of specimen age on pre- and post-library DNA concentration and UCE contig length starting around 21-39 years after preservation. Our phylogenetic results confirmed the integrity of our data, giving preliminary insights into relationships within Xylocopa. We consider the effect of additional factors not measured in this study on our age-related sequence capture results, such as DNA fragmentation and preservation method, and discuss the promise of the UCE
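
    The breakpoint analysis mentioned above can be sketched with an ordinary two-segment fit; the variable names, the initial guesses and the use of scipy's curve_fit are illustrative assumptions and not the authors' code.

        import numpy as np
        from scipy.optimize import curve_fit

        def piecewise_linear(x, x0, y0, k1, k2):
            # two linear segments joined at the breakpoint x0
            x = np.asarray(x, dtype=float)
            return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

        def fit_breakpoint(ages, yields):
            # ages: specimen ages in years; yields: e.g. post-library DNA concentration
            p0 = [np.median(ages), np.median(yields), -1.0, -0.1]  # rough initial guess
            params, _ = curve_fit(piecewise_linear, ages, yields, p0=p0)
            return params  # breakpoint x0, level y0, slope before, slope after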

  8. Enhancer sequences from Arabidopsis thaliana obtained by library transformation of Nicotiana tabacum.

    Science.gov (United States)

    Ott, R W; Chua, N H

    1990-09-01

    In this paper we report on the use of a bidirectional enhancer cloning vehicle to isolate and characterize new enhancer sequences from Arabidopsis thaliana. A library of A. thaliana genomic Sau3A segments was constructed in Escherichia coli in the binary plasmid enhancer cloning vehicle pROA97. The T-DNA based vector carries abbreviated TATA regions from the cauliflower mosaic virus 35S transcription unit upstream of two genes. The library was transferred via triparental mating into Agrobacterium tumefaciens. The neomycin phosphotransferase II gene was used for selection of kanamycin-resistant transformed tobacco callus cells. Approximately 1100 transgenic plants were regenerated and assayed for expression of the E. coli beta-glucuronidase (GUS) gene in leaves, stems, roots, or seeds. Plasmids carrying putative enhancer sequences were rescued from the genomes of transgenic plants and the cloned sequences were assayed for enhancer function in genetic selection experiments. Plants were regenerated from the kanamycin-resistant calli obtained in the secondary transformation experiments. Histochemical analysis of GUS activity in the leaf, stem, and root tissues of transgenic plants showed a variety of expression patterns. The DNA sequences are presented of five Arabidopsis segments which confer enhancer function.

  9. Automatic real-time tracking of fetal mouth in fetoscopic video sequence for supporting fetal surgeries

    Science.gov (United States)

    Xu, Rong; Xie, Tianliang; Ohya, Jun; Zhang, Bo; Sato, Yoshinobu; Fujie, Masakatsu G.

    2013-03-01

    Recently, a minimally invasive surgery (MIS) called fetoscopic tracheal occlusion (FETO) was developed to treat severe congenital diaphragmatic hernia (CDH) via fetoscopy, in which a detachable balloon is placed in the fetal trachea to prevent pulmonary hypoplasia by increasing the pressure in the chest cavity. This surgery is so dangerous that a supporting system for navigating the surgery is deemed necessary. In this paper, to guide a surgical tool to be inserted into the fetal trachea, an automatic approach is proposed to detect and track the fetal face and mouth in fetoscopic video sequences. More specifically, the AdaBoost algorithm is utilized as a classifier to detect the fetal face based on Haar-like features, which calculate the difference between the sums of the pixel intensities in adjacent regions at specific locations in a detection window. Then, the CamShift algorithm, based on an iterative search in a color histogram, is applied to track the fetal face, and the fetal mouth is fitted by an ellipse detected via an improved iterative randomized Hough transform approach. The experimental results demonstrate that the proposed automatic approach can accurately detect and track the fetal face and mouth in real time in a fetoscopic video sequence, as well as provide effective and timely feedback to the robot control system of the surgical tool for FETO surgeries.
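
    The detect-then-track pattern described above can be outlined with OpenCV as follows; the stock frontal-face Haar cascade stands in for a fetal-face classifier trained on fetoscopic images, and the histogram size, termination criteria and file paths are placeholders, so this is only a sketch of the general approach, not the authors' implementation.

        import cv2
        import numpy as np

        # stand-in for a Haar cascade trained on fetal faces
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_face(frame):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return None
            x, y, w, h = [int(v) for v in faces[0]]
            return (x, y, w, h)

        def track(video_path):
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            box = detect_face(frame)                  # AdaBoost/Haar detection step
            if box is None:
                raise RuntimeError("no face detected in the first frame")
            x, y, w, h = box
            roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([roi], [0], None, [32], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                back = cv2.calcBackProject(
                    [cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)], [0], hist, [0, 180], 1)
                _rot_rect, box = cv2.CamShift(back, box, term)   # CamShift tracking step
                # an ellipse fit of the segmented mouth region (e.g. cv2.fitEllipse)
                # would follow here to localize the mouth inside the tracked face
            cap.release()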

  10. Bacteria obtained from a sequencing batch reactor that are capable of growth on dehydroabietic acid.

    OpenAIRE

    Mohn, W W

    1995-01-01

    Eleven isolates capable of growth on the resin acid dehydroabietic acid (DhA) were obtained from a sequencing batch reactor designed to treat a high-strength process stream from a paper mill. The isolates belonged to two groups, represented by strains DhA-33 and DhA-35, which were characterized. In the bioreactor, bacteria like DhA-35 were more abundant than those like DhA-33. The population in the bioreactor of organisms capable of growth on DhA was estimated to be 1.1 x 10^6 propagules per...

  11. Hardware architectures for real time processing of High Definition video sequences

    OpenAIRE

    Genovese, Mariangela

    2014-01-01

    Currently, application fields such as medicine, space exploration, surveillance, authentication, HDTV, and automated industry inspection require capturing, storing and processing continuous streams of video data. Consequently, different processing techniques (video enhancement, segmentation, object detection, or video compression, for example) are involved in these applications. Such techniques often require a significant number of operations depending on the algorithm complexity and the video ...

  12. Metagenomes obtained by "deep sequencing" - what do they tell about the EBPR communities

    DEFF Research Database (Denmark)

    Albertsen, Mads; Saunders, Aaron Marc; Nielsen, Kåre Lehmann

    -diversity at genome level and the implications for stable plant operation and P-removal will be an interesting question to investigate further. One current limitation for application of metagenomics and metatranscriptomics on a systems level is the need of more reference genomes that are closely related......Metagenomes obtained by "deep sequencing" - what do they tell about the EBPR communities? Mads Albertsen1, Aaron M. Saunders1, Kåre L. Nielsen1 and Per H. Nielsen1 1 Department of Biotechnology, Chemistry and Environmental Engineering, Aalborg University, Aalborg, Denmark Presenting Author: Mads...... Albertsen Keywords: Metagenomics; Accumulibacter; Micro-diversity; Enhanced Biological Phosphorus Removal Introduction Metagenomics, or environmental genomics, provides comprehensive information about the entire microbial community of a certain ecosystem, e.g. a wastewater treatment plant. So far...

  13. Predicting human activities in sequences of actions in RGB-D videos

    Science.gov (United States)

    Jardim, David; Nunes, Luís.; Dias, Miguel

    2017-03-01

    In our daily activities we perform prediction or anticipation when interacting with other humans or with objects. Prediction of human activity by computers has several potential applications: surveillance systems, human-computer interfaces, sports video analysis, human-robot collaboration, games and health care. We propose a system capable of recognizing and predicting human actions using supervised classifiers trained with automatically labeled data, evaluated on our human activity RGB-D dataset (recorded with a Kinect sensor) and using only the positions of the main skeleton joints to extract features. Conditional random fields (CRFs) have been used before to model the sequential nature of actions in a sequence, but where other approaches try to predict an outcome or anticipate ahead in time (seconds), we try to predict what the next action of a subject will be. Our results show an activity prediction accuracy of 89.9% using an automatically labeled dataset.

  14. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from video sequences acquired by a moving camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving-camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts and the hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are used as new image features, analyzed by PCA and stored in the database. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in the hand blob. We demonstrate the effectiveness of the proposed method in experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. The experimental recognition results show that the PCA-based approach performs better than the Condensation-algorithm-based method.
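
    A small sketch of PCA-plus-nearest-centroid classification of gesture trajectories, assuming each trajectory has already been resampled to a fixed number of 2-D hand positions; the dimensions and the nearest-centroid rule are illustrative assumptions rather than the exact recognizer used in the paper.

        import numpy as np
        from sklearn.decomposition import PCA

        def to_matrix(trajectories):
            # each trajectory: array of shape (T, 2), already resampled to a fixed T
            return np.stack([t.reshape(-1) for t in trajectories])

        def build_model(train_trajectories, train_labels, n_components=8):
            X = to_matrix(train_trajectories)
            pca = PCA(n_components=n_components).fit(X)
            Z = pca.transform(X)
            labels = np.asarray(train_labels)
            centroids = {lab: Z[labels == lab].mean(axis=0) for lab in set(train_labels)}
            return pca, centroids

        def classify(pca, centroids, trajectory):
            z = pca.transform(trajectory.reshape(1, -1))[0]
            return min(centroids, key=lambda lab: np.linalg.norm(z - centroids[lab]))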

  15. Dual-polarization radar rainfall estimation in Korea according to raindrop shapes obtained by using a 2-D video disdrometer

    Science.gov (United States)

    Kim, Hae-Lim; Suk, Mi-Kyung; Park, Hye-Sook; Lee, Gyu-Won; Ko, Jeong-Seok

    2016-08-01

    Polarimetric measurements are sensitive to the sizes, concentrations, orientations, and shapes of raindrops. Thus, rainfall rates calculated from polarimetric radar are influenced by raindrop shape and canting. The mean raindrop shape can be obtained from long-term raindrop size distribution (DSD) observations, and raindrop shape can play an important role in polarimetric rainfall algorithms based on differential reflectivity (ZDR) and specific differential phase (KDP). However, the mean raindrop shape is associated with the variation of the DSD, which can change depending on precipitation type and climatic regime. Furthermore, these relationships have not been studied extensively on the Korean Peninsula. In this study, we present a method to find optimal polarimetric rainfall algorithms for the Korean Peninsula by using data provided by both a two-dimensional video disdrometer (2DVD) and the Bislsan S-band dual-polarization radar. First, a new axis-ratio relation was developed to improve radar rainfall estimation. Second, polarimetric rainfall algorithms were derived by using different axis-ratio relations. The rain gauge data were used to represent the ground truth, and the estimated radar-point hourly mean rain rates obtained from the different polarimetric rainfall algorithms were compared with the hourly rain rates measured by a rain gauge. The daily calibration biases of horizontal reflectivity (ZH) and differential reflectivity (ZDR) were calculated by comparing ZH and ZDR radar measurements with the same parameters simulated by the 2DVD. Overall, the newly derived axis ratio was similar to the existing axis ratio except for small particles (≤ 2 mm) and large particles (≥ 5.5 mm). The raindrop shapes obtained with the new axis-ratio relation from the 2DVD were more oblate than the shapes obtained with the existing relations. The combined polarimetric rainfall relations using ZDR and KDP were more efficient than
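
    The polarimetric rainfall relations referred to above typically take a power-law form; the sketch below shows that general form, with coefficient values that are placeholders for illustration only and not the ones derived from the Korean 2DVD data.

        import numpy as np

        def rain_rate_kdp(kdp, a=44.0, b=0.82):
            # R(KDP) power law in mm/h, KDP in deg/km; a and b are illustrative
            return a * np.sign(kdp) * np.abs(kdp) ** b

        def rain_rate_kdp_zdr(kdp, zdr_db, a=90.0, b=0.93, c=-1.7):
            # combined R(KDP, ZDR) of the form a * |KDP|^b * Zdr^c, with ZDR
            # converted from dB to linear units; coefficients again illustrative
            zdr_linear = 10.0 ** (0.1 * zdr_db)
            return a * np.abs(kdp) ** b * zdr_linear ** c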

  16. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    Science.gov (United States)

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  17. Complete genome sequence of citrus huanglongbing bacterium, 'Candidatus Liberibacter asiaticus' obtained through metagenomics.

    Science.gov (United States)

    Duan, Yongping; Zhou, Lijuan; Hall, David G; Li, Wenbin; Doddapaneni, Harshavardhan; Lin, Hong; Liu, Li; Vahling, Cheryl M; Gabriel, Dean W; Williams, Kelly P; Dickerman, Allan; Sun, Yijun; Gottwald, Tim

    2009-08-01

    Citrus huanglongbing is the most destructive disease of citrus worldwide. It is spread by citrus psyllids and is associated with a low-titer, phloem-limited infection by any of three uncultured species of alpha-Proteobacteria, 'Candidatus Liberibacter asiaticus', 'Ca. L. americanus', and 'Ca. L. africanus'. A complete circular 'Ca. L. asiaticus' genome has been obtained by metagenomics, using the DNA extracted from a single 'Ca. L. asiaticus'-infected psyllid. The 1.23-Mb genome has an average 36.5% GC content. Annotation revealed a high percentage of genes involved in both cell motility (4.5%) and active transport in general (8.0%), which may contribute to its virulence. 'Ca. L. asiaticus' appears to have a limited ability for aerobic respiration and is likely auxotrophic for at least five amino acids. Consistent with its intracellular nature, 'Ca. L. asiaticus' lacks type III and type IV secretion systems as well as typical free-living or plant-colonizing extracellular degradative enzymes. 'Ca. L. asiaticus' appears to have all type I secretion system genes needed for both multidrug efflux and toxin effector secretion. Multi-protein phylogenetic analysis confirmed 'Ca. L. asiaticus' as an early-branching and highly divergent member of the family Rhizobiaceae. This is the first genome sequence of an uncultured alpha-proteobacteria that is both an intracellular plant pathogen and insect symbiont.

  18. Draft genome sequences of 9 LA-MRSA ST5 isolates obtained from humans after short term swine contact

    Science.gov (United States)

    Livestock associated methicillin resistant Staphylococcus aureus (LA-MRSA) sequence type 5 have raised concerns surrounding the potential for these isolates to colonize or cause disease in humans with swine contact. Here, we report draft genome sequences for 9 LA-MRSA ST5 isolates obtained from huma...

  19. Complete Genome Sequence of the Goatpox Virus Strain Gorgan Obtained Directly from a Commercial Live Attenuated Vaccine.

    Science.gov (United States)

    Mathijs, Elisabeth; Vandenbussche, Frank; Haegeman, Andy; Al-Majali, Ahmad; De Clercq, Kris; Van Borm, Steven

    2016-10-13

    This is a report of the complete genome sequence of the goatpox virus strain Gorgan, which was obtained directly from a commercial live attenuated vaccine (Caprivac, Jordan Bio-Industries Centre). Copyright © 2016 Mathijs et al.

  20. Complete Genome Sequence of Bluetongue Virus Serotype 1 Circulating in Italy, Obtained through a Fast Next-Generation Sequencing Protocol

    Science.gov (United States)

    Marcacci, Maurilia; Ancora, Massimo; Mangone, Iolanda; Leone, Alessandra; Marini, Valeria; Cammà, Cesare; Savini, Giovanni

    2014-01-01

    A field strain of the bluetongue virus serotype 1 (BTV-1) was isolated from infected sheep in Sardinia, Italy, in October 2013. The genome was sequenced using Ion Torrent technology. BTV-1 strain SAD2013 belongs to the Western topotype of BTV-1, clustering with BTV-1 strains isolated in Europe and northern Africa since 2006. PMID:24526649

  1. A Macro-Observation Scheme for Abnormal Event Detection in Daily-Life Video Sequences

    Directory of Open Access Journals (Sweden)

    Chiu Wei-Yao

    2010-01-01

    Full Text Available Abstract We propose a macro-observation scheme for abnormal event detection in daily life. The proposed macro-observation representation records the time-space energy of the motions of all moving objects in a scene without segmenting individual object parts. The energy history of each pixel in the scene is instantly updated with exponential weights without explicitly specifying the duration of each activity. Since possible activities in daily life are numerous, distinct from each other, and not all abnormal events can be foreseen, images from a video sequence that spans sufficient repetition of normal day-to-day activities are first randomly sampled. A constrained clustering model is proposed to partition the sampled images into groups. A newly observed event whose distance from all of the cluster centroids is large is then classified as an anomaly. The proposed method has been evaluated on the daily work of a laboratory and the BEHAVE benchmark dataset. The experimental results reveal that it can reliably detect abnormal events such as burglary and fighting as long as they last for a sufficient duration of time. The proposed method can be used as a support system for scenes that require full-time monitoring personnel.
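
    The exponentially weighted, per-pixel energy history described above can be sketched as follows; the decay factor and the frame-difference threshold are illustrative assumptions, not the values used by the authors.

        import cv2
        import numpy as np

        def motion_energy_history(video_path, decay=0.95, diff_thr=25):
            # exponentially weighted accumulation of per-pixel motion
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
            energy = np.zeros_like(prev)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
                moving = (np.abs(gray - prev) > diff_thr).astype(np.float32)
                energy = decay * energy + (1.0 - decay) * moving  # recent motion weighs more
                prev = gray
            cap.release()
            return energy  # one macro-observation of the scene's motion energy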

  2. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry

    Science.gov (United States)

    Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina

    2013-12-01

    The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning (TLS) data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by calibration of the video camera; the mathematical model of the camera was based on the perspective projection. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of the video frames using points with known coordinates from Terrestrial Laser Scanning, and generation of a TIN model using automatic matching methods. The objects were measured with an impulse laser scanner, a Leica ScanStation 2. The 3D models of architectural objects created from video were compared with 3D models of the same objects for which a self-calibration bundle adjustment process was performed; for this purpose PhotoModeler software was used. To assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used, applying a shortest-distance method. The accuracy analysis showed that the 3D models generated from video images differ by about 0.06 to 0.13 m from the TLS data.

  3. Measuring eye movements during locomotion: filtering techniques for obtaining velocity signals from a video-based eye monitor

    Science.gov (United States)

    Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.

    1996-01-01

    Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye velocity records suffer from broad band noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did subjects.
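
    The combined filter evaluated in the study can be sketched in a few lines: a median filter to remove impulsive spikes followed by a moving average to suppress the remaining broad-band noise; the window lengths below are illustrative, not the values used by the authors.

        import numpy as np
        from scipy.signal import medfilt

        def filter_eye_velocity(velocity, median_win=5, avg_win=7):
            # median filter first (spikes), then moving average (broad-band noise)
            despiked = medfilt(np.asarray(velocity, dtype=float), kernel_size=median_win)
            kernel = np.ones(avg_win) / avg_win
            return np.convolve(despiked, kernel, mode="same")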

  4. Subjective quality of video sequences rendered on LCD with local backlight dimming at different lighting conditions

    DEFF Research Database (Denmark)

    Mantel, Claire; Korhonen, Jari; Pedersen, Jesper Mørkhøj

    2015-01-01

    This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low...

  5. Model-free 3D face shape reconstruction from video sequences

    NARCIS (Netherlands)

    van Dam, C.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    In forensic comparison of facial video data, often only the best quality frontal face frames are selected, and hence much video data is ignored. To improve 2D facial comparison for law enforcement and forensic investigation, we introduce a model-free 3D shape reconstruction algorithm based on 2D

  6. Landmark-based model-free 3D face shape reconstruction from video sequences

    NARCIS (Netherlands)

    van Dam, C.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Broemme, A.; Busch, C.

    2013-01-01

    In forensic comparison of facial video data, often only the best quality frontal face frames are selected, and hence potentially useful video data is ignored. To improve 2D facial comparison for law enforcement and forensic investigation, we introduce a model-free 3D shape reconstruction algorithm

  7. Draft genome sequences of 50 MRSA ST5 isolates obtained from a U.S. hospital

    Science.gov (United States)

    Methicillin resistant Staphylococcus aureus (MRSA) can be a commensal or pathogen in humans. Pathogenicity and disease are related to the acquisition of mobile genetic elements encoding virulence and antimicrobial resistance genes. Here, we report draft genome sequences for 50 clinical MRSA isolates...

  8. Draft genome sequences of 1 MSSA and 7 MRSA ST5 isolates obtained from California

    Science.gov (United States)

    Staphylococcus aureus is a commensal of humans that can cause a spectrum of diseases. An isolate’s capacity to cause disease is partially attributed to the acquisition of novel mobile genetic elements. This report provides the draft genome sequence of one methicillin susceptible and seven methicilli...

  9. Yeast diversity and novel yeast D1/D2 sequences from corn phylloplane obtained by a culture-independent approach.

    Science.gov (United States)

    Nasanit, Rujikan; Jaibangyang, Sopin; Tantirungkij, Manee; Limtong, Savitree

    2016-12-01

    Culture-independent techniques have recently been used for evaluation of microbial diversity in the environment since they address the problem of unculturable microorganisms. In this study, the diversity of epiphytic yeasts from corn (Zea mays Linn.) phylloplanes in Thailand was investigated using this technique and sequence-based analysis of the D1/D2 domains of the large subunit ribosomal DNA sequences. Thirty-seven samples of corn leaf were collected randomly from 10 provinces. The DNA was extracted from leaf washing samples and the D1/D2 domains were amplified. The PCR products were cloned and then screened by colony PCR. A total of 1049 clones were obtained from 37 clone libraries. From this total, 329 clones (213 sequences) were closely related to yeast strains in the GenBank database, and they were clustered into 77 operational taxonomic units (OTUs) with a similarity threshold of 99 %. The majority of sequences (98.5 %) were classified into the phylum Basidiomycota. Sixteen known yeast species were identified. Interestingly, more than 65 % of the D1/D2 sequences obtained by this technique were suggested to be sequences from new yeast taxa. The predominant yeast sequences detected belonged to the order Ustilaginales with a relative frequency of 68.0 %. The most common known yeast species detected on the leaf samples were Pseudozyma hubeiensis pro tem. and Moesziomyces antarcticus, with frequencies of occurrence of 24.3 % and 21.6 %, respectively.

  10. Real-Time Recognition of Action Sequences Using a DistributedVideo Sensor Network

    Directory of Open Access Journals (Sweden)

    Vinod Kulathumani

    2013-07-01

    Full Text Available In this paper, we describe how information obtained from multiple views using a network of cameras can be effectively combined to yield a reliable and fast human activity recognition system. First, we present a score-based fusion technique for combining information from multiple cameras that can handle the arbitrary orientation of the subject with respect to the cameras and that does not rely on a symmetric deployment of the cameras. Second, we describe how longer, variable duration, inter-leaved action sequences can be recognized in real-time based on multi-camera data that is continuously streaming in. Our framework does not depend on any particular feature extraction technique, and as a result, the proposed system can easily be integrated on top of existing implementations for view-specific classifiers and feature descriptors. For implementation and testing of the proposed system, we have used computationally simple locality-specific motion information extracted from the spatio-temporal shape of a human silhouette as our feature descriptor. This lends itself to an efficient distributed implementation, while maintaining a high frame capture rate. We demonstrate the robustness of our algorithms by implementing them on a portable multi-camera, video sensor network testbed and evaluating system performance under different camera network configurations.
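
    A minimal sketch of a score-based fusion rule of the kind described: each camera's view-specific classifier produces a score per action, and the scores are combined with a confidence-dependent weight; the margin-based weighting below is an illustrative choice, not necessarily the rule used by the authors.

        import numpy as np

        def fuse_camera_scores(score_matrix):
            # score_matrix: shape (n_cameras, n_actions), one score vector per camera
            scores = np.asarray(score_matrix, dtype=float)
            ranked = np.sort(scores, axis=1)
            margin = ranked[:, -1] - ranked[:, -2] + 1e-6   # per-camera confidence margin
            fused = (margin[:, None] * scores).sum(axis=0) / margin.sum()
            return int(np.argmax(fused)), fused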

  11. Sequencing and Analysis of Globally Obtained Human Respiratory Syncytial Virus A and B Genomes

    Science.gov (United States)

    Bose, Michael E.; He, Jie; Shrivastava, Susmita; Nelson, Martha I.; Bera, Jayati; Halpin, Rebecca A.; Town, Christopher D.; Lorenzi, Hernan A.; Noyola, Daniel E.; Falcone, Valeria; Gerna, Giuseppe; De Beenhouwer, Hans; Videla, Cristina; Kok, Tuckweng; Venter, Marietjie; Williams, John V.; Henrickson, Kelly J.

    2015-01-01

    Background Human respiratory syncytial virus (RSV) is the leading cause of respiratory tract infections in children globally, with nearly all children experiencing at least one infection by the age of two. Partial sequencing of the attachment glycoprotein gene is conducted routinely for genotyping, but relatively few whole genome sequences are available for RSV. The goal of our study was to sequence the genomes of RSV strains collected from multiple countries to further understand the global diversity of RSV at a whole-genome level. Methods We collected RSV samples and isolates from Mexico, Argentina, Belgium, Italy, Germany, Australia, South Africa, and the USA from the years 1998-2010. Both Sanger and next-generation sequencing with the Illumina and 454 platforms were used to sequence the whole genomes of RSV A and B. Phylogenetic analyses were performed using the Bayesian and maximum likelihood methods of phylogenetic inference. Results We sequenced the genomes of 34 RSVA and 23 RSVB viruses. Phylogenetic analysis showed that the RSVA genome evolves at an estimated rate of 6.72 × 10^-4 substitutions/site/year (95% HPD 5.61 × 10^-4 to 7.6 × 10^-4) and for RSVB the evolutionary rate was 7.69 × 10^-4 substitutions/site/year (95% HPD 6.81 × 10^-4 to 8.62 × 10^-4). We found multiple clades co-circulating globally for both RSV A and B. The predominant clades were GA2 and GA5 for RSVA and BA for RSVB. Conclusions Our analyses showed that RSV circulates on a global scale with the same predominant clades of viruses being found in countries around the world. However, the distribution of clades can change rapidly as new strains emerge. We did not observe a strong spatial structure in our trees, with the same three main clades of RSV co-circulating globally, suggesting that the evolution of RSV is not strongly regionalized. PMID:25793751

  12. Improvement of Spectral Editing in Solids: A Sequence for Obtaining 13CH + 13CH2-Only 13C Spectra

    Science.gov (United States)

    Burns, Sean T.; Wu, Xiaoling; Zilm, Kurt W.

    2000-04-01

    An improved spectral editing method for solids is described which allows one to obtain a set of subspectra in roughly two-thirds the amount of time as our original CPPI editing method for the same signal to noise. This improvement is afforded by a new pulse sequence that is used to acquire a 13CH + 13CH2 spectrum which has very little 13CH3 or nonprotonated carbon contamination. By using this new sequence the 13CH-only subspectrum is obtained much more efficiently. Criteria for optimizing the signal to noise in the edited subspectra are discussed.

  14. Genome sequences of rare, uncultured bacteria obtained by differential coverage binning of multiple metagenomes

    DEFF Research Database (Denmark)

    Albertsen, Mads; Hugenholtz, Philip; Skarshewski, Adam

    2013-01-01

    Reference genomes are required to understand the diverse roles of microorganisms in ecology, evolution, human and animal health, but most species remain uncultured. Here we present a sequence composition–independent approach to recover high-quality microbial genomes from deeply sequenced...... metagenomes. Multiple metagenomes of the same community, which differ in relative population abundances, were used to assemble 31 bacterial genomes, including rare (genomes were assembled into complete or near-complete chromosomes....... Four belong to the candidate bacterial phylum TM7 and represent the most complete genomes for this phylum to date (relative abundances, 0.06–1.58%). Reanalysis of published metagenomes reveals that differential coverage binning facilitates recovery of more complete and higher fidelity genome bins than...

  15. Mutation spectrum of six genes in Chinese phenylketonuria patients obtained through next-generation sequencing.

    Directory of Open Access Journals (Sweden)

    Ying Gu

    Full Text Available BACKGROUND: The identification of gene variants plays an important role in the diagnosis of genetic diseases. METHODOLOGY/PRINCIPAL FINDINGS: To develop a rapid method for the diagnosis of phenylketonuria (PKU) and tetrahydrobiopterin (BH4) deficiency, we designed a multiplex, PCR-based primer panel to amplify all the exons and flanking regions (50 bp average) of six PKU-associated genes (PAH, PTS, GCH1, QDPR, PCBD1 and GFRP). The Ion Torrent Personal Genome Machine (PGM) System was used to detect mutations in all the exons of these six genes. We tested 93 DNA samples from blood specimens from 35 patients and their parents (32 families) and 26 healthy adults. Using strict bioinformatic criteria, this sequencing data provided, on average, 99.14% coverage of the 39 exons at more than 70-fold mean depth of coverage. We found 23 previously documented variants in the PAH gene and six novel mutations in the PAH and PTS genes. A detailed analysis of the mutation spectrum of these patients is described in this study. CONCLUSIONS/SIGNIFICANCE: These results were confirmed by Sanger sequencing. In conclusion, benchtop next-generation sequencing technology can be used to detect mutations in monogenic diseases and can detect both point mutations and indels with high sensitivity, fidelity and throughput at a lower cost than conventional methods in clinical applications.

  16. Characterization of new Schistosoma mansoni microsatellite loci in sequences obtained from public DNA databases and microsatellite enriched genomic libraries

    Directory of Open Access Journals (Sweden)

    NB Rodrigues

    2002-10-01

    Full Text Available In the last decade microsatellites have become one of the most useful genetic markers used in a large number of organisms due to their abundance and high level of polymorphism. Microsatellites have been used for individual identification, paternity tests, forensic studies and population genetics. Data on microsatellite abundance comes preferentially from microsatellite enriched libraries and DNA sequence databases. We have conducted a search in GenBank of more than 16,000 Schistosoma mansoni ESTs and 42,000 BAC sequences. In addition, we obtained 300 sequences from CA and AT microsatellite enriched genomic libraries. The sequences were searched for simple repeats using the RepeatMasker software. Of 16,022 ESTs, we detected 481 (3%) sequences that contained 622 microsatellites (434 perfect, 164 imperfect and 24 compound). Of the 481 ESTs, 194 were grouped in 63 clusters containing 2 to 15 ESTs per cluster. Polymorphisms were observed in 16 clusters. The 287 remaining ESTs were orphan sequences. Of the 42,017 BAC end sequences, 1,598 (3.8%) contained microsatellites (2,335 perfect, 287 imperfect and 79 compound). Of the 1,598 BAC end sequences, 80 were grouped into 17 clusters containing 3 to 17 BAC end sequences per cluster. Microsatellites were present in 67 out of 300 sequences from the microsatellite enriched libraries (55 perfect, 38 imperfect and 15 compound). Of all the observed loci, 55 were selected for having the longest perfect repeats and flanking regions that allowed the design of primers for PCR amplification. Additionally we describe two new polymorphic microsatellite loci.
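
    The simple-repeat scan itself was done with RepeatMasker; as a simplified stand-in, the sketch below reports only perfect di- and trinucleotide repeats using a regular expression (the minimum repeat count is an illustrative choice, and homopolymer runs are also matched).

        import re

        def find_perfect_ssrs(seq, min_repeats=6):
            # simplified stand-in for RepeatMasker: perfect di-/trinucleotide repeats only
            seq = seq.upper()
            hits = []
            for motif_len in (2, 3):
                pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (motif_len, min_repeats - 1))
                for m in pattern.finditer(seq):
                    hits.append({"motif": m.group(2), "start": m.start(),
                                 "length": len(m.group(1))})
            return hits

        # example: a (CA)8 repeat embedded in flanking sequence
        print(find_perfect_ssrs("GGATTC" + "CA" * 8 + "TTGACC"))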

  17. The complete mitochondrial genome of Porites harrisoni (Cnidaria: Scleractinia) obtained using next-generation sequencing

    KAUST Repository

    Terraneo, Tullia Isotta

    2018-02-24

    In this study, we sequenced the complete mitochondrial genome of Porites harrisoni using ezRAD and Illumina technology. Genome length consisted of 18,630 bp, with a base composition of 25.92% A, 13.28% T, 23.06% G, and 37.73% C. Consistent with other hard corals, P. harrisoni mitogenome was arranged in 13 protein-coding genes, 2 rRNA, and 2 tRNA genes. nad5 and cox1 contained embedded Group I Introns of 11,133 bp and 965 bp, respectively.

  18. Sequence and phylogenetic analysis of chicken anaemia virus obtained from backyard and commercial chickens in Nigeria : research communication

    Directory of Open Access Journals (Sweden)

    D.O. Oluwayelu

    2008-09-01

    Full Text Available This work reports the first molecular analysis study of chicken anaemia virus (CAV) in backyard chickens in Africa, using molecular cloning and sequence analysis to characterize CAV strains obtained from commercial chickens and Nigerian backyard chickens. Partial VP1 gene sequences were determined for three CAVs from commercial chickens and for six CAV variants present in samples from a backyard chicken. Multiple alignment analysis revealed that the 6 % and 4 % nucleotide diversity obtained respectively for the commercial and backyard chicken strains translated to only 2 % amino acid diversity for each breed. Overall, the amino acid composition of Nigerian CAVs was found to be highly conserved. Since the partial VP1 gene sequences of two backyard chicken cloned CAV strains (NGR/Cl-8 and NGR/Cl-9) were almost identical and evolutionarily closely related to the commercial chicken strains NGR-1, and NGR-4 and NGR-5, respectively, we concluded that CAV infections had crossed the farm boundary.

  19. Functional and Structural Overview of G-Protein-Coupled Receptors Comprehensively Obtained from Genome Sequences

    Directory of Open Access Journals (Sweden)

    Makiko Suwa

    2011-04-01

    Full Text Available An understanding of the functional mechanisms of G-protein-coupled receptors (GPCRs) is very important for GPCR-related drug design. We have developed an integrated GPCR database (SEVENS, http://sevens.cbrc.jp/) that includes 64,090 reliable GPCR genes comprehensively identified from 56 eukaryote genome sequences, and we give an overview of the sequence and structure spaces of the GPCRs. In vertebrates, the number of receptors for biological amines, peptides, etc. is conserved in most species, whereas the number of chemosensory receptors for odorants, pheromones, etc. differs significantly among species. The latter receptors tend to be of the single-exon or few-exon type and account for a high proportion of the total number of GPCRs, whereas some families, such as Class B and Class C receptors, have long lengths due to the presence of many exons. Statistical analyses of amino acid residues reveal that most of the conserved residues in Class A GPCRs are found in the cytoplasmic half regions of the transmembrane (TM) helices, while residues characteristic of each subfamily are found in the extracellular half regions. Sixty-nine Protein Data Bank (PDB) entries of complete or fragmentary structures could be mapped onto the TM/loop regions of Class A GPCRs, covering 14 subfamilies.

  20. Metagenomes obtained by "deep sequencing" - what do they tell about the EBPR communities?

    DEFF Research Database (Denmark)

    Albertsen, Mads; Saunders, Aaron Marc; Nielsen, Kåre Lehmann

    2013-01-01

    Metagenomics enables studies of the genomic potential of complex microbial communities by sequencing bulk genomic DNA directly from the environment. Knowledge of the genetic potential of a community can be used to formulate and test ecological hypotheses about stability and performance. In this s......Metagenomics enables studies of the genomic potential of complex microbial communities by sequencing bulk genomic DNA directly from the environment. Knowledge of the genetic potential of a community can be used to formulate and test ecological hypotheses about stability and performance....... In this study deep metagenomics and fluorescence in situ hybridization (FISH) were used to study a full-scale wastewater treatment plant with enhanced biological phosphorus removal (EBPR) and compared to an existing EBPR metagenome. EBPR is a widely used process that relies on a complex community...... of microorganisms to function properly. Insight into community and species level stability and dynamics is valuable for knowledge driven optimization of the EBPR process. The metagenomes of the EBPR communities were distinct compared to metagenomes of communities from a wide range of other environments, which could...

  1. Metagenome sequence analysis of filamentous microbial communities obtained from geochemically distinct geothermal channels reveals specialization of three aquificales lineages

    DEFF Research Database (Denmark)

    Takacs-Vesbach, Cristina; Inskeep, William P; Jay, Zackary J

    2013-01-01

    The Aquificales are thermophilic microorganisms that inhabit hydrothermal systems worldwide and are considered one of the earliest lineages of the domain Bacteria. We analyzed metagenome sequence obtained from six thermal "filamentous streamer" communities (∼40 Mbp per site), which targeted three...

  2. Draft Genome Sequence of Enterotoxigenic Escherichia coli Strain E24377A, Obtained from a Tribal Drinking Water Source in India.

    Science.gov (United States)

    Tamhankar, Ashok J; Nerkar, Sandeep S; Khadake, Prashant P; Akolkar, Dadasaheb B; Apurwa, Sachin R; Deshpande, Uday; Khedkar, Smita U; Stålsby-Lundborg, Cecilia

    2015-04-02

    Enterotoxigenic Escherichia coli (ETEC) is a major cause of diarrheal disease in humans and animals. Its dissemination can occur through water sources contaminated by it. Here, we report for the first time the draft genome sequence of ETEC strain E24377A, obtained from a tribal drinking water source in India. Copyright © 2015 Tamhankar et al.

  3. Draft Genome Sequence of Enterotoxigenic Escherichia coli Strain E24377A, Obtained from a Tribal Drinking Water Source in India

    OpenAIRE

    Tamhankar, Ashok J.; Nerkar, Sandeep S.; Khadake, Prashant P.; Akolkar, Dadasaheb B.; Apurwa, Sachin R.; Deshpande, Uday; Khedkar, Smita U.; Stålsby-Lundborg, Cecilia

    2015-01-01

    Enterotoxigenic Escherichia coli (ETEC) is a major cause of diarrheal disease in humans and animals. Its dissemination can occur through water sources contaminated by it. Here, we report for the first time the draft genome sequence of ETEC strain E24377A, obtained from a tribal drinking water source in India.

  4. A new approach to obtain metric data from video surveillance: Preliminary evaluation of a low-cost stereo-photogrammetric system.

    Science.gov (United States)

    Russo, Paolo; Gualdi-Russo, Emanuela; Pellegrinelli, Alberto; Balboni, Juri; Furini, Alessio

    2017-02-01

    Using an interdisciplinary approach, the authors demonstrate the possibility of obtaining reliable anthropometric data on a subject by means of a new video surveillance system. In general, current video surveillance systems provide law enforcement with useful data for solving many crimes. Unfortunately, the quality of the images and the way in which they are taken often make it very difficult to judge the compatibility between suspect and perpetrator. In this paper, the authors present the results obtained with a low-cost photogrammetric video surveillance system based on a pair of common surveillance cameras synchronized with each other. The innovative aspect of the system is that it allows estimation, with considerable accuracy, not only of body height (error 0.1-3.1 cm, SD 1.8-4.5 cm) but also of other anthropometric characteristics of the subject, with consequently better determination of the biological profile and greatly increased effectiveness of the compatibility judgment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
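
    The underlying geometry can be sketched as follows: once the two synchronized cameras are calibrated, a point seen in both views is triangulated, and body height is taken as the distance between the triangulated head and foot points; the projection matrices are placeholders from a calibration step not shown here, and the code is only an outline of the principle, not the authors' system.

        import cv2
        import numpy as np

        def triangulate(P1, P2, pt1, pt2):
            # P1, P2: 3x4 projection matrices from calibration; pt1, pt2: (x, y) pixels
            X = cv2.triangulatePoints(
                P1, P2,
                np.asarray(pt1, dtype=np.float64).reshape(2, 1),
                np.asarray(pt2, dtype=np.float64).reshape(2, 1))
            return (X[:3] / X[3]).ravel()              # Euclidean 3-D point

        def estimate_height(P1, P2, head_1, head_2, foot_1, foot_2):
            head = triangulate(P1, P2, head_1, head_2)
            foot = triangulate(P1, P2, foot_1, foot_2)
            return float(np.linalg.norm(head - foot))  # metric if calibration is metric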

  5. The sequence spectrum of frameshift reversions obtained with a novel adaptive mutation assay in Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Erich Heidenreich

    2016-12-01

    Full Text Available Research on the mechanisms of adaptive mutagenesis in resting, i.e. non-replicating cells relies on appropriate mutation assays. Here we provide a novel procedure for the detection of frameshift-reverting mutations in yeast. Proliferation of non-reverted cells in this assay is suppressed by the lack of a fermentable carbon source. The test allele was constructed in a way that the reversions mimic microsatellite instability, a condition often found in cancer cells. We show the cell numbers during these starvation conditions and provide a DNA sequence spectrum of a representative set of revertants. The data in this article support the publication "Glucose starvation as a selective tool for the study of adaptive mutations in Saccharomyces cerevisiae" (Heidenreich and Steinboeck, 2016 [1]).

  6. Determination of exterior parameters for video image sequences from helicopter by block adjustment with combined vertical and oblique images

    Science.gov (United States)

    Zhang, Jianqing; Zhang, Yong; Zhang, Zuxun

    2003-09-01

    Determination of image exterior parameters is a key aspect of realizing automatic texture mapping of buildings in the reconstruction of real 3D city models. This paper reports an application of automatic aerial triangulation to a block with three video image sequences: one vertical image sequence viewing the buildings' roofs and two oblique image sequences viewing the buildings' walls. A new processing procedure is developed to automatically match homologous points between oblique and vertical images. Two strategies are tested. One treats the three strips as independent blocks and executes strip block adjustment for each; the other creates a single block from the three strips, uses the new image matching procedure to extract a large number of tie points, and executes block adjustment. The block adjustment results of these two strategies are also compared.

  7. Metagenomes obtained by 'deep sequencing' - what do they tell about the enhanced biological phosphorus removal communities?

    Science.gov (United States)

    Albertsen, Mads; Saunders, Aaron M; Nielsen, Kåre L; Nielsen, Per H

    2013-01-01

    Metagenomics enables studies of the genomic potential of complex microbial communities by sequencing bulk genomic DNA directly from the environment. Knowledge of the genetic potential of a community can be used to formulate and test ecological hypotheses about stability and performance. In this study deep metagenomics and fluorescence in situ hybridization (FISH) were used to study a full-scale wastewater treatment plant with enhanced biological phosphorus removal (EBPR), and the results were compared to an existing EBPR metagenome. EBPR is a widely used process that relies on a complex community of microorganisms to function properly. Insight into community and species level stability and dynamics is valuable for knowledge-driven optimization of the EBPR process. The metagenomes of the EBPR communities were distinct compared to metagenomes of communities from a wide range of other environments, which could be attributed to selection pressures of the EBPR process. The metabolic potential of one of the key microorganisms in the EPBR process, Accumulibacter, was investigated in more detail in the two plants, revealing a potential importance of phage predation on the dynamics of Accumulibacter populations. The results demonstrate that metagenomics can be used as a powerful tool for system wide characterization of the EBPR community as well as for a deeper understanding of the function of specific community members. Furthermore, we discuss and illustrate some of the general pitfalls in metagenomics and stress the need of additional DNA extraction independent information in metagenome studies.

  8. Characterization of expressed sequence tags obtained by SSH during somatic embryogenesis in Cichorium intybus L.

    Science.gov (United States)

    Legrand, Sylvain; Hendriks, Theo; Hilbert, Jean-Louis; Quillet, Marie-Christine

    2007-06-06

    Somatic embryogenesis (SE) is an asexual propagation pathway requiring a somatic-to-embryonic transition of differentiated somatic cells toward embryogenic cells capable of producing embryos in a process resembling zygotic embryogenesis. In chicory, genetic variability with respect to the formation of somatic embryos was detected between plants from a population of Cichorium intybus L. landrace Koospol. Though all plants from this population were self incompatible, we managed by repeated selfing to obtain a few seeds from one highly embryogenic (E) plant, K59. Among the plants grown from these seeds, one plant, C15, was found to be non-embryogenic (NE) under our SE-inducing conditions. Being closely related, we decided to exploit the difference in SE capacity between K59 and its descendant C15 to study gene expression during the early stages of SE in chicory. Cytological analysis indicated that in K59 leaf explants the first cell divisions leading to SE were observed at day 4 of culture. In contrast, in C15 explants no cell divisions were observed and SE development seemed arrested before cell reactivation. Using mRNAs isolated from leaf explants from both genotypes after 4 days of culture under SE-inducing conditions, an E and a NE cDNA-library were generated by SSH. A total of 3,348 ESTs from both libraries turned out to represent a maximum of 2,077 genes. In silico subtraction analysis sorted only 33 genes as differentially expressed in the E or NE genotype, indicating that SSH had resulted in an effective normalisation. Real-time RT-PCR was used to verify the expression levels of 48 genes represented by ESTs from either library. The results showed preferential expression of genes related to protein synthesis and cell division in the E genotype, and related to defence in the NE genotype. In accordance with the cytological observations, mRNA levels in explants from K59 and C15 collected at day 4 of SE culture reflected differential gene expression that presumably

  9. Characterization of expressed sequence tags obtained by SSH during somatic embryogenesis in Cichorium intybus L

    Directory of Open Access Journals (Sweden)

    Quillet Marie-Christine

    2007-06-01

    Full Text Available Abstract Background Somatic embryogenesis (SE) is an asexual propagation pathway requiring a somatic-to-embryonic transition of differentiated somatic cells toward embryogenic cells capable of producing embryos in a process resembling zygotic embryogenesis. In chicory, genetic variability with respect to the formation of somatic embryos was detected between plants from a population of Cichorium intybus L. landrace Koospol. Though all plants from this population were self incompatible, we managed by repeated selfing to obtain a few seeds from one highly embryogenic (E) plant, K59. Among the plants grown from these seeds, one plant, C15, was found to be non-embryogenic (NE) under our SE-inducing conditions. Being closely related, we decided to exploit the difference in SE capacity between K59 and its descendant C15 to study gene expression during the early stages of SE in chicory. Results Cytological analysis indicated that in K59 leaf explants the first cell divisions leading to SE were observed at day 4 of culture. In contrast, in C15 explants no cell divisions were observed and SE development seemed arrested before cell reactivation. Using mRNAs isolated from leaf explants from both genotypes after 4 days of culture under SE-inducing conditions, an E and a NE cDNA-library were generated by SSH. A total of 3,348 ESTs from both libraries turned out to represent a maximum of 2,077 genes. In silico subtraction analysis sorted only 33 genes as differentially expressed in the E or NE genotype, indicating that SSH had resulted in an effective normalisation. Real-time RT-PCR was used to verify the expression levels of 48 genes represented by ESTs from either library. The results showed preferential expression of genes related to protein synthesis and cell division in the E genotype, and related to defence in the NE genotype. Conclusion In accordance with the cytological observations, mRNA levels in explants from K59 and C15 collected at day 4 of SE

  10. Persistent Target Tracking Using Likelihood Fusion in Wide-Area and Full Motion Video Sequences

    Science.gov (United States)

    2012-07-01

    problems associated with a moving platform, including gimbal-based stabilization errors, relative motion where sensor and target are both moving, and seams in ...

  11. SSR-patchwork: An optimized protocol to obtain a rapid and inexpensive SSR library using first-generation sequencing technology.

    Science.gov (United States)

    Di Maio, Antonietta; De Castro, Olga

    2013-01-01

    We have optimized a version of a microsatellite loci isolation protocol for first-generation sequencing (FGS) technologies. The protocol is optimized to reduce the cost and number of steps, and it combines some procedures from previous simple sequence repeat (SSR) protocols with several key improvements that significantly affect the final yield of the SSR library. This protocol may be accessible for laboratories with a moderate budget or for which next-generation sequencing (NGS) is not readily available. • We drew from classic protocols for library enrichment by digestion, ligation, amplification, hybridization, cloning, and sequencing. Three different systems were chosen: two with very different genome sizes (Galdieria sulphuraria, 10 Mbp; Pancratium maritimum, 30 000 Mbp), and a third with an undetermined genome size (Kochia saxicola). Moreover, we also report the optimization of the sequencing reagents. A good frequency of the obtained microsatellite loci was achieved. • The method presented here is very detailed; comparative tests with other SSR protocols are also reported. This optimized protocol is a promising tool for low-cost genetic studies and the rapid, simple construction of homemade SSR libraries for small and large genomes.

  12. An innovative experimental sequence on electromagnetic induction and eddy currents based on video analysis and cheap data acquisition

    Science.gov (United States)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2017-11-01

    In this work, we present a coherent sequence of experiments on electromagnetic (EM) induction and eddy currents, appropriate for university undergraduate students, based on a magnet falling through a drilled aluminum disk. The sequence, leveraging the didactic interplay between the EM and mechanical aspects of the experiments, allows us to exploit the students' awareness of mechanics to elicit their comprehension of EM phenomena. The proposed experiments feature two kinds of measurements: (i) kinematic measurements (performed by means of high-speed video analysis) give information on the system's kinematics and, via appropriate numerical data processing, allow us to obtain dynamic information, in particular on energy dissipation; (ii) induced electromotive force (EMF) measurements (using a homemade multi-coil sensor connected to a cheap data acquisition system) allow us to quantitatively determine the inductive effects of the moving magnet on its neighborhood. The comparison between experimental results and the predictions of an appropriate theoretical model (of the dissipative coupling between the moving magnet and the conducting disk) offers many educational hints on relevant topics related to EM induction, such as Maxwell's displacement current, magnetic field flux variation, and the conceptual link between induced EMF and induced currents. Moreover, the didactical activity gives students the opportunity to be trained in video analysis, data acquisition and numerical data processing.
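
    The step from kinematic to dynamic information mentioned above amounts to numerically differentiating the tracked positions; the sketch below assumes the magnet is released from rest at the first tracked frame, and the frame rate and mass are illustrative placeholders, not the values from the experiment.

        import numpy as np

        def dissipated_energy(z_positions, fps=240.0, mass=0.05, g=9.81):
            # z_positions: tracked vertical positions in metres, downward positive
            z = np.asarray(z_positions, dtype=float)
            t = np.arange(len(z)) / fps
            v = np.gradient(z, t)                     # numerical differentiation
            kinetic = 0.5 * mass * v ** 2
            potential_released = mass * g * (z - z[0])
            # shortfall with respect to free fall = energy dissipated by eddy currents
            return potential_released - kinetic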

  13. Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes

    Science.gov (United States)

    Dubreu, Christine; Manzanera, Antoine; Bohain, Eric

    2008-04-01

    As target tracking arouses more and more interest, the need to reliably assess tracking algorithms under any conditions is becoming essential. The evaluation of such algorithms requires a database of sequences representative of the whole range of conditions in which the tracking system is likely to operate, together with its associated ground truth. However, building such a database with real sequences, and collecting the associated ground truth, appears to be hardly feasible and very time-consuming. Therefore, more and more often, synthetic sequences are generated by complex and heavy simulation platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed that use simple synthetic sequences generated without such complex simulation platforms. These sequences are generated from a finite number of discriminating parameters and are statistically representative, as regards these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used for the evaluation of low-level tracking algorithms in any operating conditions. The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for the evaluation of tracking systems on complex-textured objects, and to show how the number of parameters can be increased to synthesize more elaborate scenes and deal with more complex dynamics, including occlusions and three-dimensional deformations.

  14. Development and preliminary evaluation of an online educational video about whole-genome sequencing for research participants, patients, and the general public.

    Science.gov (United States)

    Sanderson, Saskia C; Suckiel, Sabrina A; Zweig, Micol; Bottinger, Erwin P; Jabs, Ethylin Wang; Richardson, Lynne D

    2016-05-01

    As whole-genome sequencing (WGS) increases in availability, WGS educational aids are needed for research participants, patients, and the general public. Our aim was therefore to develop an accessible and scalable WGS educational aid. We engaged multiple stakeholders in an iterative process over a 1-year period culminating in the production of a novel 10-minute WGS educational animated video, "Whole Genome Sequencing and You" (https://goo.gl/HV8ezJ). We then presented the animated video to 281 online-survey respondents (the video-information group). There were also two comparison groups: a written-information group (n = 281) and a no-information group (n = 300). In the video-information group, 79% reported the video was easy to understand, satisfaction scores were high (mean 4.00 on 1-5 scale, where 5 = high satisfaction), and knowledge increased significantly. There were significant differences in knowledge compared with the no-information group but few differences compared with the written-information group. Intention to receive personal results from WGS and decisional conflict in response to a hypothetical scenario did not differ between the three groups. The educational animated video, "Whole Genome Sequencing and You," was well received by this sample of online-survey respondents. Further work is needed to evaluate its utility as an aid to informed decision making about WGS in other populations.Genet Med 18 5, 501-512.

  15. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution for different IQ rates and modulations. Distortion often occurs during video transmission, so the received video has poor quality. Key-frame selection algorithms are flexible with respect to changes in the video, but they omit the temporal information of a video sequence. To minimize the distortion between the original and received video, we added a methodology based on a sequential distortion minimization algorithm. Its aim was to create a new received video, corrected sequentially, without significant loss of content with respect to the original video. The reliability of the video transmission was assessed with a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also investigated with and without SEDIM (Sequential Distortion Minimization Method). The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of the video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and the comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
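
    For readers unfamiliar with the quality figures quoted above, the following sketch shows how a PSNR value can be computed between original and received frames; it is not the SEDIM implementation, and the test frames are random stand-ins.

```python
# PSNR between two 8-bit frames (or stacks of frames) of equal shape.
import numpy as np

def psnr(original, received, peak=255.0):
    mse = np.mean((original.astype(np.float64) - received.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
orig = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in original frame
recv = np.clip(orig + rng.normal(0, 5, orig.shape), 0, 255).astype(np.uint8)  # noisy copy
print(f"PSNR: {psnr(orig, recv):.2f} dB")
```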

  16. Detection of hepatitis C virus sequences in brain tissue obtained in recurrent hepatitis C after liver transplantation.

    Science.gov (United States)

    Vargas, Hugo E; Laskus, Tomasz; Radkowski, Marek; Wilkinson, Jeff; Balan, Vijay; Douglas, David D; Harrison, M Edwyn; Mulligan, David C; Olden, Kevin; Adair, Debra; Rakela, Jorge

    2002-11-01

    Patients with chronic hepatitis C frequently report tiredness, easy fatigability, and depression. The aim of this study is to determine whether hepatitis C virus (HCV) replication could be found in brain tissue in patients with hepatitis C and depression. We report two patients with recurrent hepatitis C after liver transplantation who also developed severe depression. One patient died of multiorgan failure and the other of septicemia caused by Staphylococcus aureus. Both patients had evidence of severe hepatitis C recurrence with features of cholestatic fibrosing hepatitis. We were able to study samples of their central nervous system obtained at autopsy for evidence of HCV replication. The presence of HCV RNA-negative strand, which is the viral replicative form, was determined by strand-specific Tth-based reverse-transcriptase polymerase chain reaction. Viral sequences were compared by means of single-strand conformation polymorphism and direct sequencing. HCV RNA-negative strands were found in subcortical white matter from one patient and cerebral cortex from the other patient. HCV RNA-negative strands amplified from brain tissue differed by several nucleotide substitutions from serum consensus sequences in the 5' untranslated region. These findings support the concept of HCV neuroinvasion, and we speculate that it may provide a biological substrate for neuropsychiatric disorders observed in patients with chronic hepatitis C. The exact lineage of cells permissive for HCV replication and the possible interaction between viral replication and cerebral function that may lead to depression remain to be elucidated.

  17. All 37 Mitochondrial Genes of Aphid Aphis craccivora Obtained from Transcriptome Sequencing: Implications for the Evolution of Aphids.

    Science.gov (United States)

    Song, Nan; Zhang, Hao; Li, Hu; Cai, Wanzhi

    2016-01-01

    The availability of mitochondrial genome data for Aphididae, one of the economically important insect pest families, in public databases is limited. The advent of next generation sequencing technology provides the potential to generate mitochondrial genome data for many species timely and cost-effectively. In this report, we used transcriptome sequencing technology to determine all the 37 mitochondrial genes of the cowpea aphid, Aphis craccivora. This method avoids the necessity of finding suitable primers for long PCRs or primer-walking amplicons, and is proved to be effective in obtaining the whole set of mitochondrial gene data for insects with difficulty in sequencing mitochondrial genome by PCR-based strategies. Phylogenetic analyses of aphid mitochondrial genome data show clustering based on tribe level, and strongly support the monophyly of the family Aphididae. Within the monophyletic Aphidini, three samples from Aphis grouped together. In another major clade of Aphididae, Pterocomma pilosum was recovered as a potential sister-group of Cavariella salicicola, as part of Macrosiphini.

  18. All 37 Mitochondrial Genes of Aphid Aphis craccivora Obtained from Transcriptome Sequencing: Implications for the Evolution of Aphids.

    Directory of Open Access Journals (Sweden)

    Nan Song

    Full Text Available The availability of mitochondrial genome data for Aphididae, one of the economically important insect pest families, in public databases is limited. The advent of next generation sequencing technology provides the potential to generate mitochondrial genome data for many species timely and cost-effectively. In this report, we used transcriptome sequencing technology to determine all the 37 mitochondrial genes of the cowpea aphid, Aphis craccivora. This method avoids the necessity of finding suitable primers for long PCRs or primer-walking amplicons, and is proved to be effective in obtaining the whole set of mitochondrial gene data for insects with difficulty in sequencing mitochondrial genome by PCR-based strategies. Phylogenetic analyses of aphid mitochondrial genome data show clustering based on tribe level, and strongly support the monophyly of the family Aphididae. Within the monophyletic Aphidini, three samples from Aphis grouped together. In another major clade of Aphididae, Pterocomma pilosum was recovered as a potential sister-group of Cavariella salicicola, as part of Macrosiphini.

  19. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

    Full Text Available Human detection in videos plays an important role in various real life applications. Most traditional approaches depend on utilizing handcrafted features which are problem-dependent and optimal for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object sizes. On the other hand, the proposed feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., supervised convolutional neural network (S-CNN), pretrained CNN feature extractor, and hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. Pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM’s training time takes 445 seconds. Learning in S-CNN takes 770 seconds with a high performance Graphical Processing Unit (GPU).
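
    The "pretrained CNN feature extractor plus SVM" combination mentioned above can be sketched as follows; this is not the paper's pipeline, it assumes a recent torchvision (weights enum API, weights downloaded on first use) and scikit-learn, and the crops and labels are random stand-ins.

```python
# Sketch: ImageNet-pretrained CNN as a fixed feature extractor, SVM as classifier.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep pooled features
backbone.eval()

def extract_features(crops):
    # crops: (N, 3, 224, 224) float tensor; a real pipeline would ImageNet-normalize.
    with torch.no_grad():
        return backbone(crops).numpy()          # (N, 512) descriptors

crops = torch.rand(40, 3, 224, 224)             # stand-in frame crops
labels = np.array([0, 1] * 20)                  # stand-in human / non-human labels

features = extract_features(crops)
clf = SVC(kernel="linear").fit(features[:30], labels[:30])
print("held-out accuracy on toy data:", clf.score(features[30:], labels[30:]))
```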

  20. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
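
    As a rough sketch of the LBP-plus-SVM building blocks described above (not the proposed spatial-temporal descriptor itself), the snippet below computes uniform LBP histograms per frame of a clip, concatenates them, and trains a one-versus-one SVM; the clips and labels are synthetic.

```python
# Per-frame uniform LBP histograms concatenated into a crude clip descriptor + SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1
N_BINS = P + 2                                   # number of uniform LBP codes

def clip_descriptor(frames):
    # frames: (T, H, W) grayscale clip in [0, 1]
    hists = []
    for frame in frames:
        lbp = local_binary_pattern((frame * 255).astype(np.uint8), P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
        hists.append(hist)
    return np.concatenate(hists)

rng = np.random.default_rng(0)
clips = rng.random((30, 5, 64, 64))              # 30 synthetic 5-frame clips
labels = rng.integers(0, 3, size=30)             # 3 pretend expression classes

X = np.array([clip_descriptor(c) for c in clips])
clf = SVC(decision_function_shape="ovo").fit(X[:24], labels[:24])
print("toy accuracy:", clf.score(X[24:], labels[24:]))
```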

  1. Detection of distorted frames in retinal video-sequences via machine learning

    Science.gov (United States)

    Kolar, Radim; Liberdova, Ivana; Odstrcilik, Jan; Hracho, Michal; Tornow, Ralf P.

    2017-07-01

    This paper describes the detection of distorted frames in retinal sequences based on a set of global features extracted from each frame. The feature vector is subsequently used in a classification step, in which three types of classifiers are tested. The best classification accuracy of 96% was achieved with the support vector machine approach.

  2. Measuring Sandy Bottom Dynamics by Exploiting Depth from Stereo Video Sequences

    DEFF Research Database (Denmark)

    Musumeci, Rosaria E.; Farinella, Giovanni M.; Foti, Enrico

    2013-01-01

    In this paper an imaging system for measuring sandy bottom dynamics is proposed. The system exploits stereo sequences and projected laser beams to build the 3D shape of the sandy bottom during time. The reconstruction is used by experts of the field to perform accurate measurements and analysis...

  3. Metagenome Sequence Analysis of Filamentous Microbial Communities Obtained from Geochemically Distinct Geothermal Channels Reveals Specialization of Three Aquificales Lineages

    Directory of Open Access Journals (Sweden)

    Cristina Takacs-Vesbach

    2013-05-01

    Full Text Available The Aquificales are thermophilic microorganisms that inhabit hydrothermal systems worldwide and are considered one of the earliest lineages of the domain Bacteria. We analyzed metagenome sequence obtained from six thermal ‘filamentous streamer’ communities (~40 Mbp per site, which targeted three different groups of Aquificales found in Yellowstone National Park (YNP. Unassembled metagenome sequence and PCR-amplified 16S rRNA gene libraries revealed that acidic, sulfidic sites were dominated by Hydrogenobaculum (Aquificaceae populations, whereas the circumneutral pH (6.5 - 7.8 sites containing dissolved sulfide were dominated by Sulfurihydrogenibium spp. (Hydrogenothermaceae. Thermocrinis (Aquificaceae populations were found primarily in the circumneutral sites with undetectable sulfide, and to a lesser extent in one sulfidic system at pH 8. Phylogenetic analysis of assembled sequence containing 16S rRNA genes as well as conserved protein-encoding genes revealed that the composition and function of these communities varied across geochemical conditions. Each Aquificales lineage contained genes for CO2 fixation by the reverse TCA cycle, but only the Sulfurihydrogenibium populations perform citrate cleavage using ATP citrate lyase (Acl. The Aquificaceae populations use an alternative pathway catalyzed by two separate enzymes, citryl CoA synthetase (Ccs and citryl CoA lyase (Ccl. All three Aquificales lineages contained evidence of aerobic respiration, albeit due to completely different types of heme Cu oxidases (subunit I involved in oxygen reduction. The distribution of Aquificales populations and differences among functional genes involved in energy generation and electron transport is consistent with the hypothesis that geochemical parameters (e.g., pH, sulfide, H2, O2 have resulted in niche specialization among members of the Aquificales.

  4. Improved Complete Genome Sequence of the Extremely Radioresistant Bacterium Deinococcus radiodurans R1 Obtained Using PacBio Single-Molecule Sequencing

    OpenAIRE

    Hua, Xiaoting; Hua, Yuejin

    2016-01-01

    The genome sequence of Deinococcus radiodurans R1 was published in 1999. We resequenced D. radiodurans R1 using PacBio and compared the sequence with the published one. Large insertions and single nucleotide polymorphisms (SNPs) were observed among the genome sequences. A more accurate genome sequence will be helpful to studies of D. radiodurans.

  5. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digital capture, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software has become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files also are outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  6. Next-generation sequencing for molecular diagnosis of lung adenocarcinoma specimens obtained by fine needle aspiration cytology

    Science.gov (United States)

    Qiu, Tian; Guo, Huiqin; Zhao, Huan; Wang, Luhua; Zhang, Zhihui

    2015-06-01

    Identification of multi-gene variations has led to the development of new targeted therapies in lung adenocarcinoma patients, and identification of an appropriate patient population with a reliable screening method is the key to the overall success of tumor targeted therapies. In this study, we used the Ion Torrent next-generation sequencing (NGS) technique to screen for mutations in 89 cases of lung adenocarcinoma metastatic lymph node specimens obtained by fine-needle aspiration cytology (FNAC). Of the 89 specimens, 30 (34%) were found to harbor epidermal growth factor receptor (EGFR) kinase domain mutations. Seven (8%) samples harbored KRAS mutations, and three (3%) samples had BRAF mutations involving exon 11 (G469A) and exon 15 (V600E). Eight (9%) samples harbored PIK3CA mutations. One (1%) sample had a HRAS G12C mutation. Thirty-two (36%) samples (36%) harbored TP53 mutations. Other genes including APC, ATM, MET, PTPN11, GNAS, HRAS, RB1, SMAD4 and STK11 were found each in one case. Our study has demonstrated that NGS using the Ion Torrent technology is a useful tool for gene mutation screening in lung adenocarcinoma metastatic lymph node specimens obtained by FNAC, and may promote the development of new targeted therapies in lung adenocarcinoma patients.

  7. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A. [Dpto. Fisiología Médica y Biofísica. Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Salguero, F. J. [Nederlands Kanker Instituut, Antoni van Leeuwenhoek Ziekenhuis, 1066 CX Ámsterdam, The Nederlands (Netherlands); Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A., E-mail: alplaza@us.es [Dpto. Fisiología Médica y Biofísica, Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Miras, H. [Servicio de Radiofísica, Hospital Universitario Virgen Macarena, E-41009 Sevilla (Spain); Linares, R.; Perucha, M. [Servicio de Radiofísica, Hospital Infanta Luisa, E-41010 Sevilla (Spain)

    2014-08-15

    irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. Conclusions: A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
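
    A purely conceptual sketch of the linear programming step described above (weighting a small set of pre-sequenced apertures so that every voxel reaches its prescription) is given below; it is not the CARMEN formulation, and the dose matrix, prescription and solver choice are invented for illustration.

```python
# Toy LP: minimize total aperture weight subject to meeting a per-voxel prescription.
import numpy as np
from scipy.optimize import linprog

# D[i, j] = dose to voxel i per unit weight of aperture j (made-up values).
D = np.array([[1.0, 0.2, 0.1],
              [0.8, 0.9, 0.2],
              [0.1, 0.7, 1.0],
              [0.3, 0.1, 0.9]])
prescription = np.array([2.0, 2.0, 2.0, 2.0])    # required dose per voxel

c = np.ones(D.shape[1])                          # objective: total aperture weight
res = linprog(c, A_ub=-D, b_ub=-prescription,    # -D x <= -p  <=>  D x >= p
              bounds=[(0, None)] * D.shape[1], method="highs")
print("aperture weights:", np.round(res.x, 3))
print("delivered dose:  ", np.round(D @ res.x, 3))
```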

  8. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    Science.gov (United States)

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.

  9. Improved Complete Genome Sequence of the Extremely Radioresistant Bacterium Deinococcus radiodurans R1 Obtained Using PacBio Single-Molecule Sequencing.

    Science.gov (United States)

    Hua, Xiaoting; Hua, Yuejin

    2016-09-01

    The genome sequence of Deinococcus radiodurans R1 was published in 1999. We resequenced D. radiodurans R1 using PacBio and compared the sequence with the published one. Large insertions and single nucleotide polymorphisms (SNPs) were observed among the genome sequences. A more accurate genome sequence will be helpful to studies of D. radiodurans. Copyright © 2016 Hua and Hua.

  10. A High-Throughput and Low-Complexity H.264/AVC Intra 16×16 Prediction Architecture for HD Video Sequences

    Directory of Open Access Journals (Sweden)

    M. Orlandić

    2014-11-01

    Full Text Available The H.264/AVC compression standard provides tools and solutions for efficient coding of video sequences of various resolutions. Spatial redundancy in a video frame is removed by the intra prediction algorithm. There are three block-wise types of intra prediction: 4×4, 8×8 and 16×16. This paper proposes an efficient, low-complexity architecture for intra 16×16 prediction that provides real-time processing of HD video sequences. All four prediction modes (V, H, DC, Plane) are supported in the implementation. The high-complexity plane mode computes a number of intermediate parameters required for creating prediction pixels. Local memory buffers are used for storing intermediate reconstructed data used as reference pixels in the intra prediction process. The high throughput is achieved by 16-pixel parallelism, and the proposed prediction process takes 48 cycles to process one macroblock. The proposed architecture is synthesized and implemented on a Kintex 705 (XC7K325T) board and requires a 94 MHz clock to encode a video sequence of HD 4k×2k (3840×2160) resolution at 60 fps in real time. This represents a significant improvement compared to the state of the art.
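
    For orientation, three of the four intra 16×16 modes named above (V, H, DC) can be written down in a few lines; the plane mode needs the gradient parameters defined in the standard and is omitted here. This is a software sketch with random reference pixels, not the proposed hardware architecture.

```python
# Simplified intra 16x16 prediction: vertical, horizontal and DC modes.
import numpy as np

rng = np.random.default_rng(0)
top = rng.integers(0, 256, 16)    # reconstructed row above the macroblock
left = rng.integers(0, 256, 16)   # reconstructed column left of the macroblock

pred_v = np.tile(top, (16, 1))                    # vertical: copy top row downwards
pred_h = np.tile(left[:, None], (1, 16))          # horizontal: copy left column across
pred_dc = np.full((16, 16), (top.sum() + left.sum() + 16) >> 5)  # rounded mean of neighbours

for name, pred in (("V", pred_v), ("H", pred_h), ("DC", pred_dc)):
    print(name, pred.shape, int(pred[0, 0]))
```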

  11. Interactive segmentation of tongue contours in ultrasound video sequences using quality maps

    Science.gov (United States)

    Ghrenassia, Sarah; Ménard, Lucie; Laporte, Catherine

    2014-03-01

    Ultrasound (US) imaging is an effective and non invasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis techniques based on US images depend on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who will correct these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided towards potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and automatically produced during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward in the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the proposed framework, suggesting that the proposed framework generally improves user interaction time, with little change in segmentation repeatability.

  12. Constructing a sequence of palaeoDEMs to obtain erosion rates in a drainage basin.

    Science.gov (United States)

    Castelltort, F. Xavier; Carles Balasch, J.; Cirés, Jordi; Colombo, Ferran

    2017-04-01

    DEMs made in a present-day drainage basin, considering it as a geomorphic unit, represent the end result of a landscape evolution. This process has had to follow a model of erosion. Trying to establish a conceptual erosion model in landscape evolution represents the first difficulty in constructing a sequence of palaeoDEMs. But if one is able to do it, the result will be easier and believable. The next step to do is to make a catalogue of base level types present in the drainage basin. The list has to include elements with determinate position and elevation (x, y, z) from the centre of the basin until hillslopes. A list of base level types may contain fluvial terrace remnants, erosive surfaces, palaeosols, alluvial covers of glacis, alluvial fans, rockfalls, landslides and scree zones. It is very important to know the spatial and temporal relations between the elements of the list, even if they are disconnected by erosion processes. Relative chronologies have to be set for all elements of the catalogue, and as far as possible absolute chronologies. To do it,it is essential to have established first the spatial relations between them, including those elements that are gone. Moreover, it is also essential to have adapted all the elements to the conceptual erosion model proposed. In this step, it has to be kept in mind that erosion rates can be very different in determinate areas within the same geomorphic unit. Erosion processes are focused in specific zones while other areas are maintained in stability. A good technique to construct a palaeoDEM is to start making, by hand, a map of contour lines. At this point, it is valuable to use the elements' catalogue. The use of those elements belonging to the same palaeosurface will result in a map. Several maps can be obtained from a catalogue. Contour maps can be gridded into a 3D surface by means of a specific application and a set of surfaces will be obtained. Algebraic operations can be done with palaeoDEMs obtaining
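
    As a purely illustrative sketch of the DEM algebra mentioned at the end of the abstract (differencing two gridded palaeosurfaces to estimate eroded thickness, volume and a rate), with all grid spacings, surfaces and ages invented:

```python
# Toy palaeoDEM differencing: eroded thickness, volume and mean erosion rate.
import numpy as np

cell = 25.0                                    # assumed grid spacing (m)
older = 500.0 + np.fromfunction(lambda i, j: 0.5 * i + 0.2 * j, (40, 40))
younger = older - np.random.default_rng(0).uniform(0.0, 8.0, older.shape)

eroded_thickness = older - younger             # metres removed per cell
eroded_volume = eroded_thickness.sum() * cell * cell   # cubic metres

timespan_yr = 1.0e5                            # assumed age difference between surfaces
rate_mm_yr = eroded_thickness.mean() / timespan_yr * 1000.0
print(f"mean lowering: {eroded_thickness.mean():.2f} m, "
      f"volume: {eroded_volume:.0f} m^3, rate: {rate_mm_yr:.3f} mm/yr")
```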

  13. Matching trajectories between video sequences by exploiting a sparse projective invariant representation.

    Science.gov (United States)

    Nunziati, Walter; Sclaroff, Stan; Del Bimbo, Alberto

    2010-03-01

    Identifying correspondences between trajectory segments observed from nonsynchronized cameras is important for reconstruction of the complete trajectory of moving targets in a large scene. Such a reconstruction can be obtained from motion data by comparing the trajectory segments and estimating both the spatial and temporal alignments. Exhaustive testing of all possible correspondences of trajectories over a temporal window is only viable in the cases with a limited number of moving targets and large view overlaps. Therefore, alternative solutions are required for situations with several trajectories that are only partially visible in each view. In this paper, we propose a new method that is based on view-invariant representation of trajectories, which is used to produce a sparse set of salient points for trajectory segments observed in each view. Only the neighborhoods at these salient points in the view--invariant representation are then used to estimate the spatial and temporal alignment of trajectory pairs in different views. It is demonstrated that, for planar scenes, the method is able to recover with good precision and efficiency both spatial and temporal alignments, even given relatively small overlap between views and arbitrary (unknown) temporal shifts of the cameras. The method also provides the same capabilities in the case of trajectories that are only locally planar, but exhibit some nonplanarity at a global level.

  14. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    Science.gov (United States)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancilliary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two- and three-dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The

  15. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.

  16. Draft Genome Sequences of Six Mycobacterium immunogenum Strains Obtained from a Chloraminated Drinking Water Distribution System Simulator

    Science.gov (United States)

    We report the draft genome sequences of six Mycobacterium immunogenum strains isolated from a chloraminated drinking water distribution system simulator subjected to changes in operational parameters. M. immunogenum, a rapidly growing mycobacterium previously reported as the cause of hyp...

  17. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose, a special software program for the subjective assessment of the quality of all the tested video sequences was developed. It was developed in accordance with recommendation ITU-T P.910, since it is suitable for the testing of multimedia applications. The obtained results show that with the proposed selective intra prediction and optimized inter prediction algorithm there is only a small difference in picture quality (signal-to-noise ratio) between the decoded original and modified video sequences.

  18. Draft Genome Sequences of Two Clinical Isolates of Burkholderia mallei Obtained from Nasal Swabs of Glanderous Equines in India.

    Science.gov (United States)

    Singha, Harisankar; Malik, Praveen; Saini, Sheetal; Khurana, Sandip K; Elschner, Mandy C; Mertens, Katja; Barth, Stefanie A; Tripathi, Bhupendra N; Singh, Raj K

    2017-04-06

    Burkholderia mallei is a Gram-negative coccobacillus which causes glanders-a fatal disease of equines that may occasionally be transmitted to humans. Several cases of outbreaks have been reported from India since 2006. This paper presents draft genome sequences of two B. mallei strains isolated from equines affected by glanders in India. Copyright © 2017 Singha et al.

  19. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  20. Obtaining representative community profiles of anaerobic digesters through optimisation of 16S rRNA amplicon sequencing protocols

    DEFF Research Database (Denmark)

    Kirkegaard, Rasmus Hansen; McIlroy, Simon Jon; Karst, Søren Michael

    RNA gene amplicon sequencing is rapid, cheap, high throughput, and has high taxonomic resolution. However, biases are introduced in multiple steps of this approach, including non-representative DNA extraction and uneven taxonomic coverage of selected PCR primers, potentially giving a skewed view...... of the community composition . As such sample specific optimisation and standardisation of DNA extraction, as well PCR primer selection, are essential to minimising the potential for such biases. The aim of this study was to develop a protocol for optimized community profiling of anaerobic digesters. The Fast......DNA SPIN kit was selected and the mechanical lysis parameters optimised for extraction of genomic DNA from mesophilic and thermophilic anaerobic digester samples. Different primer sets were compared for targeting the archaea and bacteria, both together and individually . Shotgun sequencing...

  1. Whole Genome DNA Sequence Analysis of Salmonella subspecies enterica serotype Tennessee obtained from related peanut butter foodborne outbreaks.

    Science.gov (United States)

    Wilson, Mark R; Brown, Eric; Keys, Chris; Strain, Errol; Luo, Yan; Muruvanda, Tim; Grim, Christopher; Jean-Gilles Beaubrun, Junia; Jarvis, Karen; Ewing, Laura; Gopinath, Gopal; Hanes, Darcy; Allard, Marc W; Musser, Steven

    2016-01-01

    Establishing an association between possible food sources and clinical isolates requires discriminating the suspected pathogen from an environmental background, and distinguishing it from other closely-related foodborne pathogens. We applied whole genome sequencing (WGS) to Salmonella subspecies enterica serotype Tennessee (S. Tennessee) to describe genomic diversity across the serovar as well as among and within outbreak clades of strains associated with contaminated peanut butter. We analyzed 71 isolates of S. Tennessee from disparate food, environmental, and clinical sources and 2 other closely-related Salmonella serovars as outgroups (S. Kentucky and S. Cubana), which were also shot-gun sequenced. A whole genome single nucleotide polymorphism (SNP) analysis was performed using a maximum likelihood approach to infer phylogenetic relationships. Several monophyletic lineages of S. Tennessee with limited SNP variability were identified that recapitulated several food contamination events. S. Tennessee clades were separated from outgroup salmonellae by more than sixteen thousand SNPs. Intra-serovar diversity of S. Tennessee was small compared to the chosen outgroups (1,153 SNPs), suggesting recent divergence of some S. Tennessee clades. Analysis of all 1,153 SNPs structuring an S. Tennessee peanut butter outbreak cluster revealed that isolates from several food, plant, and clinical sources were very closely related, as they had only a few SNP differences between them. SNP-based cluster analyses linked specific food sources to several clinical S. Tennessee strains isolated in separate contamination events. Environmental and clinical isolates had very similar whole genome sequences; no markers were found that could be used to discriminate between these sources. Finally, we identified SNPs within variable S. Tennessee genes that may be useful markers for the development of rapid surveillance and typing methods, potentially aiding in traceback efforts during future

  2. Sequencing of two sunflower chlorotic mottle virus isolates obtained from different natural hosts shed light on its evolutionary history.

    Science.gov (United States)

    Bejerman, N; Giolitti, F; de Breuil, S; Lenardon, S

    2013-02-01

    Sunflower chlorotic mottle virus (SuCMoV), the most prevalent virus of sunflower in Argentina, was reported naturally infecting not only sunflower but also weeds. To understand SuCMoV evolution and improve the knowledge on its variability, the complete genomic sequences of two SuCMoV isolates collected from Dipsacus fullonum (-dip) and Ibicella lutea (-ibi) were determined from three overlapping cDNA clones and subjected to phylogenetic and recombination analyses. SuCMoV-dip and -ibi genomes were 9,953-nucleotides (nt) long; their sequences contained an open reading frame of 9,561 nucleotides, which encoded a polyprotein of 3,187 amino acids flanked by a 5'-noncoding region (NCR) of 135 nt and a 3'-NCR of 257 nt. SuCMoV-dip and -ibi genome nucleotide sequences were 90.9 identical and displayed 90 and 94.6 % identity to that of SuCMoV-C, and 90.8 and 91.4 % identity to that of SuCMoV-CRS, respectively. P1 of SuCMoV-dip and -ibi was 3-nt longer than that of SuCMoV-CRS, but 12-nt shorter than that of SuCMoV-C. Two recombination events were detected in SuCMoV genome and the analysis of d(N)/d(S) ratio among SuCMoV complete sequences showed that the genomic regions are under different evolutionary constraints, suggesting that SuCMoV evolution would be conservative. Our findings provide evidence that mutation and recombination would have played important roles in the evolutionary history of SuCMoV.

  3. Whole Genome DNA Sequence Analysis of Salmonella subspecies enterica serotype Tennessee obtained from related peanut butter foodborne outbreaks.

    Directory of Open Access Journals (Sweden)

    Mark R Wilson

    Full Text Available Establishing an association between possible food sources and clinical isolates requires discriminating the suspected pathogen from an environmental background, and distinguishing it from other closely-related foodborne pathogens. We used whole genome sequencing (WGS to Salmonella subspecies enterica serotype Tennessee (S. Tennessee to describe genomic diversity across the serovar as well as among and within outbreak clades of strains associated with contaminated peanut butter. We analyzed 71 isolates of S. Tennessee from disparate food, environmental, and clinical sources and 2 other closely-related Salmonella serovars as outgroups (S. Kentucky and S. Cubana, which were also shot-gun sequenced. A whole genome single nucleotide polymorphism (SNP analysis was performed using a maximum likelihood approach to infer phylogenetic relationships. Several monophyletic lineages of S. Tennessee with limited SNP variability were identified that recapitulated several food contamination events. S. Tennessee clades were separated from outgroup salmonellae by more than sixteen thousand SNPs. Intra-serovar diversity of S. Tennessee was small compared to the chosen outgroups (1,153 SNPs, suggesting recent divergence of some S. Tennessee clades. Analysis of all 1,153 SNPs structuring an S. Tennessee peanut butter outbreak cluster revealed that isolates from several food, plant, and clinical isolates were very closely related, as they had only a few SNP differences between them. SNP-based cluster analyses linked specific food sources to several clinical S. Tennessee strains isolated in separate contamination events. Environmental and clinical isolates had very similar whole genome sequences; no markers were found that could be used to discriminate between these sources. Finally, we identified SNPs within variable S. Tennessee genes that may be useful markers for the development of rapid surveillance and typing methods, potentially aiding in traceback efforts

  4. Influence of DNA extraction on oral microbial profiles obtained via 16S rRNA gene sequencing

    Directory of Open Access Journals (Sweden)

    Loreto Abusleme

    2014-04-01

    Full Text Available Background and objective: The advent of next-generation sequencing has significantly facilitated characterization of the oral microbiome. Despite great efforts in streamlining the processes of sequencing and data curation, upstream steps required for amplicon library generation could still influence 16S rRNA gene-based microbial profiles. Among upstream processes, DNA extraction is a critical step that could represent a great source of bias. Accounting for bias introduced by extraction procedures is important when comparing studies that use different methods. Identifying the method that best portrays communities is also desirable. Accordingly, the aim of this study was to evaluate bias introduced by different DNA extraction procedures on oral microbiome profiles. Design: Four DNA extraction methods were tested on mock communities consisting of seven representative oral bacteria. Additionally, supragingival plaque samples were collected from seven individuals and divided equally to test two commonly used DNA extraction procedures. Amplicon libraries of the 16S rRNA gene were generated and sequenced via 454-pyrosequencing. Results: Evaluation of mock communities revealed that DNA yield and bacterial species representation varied with DNA extraction methods. Despite producing the lowest yield of DNA, a method that included bead beating was the only protocol capable of detecting all seven species in the mock community. Comparison of the performance of two commonly used methods (crude lysis and a chemical/enzymatic lysis+column-based DNA isolation on plaque samples showed no effect of extraction protocols on taxa prevalence but global community structure and relative abundance of individual taxa were affected. At the phylum level, the latter method improved the recovery of Actinobacteria, Bacteroidetes, and Spirochaetes over crude lysis. Conclusion: DNA extraction distorts microbial profiles in simulated and clinical oral samples, reinforcing the

  5. Optimization of preservation and storage time of sponge tissues to obtain quality mRNA for next-generation sequencing.

    Science.gov (United States)

    Riesgo, Ana; Pérez-Porro, Alicia R; Carmona, Susana; Leys, Sally P; Giribet, Gonzalo

    2012-03-01

    Transcriptome sequencing with next-generation sequencing technologies has the potential for addressing many long-standing questions about the biology of sponges. Transcriptome sequence quality depends on good cDNA libraries, which requires high-quality mRNA. Standard protocols for preserving and isolating mRNA often require optimization for unusual tissue types. Our aim was assessing the efficiency of two preservation modes, (i) flash freezing with liquid nitrogen (LN₂) and (ii) immersion in RNAlater, for the recovery of high-quality mRNA from sponge tissues. We also tested whether the long-term storage of samples at -80 °C affects the quantity and quality of mRNA. We extracted mRNA from nine sponge species and analysed the quantity and quality (A260/230 and A260/280 ratios) of mRNA according to preservation method, storage time, and taxonomy. The quantity and quality of mRNA depended significantly on the preservation method used (LN₂) outperforming RNAlater), the sponge species, and the interaction between them. When the preservation was analysed in combination with either storage time or species, the quantity and A260/230 ratio were both significantly higher for LN₂-preserved samples. Interestingly, individual comparisons for each preservation method over time indicated that both methods performed equally efficiently during the first month, but RNAlater lost efficiency in storage times longer than 2 months compared with flash-frozen samples. In summary, we find that for long-term preservation of samples, flash freezing is the preferred method. If LN₂ is not available, RNAlater can be used, but mRNA extraction during the first month of storage is advised. © 2011 Blackwell Publishing Ltd.

  6. Obtaining retrotransposon sequences, analysis of their genomic distribution and use of retrotransposon-derived genetic markers in lentil (Lens culinaris Medik.).

    Science.gov (United States)

    Rey-Baños, Rita; Sáenz de Miera, Luis E; García, Pedro; Pérez de la Vega, Marcelino

    2017-01-01

    Retrotransposons with long terminal repeats (LTR-RTs) are widespread mobile elements in eukaryotic genomes. We obtained a total of 81 partial LTR-RT sequences from lentil corresponding to internal retrotransposon components and LTRs. Sequences were obtained by PCR from genomic DNA. Approximately 37% of the LTR-RT internal sequences presented premature stop codons, pointing out that these elements must be non-autonomous. LTR sequences were obtained using the iPBS technique which amplifies sequences between LTR-RTs. A total of 193 retrotransposon-derived genetic markers, mainly iPBS, were used to obtain a genetic linkage map from 94 F7 inbred recombinant lines derived from the cross between the cultivar Lupa and the wild ancestor L. culinaris subsp. orientalis. The genetic map included 136 markers located in eight linkage groups. Clusters of tightly linked retrotransposon-derived markers were detected in linkage groups LG1, LG2, and LG6, hence denoting a non-random genomic distribution. Phylogenetic analyses identified the LTR-RT families in which internal and LTR sequences are included. Ty3-gypsy elements were more frequent than Ty1-copia, mainly due to the high Ogre element frequency in lentil, as also occurs in other species of the tribe Vicieae. LTR and internal sequences were used to analyze in silico their distribution among the contigs of the lentil draft genome. Up to 8.8% of the lentil contigs evidenced the presence of at least one LTR-RT similar sequence. A statistical analysis suggested a non-random distribution of these elements within of the lentil genome. In most cases (between 97% and 72%, depending on the LTR-RT type) none of the internal sequences flanked by the LTR sequence pair was detected, suggesting that defective and non-autonomous LTR-RTs are very frequent in lentil. Results support that LTR-RTs are abundant and widespread throughout of the lentil genome and that they are a suitable source of genetic markers useful to carry out further genetic

  7. Obtaining retrotransposon sequences, analysis of their genomic distribution and use of retrotransposon-derived genetic markers in lentil (Lens culinaris Medik..

    Directory of Open Access Journals (Sweden)

    Rita Rey-Baños

    Full Text Available Retrotransposons with long terminal repeats (LTR-RTs are widespread mobile elements in eukaryotic genomes. We obtained a total of 81 partial LTR-RT sequences from lentil corresponding to internal retrotransposon components and LTRs. Sequences were obtained by PCR from genomic DNA. Approximately 37% of the LTR-RT internal sequences presented premature stop codons, pointing out that these elements must be non-autonomous. LTR sequences were obtained using the iPBS technique which amplifies sequences between LTR-RTs. A total of 193 retrotransposon-derived genetic markers, mainly iPBS, were used to obtain a genetic linkage map from 94 F7 inbred recombinant lines derived from the cross between the cultivar Lupa and the wild ancestor L. culinaris subsp. orientalis. The genetic map included 136 markers located in eight linkage groups. Clusters of tightly linked retrotransposon-derived markers were detected in linkage groups LG1, LG2, and LG6, hence denoting a non-random genomic distribution. Phylogenetic analyses identified the LTR-RT families in which internal and LTR sequences are included. Ty3-gypsy elements were more frequent than Ty1-copia, mainly due to the high Ogre element frequency in lentil, as also occurs in other species of the tribe Vicieae. LTR and internal sequences were used to analyze in silico their distribution among the contigs of the lentil draft genome. Up to 8.8% of the lentil contigs evidenced the presence of at least one LTR-RT similar sequence. A statistical analysis suggested a non-random distribution of these elements within of the lentil genome. In most cases (between 97% and 72%, depending on the LTR-RT type none of the internal sequences flanked by the LTR sequence pair was detected, suggesting that defective and non-autonomous LTR-RTs are very frequent in lentil. Results support that LTR-RTs are abundant and widespread throughout of the lentil genome and that they are a suitable source of genetic markers useful to carry

  8. Plant DNA detection from grasshopper guts: A step-by-step protocol, from tissue preparation to obtaining plant DNA sequences.

    Science.gov (United States)

    Avanesyan, Alina

    2014-02-01

    A PCR-based method of identifying ingested plant DNA in gut contents of Melanoplus grasshoppers was developed. Although previous investigations have focused on a variety of insects, there are no protocols available for plant DNA detection developed for grasshoppers, agricultural pests that significantly influence plant community composition. • The developed protocol successfully used the noncoding region of the chloroplast trnL (UAA) gene and was tested in several feeding experiments. Plant DNA was obtained at seven time points post-ingestion from whole guts and separate gut sections, and was detectable up to 12 h post-ingestion in nymphs and 22 h post-ingestion in adult grasshoppers. • The proposed protocol is an effective, relatively quick, and low-cost method of detecting plant DNA from the grasshopper gut and its different sections. This has important applications, from exploring plant "movement" during food consumption, to detecting plant-insect interactions.

  9. Dengue virus type 2 in Cuba, 1997: conservation of E gene sequence in isolates obtained at different times during the epidemic.

    Science.gov (United States)

    Rodriguez-Roche, R; Alvarez, M; Gritsun, T; Rosario, D; Halstead, S; Kouri, G; Gould, E A; Guzman, M G

    2005-03-01

    It was recently reported that disease severity increased during the 1997 Cuban dengue 2 virus epidemic and it was suggested that this might be explained by the appearance of neutralization resistant escape mutants. We investigated these observations and ideas by sequencing 20 dengue 2 virus isolates obtained during the early (low case fatality rate) and the late (high case fatality rate) phases of the outbreak. Our results showed total conservation of the E gene sequence for these isolates suggesting that the selection of envelope gene escape mutants was not the determinant of increased disease severity. Alignment of these sequences with those available in GenBank, followed by Maximum likelihood phylogenetic analysis generated a tree, which indicated that our isolates are closely related to the virus that circulated in Venezuela in 1997/98 and subsequently in Martinique in 1998. This "American/Asian" genotype has therefore gradually dispersed across the Caribbean region during the past 5 years.

  10. DeTeCt 3.0: A software tool to detect impacts of small objects in video observations of Jupiter obtained by amateur astronomers

    Science.gov (United States)

    Juaristi, J.; Delcroix, M.; Hueso, R.; Sánchez-Lavega, A.

    2017-09-01

    Impacts of small objects (10-20 m in diameter) with Jupiter's atmosphere result in luminous superbolides that can be observed from the Earth with small telescopes. Impacts of this kind have been observed four times by amateur astronomers since July 2010. The probability of observing one of these events is very small. Amateur astronomers observe Jupiter using fast video cameras that record thousands of frames over a few minutes, which are combined into a single, generally high-resolution image. Flashes are brief, faint and often lost by the image reconstruction software. We present major upgrades to DeTeCt, a software tool initially developed by amateur astronomer Marc Delcroix, and our current project to maximize the chances of detecting more of these impacts on Jupiter.
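
    The core idea (a short-lived brightness spike standing out against a running baseline) can be sketched as below; this is not the DeTeCt code, and the synthetic frames, window length and threshold are arbitrary choices.

```python
# Toy flash detector: difference each frame against a running mean and threshold.
import numpy as np

rng = np.random.default_rng(2)
frames = rng.normal(100.0, 2.0, size=(300, 64, 64))   # synthetic video, 300 frames
frames[150, 30:34, 30:34] += 60.0                      # injected impact flash

window = 10
flagged = []
for k in range(window, frames.shape[0]):
    baseline = frames[k - window:k].mean(axis=0)       # running mean of prior frames
    residual = frames[k] - baseline
    if residual.max() > 8.0 * residual.std():          # simple spike criterion
        flagged.append((k, np.unravel_index(residual.argmax(), residual.shape)))

print("candidate flash frames:", flagged)
```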

  11. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    Full Text Available In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation, preprocessing by a detector based on the ranking of pixel values, and the classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged in natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.
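
    A small sketch of a ranking-style pre-detector in the spirit of the second step above (not the authors' implementation): a pixel whose value lies outside the range spanned by the temporally aligned neighbouring frames, by more than a margin, becomes a defect candidate for the classifier. The clip and margin below are synthetic.

```python
# Candidate defect pixels: values extreme relative to motion-compensated neighbours.
import numpy as np

rng = np.random.default_rng(3)
clip = rng.normal(0.5, 0.02, size=(5, 48, 48))   # 5 compensated frames, centre is index 2
clip[2, 10, 10] = 1.0                            # simulated blotch in the centre frame

center = clip[2]
neighbours = np.delete(clip, 2, axis=0)          # the other 4 frames
low, high = neighbours.min(axis=0), neighbours.max(axis=0)

margin = 0.1
candidates = (center > high + margin) | (center < low - margin)
print("defect candidates at:", np.argwhere(candidates))
```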

  12. Plant DNA Detection from Grasshopper Guts: A Step-by-Step Protocol, from Tissue Preparation to Obtaining Plant DNA Sequences

    Directory of Open Access Journals (Sweden)

    Alina Avanesyan

    2014-02-01

    Full Text Available Premise of the study: A PCR-based method of identifying ingested plant DNA in gut contents of Melanoplus grasshoppers was developed. Although previous investigations have focused on a variety of insects, there are no protocols available for plant DNA detection developed for grasshoppers, agricultural pests that significantly influence plant community composition. Methods and Results: The developed protocol successfully used the noncoding region of the chloroplast trnL (UAA gene and was tested in several feeding experiments. Plant DNA was obtained at seven time points post-ingestion from whole guts and separate gut sections, and was detectable up to 12 h post-ingestion in nymphs and 22 h post-ingestion in adult grasshoppers. Conclusions: The proposed protocol is an effective, relatively quick, and low-cost method of detecting plant DNA from the grasshopper gut and its different sections. This has important applications, from exploring plant “movement” during food consumption, to detecting plant–insect interactions.

  13. Top-Down and Bottom-Up Cues Based Moving Object Detection for Varied Background Video Sequences

    Directory of Open Access Journals (Sweden)

    Chirag I. Patel

    2014-01-01

    there is no need for background formulation and updates, as the approach is background independent. Many bottom-up approaches, and one combination of bottom-up and top-down approaches, are proposed in the present paper. The proposed approaches are more efficient because they do not require learning a background model and are independent of previous video frames. Results indicate that the proposed approach works even against slight movements in the background and in various outdoor conditions.

  14. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper studies an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, using 26 original video sequences as test material. It first extracts classification features from training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, and then tests whether there is suspicious information in a video sample. The experimental results show that the LSB-matching-based VSA is practical for detecting secret information embedded in all frames as well as in carriers and videos where only some frames are embedded. In addition, the VSA processes the video frame by frame and shows strong robustness against attacks in the corresponding time domain.
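
    The abstract does not specify which classification features are extracted, so the sketch below only illustrates the general SVM pipeline it describes, using two toy per-frame statistics (LSB bias and the rate of unit-value transitions between neighbouring pixels) that LSB-matching embedding is generally expected to disturb. The feature set and function names are hypothetical, not those of the paper.

```python
import numpy as np
from sklearn.svm import SVC

def frame_features(frame):
    """Toy steganalysis features for one 8-bit grayscale frame: the LSB
    bias and the rate of unit transitions between horizontally adjacent
    pixel values, both of which LSB-matching embedding tends to disturb."""
    lsb = frame & 1
    lsb_bias = abs(lsb.mean() - 0.5)
    diff = np.abs(np.diff(frame.astype(int), axis=1))
    transition_rate = (diff == 1).mean()
    return np.array([lsb_bias, transition_rate])

def train_detector(cover_frames, stego_frames):
    """Train an SVM on labelled cover/stego frames (hypothetical inputs:
    lists of decoded 8-bit luma frames, e.g. taken from test sequences)."""
    X = np.array([frame_features(f) for f in cover_frames + stego_frames])
    y = np.array([0] * len(cover_frames) + [1] * len(stego_frames))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

# A trained detector is then applied frame by frame:
# clf.predict([frame_features(suspect_frame)])
```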

  15. Routinely obtained chest X-rays after elective video-assisted thoracoscopic surgery can be omitted in most patients; a retrospective, observational study

    DEFF Research Database (Denmark)

    Bjerregaard, Lars S; Jensen, Katrine; Petersen, René Horsleben

    2015-01-01

    divided into three groups according to the degree of pulmonary resection. The chest X-rays (obtained anterior-posterior in one plane with the patient in the supine position) were categorized as abnormal if showing pneumothorax >5 cm, possible intra-thoracic bleeding and/or a displaced chest tube. Medical.... Proportions of abnormal chest X-rays were unequally distributed between groups (p thoracic bleeding, six showed pneumothorax >5 cm and one showed a kinked chest...... tube. All the patients with possible intra-thoracic bleeding were re-explored in the operating theatre the same day. CONCLUSIONS: Only 10 of 1097 chest X-rays (0.9 %) obtained routinely after elective VATS procedures led to a clinical intervention, supporting the abandonment of routine chest X-rays...

  16. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can...... as processing of subjective data. We also suggest some general guidelines for researchers to make comparison studies of objective video quality metrics more reliable and useful for the practitioners in the field....
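
    The correlation indicators referred to above commonly include the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC); a minimal sketch with hypothetical subjective and objective scores is shown below.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: mean opinion scores (MOS) from a subjective test and
# the corresponding scores of an objective metric on the same sequences.
mos    = np.array([4.2, 3.1, 2.5, 4.8, 1.9, 3.6])
metric = np.array([0.82, 0.64, 0.55, 0.91, 0.40, 0.70])

plcc, _ = pearsonr(metric, mos)    # linear correlation (prediction accuracy)
srocc, _ = spearmanr(metric, mos)  # rank correlation (prediction monotonicity)
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```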

  17. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    Science.gov (United States)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

    In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional cameraman with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking cameramen. The potential application areas of the system include medicine, robotics and photogrammetry.

  18. Multi-virulence-locus sequence typing of 4b Listeria monocytogenes isolates obtained from different sources in India over a 10-year period.

    Science.gov (United States)

    Doijad, Swapnil; Lomonaco, Sara; Poharkar, Krupali; Garg, Sandeep; Knabel, Stephen; Barbuddhe, Sukhadeo; Jayarao, Bhushan

    2014-07-01

    Listeria monocytogenes is an emerging foodborne pathogen responsible for listeriosis. The incidence of listeriosis has increased during the last 2 decades due to the increase in consumption of ready-to-eat foods and change in food consumption habits. Outbreaks and sporadic cases of listeriosis have been reported in developed countries. These reports have helped determine the safety practices needed to control listeriosis. Although L. monocytogenes has been reported from humans, animals, and a variety of foods in India, limited data exist with respect to prevalence and distribution of L. monocytogenes in the Indian subcontinent. The Indian Listeria Culture Collection Centre in Goa maintains all of the isolates received for subtyping and molecular characterization. Of the listerial isolate collection maintained by this center, three fourths of the isolates are of 4b serotype, while the number of other serotypes is very low. Therefore, we screened L. monocytogenes serotype 4b isolates to determine their relevance to previously defined epidemics and/or outbreaks using multi-virulence-locus sequence typing (MVLST). A total of 25 isolates in serogroup 4b of L. monocytogenes were randomly selected from a repository of 156 L. monocytogenes 4b isolates obtained from different sources in India over a period of 10 years. MVLST sequence types (virulence types, VTs) were compared to known epidemic clones and other known isolates in the L. monocytogenes MVLST database. The 25 isolates were grouped into three clusters. Cluster I comprised 21 isolates including animal (n=9), human (n=4), and food (n=8), which matched Epidemic Clone I (ECI, VT20). Three isolates (two from animals and one from food) formed a second cluster, while a single animal isolate formed the third; these two clusters were assigned to two novel VTs (VT98 and VT99), respectively. Based on these findings, it can be inferred that ECI has been isolated from a variety of sources and places and has persisted in India for at least 10 years.

  19. Difficulty in obtaining the complete mRNA coding sequence at 5' region (5' end mRNA artifact): Causes, consequences in biology and medicine and possible solutions for obtaining the actual amino acid sequence of proteins (Review).

    Science.gov (United States)

    Vitale, Lorenza; Caracausi, Maria; Casadei, Raffaella; Pelleri, Maria Chiara; Piovesan, Allison

    2017-05-01

    The known difficulty in obtaining the actual full length, complete sequence of a messenger RNA (mRNA) may lead to the erroneous determination of its coding sequence at the 5' region (5' end mRNA artifact), and consequently to the wrong assignment of the translation start codon, leading to the inaccurate prediction of the encoded polypeptide at its amino terminus. Among the known human genes whose study was affected by this artifact, we can include disco interacting protein 2 homolog A (DIP2A; KIAA0184), Down syndrome critical region 1 (DSCR1), SON DNA binding protein (SON), trefoil factor 3 (TFF3) and URB1 ribosome biogenesis 1 homolog (URB1; KIAA0539) on chromosome 21, as well as receptor for activated C kinase 1 (RACK1, also known as GNB2L1), glutaminyl‑tRNA synthetase (QARS) and tyrosyl-DNA phosphodiesterase 2 (TDP2) along with another 474 loci, including interleukin 16 (IL16). In this review, we discuss the causes of this issue, its quantitative incidence in biomedical research, the consequences in biology and medicine, and the possible solutions for obtaining the actual amino acid sequence of proteins in the post-genomics era.

  20. Routine use of next-generation sequencing for preimplantation genetic diagnosis of blastomeres obtained from embryos on day 3 in fresh in vitro fertilization cycles.

    Science.gov (United States)

    Łukaszuk, Krzysztof; Pukszta, Sebastian; Wells, Dagan; Cybulska, Celina; Liss, Joanna; Płóciennik, Łukasz; Kuczyński, Waldemar; Zabielska, Judyta

    2015-04-01

    To determine the usefulness of semiconductor-based next-generation sequencing (NGS) for cleavage-stage preimplantation genetic diagnosis (PGD) of aneuploidy. Prospective case-control study. A private center for reproductive medicine. A total of 45 patients underwent day-3 embryo biopsy with PGD and fresh cycle transfer. Additionally, 53 patients, matched according to age, anti-Müllerian hormone levels, antral follicles count, and infertility duration were selected as controls. Choice of embryos for transfer was based on the PGD NGS results. Clinical pregnancy rate (PR) per embryo transfer (ET) was the primary outcome. Secondary outcomes were implantation and miscarriage rates. The PR per transfer was higher in the NGS group (84.4% vs. 41.5%). The implantation rate (61.5% vs. 34.8%) was higher in the NGS group. The miscarriage rate was similar in the 2 groups (2.8% vs. 4.6%). We demonstrate the technical feasibility of NGS-based PGD involving cleavage-stage biopsy and fresh ETs. Encouraging data were obtained from a prospective trial using this approach, arguing that cleavage-stage NGS may represent a valuable addition to current aneuploidy screening methods. These findings require further validation in a well-designed randomized controlled trial. ACTRN12614001035617. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  1. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  2. MerCat: a versatile k-mer counter and diversity estimator for database-independent property analysis obtained from metagenomic and/or metatranscriptomic sequencing data

    Energy Technology Data Exchange (ETDEWEB)

    White, Richard A.; Panyala, Ajay R.; Glass, Kevin A.; Colby, Sean M.; Glaesemann, Kurt R.; Jansson, Georg C.; Jansson, Janet K.

    2017-02-21

    MerCat is a parallel, highly scalable and modular software package for robust property analysis of features in next-generation sequencing data. MerCat inputs include assembled contigs and raw sequence reads from any platform, resulting in feature abundance count tables. MerCat allows for direct analysis of data properties without the reference sequence database dependency commonly required by search tools such as BLAST and/or DIAMOND for compositional analysis of whole community shotgun sequencing (e.g. metagenomes and metatranscriptomes).
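
    MerCat itself adds parallelism, modular property analysis and diversity estimation; purely as an illustration of the core, database-independent operation of counting k-mers in reads or contigs, a minimal single-threaded sketch (with hypothetical inputs) could look as follows.

```python
from collections import Counter

def count_kmers(reads, k):
    """Count k-mers across a collection of reads/contigs.
    (MerCat adds parallelism, canonicalisation options and diversity
    statistics; this only illustrates the core counting step.)"""
    counts = Counter()
    for seq in reads:
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if "N" not in kmer:          # skip ambiguous bases
                counts[kmer] += 1
    return counts

reads = ["ACGTACGTNACGT", "TTGCACGT"]    # hypothetical input sequences
print(count_kmers(reads, 4).most_common(3))
```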

  3. Detection of Mycobacterium avium subspecies paratuberculosis specific IS900 insertion sequences in bulk-tank milk samples obtained from different regions throughout Switzerland

    Directory of Open Access Journals (Sweden)

    Stephan Roger

    2002-06-01

    Full Text Available Abstract Background Since Mycobacterium avium subspecies paratuberculosis (MAP) was isolated from intestinal tissue of a human patient suffering from Crohn's disease, there has been a controversial discussion about whether or not MAP has a role in the etiology of Crohn's disease. Raw milk may be a potential vehicle for the transmission of MAP to the human population. In a previous paper, we demonstrated that MAP is found in raw milk samples obtained from a defined region in Switzerland. The aim of this work is to collect data on the prevalence of the MAP-specific IS900 insertion sequence in bulk-tank milk samples from different regions of Switzerland. Furthermore, we examined possible correlations between the presence of MAP and the somatic cell counts, the total colony counts and the presence of Enterobacteriaceae. Results 273 (19.7%) of the 1384 examined bulk-tank milk samples tested IS900 PCR-positive. The prevalence in the different regions of Switzerland, however, showed significant differences, ranging from 1.7% to 49.2%. Furthermore, there were no statistically significant (p > 0.05) differences between the somatic cell counts and the total colony counts of PCR-positive and PCR-negative milk samples. Enterobacteriaceae occurred as often in IS900 PCR-positive as in PCR-negative milk samples. Conclusion This is the first study to investigate the prevalence of MAP in bulk-tank milk samples throughout Switzerland and to infer the herd-level prevalence of MAP infection in dairy herds. The prevalence of 19.7% IS900 PCR-positive bulk-milk samples shows a wide distribution of subclinical MAP infections in dairy stock in Switzerland. MAP can therefore often be transmitted to humans by raw milk consumption.

  4. SPECIAL REPORT: Creating Conference Video

    Directory of Open Access Journals (Sweden)

    Noel F. Peden

    2008-12-01

    Full Text Available Capturing video at a conference is easy. Doing it so that the product is useful is another matter. Many subtle problems come into play before the video and audio obtained can be used to create a final product. This article discusses what the author learned over two years of shooting and editing video for the Code4Lib conference.

  5. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one-stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  6. The central region of the msp gene of Treponema denticola has sequence heterogeneity among clinical samples, obtained from patients with periodontitis

    Directory of Open Access Journals (Sweden)

    Miragliotta Luisa

    2010-12-01

    Full Text Available Abstract Background Treponema denticola is an oral spirochete involved in the pathogenesis and progression of periodontal disease. Among its virulence factors, the major surface protein (MSP) plays a role in the interaction between the treponeme and the host. To understand the possible evolution of this protein, we analyzed the sequence of the msp gene in 17 T. denticola-positive clinical samples. Methods The nucleotide and amino acid sequences of MSP were determined by PCR amplification and sequencing in seventeen T. denticola clinical specimens to evaluate the genetic variability and the phylogenetic relationship of the T. denticola msp gene among the different amplified sequences of positive samples. In silico antigenic analysis was performed on each MSP sequence to determine possible antigenic variation. Results The msp sequences showed highly conserved 5' and 3' ends and a central region that varies substantially. Phylogenetic analysis categorized the 17 specimens into 2 principal groups, suggesting a low rate of evolutionary variability and an elevated degree of conservation of msp in clinically derived genetic material. Analysis of the predicted antigenic variability between isolates demonstrated that the major differences lay between amino acids 200 and 300. Conclusion These findings show, for the first time, the nucleotide and amino acid variation of the msp gene in infecting T. denticola in vivo. These data suggest that the antigenic variability found in the MSP molecule may be an important factor involved in immune evasion by T. denticola.

  7. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Andrey Norkin

    2007-02-01

    Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented as a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by a 3D-DCT or a hybrid LOT+DCT 3D transform. The coding scheme is targeted at mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.
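
    As an illustration of the 'shaper' idea, a coarse approximation obtained by pruning the high-frequency coefficients of a block-wise 3D-DCT, here is a minimal sketch using SciPy's dctn/idctn; the block size, number of retained coefficients and function names are assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shaper(block, keep=4):
    """Coarse approximation of a video block shaped (t, y, x): a 3D-DCT
    whose high-frequency coefficients are pruned. The residual
    (block - approximation) would then be split between two descriptions,
    in the spirit of the coder described above."""
    coeffs = dctn(block, norm="ortho")
    pruned = np.zeros_like(coeffs)
    pruned[:keep, :keep, :keep] = coeffs[:keep, :keep, :keep]   # keep low frequencies
    return idctn(pruned, norm="ortho")

rng = np.random.default_rng(0)
block = rng.random((8, 8, 8))          # one 8x8x8 spatio-temporal block
approx = shaper(block)
residual = block - approx
print(np.abs(residual).mean())
```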

  8. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web-publishing, many popular contents are being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious, and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business-related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
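
    A minimal sketch of the signature idea, picking for each seed image the closest frame of a video and then comparing the resulting signatures frame by frame, is given below; the plain pixel distance and the synthetic data are illustrative assumptions rather than the paper's actual similarity measure.

```python
import numpy as np

def video_signature(frames, seeds):
    """For each seed image, keep the video frame closest to it (plain pixel
    distance here); the list of chosen frames acts as the video's signature."""
    sig = []
    for seed in seeds:
        d = [np.linalg.norm(f.astype(float) - seed.astype(float)) for f in frames]
        sig.append(frames[int(np.argmin(d))])
    return sig

def signature_similarity(sig_a, sig_b):
    """Mean distance between corresponding signature frames; small values
    suggest the two videos are near-duplicates."""
    return float(np.mean([np.linalg.norm(a.astype(float) - b.astype(float))
                          for a, b in zip(sig_a, sig_b)]))

# Synthetic illustration: a video and a slightly noisy copy share the seeds.
rng = np.random.default_rng(0)
seeds = [rng.integers(0, 256, (16, 16)) for _ in range(3)]
frames_a = [rng.integers(0, 256, (16, 16)) for _ in range(20)]
frames_b = [f + rng.integers(-5, 6, (16, 16)) for f in frames_a]
print(signature_similarity(video_signature(frames_a, seeds),
                           video_signature(frames_b, seeds)))
```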

  9. Complete genome sequence of a copper-resistant bacterium from the citrus phyllosphere, Stenotrophomonas sp. strain LM091, obtained using long-read technology

    OpenAIRE

    Richard, Damien; Boyer, Claudine; Lefeuvre, Pierre; Pruvost, Olivier

    2016-01-01

    The Stenotrophomonas genus shows great adaptive potential including resistance to multiple antimicrobials, opportunistic pathogenicity, and production of numerous secondary metabolites. Using long-read technology, we report the sequence of a plant-associated Stenotrophomonas strain originating from the citrus phyllosphere that displays a copper resistance phenotype.

  10. Draft Genome Sequence of Uncultivated Toluene-Degrading Desulfobulbaceae Bacterium Tol-SR, Obtained by Stable Isotope Probing Using [13C6]Toluene.

    Science.gov (United States)

    Abu Laban, Nidal; Tan, BoonFei; Dao, Anh; Foght, Julia

    2015-01-15

    The draft genome of a member of the bacterial family Desulfobulbaceae (phylum Deltaproteobacteria) was assembled from the metagenome of a sulfidogenic [(13)C6]toluene-degrading enrichment culture. The "Desulfobulbaceae bacterium Tol-SR" genome is distinguished from related, previously sequenced genomes by suites of genes associated with anaerobic toluene metabolism, including bss, bbs, and bam. Copyright © 2015 Abu Laban et al.

  11. Multicore-based 3D-DWT video encoder

    Science.gov (United States)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
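
    For readers unfamiliar with the 3D-DWT, the sketch below applies one separable Haar decomposition level over the time, row and column axes of a group of pictures; it is a didactic illustration only and does not reproduce the run-length coding engine or the multicore optimizations described above.

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the Haar wavelet transform along one axis
    (the length along that axis must be even)."""
    x = np.moveaxis(x, axis, 0)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def dwt3d_level(gop):
    """One 3D-DWT level on a group of pictures shaped (frames, H, W):
    the transform is applied separably over time, rows and columns,
    producing 8 sub-bands (LLL ... HHH)."""
    bands = [gop]
    for axis in range(3):
        bands = [b for band in bands for b in haar_1d(band, axis)]
    return bands           # bands[0] is the LLL (coarse) sub-band

gop = np.random.rand(16, 64, 64)          # 16-frame group of pictures
subbands = dwt3d_level(gop)
print(len(subbands), subbands[0].shape)   # 8 sub-bands, LLL is (8, 32, 32)
```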

  12. The complete mitochondrial genome of the invasive Ponto-Caspian goby Ponticola kessleri obtained from high-throughput sequencing using the Ion Torrent Personal Genome Machine.

    Science.gov (United States)

    Kalchhauser, Irene; Kutschera, Verena E; Burkhardt-Holm, Patricia

    2016-05-01

    We report the first complete mitochondrial genome (mitogenome) of an invasive Ponto-Caspian goby, Ponticola kessleri (bighead goby, Günther 1891). Ion Torrent PGM sequencing of total DNA from two individuals yielded a contig of 16,971 bp, with overlapping ends located in the repetitive control region, which was validated using Sanger sequencing. The final mitogenome of Ponticola kessleri has a size of 16,890 bp and contains the expected gene configuration of 13 protein-coding genes, 2 rRNA genes and 22 tRNA genes. In a comparison with complete mitogenomes from other goby species, we identified a translocation of tRNA-Glu in the mitogenome of P. kessleri. Rearrangements are unique and rare events, and can thus provide phylogenetic information.

  13. Comprehensive Virus Detection Using Next Generation Sequencing in Grapevine Vascular Tissues of Plants Obtained from the Wine Regions of Bohemia and Moravia (Czech Republic)

    Science.gov (United States)

    2016-01-01

    Comprehensive next generation sequencing virus detection was used to detect the whole spectrum of viruses and viroids in selected grapevines from the Czech Republic. The novel NGS approach was based on sequencing libraries of small RNA isolated from grapevine vascular tissues. Eight previously partially-characterized grapevines of diverse varieties were selected and subjected to analysis: Chardonnay, Laurot, Guzal Kara, and rootstock Kober 125AA from the Moravia wine-producing region; plus Müller-Thurgau and Pinot Noir from the Bohemia wine-producing region, both in the Czech Republic. Using next generation sequencing of small RNA, the presence of 8 viruses and 2 viroids were detected in a set of eight grapevines; therefore, confirming the high effectiveness of the technique in plant virology and producing results supporting previous data on multiple infected grapevines in Czech vineyards. Among the pathogens detected, the Grapevine rupestris vein feathering virus and Grapevine yellow speckle viroid 1 were recorded in the Czech Republic for the first time. PMID:27959951

  14. Comprehensive Virus Detection Using Next Generation Sequencing in Grapevine Vascular Tissues of Plants Obtained from the Wine Regions of Bohemia and Moravia (Czech Republic.

    Directory of Open Access Journals (Sweden)

    Aleš Eichmeier

    Full Text Available Comprehensive next generation sequencing virus detection was used to detect the whole spectrum of viruses and viroids in selected grapevines from the Czech Republic. The novel NGS approach was based on sequencing libraries of small RNA isolated from grapevine vascular tissues. Eight previously partially-characterized grapevines of diverse varieties were selected and subjected to analysis: Chardonnay, Laurot, Guzal Kara, and rootstock Kober 125AA from the Moravia wine-producing region; plus Müller-Thurgau and Pinot Noir from the Bohemia wine-producing region, both in the Czech Republic. Using next generation sequencing of small RNA, the presence of 8 viruses and 2 viroids were detected in a set of eight grapevines; therefore, confirming the high effectiveness of the technique in plant virology and producing results supporting previous data on multiple infected grapevines in Czech vineyards. Among the pathogens detected, the Grapevine rupestris vein feathering virus and Grapevine yellow speckle viroid 1 were recorded in the Czech Republic for the first time.

  15. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and 'walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  16. Integrating Illumination, Motion, and Shape Models for Robust Face Recognition in Video

    Directory of Open Access Journals (Sweden)

    Keyur Patel

    2008-05-01

    Full Text Available The use of video sequences for face recognition has been relatively less studied compared to image-based approaches. In this paper, we present an analysis-by-synthesis framework for face recognition from video sequences that is robust to large changes in facial pose and lighting conditions. This requires tracking the video sequence, as well as recognition algorithms that are able to integrate information over the entire video; we address both these problems. Our method is based on a recently obtained theoretical result that can integrate the effects of motion, lighting, and shape in generating an image using a perspective camera. This result can be used to estimate the pose and structure of the face and the illumination conditions for each frame in a video sequence in the presence of multiple point and extended light sources. We propose a new inverse compositional estimation approach for this purpose. We then synthesize images using the face model estimated from the training data corresponding to the conditions in the probe sequences. Similarity between the synthesized and the probe images is computed using suitable distance measurements. The method can handle situations where the pose and lighting conditions in the training and testing data are completely disjoint. We show detailed performance analysis results and recognition scores on a large video dataset.

  17. Video Analysis: Lessons from Professional Video Editing Practice

    Directory of Open Access Journals (Sweden)

    Eric Laurier

    2008-09-01

    Full Text Available In this paper we join a growing body of studies that learn from vernacular video analysts quite what video analysis as an intelligible course of action might be. Rather than pursuing epistemic questions regarding video as a number of other studies of video analysis have done, our concern here is with the crafts of producing the filmic. As such we examine how audio and video clips are indexed and brought to hand during the logging process, how a first assembly of the film is built at the editing bench and how logics of shot sequencing relate to wider concerns of plotting, genre and so on. In its conclusion we make a number of suggestions about the future directions of studying video and film editors at work. URN: urn:nbn:de:0114-fqs0803378

  18. Primary structure of rat cardiac beta-adrenergic and muscarinic cholinergic receptors obtained by automated DNA sequence analysis: further evidence for a multigene family.

    Science.gov (United States)

    Gocayne, J; Robinson, D A; FitzGerald, M G; Chung, F Z; Kerlavage, A R; Lentes, K U; Lai, J; Wang, C D; Fraser, C M; Venter, J C

    1987-12-01

    Two cDNA clones, lambda RHM-MF and lambda RHB-DAR, encoding the muscarinic cholinergic receptor and the beta-adrenergic receptor, respectively, have been isolated from a rat heart cDNA library. The cDNA clones were characterized by restriction mapping and automated DNA sequence analysis utilizing fluorescent dye primers. The rat heart muscarinic receptor consists of 466 amino acids and has a calculated molecular weight of 51,543. The rat heart beta-adrenergic receptor consists of 418 amino acids and has a calculated molecular weight of 46,890. The two cardiac receptors have substantial amino acid homology (27.2% identity, 50.6% with favored substitutions). The rat cardiac beta receptor has 88.0% homology (92.5% with favored substitutions) with the human brain beta receptor and the rat cardiac muscarinic receptor has 94.6% homology (97.6% with favored substitutions) with the porcine cardiac muscarinic receptor. The muscarinic cholinergic and beta-adrenergic receptors appear to be as conserved as hemoglobin and cytochrome c but less conserved than histones and are clearly members of a multigene family. These data support our hypothesis, based upon biochemical and immunological evidence, that suggests considerable structural homology and evolutionary conservation between adrenergic and muscarinic cholinergic receptors. To our knowledge, this is the first report utilizing automated DNA sequence analysis to determine the structure of a gene.

  19. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  20. Biochemical Characterization, Thermal Stability, and Partial Sequence of a Novel Exo-Polygalacturonase from the Thermophilic Fungus Rhizomucor pusillus A13.36 Obtained by Submerged Cultivation

    Directory of Open Access Journals (Sweden)

    Lucas Vinícius Trindade

    2016-01-01

    Full Text Available This work reports the production of an exo-polygalacturonase (exo-PG) by Rhizomucor pusillus A13.36 in submerged cultivation (SmC) in a shaker at 45°C for 96 h. A single pectinase was found and purified by salt precipitation and hydrophobic interaction chromatography in order to analyze its thermal stability. The pectinase has an estimated Mw of approximately 43.5–47 kDa and an optimum pH of 4.0, but is stable at pH 3.5 to 9.5, and has an optimum temperature of 61°C. It presents thermal stability between 30 and 60°C, shows 70% activation in the presence of Ca2+, and was tested using citrus pectin with a degree of methyl esterification (DE) of 26%. Ea(d) for irreversible denaturation was 125.5 kJ/mol, with positive variations of entropy and enthalpy, and ΔG(d) values were around 50 kJ/mol. The hydrolysis of polygalacturonate was analyzed by capillary electrophoresis, which displayed a pattern of sequential (exo) hydrolysis. Partial identification of the primary sequence was done by MALDI-TOF MS, and a comparison with databases showed the highest identity of the sequenced fragments of exo-PG from R. pusillus with an exo-pectinase from Aspergillus fumigatus. Pectin hydrolysis showed a sigmoidal curve in the Michaelis-Menten plot.

  1. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  2. Video databases: automatic retrieval based on content.

    Science.gov (United States)

    Bolle, R. M.; Yeo, B.-L.; Yeung, M.

    Digital video databases are becoming more and more pervasive and finding video of interest in large databases is rapidly becoming a problem. Intelligent means of quick content-based video retrieval and content-based rapid video viewing is, therefore, an important topic of research. Video is a rich source of data, it contains visual and audio information, and in many cases, there is text associated with the video. Content-based video retrieval should use all this information in an efficient and effective way. From a human perspective, a video query can be viewed as an iterated sequence of navigating, searching, browsing, and viewing. This paper addresses video search in terms of these phases.

  3. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains hard using currently available techniques. However, a wide range of video has inherent structure, such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.

  4. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  5. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state-of-the-art using content consistency, index consistency and content-index consistency with respect to the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
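
    The sketch below illustrates, in a greatly simplified greedy form, what dictionary selection with a similar-inhibition term can look like: each selected frame should represent the video well while being penalised for similarity to frames already chosen. The descriptors, weights and greedy optimisation are assumptions for illustration and do not reproduce the paper's formulation.

```python
import numpy as np

def select_keyframes(features, n_select, inhibition=0.5):
    """Greedy sketch of dictionary selection with a similar-inhibition
    term. `features` is an (n_frames, d) array of per-frame descriptors."""
    sim = features @ features.T
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    sim = sim / (norms * norms.T + 1e-9)          # cosine similarity
    selected = []
    for _ in range(n_select):
        scores = []
        for i in range(len(features)):
            if i in selected:
                scores.append(-np.inf)
                continue
            coverage = sim[i].mean()              # how well frame i represents the video
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            scores.append(coverage - inhibition * redundancy)
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(0)
feats = rng.random((200, 32))          # e.g. colour/texture descriptors per frame
print(select_keyframes(feats, n_select=5))
```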

  6. Chimeric rhinoviruses obtained via genetic engineering or artificially induced recombination are viable only if the polyprotein coding sequence derives from the same species.

    Science.gov (United States)

    Schibler, Manuel; Piuz, Isabelle; Hao, Weidong; Tapparel, Caroline

    2015-04-01

    Recombination is a widespread phenomenon that ensures both the stability and variation of RNA viruses. This phenomenon occurs with different frequencies within species of the Enterovirus genus. Intraspecies recombination is described frequently among non-rhinovirus enteroviruses but appears to be sporadic in rhinoviruses. Interspecies recombination is even rarer for rhinoviruses and mostly is related to ancient events which contributed to the speciation of these viruses. We reported that artificially engineered 5' untranslated region (UTR) interspecies rhinovirus/rhinovirus or rhinovirus/non-rhinovirus enterovirus recombinants are fully viable. Using a similar approach, we demonstrated in this study that exchanges of the P1-2A polyprotein region between members of the same rhinovirus species, but not between members of different species, give rise to competent chimeras. To further assess the rhinovirus intra- and interspecies recombination potential, we used artificially induced recombination by cotransfection of 5'-end-deleted and 3'-end-deleted and replication-deficient genomes. In this system, intraspecies recombination also resulted in viable viruses with high frequency, whereas no interspecies rhinovirus recombinants could be recovered. Mapping intraspecies recombination sites within the polyprotein highlighted recombinant hotspots in nonstructural genes and at gene boundaries. Notably, all recombinants occurring at gene junctions presented in-frame sequence duplications, whereas most intragenic recombinants were homologous. Taken together, our results suggest that only intraspecies recombination gives rise to viable rhinovirus chimeras in the polyprotein coding region and that recombination hotspots map to nonstructural genes with in-frame duplications at gene boundaries. These data provide new insights regarding the mechanism and limitations of rhinovirus recombination. Recombination represents a means to ensure both the stability and the variation of RNA

  7. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  8. Plant DNA detection from grasshopper guts: A step-by-step protocol, from tissue preparation to obtaining plant DNA sequences

    Science.gov (United States)

    Avanesyan, Alina

    2014-01-01

    • Premise of the study: A PCR-based method of identifying ingested plant DNA in gut contents of Melanoplus grasshoppers was developed. Although previous investigations have focused on a variety of insects, there are no protocols available for plant DNA detection developed for grasshoppers, agricultural pests that significantly influence plant community composition. • Methods and Results: The developed protocol successfully used the noncoding region of the chloroplast trnL (UAA) gene and was tested in several feeding experiments. Plant DNA was obtained at seven time points post-ingestion from whole guts and separate gut sections, and was detectable up to 12 h post-ingestion in nymphs and 22 h post-ingestion in adult grasshoppers. • Conclusions: The proposed protocol is an effective, relatively quick, and low-cost method of detecting plant DNA from the grasshopper gut and its different sections. This has important applications, from exploring plant “movement” during food consumption, to detecting plant–insect interactions. PMID:25202604

  9. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    Science.gov (United States)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, the important advances and widespread availability of mobile technology (operating systems, GPUs, terminal resolution and so on) have encouraged the fast development of voice and video services like video-calling. While multimedia services have largely grown on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to traditional networks, the 3GPP (The 3rd Generation Partnership Project) worked on a high-performance standard for mobile called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps); however, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs offering good quality, except for the Opus codec (at 12.2 kbps).

  10. Internet video search

    NARCIS (Netherlands)

    Snoek, C.G.M.; Smeulders, A.W.M.

    2011-01-01

    In this tutorial, we focus on the challenges in internet video search, present methods how to achieve state-of-the-art performance while maintaining efficient execution, and indicate how to obtain improvements in the near future. Moreover, we give an overview of the latest developments and future

  11. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...... artificial sequence containing uncompressible data, all the 4:2:2, 8-bit test video material easily compresses losslessly to a rate below 125 Mbit/s. At this rate, video plus overhead can be contained in a single telecom 4th order PDH channel or a single STM-1 channel. Difficult 4:2:2, 10-bit test material....

  12. P2P Video Streaming Strategies based on Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    F.A. López-Fuentes

    2015-02-01

    Full Text Available Video streaming over the Internet has gained significant popularity in recent years, and academia and industry have put a great research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard providing more functionality to video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated video quality to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality between different videos from different sources. The obtained results show that our proposed strategies enable a system to improve its performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.

  13. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitization and the internet,...... however, new opportunities and challenges have arisen in relation to communicating and distributing research results to different target groups via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to what is being studied, remain relevant. Both classic and new...... issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...

  14. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  15. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after a careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching, by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation of errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
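
    A rough OpenCV sketch of the FAST-corner/binary-descriptor matching stage is shown below, using ORB (FAST keypoints with binary descriptors), Hamming-distance matching and a RANSAC homography; the spatially and temporally coherent motion-prior filter described in the paper is not reproduced here.

```python
import cv2
import numpy as np

def match_frames(img1, img2, max_matches=200):
    """Match two consecutive aerial frames with ORB (FAST keypoints plus
    binary descriptors), Hamming-distance matching and a RANSAC homography.
    The motion-prior filtering described in the paper is not reproduced;
    RANSAC alone removes most outliers in this sketch."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inliers.sum())

# H warps img1 into img2's frame, e.g. cv2.warpPerspective(img1, H, size),
# which is the building block of a key-frame-based stitching loop.
```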

  16. Markerless video analysis for movement quantification in pediatric epilepsy monitoring.

    Science.gov (United States)

    Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling

    2011-01-01

    This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
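
    A minimal OpenCV sketch of the kind of pipeline described, Gaussian-mixture background subtraction combined with an HSV colour mask for one coloured garment, is given below; the HSV range, parameters and function names are illustrative assumptions, not the system's actual implementation.

```python
import cv2
import numpy as np

# Hypothetical HSV range for one coloured pyjama sleeve (to be calibrated).
LOWER, UPPER = np.array([100, 80, 80]), np.array([130, 255, 255])

def track_body_part(video_path):
    """Per-frame centroid of a coloured body part: a Gaussian-mixture
    background subtractor isolates moving pixels, an HSV colour mask keeps
    only the garment colour, and the centroid of the overlap is recorded."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
    trajectory = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        motion = bg.apply(frame)                                   # foreground mask
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        colour = cv2.inRange(hsv, LOWER, UPPER)                    # garment colour mask
        mask = cv2.bitwise_and(motion, colour)
        ys, xs = np.nonzero(mask)
        trajectory.append((xs.mean(), ys.mean()) if len(xs) else None)
    cap.release()
    return trajectory   # movement features for seizure analysis start from this
```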

  17. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  18. Video temporal alignment for object viewpoint

    OpenAIRE

    Papazoglou, Anestis; Del Pero, Luca; Ferrari, Vittorio

    2017-01-01

    We address the problem of temporally aligning semantically similar videos, for example two videos of cars on different tracks. We present an alignment method that establishes frame-to-frame correspondences such that the two cars are seen from a similar viewpoint (e.g. facing right), while also being temporally smooth and visually pleasing. Unlike previous works, we do not assume that the videos show the same scripted sequence of events. We compare against three alternative methods, including ...

  19. [Video documentation in forensic practice].

    Science.gov (United States)

    Schyma, C; Schyma, P

    1995-01-01

    The authors report in Part 1 on their experiences with the Canon Ex1 Hi camcorder and the possibilities of documentation with modern video technology. Application examples in legal medicine and criminalistics are described: autopsy, crime scene, reconstruction of crimes, etc. The online video documentation of microscopic sessions makes the discussion of findings easier. The use of video films for instruction has met with a good response. The use of video documentation can be extended by digitizing (Part 2). Two frame grabbers are presented, with which we obtained good results in digitizing images captured from video. The best image quality is achieved by online use of an image analysis chain. Corel 5.0 and PicEd Cora 4.0 allow complete image processing and analysis. Digital image processing influences the objectivity of the documentation. The applicability of image libraries is discussed.

  20. Comparison of mutation patterns in full-genome A/H3N2 influenza sequences obtained directly from clinical samples and the same samples after a single MDCK passage.

    Directory of Open Access Journals (Sweden)

    Hong Kai Lee

    Full Text Available Human influenza viruses can be isolated efficiently from clinical samples using Madin-Darby canine kidney (MDCK cells. However, this process is known to induce mutations in the virus as it adapts to this non-human cell-line. We performed a systematic study to record the pattern of MDCK-induced mutations observed across the whole influenza A/H3N2 genome. Seventy-seven clinical samples collected from 2009-2011 were included in the study. Two full influenza genomes were obtained for each sample: one from virus obtained directly from the clinical sample and one from the matching isolate cultured in MDCK cells. Comparison of the full-genome sequences obtained from each of these sources showed that 42% of the 77 isolates had acquired at least one MDCK-induced mutation. The presence or absence of these mutations was independent of viral load or sample origin (in-patients versus out-patients. Notably, all the five hemagglutinin missense mutations were observed at the hemaggutinin 1 domain only, particularly within or proximal to the receptor binding sites and antigenic site of the virus. Furthermore, 23% of the 77 isolates had undergone a MDCK-induced missense mutation, D151G/N, in the neuraminidase segment. This mutation has been found to be associated with reduced drug sensitivity towards the neuraminidase inhibitors and increased viral receptor binding efficiency to host cells. In contrast, none of the neuraminidase sequences obtained directly from the clinical samples contained the D151G/N mutation, suggesting that this mutation may be an indicator of MDCK culture-induced changes. These D151 mutations can confound the interpretation of the hemagglutination inhibition assay and neuraminidase inhibitor resistance results when these are based on MDCK isolates. Such isolates are currently in routine use in the WHO influenza vaccine and drug-resistance surveillance programs. Potential data interpretation miscalls can therefore be avoided by careful

  1. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm that exploits the source statistics at the decoder based on the availability of the Side Information (SI). Stereo sequences are constituted by two views to give the user an illusion of depth. In this paper, we present a DVC decoder...

  2. R-clustering for egocentric video segmentation

    NARCIS (Netherlands)

    Talavera Martínez, Estefanía; Radeva, Petia

    2015-01-01

    In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering

  3. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Full Text Available Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.
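    Microsaccade detection of the kind compared in this study is commonly implemented with an adaptive velocity-threshold approach in the style of Engbert and Kliegl. The Python sketch below is a simplified illustration of that generic approach, not the study's protocol; the threshold multiplier, minimum duration and variable names are assumptions.

        # Simplified velocity-threshold microsaccade detector (Engbert/Kliegl style).
        # x, y: 1-D numpy arrays of gaze position in degrees; fs: sampling rate in Hz.
        import numpy as np

        def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
            vx = np.gradient(x) * fs                 # velocity in deg/s
            vy = np.gradient(y) * fs
            # Median-based noise estimate gives an adaptive elliptical threshold.
            sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
            sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
            above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
            # Group consecutive supra-threshold samples into candidate events.
            events, start = [], None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start >= min_samples:
                        events.append((start, i - 1))
                    start = None
            if start is not None and len(above) - start >= min_samples:
                events.append((start, len(above) - 1))
            return events                            # list of (onset, offset) sample indices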

  4. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  5. Video Analytics

    DEFF Research Database (Denmark)

    include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...

  6. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers, over Universal Mobile Telecommunication Systems (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
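    A regression-based predictor of this kind can be illustrated with an ordinary nonlinear least-squares fit that maps a few QoS parameters onto MOS. The sketch below uses scipy's curve_fit with an assumed log-linear model form and made-up placeholder data; it is not the trained ANFIS or regression model of the paper.

        # Toy non-intrusive MOS predictor: fit MOS as a function of send bitrate (SBR),
        # frame rate (FR) and packet/link loss rate (PLR). The model form is an assumption.
        import numpy as np
        from scipy.optimize import curve_fit

        def mos_model(X, a, b, c, d):
            sbr, fr, plr = X
            return a + b * np.log(sbr) + c * np.log(fr + 1.0) - d * plr

        # Hypothetical training data: SBR (kbps), FR (fps), PLR (%), MOS.
        sbr = np.array([128, 256, 384, 512, 768, 1024], dtype=float)
        fr  = np.array([10, 15, 15, 25, 25, 30], dtype=float)
        plr = np.array([5.0, 2.0, 1.0, 1.0, 0.5, 0.1])
        mos = np.array([2.1, 2.9, 3.3, 3.8, 4.1, 4.4])

        params, _ = curve_fit(mos_model, (sbr, fr, plr), mos)
        predicted = mos_model((np.array([640.0]), np.array([25.0]), np.array([0.5])), *params)
        print("Predicted MOS:", float(predicted[0]))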

  7. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.

  8. Video time encoding machines.

    Science.gov (United States)

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
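    The integrate-and-fire mechanism with feedback mentioned above can be illustrated with a scalar time encoding machine: the biased input is integrated and a spike is emitted whenever the integral crosses a threshold, after which the threshold amount is subtracted. The sketch below is a generic single-neuron illustration with assumed parameter values, not the full video TEM architecture described in the paper.

        # Single integrate-and-fire time encoding machine (TEM) for a 1-D signal u(t).
        import numpy as np

        def iaf_encode(u, dt, bias=1.0, kappa=1.0, delta=0.05):
            """Return spike times for input samples u taken every dt seconds."""
            v, t, spikes = 0.0, 0.0, []
            for sample in u:
                v += dt * (bias + sample) / kappa   # leakless integration
                t += dt
                if v >= delta:                      # threshold crossing -> spike
                    spikes.append(t)
                    v -= delta                      # feedback: reset by one threshold
            return np.array(spikes)

        # Example: encode a slow sinusoid sampled at 10 kHz.
        dt = 1e-4
        tt = np.arange(0.0, 0.1, dt)
        spike_times = iaf_encode(0.5 * np.sin(2 * np.pi * 20 * tt), dt)
        print(len(spike_times), "spikes in 100 ms")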

  9. Neural Basis of Video Gaming: A Systematic Review

    OpenAIRE

    Marc Palaus; Marron, Elena M.; Raquel Viejo-Sobera; Diego Redolar-Ripoll

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video ga...

  10. Neural Basis of Video Gaming: A Systematic Review

    OpenAIRE

    Palaus, Marc; Marron, Elena M.; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. We aim ...

  11. Super-Resolution Still and Video Reconstruction from MPEG Coded Video

    National Research Council Canada - National Science Library

    Altunbasak, Yucel

    2004-01-01

    Transform coding is a popular and effective compression method for both still images and video sequences, as is evident from its widespread use in international media coding standards such as MPEG, H.263 and JPEG...

  12. NEI You Tube Videos: Amblyopia

    Medline Plus


  13. NEI You Tube Videos: Amblyopia

    Medline Plus


  14. NEI You Tube Videos: Amblyopia

    Science.gov (United States)


  15. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices. Further resolution increases bring numerous challenges. Due to the reduced pixel size, less light is captured, leading to increased noise levels. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase due to the higher frame rates. To reduce the complexity and the price of the camera, one sensor captures all three colors by relying on Color Filter Arrays. In order to obtain a full resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. In order to reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
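    For contrast with the joint approach proposed in the paper, a conventional pipeline treats the steps sequentially: demosaic the Bayer raw frame, then denoise the result. The OpenCV sketch below shows only that naive sequential baseline under the assumption of an 8-bit BGGR Bayer mosaic; it does not implement the joint correction, denoising and demosaicking method described above, and the file names are placeholders.

        # Baseline sequential pipeline (demosaic, then denoise) for one raw frame.
        # `raw` is assumed to be an 8-bit single-channel Bayer (BG) mosaic image.
        import cv2

        def demosaic_then_denoise(raw):
            rgb = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)            # demosaicking
            # Non-local means denoising: h=5, hColor=5, template 7, search 21.
            return cv2.fastNlMeansDenoisingColored(rgb, None, 5, 5, 7, 21)

        raw = cv2.imread("frame_bayer.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file
        if raw is not None:
            cv2.imwrite("frame_rgb_denoised.png", demosaic_then_denoise(raw))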

  16. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

  17. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed ...

  18. Evaluation of experimental UAV video change detection

    Science.gov (United States)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, i.e., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. With respect to the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger,1 and Saur et al.2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect ...
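    The homography-based pixel-wise registration and difference analysis described above can be sketched with standard OpenCV building blocks: feature matching, RANSAC homography estimation, warping and absolute differencing. The snippet below is a generic illustration rather than the extended approach evaluated in the paper, and the ORB settings and threshold are arbitrary assumptions.

        # Register frame `before` onto frame `after` with a homography, then difference.
        import cv2
        import numpy as np

        def change_mask(before, after, diff_thresh=40):
            g1 = cv2.cvtColor(before, cv2.COLOR_BGR2GRAY)
            g2 = cv2.cvtColor(after, cv2.COLOR_BGR2GRAY)
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(g1, None)
            k2, d2 = orb.detectAndCompute(g2, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            warped = cv2.warpPerspective(g1, H, (g2.shape[1], g2.shape[0]))
            diff = cv2.absdiff(warped, g2)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            return mask   # white pixels mark candidate changes (incl. registration errors)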

  19. Jailed - Video (https://jual.nipissingu.ca/wp-content/uploads/sites/25/2014/06/v61214.m4v)

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper and pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  20. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  1. Video Salient Object Detection via Fully Convolutional Networks.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  2. VORTEX: video retrieval and tracking from compressed multimedia databases--template matching from MPEG-2 video compression standard

    Science.gov (United States)

    Schonfeld, Dan; Lelescu, Dan

    1998-10-01

    In this paper, a novel visual search engine for video retrieval and tracking from compressed multimedia databases is proposed. Our approach exploits the structure of video compression standards in order to perform object matching directly on the compressed video data. This is achieved by utilizing motion compensation--a critical prediction filter embedded in video compression standards--to estimate and interpolate the desired method for template matching. Motion analysis is used to implement fast tracking of objects of interest on the compressed video data. Being presented with a query in the form of template images of objects, the system operates on the compressed video in order to find the images or video sequences where those objects are presented and their positions in the image. This in turn enables the retrieval and display of the query-relevant sequences.

  3. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  4. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is tremendously growing and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...

  5. Detectors for scanning video imagers

    Science.gov (United States)

    Webb, Robert H.; Hughes, George W.

    1993-11-01

    In scanning video imagers, a single detector sees each pixel for only 100 ns, so the bandwidth of the detector needs to be about 10 MHz. How this fact influences the choice of detectors for scanning systems is described here. Some important parametric quantities obtained from manufacturer specifications are related and it is shown how to compare detectors when specified quantities differ.

  6. Action Search: Learning to Search for Human Activities in Untrimmed Videos

    KAUST Repository

    Alwassel, Humam

    2017-06-13

    Traditional approaches for action detection use trimmed data to learn sophisticated action detector models. Although these methods have achieved great success at detecting human actions, we argue that a large amount of information is discarded when the process through which this trimmed data is obtained is ignored. In this paper, we propose Action Search, a novel approach that mimics the way people annotate activities in video sequences. Using a Recurrent Neural Network, Action Search can efficiently explore a video and determine the time boundaries during which an action occurs. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently but also to accurately find human activities, outperforming state-of-the-art methods.

  7. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that evaluate the quality of video. The combination of the perceptual and compressive sensing approaches is outlined based on recent investigations. The performance and the complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and the appropriate coding schemes are reviewed.

  8. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show ...
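    The final mapping step, regressing the extracted features onto subjective scores with an Elastic Net, can be illustrated with scikit-learn. The sketch below assumes a feature matrix X (one row per coded sequence) and a vector of subjective scores y; the random placeholder data stands in for the real features, so this is not the proposed scheme's feature extraction.

        # Map per-sequence VQA features to subjective quality with an Elastic Net.
        # X: (n_sequences, n_features) array, y: subjective scores (e.g. MOS/DMOS).
        import numpy as np
        from sklearn.linear_model import ElasticNet
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 12))            # placeholder features
        y = X @ rng.normal(size=12) + 3.5        # placeholder scores

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_tr, y_tr)
        print("Predicted quality for held-out sequences:", model.predict(X_te)[:5])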

  9. Video-assisted segmentation of speech and audio track

    Science.gov (United States)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

    Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for an effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.

  10. Encoding Concept Prototypes for Video Event Detection and Summarization

    NARCIS (Netherlands)

    Mazloom, M.; Habibian, A.; Liu, D.; Snoek, C.G.M.; Chang, S.F.

    2015-01-01

    This paper proposes a new semantic video representation for few and zero example event detection and unsupervised video event summarization. Different from existing works, which obtain a semantic representation by training concepts over images or entire video clips, we propose an algorithm that

  11. Defining the cognitive enhancing properties of video games: Steps Towards Standardization and Translation

    OpenAIRE

    Goodwin, Shikha Jain; Dziobek, Derek

    2016-01-01

    Ever since video games were available to the general public, they have intrigued brain researchers for many reasons. There is an enormous amount of diversity in the video game research, ranging from types of video games used, the amount of time spent playing video games, the definition of video gamer versus non-gamer to the results obtained after playing video games. In this paper, our goal is to provide a critical discussion of these issues, along with some steps towards generalization using...

  12. Object Recognition in Videos Utilizing Hierarchical and Temporal Objectness with Deep Neural Networks

    OpenAIRE

    Peng, Liang

    2017-01-01

    This dissertation develops a novel system for object recognition in videos. The input of the system is a set of unconstrained videos containing a known set of objects. The output is the locations and categories for each object in each frame across all videos. Initially, a shot boundary detection algorithm is applied to the videos to divide them into multiple sequences separated by the identified shot boundaries. Since each of these sequences still contains moderate content variations, we furt...

  13. Compact video synopsis via global spatiotemporal optimization.

    Science.gov (United States)

    Nie, Yongwei; Xiao, Chunxia; Sun, Hanqiu; Li, Ping

    2013-10-01

    Video synopsis aims at providing condensed representations of video data sets that can be easily captured from digital cameras nowadays, especially for daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects if the video is compressed too much. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit with the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain the high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results have shown that the compact video synopsis we produced can be browsed quickly, preserves relative spatiotemporal relationships, and avoids motion collisions.

  14. Using Video Monitors for Teaching Color Science.

    Science.gov (United States)

    Chermak, Vincent

    1997-10-01

    The emission spectra of phosphors from video monitors can be used as primary color standards since the color emissions do not vary among manufacturers. As a result, video monitors provide an ideal medium for studying additive and subtractive color mixing. A color monitor, a video graphics program, and a monochromator can be used to obtain both the transmission and absorption spectra of transparent colored films. The graphics program provides the tristimulus values from which the chromaticity coordinates of the films can be obtained. The schema defining these coordinates and how they are used will be described.
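    The chromaticity coordinates mentioned here follow the standard CIE definition from the tristimulus values, x = X/(X+Y+Z) and y = Y/(X+Y+Z). A minimal helper, with purely illustrative input values:

        # CIE chromaticity coordinates from tristimulus values (standard definition).
        def chromaticity(X, Y, Z):
            s = X + Y + Z
            return X / s, Y / s            # (x, y); z = 1 - x - y

        # Illustrative (made-up) tristimulus values for a phosphor emission:
        x, y = chromaticity(41.2, 21.3, 1.9)
        print(round(x, 3), round(y, 3))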

  15. Perceived Quality of Full HD Video - Subjective Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2016-01-01

    Full Text Available In recent years, an interest in multimedia services has become a global trend and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfection and the efficiency of compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on perceived video quality. The evaluation is done for four full HD sequences that differ in content; the distinction is based on the Spatial Information (SI) and Temporal Information (TI) indices of the test sequences. Finally, experimental results show up to 30% bitrate reduction for H.265 and VP9 compared with the reference H.264.
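    The Spatial Information (SI) and Temporal Information (TI) indices used to characterize the scenes are defined in ITU-T P.910 as the maxima over time of the spatial standard deviation of the Sobel-filtered frame and of the inter-frame difference, respectively. The sketch below is a straightforward OpenCV implementation of those definitions; the file name is a placeholder.

        # Spatial Information (SI) and Temporal Information (TI) per ITU-T P.910.
        import cv2
        import numpy as np

        def si_ti(video_path):
            cap = cv2.VideoCapture(video_path)
            si_values, ti_values, prev = [], [], None
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
                gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
                gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
                si_values.append(np.std(np.sqrt(gx * gx + gy * gy)))
                if prev is not None:
                    ti_values.append(np.std(gray - prev))
                prev = gray
            cap.release()
            return (max(si_values) if si_values else 0.0,
                    max(ti_values) if ti_values else 0.0)

        print(si_ti("test_sequence.mp4"))   # placeholder file name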

  16. Acoustic Neuroma Educational Video

    Medline Plus


  17. Video Games and Citizenship

    National Research Council Canada - National Science Library

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    ... by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new social spaces which emerge in video game culture and how these spaces relate to community building and citizenship...

  18. Videos, Podcasts and Livechats

    Medline Plus


  19. Videos, Podcasts and Livechats

    Medline Plus


  20. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming "the new black" in academia, if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well... ...questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia. The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts...

  1. Acoustic Neuroma Educational Video

    Medline Plus


  2. Acoustic Neuroma Educational Video

    Medline Plus


  3. Videos, Podcasts and Livechats

    Medline Plus


  4. Videos, Podcasts and Livechats

    Science.gov (United States)


  5. Videos, Podcasts and Livechats

    Medline Plus


  6. Acoustic Neuroma Educational Video

    Medline Plus


  7. Video Screen Capture Basics

    Science.gov (United States)

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  8. Videos, Podcasts and Livechats

    Medline Plus


  9. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant, but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN does not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  10. Making good physics videos

    Science.gov (United States)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators to post video pre-labs or to flip our classrooms. In this article, I share my advice on creating engaging physics videos.

  11. Desktop video conferencing

    OpenAIRE

    Potter, Ray; Roberts, Deborah

    2007-01-01

    This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...

  12. 47 CFR 79.3 - Video description of video programming.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply: (1...

  13. COMPARATIVE STUDY OF COMPRESSION TECHNIQUES FOR SYNTHETIC VIDEOS

    OpenAIRE

    Ayman Abdalla; Ahmad Mazhar; Mosa Salah

    2014-01-01

    We evaluate the performance of three state of the art video codecs on synthetic videos. The evaluation is based on both subjective and objective quality metrics. The subjective quality of the compressed video sequences is evaluated using the Double Stimulus Impairment Scale (DSIS) assessment metric while the Peak Signal-to-Noise Ratio (PSNR) is used for the objective evaluation. An extensive number of experiments are conducted to study the effect of frame rate and resolution o...

  14. Object detection in surveillance video from dense trajectories

    OpenAIRE

    Zhai, Mengyao

    2015-01-01

    Detecting objects such as humans or vehicles is a central problem in surveillance video. Myriad standard approaches exist for this problem. At their core, approaches consider either the appearance of people, patterns of their motion, or differences from the background. In this paper we build on dense trajectories, a state-of-the-art approach for describing spatio-temporal patterns in video sequences. We demonstrate an application of dense trajectories to object detection in surveillance video...

  15. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  16. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students from a 5th-semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist... video, nature of the interactional space, and material and spatial semiotics.

  17. Developing a Promotional Video

    Science.gov (United States)

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  18. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    Science.gov (United States)

    Marijan, Malisa; Demirkol, Ilker; Maricić, Danijel I.; Sharma, Gaurav; Ignjatović, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic for the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules, in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel level sigma-delta (Σ∆) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget.

  19. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
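    Although the algorithm above operates on macroblock features taken directly from the MPEG compressed stream, the idea of collapsing the video into a 1D frame-difference curve and thresholding it can be illustrated in the pixel domain. The sketch below is that simpler pixel-domain analogue, not the compressed-domain method; the threshold and file name are arbitrary assumptions.

        # Pixel-domain analogue of a 1-D frame-difference curve for shot detection.
        import cv2
        import numpy as np

        def shot_boundaries(video_path, thresh=30.0):
            cap = cv2.VideoCapture(video_path)
            curve, prev = [], None
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
                if prev is not None:
                    curve.append(float(np.mean(np.abs(gray - prev))))
                prev = gray
            cap.release()
            # A cut is declared where the difference curve jumps above the threshold.
            return [i + 1 for i, d in enumerate(curve) if d > thresh], curve

        cuts, curve = shot_boundaries("news_clip.mpg")   # placeholder file name
        print("Detected shot boundaries at frames:", cuts)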

  20. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  1. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  2. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  3. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  4. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image; Videodosimetria: avaliacao da dose da radiacao X atraves da imagem videofluroscopica

    Energy Technology Data Exchange (ETDEWEB)

    Nova, Joao Luiz Leocadio da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Centro de Ciencias da Saude. Nucleo de Tecnologia Educacional para a Saude; Lopes, Ricardo Tadeu [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    1996-12-31

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures with an online video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging. 3 refs., 2 figs., 2 tabs.

  5. Video Salient Object Detection via Fully Convolutional Networks

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    2018-01-01

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: (1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data, and (2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image datasets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the DAVIS dataset (MAE of .06) and the FBMS dataset (MAE of .07), and do so with much improved speed (2fps with all steps).

  6. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as in gunpowder blasting analysis and the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
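    The TCI measurement model underlying this work sums T mask-modulated high-speed frames into a single coded snapshot; TwIST or GMM reconstruction then inverts that model with priors. The sketch below shows only the forward model with T=8 and a random binary mask as an illustration; it is not a reconstruction implementation, and the frame data are placeholders.

        # Forward model of temporal compressive imaging: one coded snapshot from T frames.
        import numpy as np

        def tci_snapshot(frames, masks):
            """frames, masks: arrays of shape (T, H, W); returns the coded measurement."""
            return np.sum(frames * masks, axis=0)

        T, H, W = 8, 256, 256
        rng = np.random.default_rng(1)
        frames = rng.random((T, H, W))                          # placeholder high-speed frames
        masks = (rng.random((T, H, W)) > 0.5).astype(float)     # random binary coded masks
        y = tci_snapshot(frames, masks)                         # single compressive frame
        print(y.shape)   # (256, 256); the 8 frames are recovered with TwIST/GMM-style priors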

  7. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence followed by compression using EBCOT generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  8. In vivo skin elastography with high-definition optical videos.

    Science.gov (United States)

    Zhang, Yong; Brodell, Robert T; Mostow, Eliot N; Vinyard, Christopher J; Marie, Hazel

    2009-08-01

    Continuous measurements of biomechanical properties of skin provide potentially valuable information to dermatologists for both clinical diagnosis and quantitative assessment of therapy. This paper presents an experimental study on in vivo imaging of skin elastic properties using high-definition optical videos. The objective is to (i) investigate whether skin property abnormalities can be detected in the computed strain elastograms, (ii) quantify property abnormalities with a Relative Strain Index (RSI), so that an objective rating system can be established, (iii) determine whether certain skin diseases are more amenable to optical elastography and (iv) identify factors that may have an adverse impact on the quality of strain elastograms. There are three steps in optical skin elastography: (i) skin deformations are recorded in a video sequence using a high-definition camcorder, (ii) a dense motion field between two adjacent video frames is obtained using a robust optical flow algorithm, with which a cumulative motion field between two frames of a larger interval is derived and (iii) a strain elastogram is computed by applying two weighted gradient filters to the cumulative motion data. Experiments were carried out using videos of 25 patients. In the three cases presented in this article (hypertrophic lichen planus, seborrheic keratosis and psoriasis vulgaris), abnormal tissues associated with the skin diseases were successfully identified in the elastograms. There exists a good correspondence between the shape of property abnormalities and the area of diseased skin. The computed RSI gives a quantitative measure of the magnitude of property abnormalities that is consistent with the skin stiffness observed on clinical examinations. Optical elastography is a promising imaging modality that is capable of capturing disease-induced property changes. Its main advantage is that an elastogram presents a continuous description of the spatial variation of skin properties on
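    Steps (ii) and (iii) above, dense motion estimation followed by spatial differentiation of the accumulated displacement field, can be approximated with off-the-shelf tools. The sketch below uses Farneback optical flow and simple displacement gradients between two frames; it is a much cruder stand-in for the robust optical flow and weighted gradient filters used in the study.

        # Crude strain map between two skin-video frames: dense flow, then displacement gradient.
        import cv2
        import numpy as np

        def strain_map(frame_a, frame_b):
            g1 = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
            g2 = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            u, v = flow[..., 0], flow[..., 1]          # horizontal / vertical displacement
            du_dx = np.gradient(u, axis=1)             # normal strain components
            dv_dy = np.gradient(v, axis=0)
            return np.abs(du_dx) + np.abs(dv_dy)       # simple scalar strain magnitude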

  9. Understanding Video Games

    DEFF Research Database (Denmark)

    Heide Smith, Jonas; Tosca, Susana Pajares; Egenfeldt-Nielsen, Simon

    From Pong to PlayStation 3 and beyond, Understanding Video Games is the first general introduction to the exciting new field of video game studies. This textbook traces the history of video games, introduces the major theories used to analyze games such as ludology and narratology, reviews...... the economics of the game industry, examines the aesthetics of game design, surveys the broad range of game genres, explores player culture, and addresses the major debates surrounding the medium, from educational benefits to the effects of violence. Throughout the book, the authors ask readers to consider...... larger questions about the medium: * What defines a video game? * Who plays games? * Why do we play games? * How do games affect the player? Extensively illustrated, Understanding Video Games is an indispensable and comprehensive resource for those interested in the ways video games are reshaping...

  10. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  11. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    Full Text Available As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  12. A method for obtaining simian immunodeficiency virus RNA sequences from laser capture microdissected and immune captured CD68+ and CD163+ macrophages from frozen tissue sections of bone marrow and brain.

    Science.gov (United States)

    Mallard, Jaclyn; Papazian, Emily; Soulas, Caroline; Nolan, David J; Salemi, Marco; Williams, Kenneth C

    2017-03-01

    Laser capture microdissection (LCM) is used to extract cells or tissue regions for analysis of RNA, DNA or protein. Several methods of LCM are established for different applications, but a protocol for consistently obtaining lentiviral RNA from LCM-captured immune cell populations has not been described. Obtaining optimal viral RNA for analysis of viral genes from immune-captured cells using immunohistochemistry (IHC) and LCM is challenging. IHC protocols have long antibody incubation times that increase the risk of RNA degradation. In addition, immune capture of specific cell populations such as macrophages without staining for virus can result in obtaining only a fraction of cells that are productively lentivirally infected. In this study we sought to obtain simian immunodeficiency virus (SIV) RNA from SIV gp120+ and CD68+ monocyte/macrophages in bone marrow (BM) and CD163+ perivascular macrophages in brain of SIV-infected rhesus macaques. Here, we report an IHC protocol with RNase inhibitors that consistently results in optimal quantity and yield of lentiviral RNA from LCM-captured immune cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Genotype analysis of Candida albicans isolates obtained from different body locations of patients with superficial candidiasis using PCRs targeting 25S rDNA and ALT repeat sequences of the RPS.

    Science.gov (United States)

    Hattori, Hisao; Iwata, Takako; Nakagawa, Yoshiyuki; Kawamoto, Fumihiko; Tomita, Yasushi; Kikuchi, Akihiko; Kanbe, Toshio

    2006-04-01

    Several molecular biology-based genotyping techniques have been adapted for studying the molecular characteristics of Candida albicans strains, which constitute the majority of the etiologic agents in candidiasis. Recently, we reported a PCR system targeting 25S rDNA and ALT repeat sequences in the repetitive sequence (RPS) for genotyping of C. albicans. The aims of this study were to assess the potential of 25S rDNA- and RPS-based genotyping for studying the molecular epidemiology of C. albicans, and to define the genotypic relationship of C. albicans between invasive and non-invasive lesions in the same individual. C. albicans strains were isolated from infected lesions and commensal sites, such as oral mucosa and/or feces, of patients with superficial candidiasis. The genomic DNAs were amplified by PCRs using P-I and P-II to determine the 25S rDNA- and RPS-based genotypes of the isolates. Genotype A:3 C. albicans constituted the majority of the isolates, followed by A:3/4 and B:3 C. albicans. There was usually one genotype of C. albicans per person. The genotypes of infected lesion isolates and non-infected oral mucosa and/or feces isolates were identical in the same individual, even in serially isolated C. albicans. The results indicate that our combined PCR technique using P-I and P-II is a potential tool for molecular typing of C. albicans, and reveal that the genotypes of isolates are identical in the same individual, independent of the infective and non-infective phases or the body location.

  14. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
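
    As a rough illustration of the first algorithm, global camera motion can be estimated directly from a decoded motion vector field. The sketch below assumes the per-macroblock vectors of an MPEG-2 P-frame have already been parsed into a NumPy array; the parsing step and the paper's exact estimator are not shown.

    ```python
    import numpy as np

    def estimate_camera_motion(motion_vectors):
        """motion_vectors: (rows, cols, 2) array of per-macroblock (dx, dy) vectors
        taken from one MPEG-2 P-frame.  Returns an estimated global pan in pixels."""
        mv = motion_vectors.reshape(-1, 2).astype(float)
        # The median is robust to foreground objects moving against the camera pan.
        pan_dx, pan_dy = np.median(mv, axis=0)
        return pan_dx, pan_dy
    ```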

  15. Video super-resolution using simultaneous motion and intensity calculations

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In this paper we propose an energy based algorithm for motion compensated video super-resolution (VSR) targeted on upscaling of standard definition (SD) video to high definition (HD) video. Since the motion (flow field) of the image sequence is generally unknown, we introduce a formulation...... for super-resolved sequences. Computing super-resolved flows has to our knowledge not been done before. Most advanced super-resolution (SR) methods found in literature cannot be applied to general video with arbitrary scene content and/or arbitrary optical flows, as it is possible with our simultaneous VSR...... method. Series of experiments show that our method outperforms other VSR methods when dealing with general video input and that it continues to provide good results even for large scaling factors, up to 8×8....

  16. Managed Video as a Service for a Video Surveillance Model

    Directory of Open Access Journals (Sweden)

    Dan Benta

    2009-01-01

    Full Text Available The increasing demand for security systems has resulted in the rapid development of video surveillance, which has turned into a major area of interest and a management challenge. Personal experience in specialized companies helped me to adapt the demands of users of video security systems to system performance. It is known that people wish to obtain maximum profit with minimum effort, but security is not neglected. Surveillance systems and video monitoring should provide only necessary information and record only when there is activity. IP video surveillance services provide more safety in this sector, being able to record information on servers located in locations other than the IP cameras. These systems also allow real-time monitoring of goods or activities that take place in supervised perimeters. Live viewing and recording can be done via the Internet from any computer, using a web browser. Access to the surveillance system is granted after user and password authentication.

  17. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  18. Green Power Partnership Videos

    Science.gov (United States)

    The Green Power Partnership develops videos on a regular basis that explore a variety of topics, including the Green Power Partnership, green power purchasing, and renewable energy certificates, among others.

  19. Mississippi's New Forestry Best Management Practices Video

    Science.gov (United States)

    Andrew James Londo; John Benkert Auel

    2004-01-01

    Mississippi's latest version of forestry best management practices (BMPs) for water quality was released in 2000. In conjunction with this release, funds were obtained through a Section 319H grant from the Mississippi Department of Environmental Quality to create a new BMPs video. Additional assistance was obtained from Georgia Pacific, PlumCreek, Weyerhaeuser,...

  20. Research on Agricultural Surveillance Video of Intelligent Tracking

    Science.gov (United States)

    Cai, Lecai; Xu, Jijia; Liangping, Jin; He, Zhiyong

    Intelligent video tracking technology is an important application field of digital video processing and analysis, with a wide range of uses in both civilian and military defense contexts. This paper presents a systematic study of intelligent tracking in agricultural surveillance video, focusing in particular on the problems of target detection and tracking. It covers a moving-target detection and tracking algorithm for video sequences with a static background, a rapid detection and tracking algorithm for agricultural production targets, and a Mean Shift-based tracking algorithm that handles translation and rotation of the target. Experimental results show that the system can effectively and accurately track the target in the surveillance video. The study of intelligent tracking in agricultural video surveillance is therefore meaningful from the points of view of environmental protection, social security and economic efficiency alike.
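
    A compressed sketch of the two building blocks named above, using OpenCV: background subtraction for moving-target detection against a static background, and Mean Shift for tracking. The file name, initial window and thresholds are placeholders; this is not the paper's implementation.

    ```python
    import cv2

    cap = cv2.VideoCapture("farm_camera.avi")                  # hypothetical input
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

    ok, frame = cap.read()
    x, y, w, h = 200, 150, 80, 80                              # assumed initial target window
    roi_hist = cv2.calcHist([cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)],
                            [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    window = (x, y, w, h)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        moving = subtractor.apply(frame)                       # moving-target mask
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        backproj = cv2.bitwise_and(backproj, moving)           # restrict to moving pixels
        _, window = cv2.meanShift(backproj, window, term)      # Mean Shift update
    ```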

  1. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available This paper presents a non-photorealistic (NP) video rendering system based on natural phenomena. It provides a simple non-photorealistic video synthesis pipeline in which the user can obtain a flow-like stylized painting and an infinitely playing video scene. First, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene is rendered into a flow-like stylized painting. Second, frame division and patch synthesis are used to synthesize an infinitely playing video. Using selected examples of different natural video textures, our system can generate stylized flow-like and infinite video scenes. Visual discontinuities between neighboring frames are reduced, while the features and details of the frames are preserved. The rendering system is easy and simple to implement.

  2. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

    Full Text Available Video inpainting, or completion, is a vital video enhancement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method allows removal of dynamic objects or restoration of missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used to detect scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and a moving background. Experimental comparisons with state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method can restore missing blocks and remove text from scenes in videos.
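
    For orientation, the per-frame core of such a pipeline can be sketched with OpenCV's built-in inpainting. This is a heavily simplified, spatial-only stand-in: the real method also exploits temporal information and a scene/background model, and the brightness-threshold mask below is only a crude substitute for the paper's masking algorithm.

    ```python
    import cv2

    cap = cv2.VideoCapture("damaged_clip.mp4")        # hypothetical input video
    restored_frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Treat near-white pixels (scratches, burnt-in text) as the damaged region.
        _, mask = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
        restored_frames.append(cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA))
    ```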

  3. Moving Shadow Detection in Video Using Cepstrum

    Directory of Open Access Journals (Sweden)

    Fuat Cogun

    2013-01-01

    Full Text Available Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum-based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using well-known benchmark test sets. To show the improvements over previous approaches, quantitative metrics are introduced and comparisons based on these metrics are made.

  4. Obtaining of inulin acetate

    OpenAIRE

    Khusenov, Arslonnazar; Rakhmanberdiev, Gappar; Rakhimov, Dilshod; Khalikov, Muzaffar

    2014-01-01

    In this article, the first preparation of the inulin ester inulin acetate, obtained by esterification of inulin with acetic anhydride, is described. The obtained product was studied using elemental analysis and IR spectroscopy.

  5. A Video Method to Study Drosophila Sleep

    Science.gov (United States)

    Zimmerman, John E.; Raizen, David M.; Maycock, Matthew H.; Maislin, Greg; Pack, Allan I.

    2008-01-01

    Study Objectives: To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila and to determine the effect of time of day, sex, genotype, and age on sleep measurements. Design: A digital image analysis method based on frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. Measurements and Results: The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P video. Both video and DAMS detected a homeostatic response to sleep deprivation. Conclusions: Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep. Citation: Zimmerman JE; Raizen DM; Maycock MH; Maislin G; Pack AI. A video method to study drosophila sleep. SLEEP 2008;31(11):1587–1598. PMID:19014079
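
    The frame-subtraction step can be sketched as follows (a rough illustration with assumed thresholds, not the authors' code): a fly is scored as moving when enough pixels change between consecutive frames, and its position is taken as the centroid of the difference image.

    ```python
    import cv2

    PIXEL_DIFF_THRESHOLD = 20    # assumed grey-level change that counts as "changed"
    MOVED_PIXEL_COUNT = 50       # assumed number of changed pixels that counts as movement

    def fly_state(prev_gray, cur_gray):
        """Return (is_moving, centroid) for one pair of consecutive frames."""
        diff = cv2.absdiff(cur_gray, prev_gray)
        _, changed = cv2.threshold(diff, PIXEL_DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
        moving = cv2.countNonZero(changed) > MOVED_PIXEL_COUNT
        m = cv2.moments(changed)
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else None
        return moving, centroid
    ```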

  6. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  7. Online sparse representation for remote sensing compressed-sensed video sampling

    Science.gov (United States)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear and non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into Key frames (K frames) and Non-Key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs) and each GOP consists of one K frame followed by several CS frames. Both are measured block by block, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique, based on the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI; these learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), has been compared with that of other online sparse representation algorithms. The simulation results show its advantages in reducing reconstruction time and its robustness in reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.
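
    The encoder side described here reduces to block-wise random projections taken at two rates; a minimal sketch is given below. Block size, sampling rates and the Gaussian measurement matrix are assumptions, and the decoder with side information and dictionary learning is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    BLOCK = 16                       # assumed block size
    RATE_KEY, RATE_CS = 0.7, 0.2     # assumed sampling rates for K frames and CS frames

    def measure_block(block, rate):
        """Compressed measurement y = Phi @ x of one image block."""
        x = block.astype(np.float64).ravel()
        m = max(1, int(rate * x.size))
        phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
        return phi @ x

    def measure_frame(frame, is_key):
        rate = RATE_KEY if is_key else RATE_CS
        h, w = frame.shape
        return [measure_block(frame[i:i + BLOCK, j:j + BLOCK], rate)
                for i in range(0, h, BLOCK) for j in range(0, w, BLOCK)]
    ```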

  8. Acoustic Neuroma Educational Video

    Medline Plus

  9. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  10. Digital Video Editing

    Science.gov (United States)

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing is described, along with the cables, storage issues, and the computer system and software involved.

  11. AudioMove Video

    DEFF Research Database (Denmark)

    2012-01-01

    Live drawing video experimenting with low tech techniques in the field of sketching and visual sense making. In collaboration with Rune Wehner and Teater Katapult.

  12. Making Good Physics Videos

    Science.gov (United States)

    Lincoln, James

    2017-01-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators…

  13. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers the problems of creating screen images that depend on the musical form and the text of a song, in connection with the relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  14. Acoustic Neuroma Educational Video

    Medline Plus

  15. Personal Digital Video Stories

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Henningsen, Birgitte Sølbeck; Louw, Arnt Vestergaard

    2016-01-01

    agenda focusing on video productions in combination with digital storytelling, followed by a presentation of the digital storytelling features. The paper concludes with a suggestion to initiate research in what is identified as Personal Digital Video (PDV) Stories within longitudinal settings, while...

  16. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  17. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos ... member of our patient care team.

  18. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  19. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

    This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  20. NEI You Tube Videos: Amblyopia

    Medline Plus

  1. User-oriented summary extraction for soccer video based on multimodal analysis

    Science.gov (United States)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced. It is a novel approach that integrates multimodal analysis, such as extraction and analysis of stadium features, moving-object features, audio features and text features. From these features, the semantics of the soccer video and the highlight mode are obtained. We can then locate the highlight positions and combine them by highlight degree to obtain the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  2. Social video content delivery

    CERN Document Server

    Wang, Zhi; Zhu, Wenwu

    2016-01-01

    This brief presents new architecture and strategies for distribution of social video content. A primary framework for socially-aware video delivery and a thorough overview of the possible approaches is provided. The book identifies the unique characteristics of socially-aware video access and social content propagation, revealing the design and integration of individual modules that are aimed at enhancing user experience in the social network context. The change in video content generation, propagation, and consumption for online social networks, has significantly challenged the traditional video delivery paradigm. Given the massive amount of user-generated content shared in online social networks, users are now engaged as active participants in the social ecosystem rather than as passive receivers of media content. This revolution is being driven further by the deep penetration of 3G/4G wireless networks and smart mobile devices that are seamlessly integrated with online social networking and media-sharing s...

  3. RST-Resilient Video Watermarking Using Scene-Based Feature Extraction

    OpenAIRE

    Jung Han-Seung; Lee Young-Yoon; Lee Sang Uk

    2004-01-01

    Watermarking for video sequences should consider additional attacks, such as frame averaging, frame-rate change, frame shuffling or collusion attacks, as well as those of still images. Also, since video is a sequence of analogous images, video watermarking is subject to interframe collusion. In order to cope with these attacks, we propose a scene-based temporal watermarking algorithm. In each scene, segmented by scene-change detection schemes, a watermark is embedded temporally to one-dimens...

  4. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    video sequences. For the video sequences, different filters are applied to luminance (Y) and chrominance (U,V) components. The performance of the proposed method has been compared against several other methods by using different objective quality metrics and a subjective comparison study. Both objective...

  5. Understanding Collective Activities of People from Videos.

    Science.gov (United States)

    Wongun Choi; Savarese, Silvio

    2014-06-01

    This paper presents a principled framework for analyzing collective activities at different levels of semantic granularity from videos. Our framework is capable of jointly tracking multiple individuals, recognizing activities performed by individuals in isolation (i.e., atomic activities such as walking or standing), recognizing the interactions between pairs of individuals (i.e., interaction activities), as well as understanding the activities of groups of individuals (i.e., collective activities). A key property of our work is that it can coherently combine bottom-up information stemming from detections or fragments of tracks (or tracklets) with top-down evidence. Top-down evidence is provided by a newly proposed descriptor that captures the coherent behavior of groups of individuals in a spatial-temporal neighborhood of the sequence. Top-down evidence provides contextual information for establishing accurate associations between detections or tracklets across frames and, thus, for obtaining more robust tracking results. Bottom-up evidence percolates upwards so as to automatically infer collective activity labels. Experimental results on two challenging data sets demonstrate our theoretical claims and indicate that our model achieves enhanced tracking results and the best collective classification results to date.

  6. Dense Trajectories and DHOG for Classification of Viewpoints from Echocardiogram Videos

    Directory of Open Access Journals (Sweden)

    Liqin Huang

    2016-01-01

    Full Text Available In echo-cardiac clinical computer-aided diagnosis, an important step is to automatically classify echocardiography videos from different angles and different regions. We propose an echocardiography video classification algorithm based on dense trajectories and difference histograms of oriented gradients (DHOG). First, we use the dense grid method to describe feature characteristics in each frame of the echocardiography sequence and then track these feature points by applying dense optical flow. In order to overcome the influence of the rapid and irregular movement of echocardiography videos and obtain more robust tracking results, we also design a trajectory description algorithm which uses the derivative of the optical flow to obtain the motion trajectory information and associates the different characteristics (e.g., the trajectory shape, DHOG, HOF, and MBH) with the embedded structural information of the spatiotemporal pyramid. To avoid the “dimension disaster,” we apply Fisher’s vector to reduce the dimension of the feature description, followed by an SVM linear classifier to improve the final classification result. The average accuracy of echocardiography video classification is 77.12% for all eight viewpoints and 100% for the three primary viewpoints.

  7. Video Shot Boundary Detection based on Multifractal Analisys

    Directory of Open Access Journals (Sweden)

    B. D. Reljin

    2011-11-01

    Full Text Available Extracting video shots is an essential preprocessing step for almost all video analysis, indexing, and other content-based operations. This process is equivalent to detecting the shot boundaries in a video. In this paper we present video Shot Boundary Detection (SBD) based on Multifractal Analysis (MA). Low-level features (color and texture features) are extracted from each frame in the video sequence. Features are concatenated into feature vectors (FVs) and stored in a feature matrix. Matrix rows correspond to the FVs of frames from the video sequence, while columns are time series of a particular FV component. Multifractal analysis is applied to the FV component time series, and shot boundaries are detected as high singularities of the time series above a predefined threshold. The proposed SBD method is tested on a real video sequence with 64 shots, with manually labeled shot boundaries. Detection accuracy depends on the number of FV components used. For only one FV component, detection accuracy lies in the range 76-92% (depending on the selected threshold), while by combining two FV components all shots are detected completely (accuracy of 100%).
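
    The construction of the feature matrix can be sketched as below. Only the low-level feature extraction is shown, with a normalized HSV histogram standing in for the paper's color and texture features; the multifractal analysis of the resulting time series is not reproduced here.

    ```python
    import cv2
    import numpy as np

    def frame_features(frame, bins=32):
        """One row of the feature matrix: a normalized HSV colour histogram."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).ravel()

    def feature_matrix(video_path):
        """Rows correspond to frames; columns are time series of FV components."""
        cap = cv2.VideoCapture(video_path)
        rows = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rows.append(frame_features(frame))
        cap.release()
        return np.vstack(rows)
    ```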

  8. Deriving video content type from HEVC bitstream semantics

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network however, QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories, full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively. Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
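
    The weighted-average idea can be illustrated with a toy function; it assumes the coding-unit depths, prediction-unit modes and their sizes have already been parsed from the HEVC bitstream. The parsing step and the exact weighting used by the authors are not shown, so treat this as an assumption-laden sketch.

    ```python
    import numpy as np

    def spatial_characteristic(cu_depths, cu_sizes):
        """Area-weighted average coding-unit quadtree split depth (spatial proxy)."""
        weights = np.asarray(cu_sizes, dtype=float) ** 2
        return float(np.average(np.asarray(cu_depths, dtype=float), weights=weights))

    def temporal_characteristic(pu_modes, pu_sizes):
        """Area-weighted share of inter-predicted prediction units (temporal proxy)."""
        weights = np.asarray(pu_sizes, dtype=float) ** 2
        inter = np.asarray([1.0 if mode == "inter" else 0.0 for mode in pu_modes])
        return float(np.average(inter, weights=weights))
    ```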

  9. Blood Pulsation Intensity Video Mapping

    CERN Document Server

    Borges, Pedro Henrique de M

    2016-01-01

    In this study, we make non-invasive, remote, passive measurements of the heart beat frequency and determine the map of blood pulsation intensity in a region of interest (ROI) of skin. The ROI used was the forearm of a volunteer. The method employs a regular video camera and visible light, and the video acquisition takes less than 1 minute. The mean cardiac frequency found in our volunteer was within 1 bpm of the ground-truth value simultaneously obtained via earlobe plethysmography. Using the signals extracted from the video images, we have determined an intensity map for the blood pulsation at the surface of the skin. In this paper we present the experimental and data processing details of the work as well as limitations of the technique.
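
    A minimal version of such a measurement, assuming a fixed skin region of interest and a video of known frame rate, is sketched below; the heart rate is taken as the dominant frequency of the mean green-channel signal. File name, ROI coordinates and frequency band are placeholders.

    ```python
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("forearm.mp4")              # hypothetical 1-minute clip
    fps = cap.get(cv2.CAP_PROP_FPS)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[100:300, 200:400]                  # assumed skin region of interest
        signal.append(roi[:, :, 1].mean())             # mean green channel per frame
    cap.release()

    sig = np.asarray(signal) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 3.0)               # 42-180 beats per minute
    heart_rate_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
    ```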

  10. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

    "What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.

  11. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  12. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    Science.gov (United States)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. MPEG-II sample streams include a resolution intensive movie, City of Joy, an action intensive movie, Aliens, a luminance intensive (black and white) movie, Road To Utopia, and a chrominance intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I,B,P frame sequences within a group of pictures (GOP). A jointly-correlated Gaussian process is used to model the individual frame sizes. Scene change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates, I,B,P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred 1-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10% enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell loss penalty. This scheme is due for a patent.
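
    A toy generator in the spirit of this model is sketched below. For brevity the 15-stage Markov chain is collapsed into a fixed 15-frame GOP pattern and the jointly-correlated Gaussian process is approximated by an AR(1) disturbance; all numeric parameters are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    GOP_PATTERN = "IBBPBBPBBPBBPBB"                    # 15-frame GOP stand-in
    FRAME_SIZE = {"I": (6e5, 1e5),                     # assumed (mean, std) in bits
                  "P": (2.5e5, 6e4),
                  "B": (1.2e5, 3e4)}

    def synthesize_gop(rho=0.8, z=0.0):
        """Frame sizes for one GOP with AR(1)-correlated Gaussian fluctuations."""
        sizes = []
        for ftype in GOP_PATTERN:
            mu, sd = FRAME_SIZE[ftype]
            z = rho * z + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
            sizes.append(max(mu + sd * z, 0.0))
        return sizes, z

    # Scene-change inter-arrival times drawn from a gamma process, as in the model.
    scene_change_gaps = rng.gamma(shape=2.0, scale=5.0, size=10)   # seconds (assumed)
    ```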

  13. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how...... they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio....

  14. Brains on video games

    OpenAIRE

    Bavelier, Daphne; Green, C. Shawn; Han, Doug Hyun; Renshaw, Perry F.; Merzenich, Michael M.; Gentile, Douglas A.

    2011-01-01

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games ‘damage the brain’ or ‘boost brain power’ do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affe...

  15. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  16. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  17. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
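
    In practice this amounts to a perspective-n-point problem; a sketch using OpenCV's iterative (Levenberg-Marquardt-refined) solver is shown below with hypothetical 2D-3D correspondences and intrinsics. The paper's own parameterization and optimization code are not reproduced.

    ```python
    import cv2
    import numpy as np

    # Hypothetical matches: frame pixels of point features and their 3D locations
    # (e.g. read off a high-resolution orthophoto and a DEM), in a local metric frame.
    image_pts = np.array([[512, 384], [100, 200], [900, 150],
                          [640, 700], [300, 650], [820, 500]], dtype=np.float64)
    world_pts = np.array([[10.0, 52.0, 4.0], [-25.0, 80.0, 6.0], [40.0, 95.0, 12.0],
                          [5.0, 20.0, 1.0], [-15.0, 30.0, 2.0], [35.0, 45.0, 3.0]],
                         dtype=np.float64)

    # Intrinsics assumed known from the camera's field of view and image size.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()            # camera centre in world coordinates
    ```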

  18. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, in contrast to the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and the expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  19. Analysis of unstructured video based on camera motion

    Science.gov (United States)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done in management of "structured" video such as movies, sports, and television programs that has known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video and in particular video shot by a typical unprofessional user (i.e home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior with respect to the subjective importance of the information in each segment and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames, keyframes, for the scenes are determined and aggregated to summarize the video sequence.

  20. Videos, Podcasts and Livechats

    Medline Plus

  1. Videos, Podcasts and Livechats

    Medline Plus

  2. Acoustic Neuroma Educational Video

    Medline Plus

  3. Videos, Podcasts and Livechats

    Medline Plus

  4. Acoustic Neuroma Educational Video

    Medline Plus

  5. Acoustic Neuroma Educational Video

    Medline Plus

  6. Videos, Podcasts and Livechats

    Medline Plus

  7. Videos, Podcasts and Livechats

    Medline Plus

  8. Acoustic Neuroma Educational Video

    Medline Plus

  9. The video violence debate.

    Science.gov (United States)

    Lande, R G

    1993-04-01

    Some researchers and theorists are convinced that graphic scenes of violence on television and in movies are inextricably linked to human aggression. Others insist that a link has not been conclusively established. This paper summarizes scientific studies that have informed these two perspectives. Although many instances of children and adults imitating video violence have been documented, no court has imposed liability for harm allegedly resulting from a video program, an indication that considerable doubt still exists about the role of video violence in stimulating human aggression. The author suggests that a small group of vulnerable viewers are probably more impressionable and therefore more likely to suffer deleterious effects from violent programming. He proposes that research on video violence be narrowed to identifying and describing the vulnerable viewer.

  10. Acoustic Neuroma Educational Video

    Medline Plus

  11. Acoustic Neuroma Educational Video

    Medline Plus

  12. Video i VIA

    DEFF Research Database (Denmark)

    2012-01-01

    The article describes a development project in which 13 groups of teachers, across subjects and programmes, produced video for use in teaching. Different approaches and applications are described, as well as the learning that took place in the project...

  13. Videos, Podcasts and Livechats

    Medline Plus

  14. Acoustic Neuroma Educational Video

    Medline Plus

  15. Videos, Podcasts and Livechats

    Medline Plus

  16. Acoustic Neuroma Educational Video

    Medline Plus

  17. Acoustic Neuroma Educational Video

    Medline Plus

  18. Photos and Videos

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Observers are required to take photos and/or videos of all incidentally caught sea turtles, marine mammals, seabirds and unusual or rare fish. On the first 3...

  19. Videos, Podcasts and Livechats

    Medline Plus

  20. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  1. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge regarding current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
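
    A sender-side sketch of this idea is given below, assuming the third-party Python package reedsolo is available; the mapping from the reported loss rate to a parity budget is an invented placeholder, not the scheme from the paper.

    ```python
    from reedsolo import RSCodec   # third-party Reed-Solomon codec (assumed available)

    def parity_symbols(loss_rate, lo=4, hi=64):
        """Map the client-reported loss rate to a parity count (placeholder rule)."""
        return int(min(hi, max(lo, round(510 * loss_rate))))

    def protect_chunk(video_chunk: bytes, reported_loss_rate: float) -> bytes:
        """Encode one chunk of the compressed video stream with adaptive parity."""
        codec = RSCodec(parity_symbols(reported_loss_rate))
        return bytes(codec.encode(video_chunk))
    ```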

  2. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... search for current job openings visit HHS USAJobs Home > NEI YouTube Videos > NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia Animations Blindness Cataract ...

  3. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... Amaurosis Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: Amblyopia NEI Home Contact Us A-Z Site Map NEI on Social Media Information in Spanish (Información en español) Website, ...

  4. Studenterproduceret video til eksamen

    DEFF Research Database (Denmark)

    Jensen, Kristian Nøhr; Hansen, Kenneth

    2016-01-01

    The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for exams in higher education. The article takes as its starting point a problem in which the educational institutions must handle and coordinate...... the subject-specialist and media-specialist teachers a tool for focusing and coordinating their efforts towards the goal of having the students produce and use video for their exams....

  5. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.

  6. Video Games and Citizenship

    OpenAIRE

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    In their article "Video Games and Citizenship" Jeroen Bourgonjon and Ronald Soetaert argue that digitization problematizes and broadens our perspective on culture and popular media, and that this has important ramifications for our understanding of citizenship. Bourgonjon and Soetaert respond to the call of Gert Biesta for the contextualized study of young people's practices by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new so...

  7. Android Video Streaming

    Science.gov (United States)

    2014-05-01

    be processed by a nearby high-performance computing asset and returned to a squad of Soldiers with annotations indicating the location of friendly and ... is to change the resolution, bitrate, and/or framerate of the video being transmitted to the client, reducing the bandwidth requirements of the ... video. This solution is typically not viable because a progressive download is required to have a constant resolution, bitrate, and framerate because

  8. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two...... types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT coefficients after intra-prediction and deblocking is modeled. To obtain VQA features for H.264/AVC, we...... propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signal-to-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...

  9. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja

    2006-01-01

    Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit inherent long-range dependency, that is, a fractal property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. The multifractal spectra of the frame-size video traces show that higher compression ratios produce broader and less regular MF spectra, indicating a stronger multifractal nature and the existence of additive components in the video traces. Considering the individual frame types (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular contribution of these frame types to the overall MF spectrum. Since compressed video occupies a large share of transmission bandwidth, the results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible: from a derived MF spectrum of the observed signal it is possible to recognize and extract the parts of the signal that are characterized by particular values of the multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.
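
    To make the long-range dependence claim concrete, the following Python sketch estimates the Hurst exponent of a frame-size trace with the standard aggregated-variance method. It illustrates the general idea only, not the multifractal machinery used in the paper, and the synthetic trace is a placeholder for frame sizes parsed from a real video trace file.

```python
import numpy as np

def hurst_aggregated_variance(frame_sizes, block_sizes=(2, 4, 8, 16, 32, 64, 128)):
    """Aggregated-variance estimate of the Hurst exponent H.
    For a long-range dependent trace, Var(block means of size m) ~ m^(2H - 2),
    so the log-log slope of the variance against m gives 2H - 2."""
    x = np.asarray(frame_sizes, dtype=float)
    ms, variances = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        ms.append(m)
        variances.append(block_means.var())
    slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
    return 1.0 + slope / 2.0   # H > 0.5 indicates long-range dependence

# Example with a synthetic (i.i.d.) trace, which should give H close to 0.5.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = rng.lognormal(mean=8.0, sigma=0.5, size=4096)
    print(f"Estimated Hurst exponent: {hurst_aggregated_variance(trace):.2f}")
```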

  10. VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS

    Directory of Open Access Journals (Sweden)

    T. Teo

    2015-05-01

    Full Text Available Due to the development of action cameras, the use of video technology for collecting geo-spatial data is becoming an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As the action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the effect of lens distortion before image matching. Once the cameras have been calibrated, the authors use these action cameras to take videos in an indoor environment. The videos are further converted into multiple frame images based on the frame rates. To overcome the time-synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for time alignment. A structure from motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
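
    A minimal sketch of the video-to-frame conversion and time-alignment step, assuming OpenCV is available and that the per-camera time shift has already been measured (for example with the shared timer mentioned above). The function and parameter names are illustrative, not the authors' implementation.

```python
import cv2

def video_to_frames(path, out_fps=5.0, time_shift_s=0.0, max_frames=None):
    """Convert a video into frames sampled at out_fps, starting after a
    per-camera time shift so that frames from different cameras refer
    to (approximately) the same instants."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Cannot open video: {path}")
    frames, t = [], time_shift_s
    step = 1.0 / out_fps
    while max_frames is None or len(frames) < max_frames:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)   # seek to the desired instant
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        t += step
    cap.release()
    return frames

# Example: align two action cameras with a hypothetical 0.84 s clock offset.
# frames_a = video_to_frames("cam_a.mp4", out_fps=2, time_shift_s=0.0)
# frames_b = video_to_frames("cam_b.mp4", out_fps=2, time_shift_s=0.84)
```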

  11. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change...... in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models....
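
    The following Python sketch illustrates one way such temporal and visual alterations can be normalised away before matching: the per-frame mean luminance is resampled to a fixed length and z-normalised, so frame-rate and contrast changes largely cancel. This is only a toy signature and threshold test, not the filtering-and-indexing scheme of the paper.

```python
import numpy as np

def signature(frames_luma, length=128):
    """Per-frame mean luminance, resampled to a fixed temporal length and
    z-normalised, so frame-rate and contrast changes largely cancel out."""
    s = np.asarray([f.mean() for f in frames_luma], dtype=float)
    s = np.interp(np.linspace(0, len(s) - 1, length), np.arange(len(s)), s)
    return (s - s.mean()) / (s.std() + 1e-9)

def is_copy(query_frames, reference_frames, threshold=0.9):
    """Declare a copy when the normalised correlation of the two signatures is high."""
    q, r = signature(query_frames), signature(reference_frames)
    corr = float(np.dot(q, r) / len(q))
    return corr >= threshold, corr
```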

  12. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling drastic appearance, motion pattern, and pose variations of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  13. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

    Full Text Available Mobile video streaming is one of the multimedia services that has developed very rapidly. Bandwidth utilization for wireless transmission is currently the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as the most attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The ITU (International Telecommunication Union) standard Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in its use of bit rate capacity at a given layer.

  14. SVC VIDEO STREAM ALLOCATION AND ADAPTATION IN HETEROGENEOUS NETWORK

    Directory of Open Access Journals (Sweden)

    E. A. Pakulova

    2016-07-01

    Full Text Available The paper deals with video data transmission in the H.264/SVC standard format with satisfaction of QoS requirements. The Sender-Side Path Scheduling (SSPS) algorithm and the Sender-Side Video Adaptation (SSVA) algorithm were developed. The SSPS algorithm makes it possible to allocate video traffic among several interfaces, while the SSVA algorithm dynamically changes the quality of the video sequence in relation to the QoS requirements. It was shown that the combined use of the two developed algorithms makes it possible to aggregate the throughput of the access networks, improve Quality of Experience parameters and decrease losses in comparison with the Round Robin algorithm. To evaluate the proposed solution, a test set-up was built. Trace files with the throughput of existing public networks were used in the experiments; based on this information, the network throughputs were limited and per-path losses were set. The results of this research may be used for the study and transmission of video data in heterogeneous wireless networks.
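
    A simplified Python sketch of the kind of sender-side allocation such a path scheduler performs: SVC layers are assigned in order of importance to the path with the most spare capacity, and higher enhancement layers are dropped when the aggregate capacity is exhausted. The greedy rule and the numbers are assumptions, not the published SSPS/SSVA algorithms.

```python
def allocate_layers(layer_rates_kbps, path_capacities_kbps):
    """Greedy sketch of sender-side path scheduling: assign SVC layers in order
    of importance (base layer first) to the path with the most spare capacity,
    dropping the highest enhancement layers when capacity runs out."""
    spare = list(path_capacities_kbps)
    assignment = {}                      # layer index -> path index (or None if dropped)
    for layer, rate in enumerate(layer_rates_kbps):
        best = max(range(len(spare)), key=lambda p: spare[p])
        if spare[best] >= rate:
            spare[best] -= rate
            assignment[layer] = best
        else:
            assignment[layer] = None     # adapt quality: drop this and higher layers
            break
    return assignment

# Example: base + two enhancement layers over a WLAN path and an LTE path.
print(allocate_layers([800, 1200, 2500], [3000, 2000]))
```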

  15. Using content models to build audio-video summaries

    Science.gov (United States)

    Saarela, Janne; Merialdo, Bernard

    1998-12-01

    The amount of digitized video in archives is becoming so huge that easier access and content browsing tools are desperately needed. Also, video is no longer one big piece of data, but a collection of useful smaller building blocks, which can be accessed and used independently from the original context of presentation. In this paper, we demonstrate a content model for audio-video sequences, with the purpose of enabling the automatic generation of video summaries. The model is based on descriptors, which indicate various properties and relations of audio and video segments. In practice, these descriptors could either be generated automatically by methods of analysis, or produced manually (or computer-assisted) by the content provider. We analyze the requirements and characteristics of the different data segments with respect to the problem of summarization, and we define our model as a set of constraints, which allow good-quality summaries to be produced.
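
    As a toy illustration of descriptor-driven summarisation under a duration constraint, the following Python sketch greedily selects the most important segments that fit the target length. The descriptor fields and the scoring rule are assumptions, not the constraint model of the paper.

```python
def build_summary(segments, target_duration):
    """Greedy sketch of constraint-based summarisation: each segment carries
    descriptors (here just an importance score and a duration); pick the most
    valuable segments that fit the target duration, then restore story order."""
    ranked = sorted(segments, key=lambda s: s["importance"] / s["duration"], reverse=True)
    chosen, total = [], 0.0
    for seg in ranked:
        if total + seg["duration"] <= target_duration:
            chosen.append(seg)
            total += seg["duration"]
    return sorted(chosen, key=lambda s: s["start"])

# Example with hand-made descriptors (in a real system these would come from
# automatic analysis or from the content provider).
segments = [
    {"start": 0,  "duration": 20, "importance": 0.9},
    {"start": 20, "duration": 40, "importance": 0.4},
    {"start": 60, "duration": 15, "importance": 0.8},
]
print(build_summary(segments, target_duration=40))
```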

  16. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in the semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured from a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing information from motion-sensor platforms, such as smart phones carried on the body, with information extracted from the camera video. More specifically, a sequence of motion features extracted from the camera video is compared with each of the sequences collected from the accelerometers of the smart phones. When a strong correlation is detected, the identity information transmitted from the corresponding smart phone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, which achieved impressive performance.
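
    A minimal Python sketch of the correlation step: a motion-magnitude sequence extracted from the camera is compared with the accelerometer-magnitude sequence of each nearby phone, and the identity of the best-correlated phone (above a threshold) labels the person. Feature extraction, synchronisation and the threshold value are assumptions.

```python
import numpy as np

def best_match(video_motion, phone_accel_by_id, min_corr=0.6):
    """Return the phone id whose accelerometer-magnitude sequence correlates
    most strongly (above min_corr) with the motion sequence seen in the video."""
    v = np.asarray(video_motion, dtype=float)
    v = (v - v.mean()) / (v.std() + 1e-9)
    best_id, best_corr = None, min_corr
    for phone_id, accel in phone_accel_by_id.items():
        a = np.asarray(accel, dtype=float)[: len(v)]
        if len(a) < len(v):                 # skip sequences that are too short
            continue
        a = (a - a.mean()) / (a.std() + 1e-9)
        corr = float(np.dot(v, a) / len(v))  # normalised correlation in [-1, 1]
        if corr > best_corr:
            best_id, best_corr = phone_id, corr
    return best_id, best_corr
```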

  17. Moving Shadow Detection in Video Using Cepstrum Regular Paper

    OpenAIRE

    Cogun, Fuat; Cetin, Ahmet Enis

    2013-01-01

    Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum‐based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using ...

  18. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... shape layer is processed by a novel video shape coder. In intra mode, the DSLSC binary image coder presented in is used. This is extended here with an intermode utilizing temporal redundancies in shape image sequences. Then the opaque layer is compressed by a newly designed scheme which models...

  19. Frame Rate versus Spatial Quality: Which Video Characteristics Do Matter?

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Ukhanova, Ann

    2013-01-01

    and temporal quality levels. We also propose simple yet powerful metrics for characterizing spatial and temporal properties of a video sequence, and demonstrate how these metrics can be applied for evaluating the relative impact of spatial and temporal quality on the perceived overall quality.......Several studies have shown that the relationship between perceived video quality and frame rate is dependent on the video content. In this paper, we have analyzed the content characteristics and compared them against the subjective results derived from preference decisions between different spatial...

  20. Modelling retinal pulsatile blood flow from video data.

    Science.gov (United States)

    Betz-Stablein, Brigid; Hazelton, Martin L; Morgan, William H

    2016-09-01

    Modern-day datasets continue to increase in both size and diversity. One example of such 'big data' is video data. Within the medical arena, more disciplines are using video as a diagnostic tool. Given the large amount of data stored within a video image, it is one of the most time-consuming types of data to process and analyse. Therefore, it is desirable to have automated techniques to extract, process and analyse data from video images. While many methods have been developed for extracting and processing video data, statistical modelling to analyse the outputted data has rarely been employed. We develop a method to take a video sequence of a periodic nature, extract the RGB data and model the changes occurring across the contiguous images. We employ harmonic regression to model periodicity, with autoregressive terms accounting for the error process associated with the time series nature of the data. A linear spline is included to account for movement between frames. We apply this model to video sequences of retinal vessel pulsation, which is the pulsatile component of blood flow. Slope and amplitude are calculated for the curves generated from the application of the harmonic model, providing clinical insight into the location of obstruction within the retinal vessels. The method can be applied to individual vessels, or to smaller segments such as 2 × 2 pixels, which can then be interpreted easily as a heat map. © The Author(s) 2016.
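
    A minimal numpy sketch of the harmonic-regression idea: intercept, trend, sine/cosine terms at an assumed cardiac frequency and an optional linear-spline term are fitted by least squares, and the slope and pulsation amplitude are read off the coefficients. The autoregressive error terms of the paper are omitted, and the frame rate, cardiac frequency and knot position are assumptions.

```python
import numpy as np

def fit_harmonic(intensity, fps=25.0, cardiac_hz=1.2, knot_s=None):
    """Least-squares harmonic regression of a per-vessel intensity series:
    intercept + slope + sin/cos at the cardiac frequency, plus an optional
    linear-spline term to absorb movement between frames."""
    y = np.asarray(intensity, dtype=float)
    t = np.arange(len(y)) / fps
    cols = [np.ones_like(t), t,
            np.sin(2 * np.pi * cardiac_hz * t),
            np.cos(2 * np.pi * cardiac_hz * t)]
    if knot_s is not None:                      # linear spline: extra slope after the knot
        cols.append(np.maximum(t - knot_s, 0.0))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = float(np.hypot(beta[2], beta[3]))   # cardiac pulsation amplitude
    return {"slope": float(beta[1]), "amplitude": amplitude}

# Example on a synthetic pulsatile signal (true amplitude 3.0).
t = np.arange(250) / 25.0
signal = 100 + 0.5 * t + 3.0 * np.sin(2 * np.pi * 1.2 * t) \
         + np.random.default_rng(1).normal(0, 0.5, 250)
print(fit_harmonic(signal, fps=25.0, cardiac_hz=1.2))
```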

  1. A Novel Mobile Video Community Discovery Scheme Using Ontology-Based Semantical Interest Capture

    Directory of Open Access Journals (Sweden)

    Ruiling Zhang

    2016-01-01

    Full Text Available Leveraging network virtualization technologies, community-based video systems rely on the measurement of common interests to define and stabilize relationships between community members, which promotes video sharing performance and improves the scalability of the community structure. In this paper, we propose a novel mobile Video Community discovery scheme using Ontology-based Semantical Interest capture (VCOSI). An ontology-based semantic extension approach is proposed, which describes video content and measures video similarity according to video keyword selection methods. In order to reduce the computational load of the video similarity calculation, VCOSI uses a prefix-filtering-based estimation algorithm to decrease the energy consumption of mobile nodes. VCOSI further proposes a member relationship estimation method to construct scalable and resilient node communities, which increases the video sharing capacity of video systems with flexible and economical community maintenance. Extensive tests show that VCOSI obtains better performance results than other state-of-the-art solutions.

  2. A video rate laser scanning confocal microscope

    Science.gov (United States)

    Ma, Hongzhou; Jiang, James; Ren, Hongwu; Cable, Alex E.

    2008-02-01

    A video-rate laser scanning microscope was developed as an imaging engine to integrate with other photonic building blocks to fulfill various microscopic imaging applications. The system is equipped with a diode laser source, a resonant scanner, a galvo scanner, control electronics and a computer loaded with data acquisition boards and imaging software. Based on an open-frame design, the system can be combined with various optics to perform the functions of fluorescence confocal microscopy, multi-photon microscopy and backscattering confocal microscopy. Mounted to the camera port, it allows a traditional microscope to obtain confocal images at video rate. In this paper, we describe the design principle and demonstrate examples of applications.

  3. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  4. [Erythromycin ethylsuccinate obtaining possibilities].

    Science.gov (United States)

    Stan, Cătălina Daniela; Stefanache, Alina; Tântaru, Gladiola; Poiată, Antonia; Dumitrache, M; Diaconu, D E; Profire, Lenuţa

    2008-01-01

    In this study we tried to improve the preparation of erythromycin ethylsuccinate, with the aim of separating the erythromycin ester by crystallization in water. The erythromycin acylation and the erythromycin ethylsuccinate crystallization were carried out in the following steps: 1. acylation of the erythromycin with a methylene chloride solution of monoethylsuccinyl chloride, at 25-28 degrees C for 3 hours in the presence of NaHCO3; 2. transfer of the erythromycin ethylsuccinate from the methylene chloride solution into acetone solution by distillation of a 1:1 methylene chloride:acetone mixture at 25-28 degrees C; 3. separation of the erythromycin ethylsuccinate by crystallization in water at pH 8-8.5 and 5 degrees C for 90 minutes. Quality control of the erythromycin ester was performed according to the Xth edition of the Romanian Pharmacopoeia, using the national standard for erythromycin ethylsuccinate and the national standard for erythromycin, with an activity of 1:937 U and 2.02% humidity. Micrococcus luteus ATCC 9341 was used as the test microorganism and thin-layer chromatography was performed for qualitative control. 13.1 g of erythromycin ethylsuccinate were obtained, with a process yield of 82.02%. Using water for the separation of erythromycin ethylsuccinate, the process yield is greater (82.02%) than when using petroleum ether (74.14%) or hexane (80.25%). Thin-layer chromatography revealed an Rf of 0.56, and the microbiological activity of the erythromycin ethylsuccinate was 98.7% compared with the standard. Using water instead of hexane or petroleum ether is advantageous for separating erythromycin ethylsuccinate from the reaction medium. The obtained erythromycin ethylsuccinate meets the standards of the Xth edition of the Romanian Pharmacopoeia. Thus, raw material consumption is decreased, costs are reduced, the purity of the obtained product is high, and the process yield is greater.

  5. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  6. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention....../prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We......’re Killing the Kids, and Driving Mum and Dad Mad all use video as a prominent element of not only the audiovisual spectacle of reality television but also the interactional therapy, counselling, coaching and/or instruction intrinsic to these programmes. Thus, talk-on-video is used to intervene...

  7. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    Science.gov (United States)

    Choi, Inchang; Baek, Seung-Hwan; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  8. A reduced-reference perceptual image and video quality metric based on edge preservation

    Science.gov (United States)

    Martini, Maria G.; Villarini, Barbara; Fiorucci, Federico

    2012-12-01

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence, prior to compression and transmission, is not usually available at the receiver side, so the receiver must rely on an objective video quality metric that needs no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to the edge and contour information of an image underpins the proposal of our reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. The results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.
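
    The following Python sketch shows the flavour of such a reduced-reference, edge-based comparison: only a small histogram of Sobel edge magnitudes is sent as side information, and the receiver compares it with the same signature computed on the received frame. The exact features and pooling of the proposed metric differ; this is only an illustration under assumed 8-bit input.

```python
import numpy as np
from scipy import ndimage

def edge_signature(gray, bins=32, max_mag=1500.0):
    """Reduced-reference feature: normalised histogram of Sobel edge magnitudes
    (8-bit input assumed). Only this small signature travels with the video."""
    g = np.asarray(gray, dtype=float)
    mag = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))
    hist, _ = np.histogram(np.clip(mag, 0, max_mag), bins=bins, range=(0.0, max_mag))
    return hist / (hist.sum() + 1e-9)

def edge_preservation_score(ref_signature, distorted_gray, bins=32):
    """Histogram intersection between the reference edge signature and the one
    computed at the receiver; values close to 1.0 mean edges were well preserved."""
    dist_sig = edge_signature(distorted_gray, bins=bins)
    return float(np.minimum(ref_signature, dist_sig).sum())
```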

  9. A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video

    Science.gov (United States)

    2011-06-01

    the stationary dataset, we include downsampled versions of the dataset obtained by downsampling the original HD videos to lower framerates and pixel... when video framerates and pixel resolutions are low. This is a relatively unexplored area. [Figure 2: six example scenes in the VIRAT Video Dataset.] A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video, Sangmin Oh, Anthony Hoogs, Amitha Perera, Naresh Cuntoor, Chia-Chih Chen

  10. VideoStory: A New Multimedia Embedding for Few Example Recognition and Translation of Events

    Science.gov (United States)

    2014-11-07

    This objective minimizes the quadratic error between the original video descriptions Y and the reconstructed translations obtained from A and S... this purpose, we parse the grammatical structure of title captions using a probabilistic... [Figure 3: terms from the VideoStory46K dataset occurring in...] ...according to Eq. (2). Then the video embedding is learned separately, by minimizing the error of predicting the embedded descriptions from the videos

  11. Video y desarrollo rural

    Directory of Open Access Journals (Sweden)

    Fraser Colin

    2015-01-01

    Full Text Available The first rural video experiences were carried out in Peru and Mexico. The Peruvian project is known as CESPAC (Centro de Servicios de Pedagogía Audiovisual para la Capacitación). With external financing from the FAO, it was started in the 1970s. The Mexican project was named PRODERITH (Programa de Desarrollo Rural Integrado del Trópico Húmedo). Its rural video component was particularly successful at the grassroots level. The evaluation concluded that rural video, as a system of social communication for development, is excellent and low-cost.

  12. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers...... and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...... inquiry, such as video ethnography, ethnovideo, performance documentation, anthropology and multimodal interaction analysis. That is why we put forward, half-jokingly at first, a Big Video manifesto to spur innovation in the Digital Humanities....

  13. Online video examination

    DEFF Research Database (Denmark)

    Qvist, Palle

    courses are accredited to the master programme. The programme is online, worldwide and on demand. It recruits students from all over the world. The programme is organized in an exemplary fashion in accordance with the principles of the problem-based and project-based learning method used at Aalborg University, where students......The Master programme in Problem-Based Learning in Engineering and Science, MPBL (www.mpbl.aau.dk), at Aalborg University, is an international programme offering formalized staff development. The programme is also offered in smaller parts as single subject courses (SSC). Passed single subject...... have large influence on their own teaching, learning and curriculum. The programme offers streamed videos in combination with other learning resources. It is a concept which offers video as pure presentation - video lectures - but also as an instructional tool which gives the students the possibility...

  14. Brains on video games.

    Science.gov (United States)

    Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A

    2011-11-18

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward.

  15. A low-light-level video recursive filtering technology based on the three-dimensional coefficients

    Science.gov (United States)

    Fu, Rongguo; Feng, Shu; Shen, Tianyu; Luo, Hao; Wei, Yifang; Yang, Qi

    2017-08-01

    Low-light-level video is an important method of observation under low illumination conditions, but its SNR is low and the resulting observations are poor, so noise reduction processing must be carried out. Low-light-level video noise mainly includes Gaussian noise, Poisson noise, impulse noise, fixed-pattern noise and dark-current noise. In order to remove the noise in low-light-level video effectively and improve its quality, this paper presents an improved time-domain recursive filtering algorithm with three-dimensional filtering coefficients. The algorithm makes use of the temporal correlation of the video sequence: the local window filtering coefficients are adjusted adaptively in space and time using motion estimation, so that different weighting coefficients are used for different pixels of the same frame. This reduces image trailing while preserving the noise reduction effect. Before noise reduction, a pre-treatment based on a box filter is used to reduce the complexity of the algorithm and improve its speed. In order to enhance the visual effect of low-light-level video, an image enhancement algorithm based on the guided image filter is used to enhance the edges and details of the video. The experimental results show that the hybrid algorithm can remove the noise of low-light-level video effectively, enhance edge features and improve the visual quality of the video.
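
    A minimal Python sketch of a motion-adaptive temporal recursive filter of the kind described: the recursion coefficient is lowered wherever the frame difference suggests motion, so static regions are averaged strongly while moving regions avoid trailing. The thresholds and coefficients are illustrative assumptions, not the paper's three-dimensional coefficient scheme.

```python
import numpy as np

def temporal_recursive_filter(frames, alpha_static=0.85, alpha_moving=0.2, motion_thresh=12.0):
    """Per-pixel temporal recursion y_t = a * y_{t-1} + (1 - a) * x_t, where the
    coefficient a is lowered wherever the frame difference suggests motion."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    out = [frames[0]]
    for x in frames[1:]:
        diff = np.abs(x - out[-1])                                  # simple motion cue
        a = np.where(diff > motion_thresh, alpha_moving, alpha_static)
        out.append(a * out[-1] + (1.0 - a) * x)
    return out
```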

  16. Using Video from Mobile Phones to Improve Pediatric Phone Triage in an Underserved Population.

    Science.gov (United States)

    Freeman, Brandi; Mayne, Stephanie; Localio, A Russell; Luberti, Anthony; Zorc, Joseph J; Fiks, Alexander G

    2017-02-01

    Video-capable mobile phones are widely available, but few studies have evaluated their use in telephone triage for pediatric patients. We assessed the feasibility, acceptability, and utility of videos sent via mobile phones to enhance pediatric telephone triage for an underserved population with asthma. We recruited children who presented to an urban pediatric emergency department with an asthma exacerbation along with their parent/guardian. Parents and the research team each obtained a video of the child's respiratory exam, and the research team conducted a concurrent in-person rating of respiratory status. We measured the acceptability of families sending videos as part of telephone triage (survey) and the feasibility of this approach (rates of successful video transmission by parents to the research team). To estimate the utility of the video in appropriately triaging children, four clinicians reviewed each video and rated whether they found the video reassuring, neutral, or raising concerns. Among 60 families (78% Medicaid, 85% Black), 80% of parents reported that sending a video would be helpful and 68% reported that a nurse's review of a video would increase their trust in the triage assessment. Most families (75%) successfully transmitted a video to the research team. All clinician raters found the video reassuring regarding the severity of the child's asthma exacerbation for 68% of children. Obtaining mobile phone videos for telephone triage is acceptable to families, feasible, and may help improve the quality of telephone triage in an urban, minority population.

  17. Neural Basis of Video Gaming: A Systematic Review

    Science.gov (United States)

    Palaus, Marc; Marron, Elena M.; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies. PMID:28588464

  18. Neural Basis of Video Gaming: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Marc Palaus

    2017-05-01

    Full Text Available Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies.

  19. Neural Basis of Video Gaming: A Systematic Review.

    Science.gov (United States)

    Palaus, Marc; Marron, Elena M; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies.

  20. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  1. User aware video streaming

    Science.gov (United States)

    Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy

    2015-03-01

    We describe the design of a video streaming system using adaptation to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the resolution that is sufficient under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming or MPEG-DASH. The client rate selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods which do not exploit viewing conditions.
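
    A rough Python sketch of the viewing-condition logic: an assumed acuity limit in cycles per degree is combined with the display width and the estimated viewing distance to compute a sufficient horizontal resolution, and the smallest rendition at or above it is selected. The acuity constant and the rendition ladder are assumptions, not the paper's visual model.

```python
import math

RENDITIONS = [(426, 240), (640, 360), (1280, 720), (1920, 1080)]   # illustrative ladder

def sufficient_width(display_width_m, viewing_distance_m, cpd_limit=30.0):
    """Rough acuity model: the eye resolves about cpd_limit cycles per degree,
    i.e. roughly 2 * cpd_limit pixels per degree; resolution beyond what the
    display subtends at the current viewing distance is wasted."""
    degrees = math.degrees(2.0 * math.atan(display_width_m / (2.0 * viewing_distance_m)))
    return 2.0 * cpd_limit * degrees

def pick_rendition(display_width_m, viewing_distance_m):
    need = sufficient_width(display_width_m, viewing_distance_m)
    for w, h in RENDITIONS:                       # ladder is sorted by width
        if w >= need:
            return (w, h)
    return RENDITIONS[-1]

# Example: a 0.11 m wide phone screen viewed from 0.5 m needs only about
# 750 horizontal pixels, so the 1280x720 rendition is already sufficient.
print(pick_rendition(0.11, 0.5))
```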

  2. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides the means for extracting, analyzing and understanding the behavior of single and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms that analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  3. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  4. CERN Video News

    CERN Multimedia

    2003-01-01

    From Monday you can see on the web the new edition of CERN's Video News. Thanks to a collaboration between the audiovisual teams at CERN and Fermilab, you can see a report made by the American laboratory. The clip concerns the LHC magnets that are being constructed at Fermilab. Also in the programme: the spectacular rotation of one of the ATLAS coils, the arrival at CERN of the first American magnet made at Brookhaven, the story of the discovery 20 years ago of the W and Z bosons at CERN. http://www.cern.ch/video or Bulletin web page.

  5. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Transmission Control Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC permits to split easily the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as the multi-streaming and multi-homing capabilities, that permit to transport robustly and efficiently the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Realtime Transmission Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  6. Video special effects editing in MPEG-2 compressed video

    OpenAIRE

    Fernando, WAC; Canagarajah, CN; Bull, David

    2000-01-01

    With the increase of digital technology in video production, several types of complex video special effects editing have begun to appear in video clips. In this paper we consider fade-out and fade-in special effects editing in MPEG-2 compressed video without full frame decompression and motion estimation. We estimated the DCT coefficients and use these coefficients together with the existing motion vectors to produce these special effects editing in compressed domain. Results show that both o...

  7. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  8. AFSC/ABL: Delta Submersible Dive Video Archive in Alaska, 1988-2009

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Seafloor habitat and fish data collected from videos obtained from Delta submarine dives conducted by Auke Bay Laboratories.

  9. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  10. GPS-Aided Video Tracking

    Directory of Open Access Journals (Sweden)

    Udo Feuerhake

    2015-08-01

    Full Text Available Tracking moving objects is both challenging and important for a large variety of applications. Different technologies based on the global positioning system (GPS) and video or radio data are used to obtain the trajectories of the observed objects. However, in some use cases, they fail to provide sufficiently accurate, complete and correct data at the same time. In this work we present an approach for fusing GPS- and video-based tracking in order to exploit their individual advantages. In this way we aim to combine the reliability of GPS tracking with the high geometric accuracy of camera detection. For the fusion of the movement data provided by the different devices we use a hidden Markov model (HMM) formulation and the Viterbi algorithm to extract the most probable trajectories. In three experiments, we show that our approach is able to deal with challenging situations like occlusions or objects which are temporarily outside the monitored area. The results show the desired increase in terms of accuracy, completeness and correctness.
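
    For readers unfamiliar with the decoding step, here is a generic Viterbi sketch in Python of the kind of HMM formulation described: states could be grid cells of the monitored area, and each observation contributes a log-likelihood term from the GPS fix and, when available, from the camera detection. The state space and emission model here are assumptions, not the authors' implementation.

```python
import numpy as np

def viterbi(log_trans, log_emis):
    """Generic Viterbi decoder: log_trans[i, j] is the log-probability of moving
    from state i to state j, and log_emis[t, j] is the log-likelihood of
    observation t in state j (e.g. GPS and camera evidence added together)."""
    T, S = log_emis.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_emis[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans      # S x S candidate scores
        back[t] = scores.argmax(axis=0)                 # best predecessor per state
        delta[t] = scores.max(axis=0) + log_emis[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Illustrative use: states are grid cells of the monitored area; per-frame
# emissions add a broad Gaussian term around the GPS fix and a sharper term
# around the camera detection whenever the object is inside the field of view.
```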

  11. Fingerprint multicast in secure video streaming.

    Science.gov (United States)

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amounts of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes their performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.

  12. Adaptive subband coding of full motion video

    Science.gov (United States)

    Sharifi, Kamran; Xiao, Leping; Leon-Garcia, Alberto

    1993-10-01

    In this paper a new algorithm for digital video coding is presented that is suitable for digital storage and video transmission applications in the range of 5 to 10 Mbps. The scheme is based on frame differencing and, unlike recent proposals, does not employ motion estimation and compensation. A novel adaptive grouping structure is used to segment the video sequence into groups of frames of variable sizes. Within each group, the frame difference is taken in a closed loop Differential Pulse Code Modulation (DPCM) structure and then decomposed into different frequency subbands. The important subbands are transformed using the Discrete Cosine Transform (DCT) and the resulting coefficients are adaptively quantized and runlength coded. The adaptation is based on the variance of sample values in each subband. To reduce the computation load, a very simple and efficient way has been used to estimate the variance of the subbands. It is shown that for many types of sequences, the performance of the proposed coder is comparable to that of coding methods which use motion parameters.

  13. NEI You Tube Videos: Amblyopia

    Medline Plus


  14. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five ... was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis ...

  15. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Our Staff Rheumatology Specialty Centers You are here: Home / Patient Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video ... to take a more active role in your care. The information in these videos should not take ...

  16. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... will allow you to take a more active role in your care. The information in these videos ... Stategies to Increase your Level of Physical Activity Role of Body Weight in Osteoarthritis Educational Videos for ...

  17. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... here. Will You Support the Education of Arthritis Patients? Each year, over 1 million people visit this ... of Body Weight in Osteoarthritis Educational Videos for Patients Rheumatoid Arthritis Educational Video Series Psoriatic Arthritis 101 ...

  18. Videos & Tools: MedlinePlus

    Science.gov (United States)

    ... of this page: https://medlineplus.gov/videosandcooltools.html Videos & Tools To use the sharing features on this page, please enable JavaScript. Watch health videos on topics such as anatomy, body systems, and ...

  19. Health Videos: MedlinePlus

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/anatomyvideos.html.htm Health Videos To use the sharing features on this page, please enable JavaScript. These animated videos show the anatomy of body parts and organ ...

  20. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  1. Astronomy Video Contest

    Science.gov (United States)

    McFarland, John

    2008-05-01

    During Galileo's lifetime his staunchest supporter was Johannes Kepler, Imperial Mathematician to the Holy Roman Emperor. Johannes Kepler will be in St. Louis to personally offer a tribute to Galileo. Set Galileo's astronomy discoveries to music and you get the newest song by the well-known a cappella group, THE CHROMATICS. The song, entitled "Shoulders of Giants", was written specifically for IYA-2009 and will be debuted at this conference. The song will also be used as a base to create a music video by synchronizing a person's own images to the song's lyrics and tempo. Thousands of people already do this for fun and post their videos on YouTube and other sites. The ASTRONOMY VIDEO CONTEST will be launched as a vehicle to excite, enthuse and educate people about astronomy and science. It will be an annual event administered by the Johannes Kepler Project and will continue to foster the goals of IYA-2009 for years to come. During this presentation the basic categories, rules, and prizes for the Astronomy Video Contest will be covered, and finally the new song "Shoulders of Giants" by THE CHROMATICS will be unveiled.

  2. Provocative Video Scenarios

    DEFF Research Database (Denmark)

    Caglio, Agnese

    This paper presents the use of "provocative videos" as a tool to support and deepen findings from an ethnographic investigation on the theme of remote video communication. The videos also acted as a resource to investigate the potential of novel technologies supporting continuous connection between...

  3. Video Content Foraging

    NARCIS (Netherlands)

    van Houten, Ynze; Schuurman, Jan Gerrit; Verhagen, Pleunes Willem; Enser, Peter; Kompatsiaris, Yiannis; O’Connor, Noel E.; Smeaton, Alan F.; Smeulders, Arnold W.M.

    2004-01-01

    With information systems, the real design problem is not increased access to information, but greater efficiency in finding useful information. In our approach to video content browsing, we try to match the browsing environment with human information processing structures by applying ideas from

  4. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ...

  5. Scalable Video Coding

    NARCIS (Netherlands)

    Choupani, R.

    2017-01-01

    With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when

  6. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  7. Video narrativer i sygeplejerskeuddannelsen

    DEFF Research Database (Denmark)

    Jensen, Inger

    2009-01-01

    The article offers some suggestions on how video narratives can be used in nursing education as triggers that open up discussion and the development of meaningful attitudes towards fellow human beings. It also examines how teachers, in their didactic considerations, can draw on elements from the theory of...

  8. Streaming-video produktion

    DEFF Research Database (Denmark)

    Grønkjær, Poul

    2004-01-01

    In connection with the research project Virtual Learning Forms and Learning Environments (Virtuelle Læringsformer og Læringsmiljøer), the E-learning Lab at Aalborg University has carried out a series of practical experiments with streaming-video productions. The purpose of this article is to share these experiences. The article describes the entire production process, from idea to finished product, covering different types of presentations, dramaturgical considerations, and a concept sketch. Streaming-video technology is now so mature, with a satisfactory audiovisual expression, that we can begin to focus on which content is suited to being made available independently of time and place. The article closes with a number of references, including an overview of the streaming-video productions on which it is based.

  9. Characteristics of Instructional Videos

    Science.gov (United States)

    Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih

    2018-01-01

    Nowadays, video plays a significant role in education: it is integrated into traditional classes, serves as the principal delivery system of information in online courses, and forms the foundation of many blended classes. Hence, education is adopting a modern approach to instruction with the target of moving away…

  10. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ...

  11. Mobiele video voor bedrijfscommunicatie

    NARCIS (Netherlands)

    Niamut, O.A.; Weerdt, C.A. van der; Havekes, A.

    2009-01-01

    The Penta Mobilé project ran from June to November 2009 and aimed to map the possibilities of mobile video for business communication applications. The research was carried out together with five ('Penta') parties: Business Tales, Condor Digital, European Communication Projects

  12. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ...

  13. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM

  14. Two-description distributed video coding for robust transmission

    Directory of Open Access Journals (Sweden)

    Zhao Yao

    2011-01-01

    Full Text Available Abstract In this article, a two-description distributed video coding (2D-DVC) scheme is proposed to address robust video transmission for low-power capture devices. Odd/even frame-splitting partitions a video into two sub-sequences to produce two descriptions. Each description consists of two parts, where part 1 is a zero-motion-based H.264-coded bitstream of one sub-sequence and part 2 is a Wyner-Ziv (WZ)-coded bitstream of the other sub-sequence. As the redundant part, the WZ-coded bitstream guarantees that the lost sub-sequence is recovered when one description is lost. On the other hand, the redundancy degrades the rate-distortion performance when no loss occurs. A residual 2D-DVC is employed to further improve the rate-distortion performance, where the difference of the two sub-sequences is WZ encoded to generate part 2 in each description. Furthermore, an optimization method is applied to control an appropriate amount of redundancy and therefore facilitate the tuning of the central/side distortion tradeoff. The experimental results show that the proposed schemes achieve better performance than the reference scheme, especially for low-motion videos. Moreover, our schemes still maintain a low-complexity encoding property.
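
    A minimal sketch of the odd/even frame-splitting step described in this abstract, assuming frames are simply items in a list (the H.264 and Wyner-Ziv coding stages and the residual variant are out of scope); this illustrates the idea, not the authors' implementation:

```python
def split_descriptions(frames):
    """Partition a frame sequence into two sub-sequences for two descriptions.

    Description 1 carries the even-indexed frames as its primary part and the
    odd-indexed frames as its (Wyner-Ziv coded) redundant part; description 2
    is the mirror image. The actual coding of each part is not modeled.
    """
    even = frames[0::2]   # primary part of description 1
    odd = frames[1::2]    # primary part of description 2
    return (even, odd), (odd, even)   # (primary, redundant) pairs


# Example: 6 frames -> description 1 primary [0, 2, 4], redundant [1, 3, 5].
d1, d2 = split_descriptions(list(range(6)))
print(d1, d2)
```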

  15. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  16. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.
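
    A simplified sketch of the vertical-histogram partitioning idea mentioned above, assuming a binary foreground mask for a single object (NumPy only). The orientation test against light-source regions is omitted, so this only illustrates the partitioning step, not the paper's full algorithm:

```python
import numpy as np

def partition_by_vertical_histogram(mask):
    """Split a binary object mask at the weakest interior column of its
    vertical histogram (the per-column count of foreground pixels).

    Returns (part_a, part_b) as boolean masks; one of them would then be
    tested for a shadow-like orientation toward a light-source region.
    """
    mask = mask.astype(bool)
    hist = mask.sum(axis=0)                     # vertical histogram
    cols = np.flatnonzero(hist > 0)
    if cols.size < 3:
        return mask, np.zeros_like(mask)        # too narrow to split
    inner = np.arange(cols[0] + 1, cols[-1])    # candidate split columns
    split = inner[np.argmin(hist[inner])]
    part_a = mask.copy()
    part_a[:, split:] = False
    part_b = mask.copy()
    part_b[:, :split] = False
    return part_a, part_b
```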

  17. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ...

  18. CERN Video News on line

    CERN Multimedia

    2003-01-01

    The latest CERN video news is on line. In this issue : an interview with the Director General and reports on the new home for the DELPHI barrel and the CERN firemen's spectacular training programme. There's also a vintage video news clip from 1954. See: www.cern.ch/video or Bulletin web page

  19. We All Stream for Video

    Science.gov (United States)

    Technology & Learning, 2008

    2008-01-01

    More than ever, teachers are using digital video to enhance their lessons. In fact, the number of schools using video streaming increased from 30 percent to 45 percent between 2004 and 2006, according to Market Data Retrieval. Why the popularity? For starters, video-streaming products are easy to use. They allow teachers to punctuate lessons with…

  20. Social Properties of Mobile Video

    Science.gov (United States)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations in regards to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors that help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for adoption and design of mobile video technologies and services are discussed as well.

  1. Video Analysis of Rolling Cylinders

    Science.gov (United States)

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s⁻¹, and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…

  2. Video Games and Digital Literacies

    Science.gov (United States)

    Steinkuehler, Constance

    2010-01-01

    Today's youth are situated in a complex information ecology that includes video games and print texts. At the basic level, video game play itself is a form of digital literacy practice. If we widen our focus from the "individual player + technology" to the online communities that play them, we find that video games also lie at the nexus of a…

  3. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... This series of five videos ... Managing Chronic Pain and Depression in Arthritis, Nutrition & Rheumatoid Arthritis, Arthritis and Health-related Quality of Life ...

  4. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    Science.gov (United States)

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which could be modeled in a dynamic textures (DT) framework. At first, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both sparse properties and the temporal correlations of consecutive video frames. Moreover, such learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. Especially, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
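
    One ingredient of the approach can be illustrated in a few lines of NumPy: fitting a linear transition matrix A between the sparse codes of adjacent frames by ridge-regularized least squares. The dictionary learning step and the stability constraints described in the abstract are not modeled, and the names are illustrative:

```python
import numpy as np

def fit_transition_matrix(codes, lam=1e-3):
    """Fit A so that codes[:, t+1] ≈ A @ codes[:, t] in the least-squares sense.

    codes : (k, T) array of sparse coefficients, one column per frame.
    lam   : ridge regularization weight.
    """
    X = codes[:, :-1]          # states at time t
    Y = codes[:, 1:]           # states at time t+1
    k = X.shape[0]
    A = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(k))
    return A


# Toy usage with random "sparse" codes for 50 frames over a 20-atom dictionary.
rng = np.random.default_rng(0)
codes = rng.standard_normal((20, 50)) * (rng.random((20, 50)) < 0.2)
A = fit_transition_matrix(codes)
print(A.shape)  # (20, 20)
```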

  5. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  6. Design and implementation of a wireless video surveillance system based on ARM

    Science.gov (United States)

    Li, Yucheng; Han, Dantao; Yan, Juanli

    2011-06-01

    A wireless video surveillance system based on ARM was designed and implemented in this article. The newest ARM11 S3C6410 was used as the main monitoring terminal chip, running an embedded Linux operating system. The video input was obtained from an analog CCD and converted from analog to digital by the TVP5150 video chip. The video was compressed by the H.264 encoder in the S3C6410, packed with RTP, and transmitted over the wireless USB adapter TL-WN322G+. Furthermore, the video images were preprocessed, so the system can detect abnormalities in the specified scene and raise alarms. Video is transmitted at standard definition (480p), and the video stream can be monitored in real time. The system has been used for real-time intelligent video surveillance of specified scenes.

  7. Efficient Video Transcoding from H.263 to H.264/AVC Standard with Enhanced Rate Control

    Directory of Open Access Journals (Sweden)

    Nguyen Viet-Anh

    2006-01-01

    Full Text Available A new video coding standard H.264/AVC has been recently developed and standardized. The standard represents a number of advances in video coding technology in terms of both coding efficiency and flexibility and is expected to replace the existing standards such as H.263 and MPEG-1/2/4 in many possible applications. In this paper we investigate and present efficient syntax transcoding and downsizing transcoding methods from H.263 to H.264/AVC standard. Specifically, we propose an efficient motion vector reestimation scheme using vector median filtering and a fast intraprediction mode selection scheme based on coarse edge information obtained from integer-transform coefficients. Furthermore, an enhanced rate control method based on a quadratic model is proposed for selecting quantization parameters at the sequence and frame levels together with a new frame-layer bit allocation scheme based on the side information in the precoded video. Extensive experiments have been conducted and the results show the efficiency and effectiveness of the proposed methods.
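
    A minimal sketch of the vector-median idea used for motion-vector re-estimation, assuming candidate motion vectors from neighboring macroblocks have already been collected (illustrative only, not the paper's exact scheme):

```python
import numpy as np

def vector_median(candidates):
    """Return the candidate motion vector minimizing the sum of L2 distances
    to all other candidates (the vector median).

    candidates : (n, 2) array-like of (dx, dy) motion vectors.
    """
    candidates = np.asarray(candidates, dtype=float)
    # Pairwise distances between candidate vectors.
    diffs = candidates[:, None, :] - candidates[None, :, :]
    dist_sums = np.linalg.norm(diffs, axis=2).sum(axis=1)
    return candidates[np.argmin(dist_sums)]


# Example: neighbors mostly agree, and the outlier (12, -9) is rejected.
print(vector_median([(2, 1), (2, 2), (3, 1), (12, -9)]))  # -> [3. 1.]
```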

  8. Enhanced video display and navigation for networked streaming video and networked video playlists

    Science.gov (United States)

    Deshpande, Sachin

    2006-01-01

    In this paper we present an automatic enhanced video display and navigation capability for networked streaming video and networked video playlists. Our proposed method uses Synchronized Multimedia Integration Language (SMIL) as the presentation language and Real Time Streaming Protocol (RTSP) as the network remote control protocol to automatically generate an "enhanced video strip" display for easy navigation. We propose and describe two approaches: a smart client approach and a smart server approach. We also describe a prototype system implementation of our proposed approach.

  9. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.

    2012-01-26

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.
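
    A minimal sketch of the length-based split described in this record, assuming the database sequences are plain strings and the splitting ratio is given (the actual CPU and GPU alignment kernels are out of scope):

```python
def split_database(db_seqs, ratio=0.5):
    """Sort database sequences by length and split them for CPU/GPU dispatch.

    The shorter fraction (given by `ratio`) goes to the CPU and the longer
    remainder to the GPU, mirroring the distribution rule described above.
    """
    ordered = sorted(db_seqs, key=len)
    cut = int(len(ordered) * ratio)
    return ordered[:cut], ordered[cut:]   # (cpu_part, gpu_part)


cpu_part, gpu_part = split_database(["ACGT", "AC", "ACGTACGTACGT", "ACG"], ratio=0.5)
print(cpu_part, gpu_part)  # ['AC', 'ACG'] ['ACGT', 'ACGTACGTACGT']
```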

  10. Joint distributed source-channel coding for 3D videos

    Science.gov (United States)

    Palma, Veronica; Cancellaro, Michela; Neri, Alessandro

    2011-03-01

    This paper presents a distributed joint source-channel 3D video coding system. Our aim is the design of an efficient coding scheme for stereoscopic video communication over noisy channels that preserves the perceived visual quality while guaranteeing low computational complexity. The drawback of using stereo sequences is the increased amount of data to be transmitted. Several methods are used in the literature for encoding stereoscopic video. A significantly different approach with respect to traditional video coding is Distributed Video Coding (DVC), which introduces a flexible architecture with the design of low-complexity video encoders. In this paper we propose a novel method for joint source-channel coding in a distributed approach. We choose turbo codes for our application and study the new setting of distributed joint source-channel coding of a video. Turbo codes allow sending the minimum amount of data while guaranteeing near-channel-capacity error-correcting performance. In this contribution, the mathematical framework is fully detailed, and the tradeoff among redundancy, perceived quality, and quality of experience is analyzed with the aid of numerical experiments.

  11. Video transmission on ATM networks. Ph.D. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
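
    As an illustration of the dual leaky bucket policing function mentioned above (a generic sketch with illustrative parameter names, not the thesis' exact formulation): traffic conforms only if it fits both a peak-rate bucket and a sustained-rate bucket.

```python
class LeakyBucket:
    """Single leaky bucket: content drains at `rate`; an arrival of `size`
    conforms if adding it would not overflow `depth`."""

    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.level, self.last = 0.0, 0.0

    def would_conform(self, t, size):
        # Drain the bucket for the elapsed time, then test the arrival.
        self.level = max(0.0, self.level - (t - self.last) * self.rate)
        self.last = t
        return self.level + size <= self.depth

    def commit(self, size):
        self.level += size


class DualLeakyBucket:
    """Traffic conforms only if it fits both the peak-rate and the
    sustained-rate bucket; conforming arrivals are charged to both."""

    def __init__(self, peak_rate, peak_depth, sust_rate, sust_depth):
        self.buckets = [LeakyBucket(peak_rate, peak_depth),
                        LeakyBucket(sust_rate, sust_depth)]

    def police(self, t, size):
        if all(b.would_conform(t, size) for b in self.buckets):
            for b in self.buckets:
                b.commit(size)
            return True    # pass the cell/packet
        return False       # mark or drop as non-conforming


# Unit-size arrivals every 50 ms: the fourth one violates the peak-rate bucket.
policer = DualLeakyBucket(peak_rate=10.0, peak_depth=2.0,
                          sust_rate=2.0, sust_depth=8.0)
print([policer.police(t, 1.0) for t in (0.0, 0.05, 0.10, 0.15)])
# [True, True, True, False]
```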

  12. A Modified Multiview Video Streaming System Using 3-Tier Architecture

    Directory of Open Access Journals (Sweden)

    Mohamed M. Fouad

    2016-01-01

    Full Text Available In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme from the perspective of viewer interactivity. When a viewer requests some view(s), our scheme leads to a lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of the average transmission bit-rate and the storage size of the decoded (i.e., transferred) data, with comparable rate-distortion.

  13. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward-adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
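
    A frame-level sketch of the least-square prediction principle described above: each pixel of the current frame is modeled as a linear combination of the colocated neighborhood in the previous frame, with weights obtained by least squares. The paper's method re-estimates weights locally from causal, already-decoded data; this simplified variant only illustrates the idea.

```python
import numpy as np

def lsp_predict(prev, cur, radius=1):
    """Least-square prediction of the current frame from the previous one.

    Every pixel of `cur` is predicted as a linear combination of the
    (2*radius+1)^2 colocated neighborhood in `prev`; the combination weights
    are estimated by ordinary least squares over the whole frame.
    """
    H, W = prev.shape
    rows = []
    # Gather every shifted copy of the previous frame as one design column.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            rows.append(np.roll(np.roll(prev, dy, axis=0), dx, axis=1).ravel())
    X = np.stack(rows, axis=1).astype(float)      # (H*W, neighborhood size)
    y = cur.ravel().astype(float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)     # LS coefficients
    return (X @ w).reshape(H, W)                  # predicted frame


# Toy usage: a slowly varying frame pair.
prev = np.add.outer(np.arange(32), np.arange(32)).astype(float)
cur = prev + 0.5
pred = lsp_predict(prev, cur)
print(float(np.mean((pred - cur) ** 2)))          # prediction MSE for this toy pair
```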

  14. Defining the cognitive enhancing properties of video games: Steps Towards Standardization and Translation.

    Science.gov (United States)

    Goodwin, Shikha Jain; Dziobek, Derek

    2016-09-01

    Ever since video games became available to the general public, they have intrigued brain researchers for many reasons. There is an enormous amount of diversity in video game research, ranging from the types of video games used, the amount of time spent playing, and the definition of video gamer versus non-gamer, to the results obtained after playing video games. In this paper, our goal is to provide a critical discussion of these issues, along with some steps towards generalization, using the discussion of an article published by Clemenson and Stark (2005) as the starting point. The authors used a distinction between 2D and 3D video games to compare their effects on learning and memory in humans. The primary hypothesis of the authors is that the exploration of virtual environments while playing video games is a human correlate of environment enrichment. The authors found that video gamers performed better than non-gamers, and that if non-gamers are trained on playing video games, 3D games provide better environment enrichment than 2D video games, as indicated by better memory scores. The end goal of standardization in video games is to be able to translate the field so that the results can be used for the greater good.

  15. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and they may look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
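
    As a concrete illustration (not taken from the book itself), the Thue-Morse sequence is a classic automatic sequence: a two-state automaton reads the binary digits of n and outputs the n-th term. A minimal sketch:

```python
def thue_morse(n):
    """n-th term of the Thue-Morse sequence, computed by the 2-state
    automaton that reads the binary digits of n and flips state on each 1."""
    state = 0
    for bit in bin(n)[2:]:      # feed the binary digits of n to the automaton
        if bit == "1":
            state ^= 1          # transition on input 1; input 0 keeps the state
    return state                # the output map is the identity here


print([thue_morse(n) for n in range(16)])
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```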

  16. Digital Video Teach Yourself VISUALLY

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    Tips and techniques for shooting and sharing superb digital videos. Never before has video been more popular-or more accessible to the home photographer. Now you can create YouTube-worthy, professional-looking video, with the help of this richly illustrated guide. In a straightforward, simple, highly visual format, Teach Yourself VISUALLY Digital Video demystifies the secrets of great video. With colorful screenshots and illustrations plus step-by-step instructions, the book explains the features of your camera and their capabilities, and shows you how to go beyond "auto" to manually

  17. Video mining using combinations of unsupervised and supervised learning techniques

    Science.gov (United States)

    Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou

    2003-12-01

    We discuss the meaning and significance of the video mining problem, and present our work on some aspects of video mining. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to be implemented on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns that are useful to the end-user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles.

  18. Segmentation Based Video Steganalysis to Detect Motion Vector Modification

    Directory of Open Access Journals (Sweden)

    Peipei Wang

    2017-01-01

    Full Text Available This paper presents a steganalytic approach against video steganography which modifies motion vector (MV in content adaptive manner. Current video steganalytic schemes extract features from fixed-length frames of the whole video and do not take advantage of the content diversity. Consequently, the effectiveness of the steganalytic feature is influenced by video content and the problem of cover source mismatch also affects the steganalytic performance. The goal of this paper is to propose a steganalytic method which can suppress the differences of statistical characteristics caused by video content. The given video is segmented to subsequences according to block’s motion in every frame. The steganalytic features extracted from each category of subsequences with close motion intensity are used to build one classifier. The final steganalytic result can be obtained by fusing the results of weighted classifiers. The experimental results have demonstrated that our method can effectively improve the performance of video steganalysis, especially for videos of low bitrate and low embedding ratio.
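
    A minimal sketch of the fusion step described in this abstract, assuming one classifier per motion-intensity category has already produced a stego-probability score for its subsequences (the scores, categories, and weights below are illustrative):

```python
def fuse_decisions(category_scores, weights):
    """Fuse per-category steganalysis scores into one decision.

    category_scores : dict mapping motion-intensity category -> score in [0, 1]
                      (probability that the subsequences of that category are stego).
    weights         : dict with the same keys, e.g. proportional to how many
                      subsequences fell into each category.
    Returns the weighted average score; >= 0.5 is flagged as stego.
    """
    total = sum(weights.values())
    fused = sum(weights[c] * category_scores[c] for c in category_scores) / total
    return fused, fused >= 0.5


# Example: low-motion subsequences dominate this video and look suspicious.
scores = {"low": 0.8, "medium": 0.4, "high": 0.3}
weights = {"low": 10, "medium": 4, "high": 2}
print(fuse_decisions(scores, weights))   # (0.6375, True)
```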

  19. Delivering Diagnostic Quality Video over Mobile Wireless Networks for Telemedicine

    Directory of Open Access Journals (Sweden)

    Sira P. Rao

    2009-01-01

    Full Text Available In real-time remote diagnosis of emergency medical events, mobility can be enabled by wireless video communications. However, clinical use of this potential advance will depend on definitive and compelling demonstrations of the reliability of diagnostic quality video. Because the medical domain has its own fidelity criteria, it is important to incorporate diagnostic video quality criteria into any video compression system design. To this end, we used flexible algorithms for region-of-interest (ROI) video compression and obtained feedback from medical experts to develop criteria for diagnostically lossless (DL) quality. The design of the system occurred in three steps: measurement of the bit rate at which DL quality is achieved through evaluation of videos by medical experts, incorporation of that information into a flexible video encoder through the notion of encoder states, and an encoder state update option based on a built-in quality criterion. Medical experts then evaluated our system for the diagnostic quality of the video, allowing us to verify that it is possible to realize DL quality in the ROI at practical communication data transfer rates, enabling mobile medical assessment over bit-rate-limited wireless channels. This work lays the scientific foundation for additional validation through prototyped technology, field testing, and clinical trials.

  20. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and, more and more, so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and relevant pictures out of the video stream via a software implementation, was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  1. Geotail Video News Release

    Science.gov (United States)

    1992-01-01

    The Geotail mission, part of the International Solar Terrestrial Physics (ISTP) program, measures global energy flow and transformation in the magnetotail to increase understanding of fundamental magnetospheric processes. The satellite was launched on July 24, 1992 onboard a Delta II rocket. This video uses animation to show the solar wind and its effect on the Earth. The narrator explains that the Geotail spacecraft was designed and built by the Institute of Space and Astronautical Science (ISAS), the Japanese space agency. The mission objectives are reviewed by one of the scientists in a live view. The video also shows an animation of the orbit, while the narrator explains the orbit and the reason for the small launch window.

  2. CARACTERIZACION VOZ Y VIDEO

    Directory of Open Access Journals (Sweden)

    Octavio José Salcedo Parra

    2011-11-01

    Full Text Available The motivation for characterizing voice and video traffic lies in the need of service provider companies to maintain information transport networks with capacities that match user requirements, and to determine in a timely manner how the technical elements that make up the networks affect their performance, taking into account that each type of service is affected to a greater or lesser extent by those elements, among them jitter, delay, and packet loss. This work presents several cases of traffic characterization for both voice and video, in which a variety of techniques are applied to different types of service.

  3. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
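
    A minimal sketch of the pulse-matching step described in this abstract: the user-activity signal around each local maximum is compared with a rectangular reference pulse of fixed width via the Pearson correlation coefficient (NumPy only; the window size and threshold are illustrative):

```python
import numpy as np

def detect_interesting_segments(activity, width=5, corr_thresh=0.7):
    """Find local maxima of the user Replay-activity signal whose surrounding
    window matches a rectangular reference pulse of fixed `width`.

    The reference is a pulse of `width` ones centered in a window of
    3 * width samples; the match is the Pearson correlation coefficient."""
    activity = np.asarray(activity, dtype=float)
    win = 3 * width
    half = win // 2
    reference = np.zeros(win)
    reference[width:2 * width] = 1.0              # centered rectangular pulse
    hits = []
    for i in range(half, len(activity) - half):
        if activity[i] < activity[i - 1] or activity[i] < activity[i + 1]:
            continue                              # not a local maximum
        window = activity[i - half:i - half + win]
        if window.std() == 0:
            continue
        r = np.corrcoef(window, reference)[0, 1]
        if r >= corr_thresh:
            hits.append(i)
    return hits


# Toy activity signal with one burst of Replay events around sample 30.
sig = np.zeros(60)
sig[28:33] = [2, 5, 9, 5, 2]
print(detect_interesting_segments(sig, width=5))  # -> [30]
```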

  4. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from the patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  5. The video lecture

    OpenAIRE

    Crook, Charles; Schofield, Louise

    2017-01-01

    Vocabulary for describing the structures, roles, and relationships characteristic of traditional, or ‘offline’, education has been seamlessly applied to the designs of ‘online’ education. One example is the lecture, delivered as a video recording. The purpose of this research is to consider the concept of ‘lecture’ as realised in both offline and online contexts. We explore how media differences entail different student experiences and how these differences relate to design decisions associat...

  6. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary \\ud \\ud A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  7. Robotic video photogrammetry system

    Science.gov (United States)

    Gustafson, Peter C.

    1997-07-01

    For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive, but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis was also brought "on-board" the RVPS, allowing shop-floor acquisition and delivery of results. The RVPS has also been applied to other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.

  8. Utilizing Video Games

    Science.gov (United States)

    Blaize, L.

    Almost from its birth, the computer and video gaming industry has done an admirable job of communicating the vision and attempting to convey the experience of traveling through space to millions of gamers from all cultures and demographics. This paper will propose several approaches the 100 Year Starship Study can take to use the power of interactive media to stir interest in the Starship and related projects among a global population. It will examine successful gaming franchises from the past that are relevant to the mission and consider ways in which the Starship Study could cooperate with game development studios to bring the Starship vision to those franchises and thereby to the public. The paper will examine ways in which video games can be used to crowd-source research aspects for the Study, and how video games are already considering many of the same topics that will be examined by this Study. Finally, the paper will propose some mechanisms by which the 100 Year Starship Study can establish very close ties with the gaming industry and foster cooperation in pursuit of the Study's goals.

  9. Video Malware - Behavioral Analysis

    Directory of Open Access Journals (Sweden)

    Rajdeepsinh Dodia

    2015-04-01

    Full Text Available Abstract The number of malware attacks exploiting the internet is increasing day by day and has become a serious threat. The latest malware spreads through media players, embedded in funny video clips that lure end users. Once it is executed and installed, the behavior of the malware is in the malware author's hands. The malware spreads through the Internet, USB drives, and the sharing of files and folders, which keeps its presence concealed. The analyzed sample was a funny video named after a film celebrity; the malware variant was collected from the laptop of a terror outfit organization. It runs in the background and contains malicious code that steals sensitive user information such as banking credentials, usernames and passwords, and sends it to a remote host called command & control. The stolen data is directed to an email address encapsulated in the malicious code. The malware can also spread through USB and other devices. In summary, the analysis reveals the presence of malicious code in an executable video file and characterizes its behavior.

  10. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

    In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and provide energy-saving approaches for the smartphone by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream, as the freezes greatly increase the energy waste on the smartphones.

  11. Synthesis of Speaker Facial Movement to Match Selected Speech Sequences

    Science.gov (United States)

    Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.

    1994-01-01

    A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.

  12. Ball lightning observation: an objective video-camera analysis report

    OpenAIRE

    Sello, Stefano; Viviani, Paolo; Paganini, Enrico

    2011-01-01

    In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

  13. Instructive Video Retrieval for Surgical Skill Coaching Using Attribute Learning

    Science.gov (United States)

    2015-06-28

    ... automated feedback to a trainee). In this paper, we present a video-based skill coaching system for simulation-based surgical training by exploring a newly proposed problem of ... tips region, which can be further detected by the spatial information based on the obtained probabilities. Since all surgical actions occur in the region ...

  14. Surveillance Video Synopsis in GIS

    Directory of Open Access Journals (Sweden)

    Yujia Xie

    2017-10-01

    Full Text Available Surveillance videos contain a considerable amount of data, in which the information of interest to the user is sparsely distributed. Researchers construct video synopses that contain the key information extracted from a surveillance video for efficient browsing and analysis. The geospatial-temporal information of a surveillance video plays an important role in the efficient description of video content, yet current approaches to video synopsis lack the introduction and analysis of geospatial-temporal information. Owing to the problems mentioned above, this paper proposes an approach called "surveillance video synopsis in GIS". Based on an integration model of video moving objects and GIS, the virtual visual field and the expression model of the moving object are constructed by spatially locating and clustering the trajectory of the moving object. The subgraphs of the moving object are reconstructed frame by frame in a virtual scene. Results show that the approach described in this paper comprehensively analyzes and creates fused expression patterns between dynamic video information and geospatial-temporal information in GIS and reduces the playback time of video content.

  15. Recognizing problem video game use.

    Science.gov (United States)

    Porter, Guy; Starcevic, Vladan; Berle, David; Fenech, Pauline

    2010-02-01

    It has been increasingly recognized that some people develop problem video game use, defined here as excessive use of video games resulting in various negative psychosocial and/or physical consequences. The main objectives of the present study were to identify individuals with problem video game use and compare them with those without problem video game use on several variables. An international, anonymous online survey was conducted, using a questionnaire with provisional criteria for problem video game use, which the authors have developed. These criteria reflect the crucial features of problem video game use: preoccupation with and loss of control over playing video games and multiple adverse consequences of this activity. A total of 1945 survey participants completed the survey. Respondents who were identified as problem video game users (n = 156, 8.0%) differed significantly from others (n = 1789) on variables that provided independent, preliminary validation of the provisional criteria for problem video game use. They played longer than planned and with greater frequency, and more often played even though they did not want to and despite believing that they should not do it. Problem video game users were more likely to play certain online role-playing games, found it easier to meet people online, had fewer friends in real life, and more often reported excessive caffeine consumption. People with problem video game use can be identified by means of a questionnaire and on the basis of the present provisional criteria, which require further validation. These findings have implications for recognition of problem video game users among individuals, especially adolescents, who present to mental health services. Mental health professionals need to acknowledge the public health significance of the multiple negative consequences of problem video game use.

  16. On the relative importance of audio and video in the presence of packet losses

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Myakotnykh, Eugene

    2010-01-01

    In streaming applications, unequal protection of audio and video tracks may be necessary to maintain the optimal perceived overall quality. For this purpose, the application should be aware of the relative importance of audio and video in an audiovisual sequence. In this paper, we propose a subjective test arrangement for finding the optimal tradeoff between subjective audio and video qualities in situations when it is not possible to have perfect quality for both modalities concurrently. Our results show that content poses a significant impact on the preferred compromise between audio and video quality, but also that the currently used classification criteria for content are not sufficient to predict the users' preference.

  17. High Definition Video Streaming Using H.264 Video Compression

    OpenAIRE

    Bechqito, Yassine

    2009-01-01

    This thesis presents high definition video streaming using H.264 codec implementation. The experiment carried out in this study was done for an offline streaming video but a model for live high definition streaming is introduced, as well. Prior to the actual experiment, this study describes digital media streaming. Also, the different technologies involved in video streaming are covered. These include streaming architecture and a brief overview on H.264 codec as well as high definition t...

  18. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data is collected from a real 4G SCM testbed to investigate factors that affect video quality. After feature transformation and selection on the video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor for video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.
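
    A minimal sketch of casting quality prediction as a regression problem, with an illustrative synthetic feature matrix (channel attenuation, bitrate, frame rate) standing in for the real 4G testbed measurements; scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical feature columns: [channel_attenuation_dB, bitrate_kbps, frame_rate].
rng = np.random.default_rng(42)
X = rng.uniform([0, 200, 15], [30, 4000, 30], size=(500, 3))
# Synthetic quality score dominated by the attenuation term, plus noise.
y = 5.0 - 0.12 * X[:, 0] + 0.0003 * X[:, 1] + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```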

  19. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, aka MPEG-4 Part 10, aka Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military versions are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters on the fly, thereby selecting video bandwidth (and video quality) along four dimensions of quality without stopping video transmission. The four dimensions are: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5 to 1 range; and 4) Group of Pictures (GOP), which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264 that will allow RAVC at any point in the communication chain by throwing away preselected packets.

  20. Video Tracking dalam Digital Compositing untuk Paska Produksi Video

    Directory of Open Access Journals (Sweden)

    Ardiyan Ardiyan

    2012-04-01

    Full Text Available Video tracking is one of the processes in digital post-production for video and motion pictures. The video tracking method helps a production realize its visual concept and is an important consideration in the creation of visual effects. This paper presents the tracking process and its benefits for visual needs, especially in video and motion picture production. Some of the issues involved in the tracking process, such as tracking failures, are made clear in this discussion.

  1. YouTube videos in the English language as a patient education resource for cataract surgery.

    Science.gov (United States)

    Bae, Steven S; Baxter, Stephanie

    2017-08-28

    To assess the quality of the content of YouTube videos for cataract surgery patient education. Hotel Dieu Hospital, Kingston, Ontario, Canada. Observational study. "Cataract surgery," "cataract surgery for patients," and "cataract surgery patient education" were used as search terms. The first two pages of search results were reviewed. Descriptive statistics such as video length and view count were obtained. Two cataract surgeons devised 14 criteria important for educating patients about the procedure. Videos were analyzed based on the presence or absence of these criteria. Videos were also assessed for whether they had a primary commercial intent. Seventy-two videos were analyzed after excluding 48 videos that were duplicate, irrelevant, or not in English. The majority of videos came from a medical professional (71%) and many depicted a real cataract surgery procedure (43%). Twenty-one percent of the videos had a primary commercial intent to promote a practice or product. Out of a total possible 14 points, the mean number of usefulness criteria satisfied was only 2.28 ± 1.80. There was no significant difference in view count between the most useful videos and other videos (p = 0.94). Videos from medical organizations such as the National Health Service were more useful (p ...). There are cataract surgery videos on YouTube, but most are not adequately educational. Patients may be receiving biased information from videos created with primary commercial intent. Physicians should be aware of the type of information patients may be accessing on YouTube.

  2. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

    The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding allows to adapt an encoded video sequence to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams is only recently emerging. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how we can model applications that use scalable bitstreams by means of definitions that are built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. This way, we will demonstrate that it is possible to define an abstract model for scalable bitstreams, that can be used as a tool for reasoning about such bitstreams and related applications.

  3. A special broadcast of CERN's Video news

    CERN Multimedia

    2003-01-01

    A special edition of CERN's video news giving a complete update on the LHC project is to be broadcast in the Main Auditorium. After your lunch make a small detour to the Main Auditorium, where you see the big picture. On 14, 15 and 16 May, between 12:30 and 14:00, a special edition of CERN's video news bulletin will be broadcast in the Main Auditorium. You will have the chance get up-to-date on the LHC project and its experiments. With four years to go before the first collisions in the LHC, the LHC Project Leader Lyn Evans will present a status report on the construction of the accelerator. The spokesmen of the five LHC experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) will explain how the work is going and what the state of play will be in four years' time. This special video news broadcast is the result of collaboration between the CERN Audiovisual Service, the Photo Service and the External communication section. The broadcast will begin with a brand-new programme title sequence. And just as in the real c...

  4. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward ...

  5. Video – ned med overliggeren

    DEFF Research Database (Denmark)

    Langebæk, Rikke

    2010-01-01

    Aarhus – Nov 2010, 'Podcast og Video i Undervisningen' (Podcast and Video in Teaching). Video – down to earth. Rikke Langebæk, DVM, PhD student, Senior Veterinarian, Institut for Mindre Husdyrs Sygdomme (Department of Small Animal Diseases), LIFE, KU. The use of video in teaching has many obvious advantages, and many probably dream of implementing it...

  6. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark. 

  7. ANALISA OPTIMALISASI TEKNIK ESTIMASI DAN KOMPENSASI GERAK PADA ENKODER VIDEO H.263

    Directory of Open Access Journals (Sweden)

    Oka Widyantara

    2009-05-01

    Full Text Available The baseline mode of the H.263 video encoder applies motion estimation and compensation with a single motion vector for each macroblock. The search procedure uses a full search with half-pixel accuracy over the [16,15.5] range, so content at the frame edges cannot be predicted well. The interframe prediction performance of the H.263 video encoder is improved by optimizing the motion estimation and compensation techniques, implemented by extending the search area to [31.5,31.5] (unrestricted motion vector, Annex D) and using 4 motion vectors per macroblock (advanced prediction mode, Annex F). The results show that the advanced mode increases the SNR by 0.03 dB for the Claire sequence, 0.2 dB for the Foreman sequence, and 0.041 dB for the Glasgow sequence, and also reduces the coding bit rate by 2.3% for Claire, 15.63% for Foreman, and 9.8% for Glasgow compared with the single motion vector implementation of baseline mode coding.
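
    As an illustration of the block-matching search that such an encoder relies on, here is a minimal sketch in Python/NumPy. It assumes grayscale frames as NumPy arrays, integer-pel accuracy and a sum-of-absolute-differences cost; half-pixel refinement and the Annex D/F bitstream syntax are deliberately left out.

```python
import numpy as np

def full_search_mv(ref, cur, top, left, block=16, search=15):
    """Full-search block matching: best integer-pel motion vector (dy, dx)
    for the block of `cur` at (top, left), minimising SAD against `ref`."""
    h, w = ref.shape
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # restricted mode: candidate block must stay inside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Toy usage with random frames standing in for two consecutive luma frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))   # shift content down 2, left 3
print(full_search_mv(ref, cur, 16, 16))    # expect the vector (-2, 3) with SAD 0
```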

  8. Comparison of three video laryngoscopy devices to direct laryngoscopy for intubating obese patients: a randomized controlled trial.

    Science.gov (United States)

    Yumul, Roya; Elvir-Lazo, Ofelia L; White, Paul F; Sloninsky, Alejandro; Kaplan, Marshal; Kariger, Robert; Naruse, Robert; Parker, Nathaniel; Pham, Christine; Zhang, Xiao; Wender, Ronald H

    2016-06-01

    To compare three different video laryngoscope devices (VL) to standard direct laryngoscopy (DL) for tracheal intubation of obese patients undergoing bariatric surgery. VL (vs DL) would reduce the time required to achieve successful tracheal intubation and improve the glottic view. Prospective, randomized and controlled. Preoperative/operating rooms and postanesthesia care unit. One hundred twenty-one obese patients (ASA physical status I-III), aged 18 to 80 years, with a body mass index (BMI) >30 kg/m² undergoing elective bariatric surgery. Patients were prospectively randomized to one of 4 different airway devices for tracheal intubation: standard Macintosh (Mac) blade (DL); Video-Mac VL; GlideScope VL; or McGrath VL. After performing a preoperative airway evaluation, patients underwent a standardized induction sequence. The glottic view was graded using the Cormack-Lehane and percentage of glottic opening (POGO) scoring systems at the time of tracheal intubation. Times from the blade entering the patient's mouth to obtaining a glottic view, placement of the tracheal tube, and confirmation of an end-tidal CO2 waveform were recorded. In addition, intubation attempts, adjuvant airway devices, hemodynamic changes, adverse events, and any airway-related trauma were recorded. All three VL devices provided improved glottic views compared to standard DL (p < 0.05). Video-Mac VL and McGrath also significantly reduced the time required to obtain the glottic view. Video-Mac VL significantly reduced the time required for successful placement of the tracheal tube (vs DL and the other VL device groups). The Video-Mac and GlideScope required fewer intubation attempts (p < .05) and less frequent use of ancillary intubating devices compared to DL and the McGrath VL. Video-Mac and GlideScope required fewer intubation attempts than standard DL and the McGrath device. The Video-Mac also significantly reduced the time needed to secure the airway and improved the glottic view.

  9. Motion Entropy Feature and Its Applications to Event-Based Segmentation of Sports Video

    Science.gov (United States)

    Chen, Chen-Yu; Wang, Jia-Ching; Wang, Jhing-Fa; Hu, Yu-Hen

    2008-12-01

    An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed.
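
    A rough sketch of such a motion-entropy curve (not the authors' exact formulation) can be built from the orientation histogram of dense optical flow; the snippet below assumes OpenCV is available and that the input video file path is a placeholder supplied by the caller.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def motion_entropy(prev_gray, cur_gray, bins=16):
    """Entropy (in bits) of the orientation histogram of Farneback optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = mag > 0.5                      # ignore near-static pixels
    if not np.any(moving):
        return 0.0
    hist, _ = np.histogram(ang[moving], bins=bins, range=(0, 2 * np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Entropy curve over a video file ("match.mp4" is a placeholder path).
cap = cv2.VideoCapture("match.mp4")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curve = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    curve.append(motion_entropy(prev, gray))
    prev = gray
cap.release()
# A change-point detector would then be applied to `curve`.
```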

  10. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    Full Text Available In distributed video coding the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (the DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of the transmission errors for both the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in both cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  11. Video Games and Adolescent Fighting

    OpenAIRE

    Ward, Michael R.

    2010-01-01

    Psychologists have found positive correlations between playing violent video games and violent and antisocial attitudes. However, these studies typically do not control for other covariates, particularly sex, that are known to be associated with both video game play and aggression. This study exploits the Youth Risk Behavior Survey, which includes questions on video game play and fighting as well as basic demographic information. With both parametric and nonparametric estimators, as there is ...

  12. Women as Video Game Consumers

    OpenAIRE

    Kiviranta, Hanna

    2017-01-01

    The purpose of this Thesis is to study women as video game consumers through the games that they play. This was done by case studies on the content of five video games from genres that statistically are popular amongst women. To introduce the topic and to build the theoretical framework, the key terms and the video game industry are introduced. The reader is acquainted with theories on consumer behaviour, buying processes and factors that influence our consuming habits. These aspects are...

  13. Port Video and Logo

    OpenAIRE

    Whitehead, Stuart; Rush, Joshua

    2013-01-01

    Logo PDF files should be accessible by any PDF reader such as Adobe Reader. SVG files of the logo are vector graphics accessible by programs such as Inkscape or Adobe Illustrator. PNG files are image files of the logo that should be able to be opened by any operating system's default image viewer. The final report is submitted in both .doc (Microsoft Word) and .pdf formats. The video is submitted in .avi format and can be viewed with Windows Media Player or VLC. Audio .wav files are also ...

  14. Video, videoarte, iconoclasmo

    OpenAIRE

    Roncallo Dow, Sergio; Universidad de la Sabana

    2013-01-01

    The purpose of this article is to approach the video form and video art from an aesthetic perspective. To that end, it first reflects on the status of the image in the West, seeking to highlight its obscure character and the fear that the image seems to have aroused from the very beginning. This point is developed from certain Platonic postulates that lead us to consider a possible path for overcoming iconoclasm through surrealism, cinema and photogr...

  15. Intellectual Video Filming

    DEFF Research Database (Denmark)

    Juel, Henrik

    Like everyone else university students of the humanities are quite used to watching Hollywood productions and professional TV. It requires some didactic effort to redirect their eyes and ears away from the conventional mainstream style and on to new and challenging ways of using the film media...... in favour of worthy causes. However, it is also very rewarding to draw on the creativity, enthusiasm and rapidly improving technical skills of young students, and to guide them to use video equipment themselves for documentary, for philosophical film essays and intellectual debate. In the digital era...

  16. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination on each video frame containing text, text present on complex backgrounds, and different font sizes of the text. Using various image processing algorithms such as morphological operations, blob detection and histogram of oriented gradients, character recognition of video subtitles is implemented. Segmentation, feature extraction and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.
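
    As a sketch of the feature-extraction step named in the abstract, the snippet below computes a histogram-of-oriented-gradients descriptor for a segmented character patch with scikit-image; the segmentation and classification stages are assumed to exist elsewhere, and the patch size is an illustrative choice.

```python
import numpy as np
from skimage.feature import hog  # scikit-image assumed available

def character_descriptor(patch):
    """HOG descriptor for a grayscale character patch that the caller has
    already resized to a fixed size (e.g. 32x32)."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Toy usage on a random 32x32 patch standing in for a segmented character.
patch = np.random.default_rng(5).random((32, 32))
print(character_descriptor(patch).shape)  # fixed-length feature vector for a classifier
```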

  17. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio, and identifies significant aesthetic differences between film sound and sound in video game projects. It is an attempt to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  18. A novel video dataset for change detection benchmarking.

    Science.gov (United States)

    Goyette, Nil; Jodoin, Pierre-Marc; Porikli, Fatih; Konrad, Janusz; Ishwar, Prakash

    2014-11-01

    Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video data set exists for benchmarking different methods. Presented here is a unique change detection video data set consisting of nearly 90 000 frames in 31 video sequences representing six categories selected to cover a wide range of challenges in two modalities (color and thermal infrared). A distinguishing characteristic of this benchmark video data set is that each frame is meticulously annotated by hand for ground-truth foreground, background, and shadow area boundaries-an effort that goes much beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of video-based change detection algorithms. This paper discusses various aspects of the new data set, quantitative performance metrics used, and comparative results for over two dozen change detection algorithms. It draws important conclusions on solved and remaining issues in change detection, and describes future challenges for the scientific community. The data set, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.
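
    For readers who want a trivial baseline to compare against on such a benchmark, the sketch below implements a running-average background model with per-pixel thresholding; it is an illustrative assumption of mine, not one of the algorithms ranked in the paper.

```python
import numpy as np

class RunningAverageDetector:
    """Naive change detector: background = exponential moving average of frames."""
    def __init__(self, alpha=0.02, threshold=25.0):
        self.alpha = alpha          # background adaptation rate
        self.threshold = threshold  # absolute-difference threshold (8-bit scale)
        self.background = None

    def apply(self, gray_frame):
        frame = gray_frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # update the background everywhere, so foreground slowly blends in too
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask  # boolean foreground mask, to be compared with ground truth

# Usage: feed grayscale frames in temporal order and score `mask` per frame.
```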

  19. Distortion-Based Link Adaptation for Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Andrew Nix

    2008-06-01

    Full Text Available Wireless local area networks (WLANs) such as IEEE 802.11a/g utilise numerous transmission modes, each providing different throughputs and reliability levels. Most link adaptation algorithms proposed in the literature (i) maximise the error-free data throughput, (ii) do not take into account the content of the data stream, and (iii) rely strongly on the use of ARQ. Low-latency applications, such as real-time video transmission, do not permit large numbers of retransmissions. In this paper, a novel link adaptation scheme is presented that improves the quality of service (QoS) for video transmission. Rather than maximising the error-free throughput, our scheme minimises the video distortion of the received sequence. With the use of simple and local rate distortion measures and end-to-end distortion models at the video encoder, the proposed scheme estimates the received video distortion at the current transmission rate, as well as on the adjacent lower and higher rates. This allows the system to select the link-speed which offers the lowest distortion and to adapt to the channel conditions. Simulation results are presented using the MPEG-4/AVC H.264 video compression standard over IEEE 802.11g. The results show that the proposed system closely follows the optimum theoretic solution.
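
    The decision rule at the heart of this scheme (choose the link speed that minimises estimated received-video distortion rather than the one that maximises throughput) can be sketched as follows; the distortion model and the mode table are illustrative placeholders, not values from the paper.

```python
def select_link_speed(modes, estimate_distortion, current_index):
    """Among the current mode and its neighbours, pick the one with the
    lowest estimated received-video distortion (e.g. expected MSE)."""
    candidates = [i for i in (current_index - 1, current_index, current_index + 1)
                  if 0 <= i < len(modes)]
    return min(candidates, key=lambda i: estimate_distortion(modes[i]))

# Illustrative distortion model: source distortion falls with rate,
# channel-induced distortion grows with the mode's packet error rate.
modes = [  # (rate in Mbit/s, packet error rate under the current channel), made-up numbers
    (6, 0.001), (12, 0.01), (24, 0.08), (36, 0.25),
]

def estimate_distortion(mode):
    rate, per = mode
    source_d = 100.0 / rate    # coarser quantisation at lower rates
    channel_d = 400.0 * per    # error propagation from lost packets
    return source_d + channel_d

best = select_link_speed(modes, estimate_distortion, current_index=2)
print("selected mode:", modes[best])
```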

  20. Query-adaptive multiple instance learning for video instance retrieval.

    Science.gov (United States)

    Ting-Chu Lin; Min-Chun Yang; Chia-Yin Tsai; Wang, Yu-Chiang Frank

    2015-04-01

    Given a query image containing the object of interest (OOI), we propose a novel learning framework for retrieving relevant frames from the input video sequence. While techniques based on object matching have been applied to solve this task, their performance would be typically limited due to the lack of capabilities in handling variations in visual appearances of the OOI across video frames. Our proposed framework can be viewed as a weakly supervised approach, which only requires a small number of (randomly selected) relevant and irrelevant frames from the input video for performing satisfactory retrieval performance. By utilizing frame-level label information of such video frames together with the query image, we propose a novel query-adaptive multiple instance learning algorithm, which exploits the visual appearance information of the OOI from the query and that of the aforementioned video frames. As a result, the derived learning model would exhibit additional discriminating abilities while retrieving relevant instances. Experiments on two real-world video data sets would confirm the effectiveness and robustness of our proposed approach.

  1. QIM blind video watermarking scheme based on Wavelet transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2014-12-01

    Full Text Available In this paper, a blind scheme for digital video watermarking is proposed. The security of the scheme is established by using one secret key in the retrieval of the watermark. The Discrete Wavelet Transform (DWT) is applied on each video frame, decomposing it into a number of sub-bands. Maximum entropy blocks are selected and transformed using Principal Component Analysis (PCA). Quantization Index Modulation (QIM) is used to quantize the maximum coefficient of the PCA blocks of each sub-band. Then, the watermark is embedded into the selected suitable quantizer values. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility. The computed average PSNR exceeds 45 dB. Finally, the scheme is applied on two medical videos. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment in both cases of regular videos and medical videos.
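
    Quantization Index Modulation itself reduces to snapping a coefficient onto one of two interleaved quantizer lattices. A minimal scalar sketch, independent of the DWT/PCA pipeline described above and with an arbitrary step size, is given below.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one bit into a scalar coefficient: lattice offset delta/2 encodes bit=1."""
    dither = 0.0 if bit == 0 else delta / 2.0
    return np.round((coeff - dither) / delta) * delta + dither

def qim_extract(coeff, delta=8.0):
    """Recover the bit by checking which dithered lattice is closer."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

# Round-trip check with mild additive noise (robustness grows with delta).
rng = np.random.default_rng(1)
coeffs = rng.normal(0, 50, size=16)
bits = rng.integers(0, 2, size=16)
marked = np.array([qim_embed(c, b) for c, b in zip(coeffs, bits)])
noisy = marked + rng.normal(0, 1.0, size=16)
recovered = [qim_extract(c) for c in noisy]
print("bit errors:", sum(int(r != b) for r, b in zip(recovered, bits)))
```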

  2. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video coding does not yet match that of conventional video coding, mainly due to the quality of the side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes successively update the estimated noise residue for noise modeling and the side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate-Distortion (RD) performance of a state-of-the-art Wyner-Ziv video codec for the set of test sequences.

  3. Austin Community College Video Game Development Certificate

    Science.gov (United States)

    McGoldrick, Robert

    2008-01-01

    The Video Game Development program is designed and developed by leaders in the Austin video game development industry, under the direction of the ACC Video Game Advisory Board. Courses are taught by industry video game developers for those who want to become video game developers. The program offers a comprehensive approach towards learning what's…

  4. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for some auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe the scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  5. Studenterproduceret video til eksamen

    Directory of Open Access Journals (Sweden)

    Kenneth Hansen

    2016-05-01

    Full Text Available The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for exams in higher education. The article takes as its starting point the challenge that educational institutions must handle and coordinate teaching within both the subject domain and the media domain, and ensure a balance between subject-specific and media-specific approaches. By distributing the task across several academic resources, more coordination is needed, but the problem of requiring dual expertise from teachers in media productions is avoided. Based on the Larnaca Declaration's perspectives on learning design and mainly Jerome Bruner's principles of scaffolding, a model is assembled to support video production by students in higher education. By applying this model to teaching sessions and courses, the subject teachers and media teachers gain a tool to focus and coordinate their efforts toward the goal of students producing and using video for exams.

  6. Video consultation use by Australian general practitioners: video vignette study.

    Science.gov (United States)

    Jiwa, Moyez; Meng, Xingqiong

    2013-06-19

    There is unequal access to health care in Australia, particularly for the one-third of the population living in remote and rural areas. Video consultations delivered via the Internet present an opportunity to provide medical services to those who are underserviced, but this is not currently routine practice in Australia. There are advantages and shortcomings to using video consultations for diagnosis, and general practitioners (GPs) have varying opinions regarding their efficacy. The aim of this Internet-based study was to explore the attitudes of Australian GPs toward video consultation by using a range of patient scenarios presenting different clinical problems. Overall, 102 GPs were invited to view 6 video vignettes featuring patients presenting with acute and chronic illnesses. For each vignette, they were asked to offer a differential diagnosis and to complete a survey based on the theory of planned behavior documenting their views on the value of a video consultation. A total of 47 GPs participated in the study. The participants were younger than Australian GPs based on national data, and more likely to be working in a larger practice. Most participants (72%-100%) agreed on the differential diagnosis in all video scenarios. Approximately one-third of the study participants were positive about video consultations, one-third were ambivalent, and one-third were against them. In all, 91% opposed conducting a video consultation for the patient with symptoms of an acute myocardial infarction. Inability to examine the patient was most frequently cited as the reason for not conducting a video consultation. Australian GPs who were favorably inclined toward video consultations were more likely to work in larger practices, and were more established GPs, especially in rural areas. The survey results also suggest that the deployment of video technology will need to focus on follow-up consultations. Patients with minor self-limiting illnesses and those with medical

  7. Video Analysis in Multi-Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Key, Everett Kiusan [Univ. of Washington, Seattle, WA (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Van Buren, Kendra Lu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warren, Will [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-27

    This is a project which was performed by a recent high school graduate at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as whether the gate is functioning or non-functioning.

  8. Capture and playback synchronization in video conferencing

    Science.gov (United States)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphic system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved by more advanced network architectures, as ATM has promised. This paper presents some solutions to these problems that can be useful at end-station terminals in today's massively deployed packet-switching networks. The playback scheme in the end station consists of two units: a compression domain buffer management unit and a pixel domain buffer management unit. The pixel domain buffer management unit is responsible for removing the annoying frame shearing effect in the display. The compression domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. The compression domain buffer management unit is also responsible for concealing the effects of clock mismatch, lip desynchronization, packet loss, out-of-sequence packets, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.

  9. Impact of Constant Rate Factor on Objective Video Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2017-01-01

    Full Text Available This paper deals with the impact of the constant rate factor value on objective video quality assessment using the PSNR and SSIM metrics. The compression efficiency of the H.264 and H.265 codecs at different Constant Rate Factor (CRF) values was tested. The assessment was done for eight types of video sequences, differing in content, at High Definition (HD), Full HD (FHD) and Ultra HD (UHD) resolution. Finally, the performance of both codecs was compared with emphasis on compression ratio and coding efficiency.
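
    To reproduce this kind of experiment one encodes the same source at several CRF values (for example with an H.264/H.265 encoder) and computes full-reference metrics against the original. The sketch below covers only the PSNR half of that pipeline and assumes the decoded frames are already available as 8-bit NumPy arrays; averaging per-frame PSNR over a sequence gives one point on the CRF-versus-quality curve.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames of equal size."""
    ref = reference.astype(np.float64)
    dis = distorted.astype(np.float64)
    mse = np.mean((ref - dis) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_sequence_psnr(ref_frames, dist_frames):
    """Average per-frame PSNR over a sequence (iterables of frames)."""
    values = [psnr(r, d) for r, d in zip(ref_frames, dist_frames)]
    return sum(values) / len(values)
```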

  10. Real-time Multiple Abnormality Detection in Video Data

    DEFF Research Database (Denmark)

    Have, Simon Hartmann; Ren, Huamin; Moeslund, Thomas B.

    2013-01-01

    Automatic abnormality detection in video sequences has recently gained increasing attention within the research community. Although progress has been seen, there are still some limitations in current research. While most systems are designed to detect a specific abnormality, others which are capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieving above real-time processing speed. Experimental results on two datasets...

  11. Relative Hidden Markov Models for Video-Based Evaluation of Motion Skills in Surgical Training.

    Science.gov (United States)

    Zhang, Qiang; Li, Baoxin

    2015-06-01

    A proper temporal model is essential to analysis tasks involving sequential data. In computer-assisted surgical training, which is the focus of this study, obtaining accurate temporal models is a key step towards automated skill-rating. Conventional learning approaches can have only limited success in this domain due to insufficient amount of data with accurate labels. We propose a novel formulation termed Relative Hidden Markov Model and develop algorithms for obtaining a solution under this formulation. The method requires only relative ranking between input pairs, which are readily available from training sessions in the target application, hence alleviating the requirement on data labeling. The proposed algorithm learns a model from the training data so that the attribute under consideration is linked to the likelihood of the input, hence supporting comparing new sequences. For evaluation, synthetic data are first used to assess the performance of the approach, and then we experiment with real videos from a widely-adopted surgical training platform. Experimental results suggest that the proposed approach provides a promising solution to video-based motion skill evaluation. To further illustrate the potential of generalizing the method to other applications of temporal analysis, we also report experiments on using our model on speech-based emotion recognition.

  12. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is a joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users which fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.

  13. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises of a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  14. Sequence assembly

    DEFF Research Database (Denmark)

    Scheibye-Alsing, Karsten; Hoffmann, S.; Frankel, Annett Maria

    2009-01-01

    Despite the rapidly increasing number of sequenced and re-sequenced genomes, many issues regarding the computational assembly of large-scale sequencing data have remained unresolved. Computational assembly is crucial in large genome projects as well as for the evolving high-throughput technologies and plays an important role in processing the information generated by these methods. Here, we provide a comprehensive overview of the current publicly available sequence assembly programs. We describe the basic principles of computational assembly along with the main concerns, such as repetitive sequences in genomic DNA, highly expressed genes and alternative transcripts in EST sequences. We summarize existing comparisons of different assemblers and provide detailed descriptions and directions for download of assembly programs at: http://genome.ku.dk/resources/assembly/methods.html.

  15. Laws of reflection and Snell's law revisited by video modeling

    Science.gov (United States)

    Rodrigues, M.; Simeão Carvalho, P.

    2014-07-01

    Video modelling is nowadays used as a tool for teaching and learning several topics in Physics. Most of these topics are related to kinematics. In this work we show how video modelling can be used for demonstrations and experimental teaching in optics, namely the laws of reflection and the well-known Snell's law. Videos were recorded with a photo camera at 30 frames/s and analysed with the open source software Tracker. Data collected from several frames was treated with the Data Tool module, and graphs were built to obtain relations between the incidence, reflection and refraction angles, as well as to determine the refractive index of Perspex. These videos can be freely distributed on the web and explored with students in the classroom, or as a homework assignment to improve students' understanding of specific contents. They present large didactic potential for teaching basic optics in high school with an interactive methodology.
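
    Once the incidence and refraction angles have been exported from Tracker, the refractive index can be estimated by a least-squares fit of Snell's law. A possible post-processing sketch follows; the angle values in the example are made up to be roughly consistent with Perspex (n about 1.49).

```python
import numpy as np

def refractive_index(incidence_deg, refraction_deg):
    """Least-squares slope through the origin of sin(theta_i) versus sin(theta_r),
    i.e. n in Snell's law sin(theta_i) = n * sin(theta_r) for air -> medium."""
    si = np.sin(np.radians(np.asarray(incidence_deg, dtype=float)))
    sr = np.sin(np.radians(np.asarray(refraction_deg, dtype=float)))
    return float(np.dot(si, sr) / np.dot(sr, sr))

# Example with made-up measurements (degrees) approximating Perspex.
incidence = [10, 20, 30, 40, 50, 60]
refraction = [6.7, 13.3, 19.6, 25.5, 30.9, 35.5]
print(round(refractive_index(incidence, refraction), 2))  # prints roughly 1.49
```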

  16. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding and do so as a scalable-to-lossless coding.

  17. Sexual orientation and childhood gender nonconformity: evidence from home videos.

    Science.gov (United States)

    Rieger, Gerulf; Linsenmeier, Joan A W; Gygax, Lorenz; Bailey, J Michael

    2008-01-01

    Homosexual adults tend to be more gender nonconforming than heterosexual adults in some of their behaviors, feelings, and interests. Retrospective studies have also shown large differences in childhood gender nonconformity, but these studies have been criticized for possible memory biases. The authors studied an indicator of childhood gender nonconformity not subject to such biases: childhood home videos. They recruited homosexual and heterosexual men and women (targets) with videos from their childhood and subsequently asked heterosexual and homosexual raters to judge the gender nonconformity of the targets from both the childhood videos and adult videos made for the study. Prehomosexual children were judged more gender nonconforming, on average, than preheterosexual children, and this pattern obtained for both men and women. This difference emerged early, carried into adulthood, and was consistent with self-report. In addition, targets who were more gender nonconforming tended to recall more childhood rejection. Copyright (c) 2008 APA.

  18. Sinusoidal Wave Estimation Using Photogrammetry and Short Video Sequences

    Directory of Open Access Journals (Sweden)

    Ewelina Rupnik

    2015-12-01

    Full Text Available The objective of the work is to model the sinusoidal shape of regular water waves generated in a laboratory flume. The waves are traveling in time and render a smooth surface, with no white caps or foam. Two methods are proposed, treating the water as a diffuse and specular surface, respectively. In either case, the water is presumed to take the shape of a traveling sine wave, reducing the task of the 3D reconstruction to resolving the wave parameters. The first conceived method performs the modeling part purely in 3D space. Having triangulated the points in a separate phase via bundle adjustment, a sine wave is fitted to the data in a least squares manner. The second method presents a more complete approach for the entire calculation workflow beginning in the image space. The water is perceived as a specular surface, and the traveling specularities are the only observations visible to the cameras, observations that are notably single-image. The depth ambiguity is removed given additional constraints encoded within the law of reflection and the modeled parametric surface. The observation and constraint equations compose a single system of equations that is solved with the method of least squares adjustment. The devised approaches are validated against the data coming from a capacitive level sensor and on physical targets floating on the surface. The outcomes agree to a high degree.
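
    The final fitting step described above (resolving the parameters of a travelling sine wave from triangulated surface points) can be sketched with a standard nonlinear least-squares routine. The snippet assumes SciPy is available and uses synthetic points in place of the photogrammetric reconstruction.

```python
import numpy as np
from scipy.optimize import curve_fit  # SciPy assumed available

def travelling_sine(xt, amplitude, wavenumber, omega, phase, offset):
    """Surface height z(x, t) = A * sin(k*x - w*t + phi) + z0."""
    x, t = xt
    return amplitude * np.sin(wavenumber * x - omega * t + phase) + offset

# Synthetic stand-in for triangulated points (position along the flume, time, height).
rng = np.random.default_rng(2)
x = rng.uniform(0, 4.0, 500)                     # metres along the flume
t = rng.uniform(0, 2.0, 500)                     # seconds
true = dict(amplitude=0.05, wavenumber=2 * np.pi / 1.5,
            omega=2 * np.pi / 0.8, phase=0.3, offset=0.0)
z = travelling_sine((x, t), **true) + rng.normal(0, 0.002, x.size)

# Initial guess taken close to the expected wavelength and period.
params, _ = curve_fit(travelling_sine, (x, t), z,
                      p0=[0.04, 4.2, 7.8, 0.0, 0.0])
print("estimated amplitude, wavenumber, omega:", params[:3])
```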

  19. Computer Vision Tools for Finding Images and Video Sequences.

    Science.gov (United States)

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  20. No Reference Prediction of Quality Metrics for H.264 Compressed Infrared Image Sequences for UAV Applications

    DEFF Research Database (Denmark)

    Hossain, Kabir; Mantel, Claire; Forchhammer, Søren

    2018-01-01

    The framework for this research work is the acquisition of Infrared (IR) images from Unmanned Aerial Vehicles (UAV). In this paper we consider the No-Reference (NR) prediction of Full Reference Quality Metrics for Infrared (IR) video sequences which are compressed and thus distorted by an H.264... and temporal perceptual information. Those features are then mapped, using a machine learning (ML) algorithm, the Support Vector Regression (SVR), to the quality scores of Full Reference (FR) quality metrics. The novelty of this work is to design a NR framework for the prediction of quality metrics by applying... with the true FR quality metric scores of four image metrics: PSNR, NQM, SSIM and UQI and one video metric: VQM. Results show that our technique achieves a fairly reasonable performance. The improved performance obtained in SROCC and LCC is up to 0.99 and the RMSE is reduced to as little as 0.01 between...
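
    The regression stage of such a no-reference framework (mapping content features to a full-reference score with SVR) can be sketched with scikit-learn; the features and target scores below are random placeholders standing in for the real extracted features and FR metric values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training data: rows = video clips,
# columns = NR features (e.g. spatial/temporal information, bitrate, QP).
rng = np.random.default_rng(3)
features = rng.normal(size=(200, 5))
fr_scores = 30 + 10 * features[:, 0] - 5 * features[:, 1] + rng.normal(0, 1, 200)

# Standardise features, then regress them onto the FR metric scores.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(features[:150], fr_scores[:150])

# Evaluate the mapping on held-out clips.
predicted = model.predict(features[150:])
rmse = float(np.sqrt(np.mean((predicted - fr_scores[150:]) ** 2)))
print("hold-out RMSE:", round(rmse, 2))
```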

  1. The effects of wireless channel errors on the quality of real time ultrasound video transmission.

    Science.gov (United States)

    Hernández, Carolina; Alesanco, Alvaro; Abadia, Violeta; García, José

    2006-01-01

    In this paper the effect of wireless channel conditions on real-time ultrasound video transmission is studied. In order to simulate the transmission through a wireless channel, the Gilbert-Elliott model is used, and the influence of its parameters on the transmitted video quality is evaluated. In addition, the efficiency of using both UDP and UDP-Lite as transport protocols has been studied. The effect of using different video compression rates for the XviD codec is also analyzed. Based on the results obtained, it is observed that the choice of video compression rate depends on the bit error rate (BER) of the channel, since choosing a high compression bit rate for video transmission through a channel with a high BER can degrade the video quality more than using a lower compression rate. On the other hand, it is observed that using UDP as the transport protocol, better results are obtained in all the studied cases.
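
    The Gilbert-Elliott model mentioned above is a two-state Markov chain with a 'good' and a 'bad' state, each with its own error probability. A minimal simulator with illustrative parameter values could look like this.

```python
import random

def gilbert_elliott(n, p_gb=0.01, p_bg=0.2, err_good=1e-4, err_bad=0.1, seed=4):
    """Simulate n transmissions; returns a list of booleans (True = error).
    p_gb is P(good -> bad) and p_bg is P(bad -> good); error probabilities
    differ between the good and bad channel states."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n):
        err_prob = err_bad if state_bad else err_good
        errors.append(rng.random() < err_prob)
        # state transition after each transmission
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
    return errors

errs = gilbert_elliott(100000)
print("overall error rate:", sum(errs) / len(errs))
```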

  2. Genome Sequencing

    DEFF Research Database (Denmark)

    Sato, Shusei; Andersen, Stig Uggerhøj

    2014-01-01

    The current Lotus japonicus reference genome sequence is based on a hybrid assembly of Sanger TAC/BAC, Sanger shotgun and Illumina shotgun sequencing data generated from the Miyakojima-MG20 accession. It covers nearly all expressed L. japonicus genes and has been annotated mainly based...... on transcriptional evidence. Analysis of repetitive sequences suggests that they are underrepresented in the reference assembly, reflecting an enrichment of gene-rich regions in the current assembly. Characterization of Lotus natural variation by resequencing of L. japonicus accessions and diploid Lotus species...... is currently ongoing, facilitated by the MG20 reference sequence...

  3. Instructional Effectiveness of Video Media.

    Science.gov (United States)

    Wetzel, C. Douglas; And Others

    This volume is a blend of media research, cognitive science research, and tradecraft knowledge regarding video production techniques. The research covers: visual learning; verbal-auditory information; news broadcasts; the value of motion and animation in film and video; simulation (including realism and fidelity); the relationship of text and…

  4. Negotiation for Strategic Video Games

    OpenAIRE

    Afiouni, Einar Nour; Øvrelid, Leif Julian

    2013-01-01

    This project aims to examine the possibilities of using game theoretic concepts and multi-agent systems in modern video games with real time demands. We have implemented a multi-issue negotiation system for the strategic video game Civilization IV, evaluating different negotiation techniques with a focus on the use of opponent modeling to improve negotiation results.

  5. Perancangan Video Game Legenda Anglingdarma

    OpenAIRE

    Siswanto, Jefry Yosua; Ardianto, Deny Tri; Srisanto, Erandaru

    2014-01-01

    A video game can be used to convey a folk tale from one's own country. For countries whose game industries are not yet advanced, this can serve as a way to introduce folk tales. For that reason, this video game was made so that it can at least help reintroduce Indonesian folk tales. It was created with illustration techniques to make the story easy to recognize and to give it its own appeal.

  6. The Art of Video Games

    Science.gov (United States)

    Johnson, Mark M.

    2012-01-01

    The Smithsonian American Art Museum has created and will tour an exhibition on a most unusual but extremely popular art form--"The Art of Video Games." As one of the largest and first of its type, this exhibition will document and explore a 40-year evolution of video games as an artistic medium, with a focus on striking visual effects and the…

  7. Video Streaming in Online Learning

    Science.gov (United States)

    Hartsell, Taralynn; Yuen, Steve Chi-Yin

    2006-01-01

    The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…

  8. Teaching Idioms: Video or Lecture.

    Science.gov (United States)

    Kenyon, Patricia; Daly, Kimberly

    1991-01-01

    A study evaluated the effectiveness of video instruction in teaching the meanings and uses of idioms to 20 deaf adolescents. Students improved their knowledge and use of idioms more when exposed to the video/discussion approach than to the lecture/discussion approach. (DB)

  9. Video Games as Moral Educators?

    Science.gov (United States)

    Khoo, Angeline

    2012-01-01

    The growing interest in video gaming is matched by a corresponding increase in concerns about the harmful effects on children and adolescents. There are numerous studies on aggression and addiction which spark debates on the negative effects of video gaming. At the same time, there are also studies demonstrating prosocial effects. This paper…

  10. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... National Eye Institute’s mission is to “conduct and support research, training, health information dissemination, and other programs ...

  11. Epistemic Authority, Lies, and Video

    DEFF Research Database (Denmark)

    Andersen, Rune Saugmann

    2013-01-01

    This article analyses how videos of violent protests become politically powerful arguments able to intervene in debates about security. It does so by looking at a series of videos taken by police authorities and protesters during street battles in Copenhagen in August 2009, when protesters oppose...

  12. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    Full Text Available The analysis of video acquired with a wearable camera is a challenge that multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with computationally efficient semisupervised method leveraging unlabeled video sequences for an improved indexing performance. The proposed approach was applied on challenging video corpora. Experiments on a public and a real-world video sequence databases show the gain brought by the different stages of the method.

  13. Automated Video Detection of Epileptic Convulsion Slowing as a Precursor for Post-Seizure Neuronal Collapse.

    Science.gov (United States)

    Kalitzin, Stiliyan N; Bauer, Prisca R; Lamberts, Robert J; Velis, Demetrios N; Thijs, Roland D; Lopes Da Silva, Fernando H

    2016-12-01

    Automated monitoring and alerting for adverse events in people with epilepsy can provide higher security and quality of life for those who suffer from this debilitating condition. Recently, we found a relation between clonic slowing at the end of a convulsive seizure (CS) and the occurrence and duration of a subsequent period of postictal generalized EEG suppression (PGES). Prolonged periods of PGES can be predicted by the amount of progressive increase of interclonic intervals (ICIs) during the seizure. The purpose of the present study is to develop an automated, remote video sensing-based algorithm for real-time detection of significant clonic slowing that can be used to alert for PGES. This may help preventing sudden unexpected death in epilepsy (SUDEP). The technique is based on our previously published optical flow video sequence processing paradigm that was applied for automated detection of major motor seizures. Here, we introduce an integral Radon-like transformation on the time-frequency wavelet spectrum to detect log-linear frequency changes during the seizure. We validate the automated detection and quantification of the ICI increase by comparison to the results from manually processed electroencephalography (EEG) traces as "gold standard". We studied 48 cases of convulsive seizures for which synchronized EEG-video recordings were available. In most cases, the spectral ridges obtained from Gabor-wavelet transformations of the optical flow group velocities were in close proximity to the ICI traces detected manually from EEG data during the seizure. The quantification of the slowing-down effect measured by the dominant angle in the Radon transformed spectrum was significantly correlated with the exponential ICI increase factors obtained from manual detection. If this effect is validated as a reliable precursor of PGES periods that lead to or increase the probability of SUDEP, the proposed method would provide an efficient alerting device.

  14. An analysis of lecture video utilization in undergraduate medical education: associations with performance in the courses

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Arcot

    2009-01-01

    Full Text Available Abstract. Background: Increasing numbers of medical schools are providing videos of lectures to their students. This study sought to analyze utilization of lecture videos by medical students in their basic science courses and to determine if student utilization was associated with performance on exams. Methods: Streaming videos of lectures (n = 149) to first-year and second-year medical students (n = 284) were made available through a password-protected server. Server logs were analyzed over a 10-week period for both classes. For each lecture, the logs recorded the time and location from which students accessed the file. A survey was administered at the end of the courses to obtain additional information about student use of the videos. Results: There was a wide disparity in the level of use of lecture videos by medical students, with the majority of students accessing the lecture videos sparingly (60% of the students viewed less than 10% of the available videos). The anonymous student survey revealed that students tended to view the videos by themselves from home during weekends and prior to exams. Students who accessed lecture videos more frequently had significantly (p …). Conclusion: We conclude that videos of lectures are used by relatively few medical students and that individual use of videos is associated with the degree to which students are having difficulty with the subject matter.

  15. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.

  16. Video game induced knuckle pad.

    Science.gov (United States)

    Rushing, Mary E; Sheehan, Daniel J; Davis, Loretta S

    2006-01-01

    Controversy and concern surround the video game playing fascination of children. Scientific reports have explored the negative effects of video games on youth, with a growing number recognizing the actual physical implications of this activity. We offer another reason to discourage children's focus on video games: knuckle pads. A 13-year-old black boy presented with an asymptomatic, slightly hyperpigmented plaque over his right second distal interphalangeal joint. A punch biopsy specimen confirmed knuckle pad as the diagnosis, and a traumatic etiology from video game playing was suspected. Knuckle pads can be painful, cosmetically unappealing, and refractory to treatment. They can now be recognized as yet another potential adverse consequence of chronic video game playing.

  17. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  18. Color image and video enhancement

    CERN Document Server

    Lecca, Michela; Smolka, Bogdan

    2015-01-01

    This text covers state-of-the-art color image and video enhancement techniques. The book examines the multivariate nature of color image/video data as it pertains to contrast enhancement, color correction (equalization, harmonization, normalization, balancing, constancy, etc.), noise removal and smoothing. This book also discusses color and contrast enhancement in vision sensors and applications of image and video enhancement. Key points: it focuses on enhancement of color images/video; addresses algorithms for enhancing color images and video; and presents coverage of super-resolution, restoration, inpainting, and colorization.

  19. Smart Streaming for Online Video Services

    OpenAIRE

    Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming

    2013-01-01

    Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...

  20. Personalized video summarization based on group scoring

    OpenAIRE

    Darabi, K; G. Ghinea

    2014-01-01

    In this paper an expert-based model for generation of personalized video summaries is suggested. The video frames are initially scored and annotated by multiple video experts. Thereafter, the scores for the video segments that have been assigned the higher priorities by end users will be upgraded. Considering the required summary length, the highest scored video frames will be inserted into a personalized final summary. For evaluation purposes, the video summaries generated by our system have...