WorldWideScience

Sample records for video sequences captured

  1. Video Screen Capture Basics

    Science.gov (United States)

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information about two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, is also discussed. Practical applications for video screen capture are given.

  2. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on hand-crafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamic events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, by contrast, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow with three different deep models (a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured by a non-static camera on an aerial platform at varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset, and are compared in terms of training accuracy, testing accuracy, and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. On an ordinary Central Processing Unit (CPU), H-ELM training takes 445 seconds; learning in S-CNN takes 770 seconds on a high-performance Graphical Processing Unit (GPU).
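
    The motion front end described above can be illustrated, very roughly, by a thresholded temporal difference over grayscale NumPy frames. This is only a crude stand-in for the dense optical flow the paper actually uses; a real detector would feed the resulting candidate regions to a CNN or ELM classifier:

```python
import numpy as np

def motion_candidates(prev_frame, frame, thresh=25):
    """Crude motion cue: absolute temporal difference, thresholded.

    A simplified stand-in for the optical-flow stage; candidate
    regions would normally be passed to a learned classifier.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)  # 1 = candidate motion pixel

# toy frames: a bright 3x3 "person" moves one pixel to the right
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[2:5, 2:5] = 200
curr[2:5, 3:6] = 200
mask = motion_candidates(prev, curr)  # fires only on trailing/leading edges
```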

  3. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In the last decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.

  4. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    Science.gov (United States)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application, or exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file-size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The …

  5. Capture and playback synchronization in video conferencing

    Science.gov (United States)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching-based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved by more advanced network architectures, as ATM has promised. This paper presents solutions that are useful at end-station terminals in today's massively deployed packet-switching networks. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing incoming packets to identify the complete data blocks in the compressed data stream that can be decoded independently. It is also responsible for concealing the effects of clock mismatch, lip-synchronization errors, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to multiparty teleconferencing environments. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
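
    The reordering job of the compression-domain buffer can be sketched as a sequence-number jitter buffer that releases only contiguous runs of packets. Names and structure here are illustrative; the unit described above additionally parses independently decodable blocks and conceals clock mismatch:

```python
from heapq import heappush, heappop

class JitterBuffer:
    """Reorder packets by sequence number; release only contiguous runs."""

    def __init__(self):
        self.heap = []      # min-heap of (sequence number, payload)
        self.next_seq = 0   # next sequence number the decoder expects

    def push(self, seq, payload):
        heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Return all payloads now in order; later packets keep waiting."""
        ready = []
        while self.heap and self.heap[0][0] == self.next_seq:
            ready.append(heappop(self.heap)[1])
            self.next_seq += 1
        return ready

buf = JitterBuffer()
buf.push(1, "B"); buf.push(0, "A"); buf.push(3, "D")  # out-of-sequence arrival
out = buf.pop_ready()  # releases "A", "B"; packet 3 waits for packet 2
```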

  6. Reduced attentional capture in action video game players.

    Science.gov (United States)

    Chisholm, Joseph D; Hickey, Clayton; Theeuwes, Jan; Kingstone, Alan

    2010-04-01

    Recent studies indicate that playing action video games improves performance on a number of attention-based tasks. However, it remains unclear whether action video game experience primarily affects endogenous or exogenous forms of spatial orienting. To examine this issue, action video game players and non-action video game players performed an attentional capture task. The results show that action video game players responded quicker than non-action video game players, both when a target appeared in isolation and when a salient, task-irrelevant distractor was present in the display. Action video game players additionally showed a smaller capture effect than did non-action video game players. When coupled with the findings of previous studies, the collective evidence indicates that extensive experience with action video games may enhance players' top-down attentional control, which, in turn, can modulate the negative effects of bottom-up attentional capture.

  7. A video annotation methodology for interactive video sequence generation

    NARCIS (Netherlands)

    C.A. Lindley; R.A. Earnshaw; J.A. Vince

    2001-01-01

    The FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) has developed an experimental environment for dynamic virtual video sequence synthesis from databases of video data. A major issue for the development of dynamic interactive video applications …

  8. General Video Game AI: Learning from Screen Capture

    OpenAIRE

    Kunanusont, Kamolwan; Lucas, Simon M.; Perez-Liebana, Diego

    2017-01-01

    General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, th...

  9. Student-Built Underwater Video and Data Capturing Device

    Science.gov (United States)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. The system can shoot time-lapse photography and/or video for up to 3 days at a time. It can be used in remote locations without changing batteries or adding external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easier for users to recharge after use. The data capturing device has the same base and mounting system as the underwater camera. It consists of an Arduino and an SD-card shield capable of collecting continuous temperature and pH readings underwater; the data are logged to the SD card for easy access and recording. The device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage. It also features infrared night-vision capability. The cost to build the device is $500. The goal of the project was to provide a device that marine biologists, teachers, researchers, and citizen scientists can easily use to capture photographic and water-quality data in marine environments over extended periods of time.
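
    The storage claim can be sanity-checked with one line of arithmetic: filling 1 terabyte with 36 hours of video implies an average bitrate of roughly 62 Mbit/s (assuming a decimal terabyte):

```python
# implied average bitrate for 36 h of video on 1 TB of storage
storage_bits = 1e12 * 8            # 1 TB (decimal) in bits
duration_s = 36 * 3600             # 36 hours in seconds
bitrate_mbps = storage_bits / duration_s / 1e6   # ~61.7 Mbit/s
```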

  10. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Science.gov (United States)

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote-scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote-scene IR video sequences, and some of them are not limited to remote-scene or IR video but are generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
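
    For readers unfamiliar with the task being benchmarked, the simplest BS baseline (an exponentially weighted running-average background) and the pixel-wise F-measure typically used in such evaluations can be sketched as follows. This is illustrative only, not one of the algorithms evaluated in the paper:

```python
import numpy as np

def running_avg_bs(frames, alpha=0.1, thresh=30):
    """Exponential running-average background model; returns FG masks."""
    bg = frames[0].astype(np.float64)
    masks = []
    for f in frames[1:]:
        fg = np.abs(f.astype(np.float64) - bg) > thresh
        masks.append(fg)
        bg = (1 - alpha) * bg + alpha * f   # slowly absorb scene changes
    return masks

def f_measure(pred, gt):
    """Pixel-wise F1 score against a ground-truth foreground mask."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# toy sequence: empty scene, then a bright object appears
bg0 = np.zeros((10, 10), dtype=np.uint8)
frame1 = bg0.copy()
frame1[2:4, 2:4] = 255
masks = running_avg_bs([bg0, frame1])
score = f_measure(masks[0], frame1 > 0)   # perfect on this toy case
```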

  11. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Directory of Open Access Journals (Sweden)

    Guangle Yao

    2017-08-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote-scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote-scene IR video sequences, and some of them are not limited to remote-scene or IR video but are generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR.

  12. Spatial-temporal forensic analysis of mass casualty incidents using video sequences.

    Science.gov (United States)

    Hao Dong; Juechen Yin; Schafer, James; Ganz, Aura

    2016-08-01

    In this paper we introduce DIORAMA-based forensic analysis of mass casualty incidents (MCI) using video sequences. The video sequences captured on site are automatically annotated with metadata, which includes the capture time and the camera location and viewing direction. Using a visual interface, MCI investigators can easily see the availability of video clips in specific areas of interest and efficiently review them. The video-based forensic analysis system will enable MCI investigators to better understand rescue operations and subsequently improve training procedures.
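
    A spatial query of the kind the visual interface performs (find clips captured near an area of interest) might look like the sketch below. All field names and the flat local coordinate system are assumptions; the abstract states only that capture time, camera location, and viewing direction are recorded:

```python
from dataclasses import dataclass
import math

@dataclass
class ClipMeta:
    clip_id: str
    t: float        # capture time, seconds (assumed unit)
    x: float        # camera position in local metres (assumed frame)
    y: float
    heading: float  # viewing direction, degrees

def clips_near(clips, x, y, radius):
    """Return clips captured within `radius` metres of a point of interest."""
    return [c for c in clips if math.hypot(c.x - x, c.y - y) <= radius]

clips = [
    ClipMeta("c1", 0.0, 10.0, 10.0, 90.0),
    ClipMeta("c2", 5.0, 12.0, 11.0, 180.0),
    ClipMeta("c3", 9.0, 80.0, 5.0, 0.0),
]
hits = clips_near(clips, 10.0, 10.0, 5.0)   # c3 is too far away
```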

  13. CAN INTERMITTENT VIDEO SAMPLING CAPTURE INDIVIDUAL DIFFERENCES IN NATURALISTIC DRIVING?

    Science.gov (United States)

    Aksan, Nazan; Schall, Mark; Anderson, Steven; Dawson, Jeffery; Tippin, Jon; Rizzo, Matthew

    2013-01-01

    We examined the utility and validity of intermittent video samples from black box devices for capturing individual difference variability in real-world driving performance in an ongoing study of obstructive sleep apnea (OSA) and community controls. Three types of video clips were coded for several dimensions of interest to driving research including safety, exposure, and driver state. The preliminary findings indicated that clip types successfully captured variability along targeted dimensions such as highway vs. city driving, driver state such as distraction and sleepiness, and safety. Sleepiness metrics were meaningfully associated with adherence to PAP (positive airway pressure) therapy. OSA patients who were PAP adherent showed less sleepiness and less non-driving related gaze movements than nonadherent patients. Simple differences in sleepiness did not readily translate to improvements in driver safety, consistent with epidemiologic evidence to date.

  14. Outdoor Markerless Motion Capture With Sparse Handheld Video Cameras.

    Science.gov (United States)

    Wang, Yangang; Liu, Yebin; Tong, Xin; Dai, Qionghai; Tan, Ping

    2017-04-12

    We present a method for outdoor markerless motion capture with sparse handheld video cameras. In the simplest setting, it involves only two mobile phone cameras following the character. This setup maximizes the flexibility of data capture and broadens the applications of motion capture. To solve the character pose in such challenging settings, we build on generative motion capture methods and propose a novel model-view consistency that considers both foreground and background in the tracking stage. The background is modeled as a deformable 2D grid, which allows us to compute background-view consistency for sparse moving cameras. The 3D character pose is tracked with a global-local optimization that minimizes our consistency cost. A novel L1 motion regularizer is also proposed in the optimization to constrain the solution pose space. The whole pipeline is simple, as frame-by-frame video segmentation is not required. Our method outperforms several alternative methods on the various examples demonstrated in the paper.

  15. A novel visual saliency detection method for infrared video sequences

    Science.gov (United States)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit a lot from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
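
    The symmetric difference idea can be illustrated with a three-frame version: a pixel is temporally salient only if it differs from both a past and a future frame, which suppresses one-off background flicker. This is a simplification of the multi-frame scheme described above:

```python
import numpy as np

def symmetric_difference_saliency(prev, curr, nxt):
    """Three-frame symmetric difference: min of backward/forward differences."""
    d1 = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    d2 = np.abs(curr.astype(np.int16) - nxt.astype(np.int16))
    return np.minimum(d1, d2)   # high only where BOTH differences are high

prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.zeros((4, 4), dtype=np.uint8)
nxt = np.zeros((4, 4), dtype=np.uint8)
curr[1, 1] = 100   # transient object present in the current frame
prev[0, 0] = 50    # one-off flicker in the past frame only
sal = symmetric_difference_saliency(prev, curr, nxt)
```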

  16. Hybridization Capture Using Short PCR Products Enriches Small Genomes by Capturing Flanking Sequences (CapFlank)

    DEFF Research Database (Denmark)

    Tsangaras, Kyriakos; Wales, Nathan; Sicheritz-Pontén, Thomas

    2014-01-01

    Solution hybridization capture methods utilize biotinylated oligonucleotides as baits to enrich homologous sequences from next generation sequencing (NGS) libraries. Coupled with NGS, the method generates kilo to gigabases of high confidence consensus targeted sequence. However, in many experiments...

  17. Spatially Varying Image Based Lighting by Light Probe Sequences, Capture, Processing and Rendering

    OpenAIRE

    Unger, Jonas; Gustavson, Stefan; Ynnerman, Anders

    2007-01-01

    We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photome...

  18. Hybridization capture using short PCR products enriches small genomes by capturing flanking sequences (CapFlank.

    Directory of Open Access Journals (Sweden)

    Kyriakos Tsangaras

    Solution hybridization capture methods utilize biotinylated oligonucleotides as baits to enrich homologous sequences from next-generation sequencing (NGS) libraries. Coupled with NGS, the method generates kilobases to gigabases of high-confidence consensus targeted sequence. However, in many experiments a non-negligible fraction of the resulting sequence reads are not homologous to the bait. We demonstrate that during capture, the bait-hybridized library molecules iteratively add flanking library sequences, such that baits limited to targeting relatively short regions (e.g., a few hundred nucleotides) can result in enrichment across entire mitochondrial and bacterial genomes. Our findings suggest that some of the off-target sequences derived in capture experiments are non-randomly enriched, and that CapFlank will facilitate targeted enrichment of large contiguous sequences with minimal prior target sequence information.

  19. Reduced attentional capture in action video game players

    NARCIS (Netherlands)

    Chisholm, J; Hickey, C.; Theeuwes, J.; Kingstone, A.

    2010-01-01

    Recent studies indicate that playing action video games improves performance on a number of attention-based tasks. However, it remains unclear whether action video game experience primarily affects endogenous or exogenous forms of spatial orienting. To examine this issue, action video game players

  20. Teacher Self-Captured Video: Learning to See

    Science.gov (United States)

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Videos are often used for demonstration and evaluation, but a more productive approach would be using video to support teachers' ability to notice and interpret classroom interactions. That requires thinking carefully about the physical aspects of shooting video--where the camera is placed and how easily student interactions can be heard--as well…

  1. A Novel Mobile Video Community Discovery Scheme Using Ontology-Based Semantical Interest Capture

    Directory of Open Access Journals (Sweden)

    Ruiling Zhang

    2016-01-01

    Leveraging network virtualization technologies, community-based video systems rely on the measurement of common interests to define stable relationships between community members, which promotes video-sharing performance and improves the scalability of the community structure. In this paper, we propose a novel mobile Video Community discovery scheme using Ontology-based Semantical Interest capture (VCOSI). An ontology-based semantic extension approach is proposed, which describes video content and measures video similarity according to video keyword selection methods. In order to reduce the computational load of video similarity, VCOSI designs a prefix-filtering-based estimation algorithm to decrease the energy consumption of mobile nodes. VCOSI further proposes a member-relationship estimation method to construct scalable and resilient node communities, which promotes the video-sharing capacity of video systems with flexible and economical community maintenance. Extensive tests show that VCOSI obtains better performance results in comparison with other state-of-the-art solutions.
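
    The general idea behind prefix filtering (VCOSI's exact estimation algorithm is not given in the abstract, so this is a generic sketch) is that two keyword sets with Jaccard similarity at least t must share a token within a short prefix of a canonical ordering, so most pairs can be pruned without a full similarity computation:

```python
import math
from collections import defaultdict

def prefix_filter_candidates(sets, t):
    """Return candidate pairs (i, j) that may have Jaccard >= t.

    For a set of size n, any set with Jaccard >= t must share at
    least one of its first n - ceil(t * n) + 1 tokens in a global
    canonical order; pairs sharing no prefix token are pruned.
    """
    index = defaultdict(set)   # prefix token -> ids of sets seen so far
    candidates = set()
    for i, s in enumerate(sets):
        tokens = sorted(s)                                  # canonical order
        plen = len(tokens) - math.ceil(t * len(tokens)) + 1
        for tok in tokens[:plen]:
            for j in index[tok]:
                candidates.add((j, i))
            index[tok].add(i)
    return candidates

keywords = [{"goal", "match", "replay"},
            {"goal", "match", "stadium"},
            {"cat", "funny", "pets"}]
pairs = prefix_filter_candidates(keywords, 0.6)  # only the first two can match
```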

  2. Making Educational and Scholarly Videos with Screen Capture Software

    Directory of Open Access Journals (Sweden)

    Mark L. Burkey

    2015-11-01

    This resource describes several options for making educational videos using “screencasting”, or “screen capture”, software. The author (who has over 300 screencast videos on YouTube, indexed on his website, www.burkeyacademy.com) describes the software and hardware tools needed, including some open-source and free-to-use tools. Links to some “how-to” videos are included, as well as links to other example videos demonstrating novel professional uses for screencasting.

  3. ALGORITHMS FOR AUTOMATIC RUNWAY DETECTION ON VIDEO SEQUENCES

    Directory of Open Access Journals (Sweden)

    A. I. Logvin

    2015-01-01

    The article discusses an algorithm for automatic runway detection on video sequences. The main stages of the algorithm are presented. Some methods to increase the reliability of recognition are described.

  4. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  5. Gait Analysis by Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    2009-01-01

    The project presented in this article aims to develop software so that close-range photogrammetry with sufficient accuracy can be used to point out the most frequent foot malpositions and monitor the effect of the traditional treatment. The project is carried out as a cooperation between the Orthopaedic Surgery in Northern Jutland and the Laboratory for Geoinformatics, Aalborg University. The superior requirements on the system are that it shall be without heavy expenses, easy to install, and easy to operate. A first version of the system is designed to measure the navicular height and the calcaneus angle during gait. In the introductory phase of the project the task has been to select, purchase, and set up hardware, select and purchase software for video streaming, and develop special software for automated registration of the position of the foot during gait by Multi Video …

  6. Capturing Better Photos and Video with your iPhone

    CERN Document Server

    Thomas, J Dennis; Sammon, Rick

    2011-01-01

    Offers unique advice for taking great photos and videos with your iPod or iPhone. Packed with unique advice, tips, and tricks, this one-of-a-kind, full-color reference presents step-by-step guidance for taking the best possible quality photos and videos using your iPod or iPhone. This unique book walks you through everything from composing a picture, making minor edits, and posting content to using apps to create more dynamic images. You'll quickly put to use this up-to-date coverage of executing both common and uncommon photo and video tasks on your mobile device.

  7. Analysis of simulated angiographic procedures: part 1--capture and presentation of audio and video recordings.

    Science.gov (United States)

    Duncan, James R; Glaiberman, Craig B

    2006-12-01

    To assess different methods of recording angiographic simulations and to determine how such recordings might be used for training and research. Two commercially available high-fidelity angiography simulations, the Mentice Vascular Interventional Simulation Trainer and the Simbionix AngioMentor, were used for data collection. Video and audio records of simulated procedures were created by different methods, including software-based screen capture, video splitters and converters, and external cameras. Recording parameters were varied, and the recordings were transferred to computer workstations for postprocessing and presentation. The information displayed on the simulators' computer screens could be captured by each method. Although screen-capture software provided the highest resolution, workflow considerations favored a hardware-based solution that duplicated the video signal and recorded the data stream(s) at lower resolutions. Additional video and audio recording devices were used to monitor the angiographer's actions during the simulated procedures. The multiple audio and video files were synchronized and composited with personal computers equipped with commercially available video editing software. Depending on the needs of the intended audience, the resulting files could be distributed and displayed at full or reduced resolutions. The capture, editing, presentation, and distribution of synchronized multichannel audio and video recordings holds great promise for angiography training and simulation research. To achieve this potential, technical challenges will need to be met, and content will need to be tailored to suit the needs of trainees and researchers.

  8. Outreach with video: Using YouTube and screen and lecture capture to reach thousands.

    OpenAIRE

    Gibbs, Graham R.

    2012-01-01

    This talk will report on my experience of creating a variety of types of video learning resources and disseminating them, mainly through YouTube, where my channel has over 300 subscribers and over 100,000 views. The videos have been created either using Camtasia screen-capture software or by videoing lecture sessions. I will discuss some of the techniques for enhancing the video in pedagogically useful ways and some of the production issues for ensuring high-quality production. Then I w…

  9. Multiplexed DNA sequence capture of mitochondrial genomes using PCR products.

    Directory of Open Access Journals (Sweden)

    Tomislav Maricic

    BACKGROUND: To utilize the power of high-throughput sequencers, target enrichment methods have been developed. The majority of these require reagents and equipment that are only available from commercial vendors and are not suitable for targets that are a few kilobases in length. METHODOLOGY/PRINCIPAL FINDINGS: We describe a novel and economical method in which custom-made long-range PCR products are used to capture complete human mitochondrial genomes from complex DNA mixtures. We use the method to capture 46 complete mitochondrial genomes in parallel and we sequence them on a single lane of an Illumina GA(II) instrument. CONCLUSIONS/SIGNIFICANCE: This method is economical and simple and particularly suitable for targets that can be amplified by PCR and do not contain highly repetitive sequences, such as mtDNA. It has applications in population genetics and forensics, as well as studies of ancient DNA.

  10. Adaptive deblocking and deringing of H.264/AVC video sequences

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Forchhammer, Søren

    2013-01-01

    We present a method to reduce blocking and ringing artifacts in H.264/AVC video sequences. For deblocking, the proposed method uses a quality measure of a block-based coded image to find filtering modes. Based on the filtering modes, the images are segmented into three classes and a specific deblocking filter is applied to each class. Deringing is obtained by an adaptive bilateral filter; spatial and intensity spread parameters are selected adaptively using texture and edge mapping. The analysis of objective and subjective experimental results shows that the proposed algorithm is effective in deblocking and deringing low bit-rate H.264 video sequences.
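
    For reference, a plain (non-adaptive) bilateral filter looks as follows; the method above additionally selects the spatial and intensity spread parameters per region from texture and edge maps, whereas here they are fixed:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Edge-preserving smoothing: weights combine spatial and intensity
    proximity, so averaging does not cross strong edges."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

# step edge: smoothing must not blur across the 0/200 boundary
img = np.zeros((8, 8))
img[:, 4:] = 200.0
out = bilateral_filter(img)
```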

  11. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. The video sequences were registered off-line to compensate for eye movements. From the registered sequences, dynamic parameters such as cardiac-cycle-induced reflection changes and eye movements can be calculated and compared between the two eyes.
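
    One standard way to register such frames off-line (the abstract does not state which method is used) is phase correlation, which recovers an integer translation between two frames from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation taking `ref` to `moved`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12      # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real     # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                     # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(ref, (3, 5), axis=(0, 1))   # simulate a small eye movement
shift = phase_correlation_shift(ref, moved)
```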

  12. MAP Estimation of Chin and Cheek Contours in Video Sequences

    Directory of Open Access Journals (Sweden)

    Kampmann Markus

    2004-01-01

    Full Text Available An algorithm for the estimation of chin and cheek contours in video sequences is proposed. This algorithm exploits a priori knowledge about the shape and position of chin and cheek contours in images. Exploiting knowledge about the shape, a parametric 2D model representing chin and cheek contours is introduced. Exploiting knowledge about the position, a MAP estimator is developed taking into account the observed luminance gradient as well as a priori probabilities of chin and cheek contour positions. The proposed algorithm was tested with head-and-shoulder video sequences (image resolution CIF). In nearly 70% of all investigated video frames, a subjectively error-free estimation could be achieved. The 2D estimation error is on average between 2.4 and .

  13. Tracking of Individuals in Very Long Video Sequences

    DEFF Research Database (Denmark)

    Fihl, Preben; Corlin, Rasmus; Park, Sangho

    2006-01-01

    In this paper we present an approach for automatically detecting and tracking humans in very long video sequences. The detection is based on background subtraction using a multi-mode Codeword method. We enhance this method both in terms of representation and in terms of automatically updating the...

  14. The utility of live video capture to enhance debriefing following transcatheter aortic valve replacement.

    Science.gov (United States)

    Seamans, David P; Louka, Boshra F; Fortuin, F David; Patel, Bhavesh M; Sweeney, John P; Lanza, Louis A; DeValeria, Patrick A; Ezrre, Kim M; Ramakrishna, Harish

    2016-10-01

    The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has recently become more feasible with advances in the video and audio capture systems often used in the simulation realm. We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures. The goal was to improve human factors and address situational challenges through review and debriefing. B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing with this system; it resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures, and analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment.

  15. Hardware architectures for real time processing of High Definition video sequences

    OpenAIRE

    Genovese, Mariangela

    2014-01-01

    Currently, application fields such as medicine, space exploration, surveillance, authentication, HDTV, and automated industry inspection require capturing, storing, and processing continuous streams of video data. Consequently, different processing techniques (video enhancement, segmentation, object detection, or video compression, for example) are involved in these applications. Such techniques often require a significant number of operations depending on the algorithm complexity and the video ...

  16. Video Lecture Capture Technology Helps Students Study without Affecting Attendance in Large Microbiology Lecture Courses

    Directory of Open Access Journals (Sweden)

    Jennifer Lynn McLean

    2016-12-01

    Full Text Available Recording lectures using video lecture capture software and making them available for students to watch anytime, from anywhere, has become a common practice in many universities across many disciplines. The software has become increasingly easy to use and is commonly provided and maintained by higher education institutions. Several studies have reported that students use lecture capture to enhance their learning and study for assessments, as well as to catch up on material they miss when they cannot attend class due to extenuating circumstances. Furthermore, students with disabilities and students from non-English-speaking backgrounds (NESB) may benefit from being able to watch the video lecture captures at their own pace. Yet the effect of this technology on class attendance remains a controversial topic and largely unexplored in undergraduate microbiology education. Here, we show that when video lecture captures were available in our large-enrollment general microbiology courses, attendance did not decrease. In fact, the majority of students reported that having the videos available did not encourage them to skip class, but rather that they used them as a study tool. When we surveyed NESB students and nontraditional students about their attitudes toward this technology, they found it helpful for their learning and for keeping up with the material.

  17. Video capture on student-owned mobile devices to facilitate psychomotor skills acquisition: A feasibility study.

    Science.gov (United States)

    Hinck, Glori; Bergmann, Thomas F

    2013-01-01

    Objective: We evaluated the feasibility of using mobile device technology to allow students to record their own psychomotor skills so that these recordings can be used for self-reflection and formative evaluation. Methods: Students were given the choice of using DVD recorders, zip drive video capture equipment, or their personal mobile phone, device, or digital camera to record specific psychomotor skills. During the last week of the term, they were asked to complete a 9-question survey regarding their recording experience, including details of mobile phone ownership, technology preferences, technical difficulties, and satisfaction with the recording experience and video critique process. Results: Of those completing the survey, 83% currently owned a mobile phone with video capability. Of the mobile phone owners, 62% reported having email capability on their phone and that they could transfer their video recording successfully to their computer, making it available for upload to the learning management system. Viewing the video recording of the psychomotor skill was valuable to 88% of respondents. Conclusions: Our results suggest that mobile phones are a viable technology for the video capture and critique of psychomotor skills, as most students own this technology and their satisfaction with this method is high.

  18. Video capture virtual reality as a flexible and effective rehabilitation tool

    Directory of Open Access Journals (Sweden)

    Katz Noomi

    2004-12-01

    Full Text Available Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation.

  19. Sequence Capture versus Restriction Site Associated DNA Sequencing for Shallow Systematics.

    Science.gov (United States)

    Harvey, Michael G; Smith, Brian Tilston; Glenn, Travis C; Faircloth, Brant C; Brumfield, Robb T

    2016-09-01

    Sequence capture and restriction site associated DNA sequencing (RAD-Seq) are two genomic enrichment strategies for applying next-generation sequencing technologies to systematics studies. At shallow timescales, such as within species, RAD-Seq has been widely adopted among researchers, although there has been little discussion of the potential limitations and benefits of RAD-Seq and sequence capture. We discuss a series of issues that may impact the utility of sequence capture and RAD-Seq data for shallow systematics in non-model species. We review prior studies that used both methods, and investigate differences between the methods by re-analyzing existing RAD-Seq and sequence capture data sets from a Neotropical bird (Xenops minutus). We suggest that the strengths of RAD-Seq data sets for shallow systematics are the wide dispersion of markers across the genome, the relative ease and cost of laboratory work, the deep coverage and read overlap at recovered loci, and the high overall information that results. Sequence capture's benefits include flexibility and repeatability in the genomic regions targeted, success using low-quality samples, more straightforward read orthology assessment, and higher per-locus information content. The utility of a method in systematics, however, rests not only on its performance within a study, but on the comparability of data sets and inferences with those of prior work. In RAD-Seq data sets, comparability is compromised by low overlap of orthologous markers across species and the sensitivity of genetic diversity in a data set to an interaction between the level of natural heterozygosity in the samples examined and the parameters used for orthology assessment. In contrast, sequence capture of conserved genomic regions permits interrogation of the same loci across divergent species, which is preferable for maintaining comparability among data sets and studies for the purpose of drawing general conclusions about the impact of

  20. Google Glass Video Capture of Cardiopulmonary Resuscitation Events: A Pilot Simulation Study.

    Science.gov (United States)

    Kassutto, Stacey M; Kayser, Joshua B; Kerlin, Meeta P; Upton, Mark; Lipschik, Gregg; Epstein, Andrew J; Dine, C Jessica; Schweickert, William

    2017-12-01

    Video recording of resuscitation from fixed camera locations has been used to assess adherence to guidelines and provide feedback on performance. However, inpatient cardiac arrests often happen in unpredictable locations and crowded rooms, making video recording of these events problematic. We sought to understand the feasibility of Google Glass (GG) as a method for recording inpatient cardiac arrests and capturing salient resuscitation factors for post-event review. This observational study involved recording simulated cardiac arrest events on inpatient medical wards. Each simulation was reviewed by 3 methods: in-room physician direct observation, stationary video camera (SVC), and GG. Nurse and physician specialists analyzed the videos for global visibility and audibility, as well as recording quality of predefined resuscitation events and behaviors. Resident code leaders were surveyed regarding attitudes toward GG use in the clinical emergency setting. Of 11 simulated cardiac arrest events, 9 were successfully recorded by all observation methods (1 GG failure, 1 SVC failure). GG was judged slightly better than SVC recording for average global visualization (3.95 versus 3.15, P = .0003) and average global audibility (4.77 versus 4.42, P = .002). Of the GG videos, 19% had limitations in overall interpretability compared with 35% of SVC recordings (P = .039). All 10 survey respondents agreed that GG was easy to use; however, 2 found it distracting and 3 were uncomfortable with future use during actual resuscitations. GG is a feasible and acceptable method for capturing simulated resuscitation events in the inpatient setting.

  1. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  2. The high resolution video capture system on the alcator C-Mod tokamak

    Science.gov (United States)

    Allen, A. J.; Terry, J. L.; Garnier, D.; Stillerman, J. A.; Wurden, G. A.

    1997-01-01

    A new system for routine digitization of video images is presently operating on the Alcator C-Mod tokamak. The PC-based system features high resolution video capture, storage, and retrieval. The captured images are stored temporarily on the PC, but are eventually written to CD. Video is captured from one of five filtered RS-170 CCD cameras at 30 frames per second (fps) with 640×480 pixel resolution. In addition, the system can digitize the output from a filtered Kodak Ektapro EM Digital Camera which captures images at 1000 fps with 239×192 resolution. Present views of this set of cameras include a wide angle and a tangential view of the plasma, two high resolution views of gas puff capillaries embedded in the plasma facing components, and a view of ablating, high speed Li pellets. The system is being used to study (1) the structure and location of visible emissions (including MARFEs) from the main plasma and divertor, (2) asymmetries in gas puff plumes due to flows in the scrape-off layer (SOL), and (3) the tilt and cigar-shaped spatial structure of the Li pellet ablation cloud.

  3. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation from the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve the existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilation than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a new, fast criterion of suitable image selection from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.
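
    The frame selection step described above can be illustrated with a toy sharpness criterion. The gradient-energy score below is a common proxy for image quality, not the authors' actual selection criterion, and the frame data are invented.

```python
# Toy "suitable image selection": score each frame of a sequence by
# horizontal gradient energy (a simple sharpness proxy, NOT the paper's
# criterion) and keep the sharpest one. Frames are flat row-major lists.
def sharpness(row_major, width):
    score = 0
    for i in range(len(row_major) - 1):
        if (i + 1) % width:  # skip pairs that straddle a row boundary
            score += (row_major[i + 1] - row_major[i]) ** 2
    return score

def best_frame(frames, width):
    # Index of the frame with the highest sharpness score
    return max(range(len(frames)), key=lambda i: sharpness(frames[i], width))
```

    A defocused frame (flat intensities) scores near zero, while a frame with strong local contrast scores high and is selected.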

  4. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

    Low-resolution and unsharp facial images are often captured from surveillance videos because of the long human-camera distance and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movement and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static camera and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is also employed to approximate the human face location and velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. A clear close-up facial image of a moving human can then be captured by controlling the active camera with the corresponding amounts of pan, tilt, and zoom. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing facial images of a walking human clearly on the first attempt in 90% of the test cases.

  5. Motion-compensated scan conversion of interlaced video sequences

    Science.gov (United States)

    Schultz, Richard R.; Stevenson, Robert L.

    1996-03-01

    When an interlaced image sequence is viewed at the rate of sixty frames per second, the human visual system interpolates the data so that the missing fields are not noticeable. However, if frames are viewed individually, interlacing artifacts are quite prominent. This paper addresses the problem of deinterlacing image sequences for the purposes of analyzing video stills and generating high-resolution hardcopy of individual frames. Multiple interlaced frames are temporally integrated to estimate a single progressively-scanned still image, with motion compensation used between frames. A video observation model is defined which incorporates temporal information via estimated interframe motion vectors. The resulting ill- posed inverse problem is regularized through Bayesian maximum a posteriori (MAP) estimation, utilizing a discontinuity-preserving prior model for the spatial data. Progressively- scanned estimates computed from interlaced image sequences are shown at several spatial interpolation factors, since the multiframe Bayesian scan conversion algorithm is capable of simultaneously deinterlacing the data and enhancing spatial resolution. Problems encountered in the estimation of motion vectors from interlaced frames are addressed.
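
    The baseline that motion-compensated MAP deinterlacing improves upon is plain intra-field interpolation. The sketch below assumes a single even-line field stored as a list of rows and fills each missing odd line by vertical averaging; it is an illustration of the problem setup, not the Bayesian estimator itself.

```python
# Baseline intra-field deinterlacer: reconstruct a progressive frame from
# one (even-line) field by averaging vertical neighbors. The MAP approach
# above instead fuses multiple motion-compensated fields.
def deinterlace_even_field(field):
    # field[i] holds scan line 2*i of the progressive frame
    frame = []
    for i, line in enumerate(field):
        frame.append(line[:])
        if i + 1 < len(field):
            nxt = field[i + 1]
            # Missing odd line: average the even lines above and below
            frame.append([(a + b) / 2 for a, b in zip(line, nxt)])
        else:
            # Last missing line has no neighbor below: replicate
            frame.append(line[:])
    return frame
```

    Line averaging blurs vertical detail and cannot recover motion between fields, which is exactly the artifact the multiframe Bayesian method targets.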

  6. Recognizing surgeon's actions during suture operations from video sequences

    Science.gov (United States)

    Li, Ye; Ohya, Jun; Chiba, Toshio; Xu, Rong; Yamashita, Hiromasa

    2014-03-01

    Because of the shortage of nurses in the world, the realization of a robotic nurse that can support surgeries autonomously is very important. More specifically, the robotic nurse should be able to autonomously recognize different situations during surgeries so that it can pass the necessary surgical tools to the medical doctors in a timely manner. This paper proposes and explores methods that can classify suture and tying actions during suture operations from a video sequence that observes the surgery scene, including the surgeon's hands. First, the proposed method uses skin pixel detection and foreground extraction to detect the hand area. Then, interest points are randomly chosen from the hand area and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical K-means to these descriptors, and the word frequency histogram, which corresponds to the feature space, is computed. Finally, to classify the actions, either an SVM (Support Vector Machine), the Nearest Neighbor rule (NN) in the feature space, or a method that combines a "sliding window" with NN is applied. We collected 53 suture videos and 53 tying videos to build the training set and to test the proposed method experimentally. The NN rule achieves accuracies higher than 90%, giving better recognition than the SVM. Negative actions, which differ from both suture and tying actions, are recognized with quite good accuracy, while the "sliding window" method did not show significant improvements for suture and tying and cannot recognize negative actions.

  7. Heart rate measurement based on face video sequence

    Science.gov (United States)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
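
    Whatever the separation technique (BSST or CSPT), the final step amounts to locating the dominant frequency of a mean-removed intensity trace. The sketch below is a brute-force DFT power spectrum applied to a synthetic 72 bpm signal, not the paper's data or its cross-spectral formulation.

```python
import math

# Find the dominant frequency (Hz) of a signal via a brute-force DFT
# power spectrum. O(n^2), fine for short traces like a 10 s face video.
def dominant_frequency(signal, fs):
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]  # remove DC before the spectrum
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n  # frequency of the strongest bin

fs = 30.0                    # camera frame rate (frames per second)
f_heart = 1.2                # simulated pulse: 1.2 Hz = 72 bpm
trace = [0.5 + 0.1 * math.sin(2 * math.pi * f_heart * t / fs)
         for t in range(300)]  # 10 s of frame-averaged skin intensities
bpm = 60.0 * dominant_frequency(trace, fs)
```

    With a clean synthetic trace the strongest bin recovers the pulse rate exactly; real remote PPG traces need the source separation or cross-spectral step to suppress motion and lighting noise first.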

  8. On the relationship between perceptual impact of source and channel distortions in video sequences

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong

    2010-01-01

    It is known that peak signal-to-noise ratio (PSNR) can be used for assessing the relative qualities of distorted video sequences meaningfully only if the compared sequences contain similar types of distortions. In this paper, we propose a model for rough assessment of the bias in PSNR results when video sequences with both channel and source distortion are compared against video sequences with source distortion only. The proposed method can be used to compare the relative perceptual quality levels of video sequences with different distortion types more reliably than using plain PSNR.
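
    The premise is easy to reproduce: two distortions with equal mean squared error yield identical PSNR no matter how the error is distributed, even though a localized channel-style error is usually far more visible. A minimal sketch with made-up frames:

```python
import math

# PSNR between two equally sized frames stored as flat lists of 8-bit
# luma samples. Frame contents below are invented for illustration.
def psnr(ref, dist, peak=255.0):
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

ref = [128] * 100
source_like = [129] * 100          # mild error spread over every sample
channel_like = [128] * 99 + [138]  # one large, localized error
# Both distortions have MSE = 1, hence identical PSNR (~48.1 dB),
# even though their perceptual impact would differ.
```

    This is the bias the proposed model corrects for: PSNR is blind to the spatial and temporal concentration of errors that makes channel distortion perceptually worse.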

  9. Captured metagenomics: large-scale targeting of genes based on 'sequence capture' reveals functional diversity in soils.

    Science.gov (United States)

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K; Hedlund, Katarina; Ahrén, Dag

    2015-12-01

    Microbial enzyme diversity is a key to understand many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to large amount of sequencing that is required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity and their taxonomic distribution correlated well with the traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach in large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. © The Author 2015. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  10. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    OpenAIRE

    Steven Nicholas Graves, MA; Deana Saleh Shenaq, MD; Alexander J. Langerman, MD; David H. Song, MD, MBA, FACS

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used ...

  11. Insertion of impairments in test video sequences for quality assessment based on psychovisual characteristics

    OpenAIRE

    López Velasco, Juan Pedro; Rodrigo Ferrán, Juan Antonio; Jiménez Bermejo, David; Menendez Garcia, Jose Manuel

    2014-01-01

    Assessing video quality is a complex task. While most pixel-based metrics do not present enough correlation between objective and subjective results, algorithms need to correspond to human perception when analyzing quality in a video sequence. For analyzing the perceived quality derived from concrete video artifacts in determined region of interest we present a novel methodology for generating test sequences which allow the analysis of impact of each individual distortion. Through results obt...

  12. Sub-band/transform compression of video sequences

    Science.gov (United States)

    Sauer, Ken; Bauer, Peter

    1992-01-01

    The progress on compression of video sequences is discussed. The overall goal of the research was the development of data compression algorithms for high-definition television (HDTV) sequences, but most of our research is general enough to be applicable to much more general problems. We have concentrated on coding algorithms based on both sub-band and transform approaches. Two very fundamental issues arise in designing a sub-band coder. First, the form of the signal decomposition must be chosen to yield band-pass images with characteristics favorable to efficient coding. A second basic consideration is the form of the coders to be applied to each sub-band, and whether coding is to be done in two or three dimensions. Computational simplicity is of the essence. We review the first portion of the year, during which we improved and extended some of the previous grant period's results. The pyramid nonrectangular sub-band coder limited to intra-frame application is discussed. Perhaps the most critical component of the sub-band structure is the design of band-splitting filters. We apply very simple recursive filters, which operate at alternating levels on rectangularly sampled and quincunx-sampled images. We also cover the techniques we have studied for the coding of the resulting bandpass signals. We discuss adaptive three-dimensional coding which takes advantage of the detection algorithm developed last year. To this point, all the work on this project has been done without the benefit of motion compensation (MC). Motion compensation is included in many proposed codecs, but adds significant computational burden and hardware expense. We have sought to find a lower-cost alternative featuring a simple adaptation to motion in the form of the codec. In sequences of high spatial detail and zooming or panning, it appears that MC will likely be necessary for the proposed quality and bit rates.
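
    A one-level Haar split illustrates the band-splitting idea in its simplest form. The filters described above are recursive and operate on 2-D rectangular and quincunx grids, which this 1-D sketch does not capture; it only shows the analysis/synthesis structure and perfect reconstruction.

```python
# One-level Haar sub-band split of a 1-D signal (even length) into a
# low-pass band (pairwise averages) and a high-pass band (pairwise
# differences), with exact perfect-reconstruction synthesis.
def haar_analysis(x):
    low = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    x = []
    for l, h in zip(low, high):
        x.extend([l + h, l - h])  # invert average/difference exactly
    return x
```

    In a coder, the low band would be split further (the pyramid) and each band quantized according to its statistics; the synthesis step runs at the decoder.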

  13. Fuzzy Logic-Based Scenario Recognition from Video Sequences

    Directory of Open Access Journals (Sweden)

    E. Elbaşi

    2013-10-01

    Full Text Available In recent years, video surveillance and monitoring have gained importance because of security and safety concerns. Banks, borders, airports, stores, and parking areas are the important application areas. There are two main parts in scenario recognition. The first is low-level processing, including moving object detection, object tracking, and feature extraction. Through this work we have developed new features, namely RUD (relative upper density), RMD (relative middle density), and RLD (relative lower density), and we have used other features such as the aspect ratio, width, height, and color of the object. The second is high-level processing, including event start-end point detection, activity detection for each frame, and scenario recognition for a sequence of images. This part is the focus of our research; different pattern recognition and classification methods are implemented and the experimental results are analyzed. We looked into several methods of classification: decision trees, frequency domain classification, neural network-based classification, the Bayes classifier, and pattern recognition methods such as control charts and hidden Markov models. The control chart approach, which is a decision methodology, gives more promising results than the other methodologies. Overlapping between events is one of the problems, hence we applied a fuzzy logic technique to solve it. After using this method, the total accuracy increased from 95.6% to 97.2%.
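
    The control-chart idea can be sketched as flagging frames whose feature value leaves mean ± 3σ control limits estimated from a baseline run. The feature values below are invented for illustration; the paper's actual features (RUD, RMD, RLD, etc.) and chart design are more elaborate.

```python
# Minimal control-chart event detector: estimate control limits from a
# baseline feature trace, then flag frames in a new trace that fall
# outside mean +/- k*sigma. Feature values are synthetic.
def control_chart_alarms(baseline, stream, k=3.0):
    n = len(baseline)
    mean = sum(baseline) / n
    sigma = (sum((v - mean) ** 2 for v in baseline) / n) ** 0.5
    lo, hi = mean - k * sigma, mean + k * sigma
    return [i for i, v in enumerate(stream) if v < lo or v > hi]
```

    Frames flagged this way mark candidate event start/end points, which higher-level logic (here, the fuzzy stage for overlapping events) then interprets.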

  14. Video Capture and Editing as a Tool for the Storage, Distribution, and Illustration of Morphological Characters of Nematodes

    OpenAIRE

    De Ley, Paul; Bert, Wim

    2002-01-01

    Morphological identification and detailed observation of nematodes usually requires permanent slides, but these are never truly permanent and often prevent the same specimens to be used for other purposes. To efficiently record the morphology of nematodes in a format that allows easy archiving, editing, and distribution, we have assembled two micrographic video capture and editing (VCE) configurations. These assemblies allow production of short video clips that mimic multifocal observation of...

  15. Analysis of electron capture process in charge pumping sequence using time domain measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hori, Masahiro, E-mail: hori@eng.u-toyama.ac.jp; Watanabe, Tokinobu; Ono, Yukinori [Graduate School of Science and Engineering, University of Toyama, 3190 Gofuku, Toyama 930-8555 (Japan); Tsuchiya, Toshiaki [Interdisciplinary Graduate School of Science and Engineering, Shimane University, 1060 Nishikawatsu, Matsue 690-8504 (Japan)

    2014-12-29

    A method for analyzing the electron capture process in the charge pumping (CP) sequence is proposed and demonstrated. The method monitors the electron current in the CP sequence in the time domain. These time-domain measurements enable us to directly access the process of electron capture by the interface defects, which is obscured in the conventional CP method. Using the time-domain measurements, the rise time dependence of the capture process is systematically investigated. We formulate the capture process based on the rate equation and derive an analytic form of the current due to electron capture by the defects. Based on the formula, the experimental data are analyzed and the capture cross section is obtained. In addition, the time-domain data unveil that the electron capture process completes before the electron channel opens, i.e., below the threshold voltage, in the low-frequency range of the pulse.
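
    A minimal, hypothetical version of such a rate-equation model (not the authors' exact formulation) treats the filled-trap density as dN/dt = (N_t − N)·r, with analytic solution N(t) = N_t·(1 − e^(−rt)). The sketch below checks a forward-Euler integration of the rate equation against that closed form; the parameter values are made up.

```python
import math

# Toy rate-equation model of electron capture by interface defects:
#   dN/dt = (N_t - N) * r
# N_t: total trap density (normalized to 1), r: capture rate constant.
def capture_numeric(n_total, rate, t_end, steps=10000):
    dt = t_end / steps
    n = 0.0
    for _ in range(steps):
        n += (n_total - n) * rate * dt  # forward-Euler step
    return n

def capture_analytic(n_total, rate, t):
    # Closed-form solution of the same first-order rate equation
    return n_total * (1.0 - math.exp(-rate * t))
```

    The exponential approach to N_t is what makes the filled-trap current decay measurable in the time domain; fitting its time constant is one route to a capture cross section.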

  16. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
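The matching step above hinges on RANSAC's split of matches into inliers and outliers. A minimal NumPy sketch of that idea, using a toy 2D-translation motion model instead of the full epipolar geometry estimated in the paper (all data here are synthetic):

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """Toy RANSAC split of point matches into inliers/outliers.

    The motion model here is a simple 2D translation (minimal sample:
    one match); the cited work estimates full epipolar geometry instead.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))                  # draw a minimal sample
        t = dst[i] - src[i]                         # hypothesised translation
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < tol                       # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the model on the winning consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# synthetic matches: true translation (5, -3); the last 10 are gross mismatches
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (50, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (50, 2))
dst[-10:] += rng.uniform(20, 40, (10, 2))

t, inliers = ransac_translation(src, dst)
```

In the actual pipeline the surviving inlier matches would instead feed an essential-matrix estimator (e.g. OpenCV's `cv2.findEssentialMat` followed by `cv2.recoverPose`) to obtain the relative rotation and translation.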

  17. Chroma Subsampling Influence on the Perceived Video Quality for Compressed Sequences in High Resolutions

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2017-01-01

Full Text Available This paper deals with the influence of chroma subsampling on perceived video quality measured by subjective metrics. The evaluation was done for the two most widely used video codecs, H.264/AVC and H.265/HEVC. Eight types of video sequences with Full HD and Ultra HD resolutions, differing in content, were tested. The experimental results showed that observers did not see a difference between unsubsampled and subsampled sequences, so using subsampled videos is preferable, as 50% of the amount of data can be saved. Also, the minimum bitrates needed to achieve good and fair quality with each codec and resolution were determined.
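The 50% figure follows directly from the sample counts: 4:2:0 keeps luma at full resolution but halves the chroma planes in both dimensions. A quick check (the Ultra HD frame size and 8-bit samples are illustrative assumptions):

```python
# Raw sample counts per frame for 4:4:4 vs 4:2:0 chroma subsampling.
w, h = 3840, 2160                                  # Ultra HD frame

samples_444 = w * h * 3                            # Y + Cb + Cr, full resolution
samples_420 = w * h + 2 * (w // 2) * (h // 2)      # Cb/Cr halved each way

saving = 1 - samples_420 / samples_444
print(f"4:2:0 uses {samples_420 / samples_444:.0%} of 4:4:4 -> {saving:.0%} saved")
```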

  18. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

Constantly working surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...

  19. Diagnostics of primary immunodeficiency diseases: a sequencing capture approach.

    Directory of Open Access Journals (Sweden)

    Lotte N Moens

Full Text Available Primary Immunodeficiencies (PID) are genetically inherited disorders characterized by defects of the immune system, leading to increased susceptibility to infection. Due to the variety of clinical symptoms and the complexity of current diagnostic procedures, accurate diagnosis of PID is often difficult in daily clinical practice. Thanks to the advent of "next generation" sequencing technologies and target enrichment methods, the development of multiplex diagnostic assays is now possible. In this study, we applied a selector-based target enrichment assay to detect disease-causing mutations in 179 known PID genes. The usefulness of this assay for molecular diagnosis of PID was investigated by sequencing DNA from 33 patients, 18 of which had at least one known causal mutation at the onset of the experiment. We were able to identify the disease causing mutations in 60% of the investigated patients, indicating that the majority of PID cases could be resolved using a targeted sequencing approach. Causal mutations identified in the unknown patient samples were located in STAT3, IGLL1, RNF168 and PGM3. Based on our results, we propose a stepwise approach for PID diagnostics, involving targeted resequencing, followed by whole transcriptome and/or whole genome sequencing if causative variants are not found in the targeted exons.

  20. A scalable, fully automated process for construction of sequence-ready human exome targeted capture libraries.

    Science.gov (United States)

    Fisher, Sheila; Barry, Andrew; Abreu, Justin; Minie, Brian; Nolan, Jillian; Delorey, Toni M; Young, Geneva; Fennell, Timothy J; Allen, Alexander; Ambrogio, Lauren; Berlin, Aaron M; Blumenstiel, Brendan; Cibulskis, Kristian; Friedrich, Dennis; Johnson, Ryan; Juhn, Frank; Reilly, Brian; Shammas, Ramy; Stalker, John; Sykes, Sean M; Thompson, Jon; Walsh, John; Zimmer, Andrew; Zwirko, Zac; Gabriel, Stacey; Nicol, Robert; Nusbaum, Chad

    2011-01-01

    Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol.

  1. Quality-Aware Estimation of Facial Landmarks in Video Sequences

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Face alignment in video is a primitive step for facial image analysis. The accuracy of the alignment greatly depends on the quality of the face image in the video frames and low quality faces are proven to cause erroneous alignment. Thus, this paper proposes a system for quality aware face...... alignment by using a Supervised Decent Method (SDM) along with a motion based forward extrapolation method. The proposed system first extracts faces from video frames. Then, it employs a face quality assessment technique to measure the face quality. If the face quality is high, the proposed system uses SDM...... for facial landmark detection. If the face quality is low the proposed system corrects the facial landmarks that are detected by SDM. Depending upon the face velocity in consecutive video frames and face quality measure, two algorithms are proposed for correction of landmarks in low quality faces by using...

  2. Finding and Improving the Key-Frames of Long Video Sequences for Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2010-01-01

Face recognition systems are very sensitive to the quality and resolution of their input face images. This makes such systems unreliable when working with long surveillance video sequences unless some selection and enhancement algorithms are employed. On the other hand, processing all the frames of such video sequences by any enhancement or even face recognition algorithm is demanding. Thus, there is a need for a mechanism that summarizes the input video sequence to a set of key-frames and then applies an enhancement algorithm to this subset. This paper presents a system doing exactly this. The system uses face quality assessment to select the key-frames and a hybrid super-resolution to enhance the face image quality. The suggested system, which employs a linear associator face recognizer to evaluate the enhanced results, has been tested on real surveillance video sequences and the experimental results...
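Reduced to its simplest form, the key-frame idea above is to rank frames by a quality score and keep the best few. A toy sketch using a single sharpness proxy (the cited system combines several face-quality measures; the score function and synthetic frames below are illustrative assumptions):

```python
import numpy as np

def sharpness(frame):
    # crude quality proxy: mean squared gradient magnitude
    gy, gx = np.gradient(frame.astype(float))
    return float((gx**2 + gy**2).mean())

def select_key_frames(frames, k=3):
    """Pick the k highest-scoring frames as key-frames."""
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1][:k]           # indices of the best k
    return sorted(order.tolist())

# synthetic sequence: 5 near-uniform ("blurry") frames and 2 detailed ones
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (2, 32, 32))          # high-frequency content
blurry = np.full((32, 32), 128.0) + rng.normal(0, 2, (5, 32, 32))
frames = list(blurry[:3]) + list(sharp) + list(blurry[3:])

keys = select_key_frames(frames, k=2)              # -> indices of sharp frames
```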

  3. Evaluation of a target region capture sequencing platform using monogenic diabetes as a study-model

    DEFF Research Database (Denmark)

    Gao, Rui; Liu, Yanxia; Gjesing, Anette Marianne Prior

    2014-01-01

    and next generation sequencing which might be used as an efficient way to diagnose various genetic disorders. We aimed to develop a target-region capture sequencing platform to screen 117 selected candidate genes involved in metabolism for mutations and to evaluate its performance using monogenic diabetes...

  4. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery

    Science.gov (United States)

    B. Cooke; A. Saucier

    1995-01-01

    Scientists with the USDA Forest Service are currently assessing the usefulness of aerial video imagery for various purposes including midcycle inventory updates. The potential of video image data for these purposes may be compromised by scan line interleaving displacement problems. Interleaving displacement problems cause features in video raster datasets to have...

  5. Video surveillance captures student hand hygiene behavior, reactivity to observation, and peer influence in Kenyan primary schools.

    Directory of Open Access Journals (Sweden)

    Amy J Pickering

Full Text Available In-person structured observation is considered the best approach for measuring hand hygiene behavior, yet it is expensive, time consuming, and may alter behavior. Video surveillance could be a useful tool for objectively monitoring hand hygiene behavior if validated against current methods. Student hand cleaning behavior was monitored with video surveillance and in-person structured observation, both simultaneously and separately, at four primary schools in urban Kenya over a study period of 8 weeks. Video surveillance and in-person observation captured similar rates of hand cleaning (absolute difference <5%, p = 0.74). Video surveillance documented higher hand cleaning rates (71%) when at least one other person was present at the hand cleaning station, compared to when a student was alone (48%); rate ratio = 1.14 [95% CI 1.01-1.28]. Students increased hand cleaning rates during simultaneous video and in-person monitoring as compared to single-method monitoring, suggesting reactivity to each method of monitoring. This trend was documented at schools receiving a handwashing with soap intervention, but not at schools receiving a sanitizer intervention. Video surveillance of hand hygiene behavior yields results comparable to in-person observation among schools in a resource-constrained setting. Video surveillance also has certain advantages over in-person observation, including rapid data processing and the capability to capture new behavioral insights. Peer influence can significantly improve student hand cleaning behavior and, when possible, should be exploited in the design and implementation of school hand hygiene programs.

  6. Application of target capture sequencing of exons and conserved non-coding sequences to 20 inbred rat strains

    Directory of Open Access Journals (Sweden)

    Minako Yoshihara

    2016-12-01

Full Text Available We report sequence data obtained by our recently devised target capture method TargetEC applied to 20 inbred rat strains. This method encompasses not only all annotated exons but also highly conserved non-coding sequences shared among vertebrates. The total length of the target regions covers 146.8 Mb. On average, we obtained 31.7× depth of target coverage and identified 154,330 SNVs and 24,368 INDELs for each strain. This corresponds to 470,037 unique SNVs and 68,652 unique INDELs among the 20 strains. The sequence data can be accessed at DDBJ/EMBL/GenBank under accession number PRJDB4648, and the identified variants have been deposited at http://bioinfo.sls.kyushu-u.ac.jp/rat_target_capture/20_strains.vcf.gz.

  7. Phylogenetic properties of 50 nuclear loci in Medicago (Leguminosae) generated using multiplexed sequence capture and next-generation sequencing.

    Science.gov (United States)

    de Sousa, Filipe; Bertrand, Yann J K; Nylinder, Stephan; Oxelman, Bengt; Eriksson, Jonna S; Pfeil, Bernard E

    2014-01-01

    Next-generation sequencing technology has increased the capacity to generate molecular data for plant biological research, including phylogenetics, and can potentially contribute to resolving complex phylogenetic problems. The evolutionary history of Medicago L. (Leguminosae: Trifoliae) remains unresolved due to incongruence between published phylogenies. Identification of the processes causing this genealogical incongruence is essential for the inference of a correct species phylogeny of the genus and requires that more molecular data, preferably from low-copy nuclear genes, are obtained across different species. Here we report the development of 50 novel LCN markers in Medicago and assess the phylogenetic properties of each marker. We used the genomic resources available for Medicago truncatula Gaertn., hybridisation-based gene enrichment (sequence capture) techniques and Next-Generation Sequencing to generate sequences. This alternative proves to be a cost-effective approach to amplicon sequencing in phylogenetic studies at the genus or tribe level and allows for an increase in number and size of targeted loci. Substitution rate estimates for each of the 50 loci are provided, and an overview of the variation in substitution rates among a large number of low-copy nuclear genes in plants is presented for the first time. Aligned sequences of major species lineages of Medicago and its sister genus are made available and can be used in further probe development for sequence-capture of the same markers.

  8. STUDY OF BLOCKING EFFECT ELIMINATION METHODS BY MEANS OF INTRAFRAME VIDEO SEQUENCE INTERPOLATION

    Directory of Open Access Journals (Sweden)

    I. S. Rubina

    2015-01-01

Full Text Available The paper deals with image interpolation methods and their applicability to the elimination of artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the encoding steps. The main drawback of existing methods is their high computational complexity, which is unacceptable in video processing. Interpolation of signal samples for blocking-effect elimination at the output of conversion encoding is proposed as part of the study. It was necessary to develop methods for improving the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on the borders of segments through intraframe interpolation of video sequence segments. The core of the developed methods is the application of an adaptive recursive algorithm with an adaptive-sized interpolation kernel, both with and without consideration of the brightness gradient at the boundaries of objects and video sequence blocks. In the theoretical part of the research, methods of information theory (RD-theory and data redundancy elimination), methods of pattern recognition and digital signal processing, as well as methods of probability theory are used. In the experimental part of the research, the compression algorithms were implemented in software and compared with existing ones. The proposed methods were compared with a simple averaging algorithm and an adaptive algorithm of central sample interpolation. The algorithm based on adaptive interpolation kernel size selection increases the compression ratio by 30%, and the modified algorithm increases the compression ratio by 35% in comparison with existing algorithms, while improving the quality of the reconstructed video sequence by 3% compared to one compressed without interpolation. The findings will be
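As a much-simplified illustration of boundary interpolation for blocking-effect removal (not the paper's adaptive recursive algorithm), the following sketch linearly interpolates the two pixels on each side of every 8×8 block boundary:

```python
import numpy as np

def smooth_block_edges(img, block=8):
    """Replace the two pixels straddling each block boundary with a
    linear interpolation of their outer neighbours. A fixed-kernel toy;
    the cited method adapts the kernel size and uses gradients."""
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):    # vertical boundaries
        a, b = out[:, x - 2], out[:, x + 1]
        out[:, x - 1] = a + (b - a) / 3
        out[:, x] = a + 2 * (b - a) / 3
    for y in range(block, img.shape[0], block):    # horizontal boundaries
        a, b = out[y - 2, :], out[y + 1, :]
        out[y - 1, :] = a + (b - a) / 3
        out[y, :] = a + 2 * (b - a) / 3
    return out

# two flat 8x8 blocks with a hard step at the boundary (a blocking artifact)
img = np.zeros((8, 16))
img[:, 8:] = 100
smoothed = smooth_block_edges(img)   # the step becomes a gradual ramp
```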

  9. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

The "Atmosphere-Space Interactions Monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective...... of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed...

  10. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

Full Text Available The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast and suitable for real-time video sequences. The algorithm is invariant to large scale and pose variations. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 seconds on a 3.2 GHz P4 machine.
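The skin-colour step can be sketched with one of the classic RGB threshold rules (the paper does not specify its exact thresholds, so the rule below is an assumption for illustration):

```python
import numpy as np

def skin_mask(rgb):
    """Coarse skin-colour mask from a widely used RGB heuristic:
    R > 95, G > 40, B > 20, R > G, R > B, and R - min(G, B) > 15.
    (One of several published rules; illustrative only.)"""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (r > g) & (r > b)
            & (r - np.minimum(g, b) > 15))

# tiny synthetic frame: a skin-like patch on a black background
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = (200, 120, 90)
mask = skin_mask(frame)              # True only on the skin-like patch
```

In the full pipeline this mask would be computed only inside the foreground regions left after background subtraction, and the resulting coarse face region then refined by template matching.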

  11. Capturing "attrition intensifying" structural traits from didactic interaction sequences of MOOC learners

    OpenAIRE

    Sinha, Tanmay; Li, Nan; Jermann, Patrick; Dillenbourg, Pierre

    2014-01-01

This work is an attempt to discover hidden structural configurations in learning activity sequences of students in Massive Open Online Courses (MOOCs). Leveraging combined representations of video clickstream interactions and forum activities, we seek to fundamentally understand traits that are predictive of decreasing engagement over time. Grounded in the interdisciplinary field of network science, we follow a graph-based approach to successfully extract indicators of active and passiv...

  12. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion (translation and rotation in 3D) and tightening/releasing the clamp. A gesture dictionary was defined, and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for activities where the movements of the robotic arm are not scheduled in advance, making it easier to train the robot than with a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  13. Sequence Capture and Phylogenetic Utility of Genomic Ultraconserved Elements Obtained from Pinned Insect Specimens.

    Directory of Open Access Journals (Sweden)

    Bonnie B Blaimer

Full Text Available Obtaining sequence data from historical museum specimens has been a growing research interest, invigorated by next-generation sequencing methods that allow inputs of highly degraded DNA. We applied a target enrichment and next-generation sequencing protocol to generate ultraconserved elements (UCEs) from 51 large carpenter bee specimens (genus Xylocopa), representing 25 species, with specimen ages ranging from 2-121 years. We measured the correlation between specimen age and DNA yield (pre- and post-library preparation DNA concentration) and several UCE sequence capture statistics (raw read count, UCE reads on target, UCE mean contig length and UCE locus count) with linear regression models. We performed piecewise regression to test for specific breakpoints in the relationship of specimen age with DNA yield and sequence capture variables. Additionally, we compared UCE data from newer and older specimens of the same species and reconstructed their phylogeny in order to confirm the validity of our data. We recovered 6-972 UCE loci from samples with pre-library DNA concentrations ranging from 0.06-9.8 ng/μL. All investigated DNA yield and sequence capture variables were significantly but only moderately negatively correlated with specimen age. Specimens aged 20 years or less had significantly higher pre- and post-library concentrations, UCE contig lengths, and locus counts compared to specimens older than 20 years. We found breakpoints in our data indicating a decrease of the initial detrimental effect of specimen age on pre- and post-library DNA concentration and UCE contig length starting around 21-39 years after preservation. Our phylogenetic results confirmed the integrity of our data, giving preliminary insights into relationships within Xylocopa.
We consider the effect of additional factors not measured in this study on our age-related sequence capture results, such as DNA fragmentation and preservation method, and discuss the promise of the UCE
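The piecewise (breakpoint) regression used above can be sketched as a grid search over candidate breakpoints, keeping the split that minimizes total squared error (the study's actual model specification may differ, and proper segmented regression usually enforces continuity at the breakpoint):

```python
import numpy as np

def best_breakpoint(x, y, candidates):
    """Fit two independent least-squares lines split at each candidate
    breakpoint; return the candidate with the lowest total squared error."""
    def sse(xs, ys):
        if len(xs) < 2:
            return np.inf                      # cannot fit a line
        coef = np.polyfit(xs, ys, 1)
        return float(((np.polyval(coef, xs) - ys) ** 2).sum())
    errors = [sse(x[x <= c], y[x <= c]) + sse(x[x > c], y[x > c])
              for c in candidates]
    return candidates[int(np.argmin(errors))]

# synthetic 'DNA yield vs specimen age': steep decline, then a plateau at 30 y
x = np.arange(2, 122, 2.0)
y = np.where(x <= 30, 100 - 3 * x, 10.0)
bp = best_breakpoint(x, y, candidates=list(range(10, 110, 10)))  # -> 30
```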

  14. Analysis of Delta-V Losses During Lunar Capture Sequence Using Finite Thrust

    Directory of Open Access Journals (Sweden)

    Young-Joo Song

    2011-09-01

Full Text Available To prepare for a future Korean lunar orbiter mission, semi-optimal lunar capture orbits using finite thrust are designed and analyzed. Finite burn delta-V losses during the lunar capture sequence are also analyzed by comparing them with values derived with impulsive thrusts in previous research. To design a hypothetical lunar capture sequence, two different intermediate capture orbits having orbital periods of about 12 hours and 3.5 hours are assumed, and the final mission operation orbit around the Moon is assumed to be at 100 km altitude with 90 degrees of inclination. For the performance of the on-board thruster, three different configurations (150 N with Isp of 200 seconds, 300 N with Isp of 250 seconds, 450 N with Isp of 300 seconds) are assumed, to provide a broad range of estimates of delta-V losses. As expected, it is found that the finite burn-arc sweeps almost symmetric orbital portions with respect to the perilune vector to minimize the delta-Vs required to achieve the final orbit. In addition, a delta-V difference of up to about 2% can occur during the lunar capture sequences with the assumed engine configurations, compared to scenarios with impulsive thrust. However, these delta-V losses will differ for every assumed lunar explorer's on-board thrust capability. Therefore, at the early stage of mission planning, careful consideration must be made while estimating mission budgets, particularly if the preliminary mission studies assumed impulsive thrust. The results provided in this paper are expected to lead to further progress in the design field of Korea's lunar orbiter mission, particularly the lunar capture sequences using finite thrust.
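The trade-off described above can be roughed out with the rocket equation: for a given capture delta-V, a lower-thrust engine needs a longer burn arc, and that longer arc is what drives the finite-burn losses. The sketch below uses the paper's three thruster configurations but an assumed spacecraft mass and capture delta-V (illustrative numbers only, not the paper's values):

```python
import math

g0 = 9.80665          # standard gravity, m/s^2
m0 = 550.0            # wet mass at lunar arrival, kg (assumed)
dv = 300.0            # capture delta-v, m/s (assumed, treated as impulsive)

for thrust, isp in [(150, 200), (300, 250), (450, 300)]:  # the paper's thrusters
    mf = m0 / math.exp(dv / (isp * g0))        # Tsiolkovsky rocket equation
    m_prop = m0 - mf                           # propellant consumed, kg
    t_burn = m_prop * isp * g0 / thrust        # constant-thrust burn time, s
    print(f"{thrust:3d} N, Isp {isp} s: {m_prop:5.1f} kg propellant, "
          f"burn {t_burn:6.1f} s")
```

The printed burn durations shrink as thrust and Isp grow; the finite-burn loss (about 2% in the paper) comes on top of this ideal budget and grows with the length of the burn arc.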

  15. Exploiting the great potential of Sequence Capture data by a new tool, SUPER-CAP.

    Science.gov (United States)

    Ruggieri, Valentino; Anzar, Irantzu; Paytuvi, Andreu; Calafiore, Roberta; Cigliano, Riccardo Aiese; Sanseverino, Walter; Barone, Amalia

    2017-02-01

The recent development of Sequence Capture methodology represents a powerful strategy for enhancing data generation to assess genetic variation of targeted genomic regions. Here, we present SUPER-CAP, a bioinformatics web tool aimed at handling Sequence Capture data, accurately calculating the allele frequencies of variants and building genotype-specific sequences of captured genes. The dataset used to develop this in silico strategy consists of 378 loci and related regulative regions in a collection of 44 tomato landraces. About 14,000 high-quality variants were identified. The high depth (>40×) of coverage and adoption of the correct filtering criteria allowed identification of about 4,000 rare variants and 10 genes with copy number variation. We also show that the tool is capable of reconstructing genotype-specific sequences for each genotype by using the detected variants. This allows evaluating the combined effect of multiple variants in the same protein. The architecture and functionality of SUPER-CAP make the software appropriate for a broad set of analyses including SNP discovery and mining. Its functionality, together with the capability to process large data sets and efficiently detect sequence variation, makes SUPER-CAP a valuable bioinformatics tool for genomics and breeding purposes. © The Author 2016. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  16. Library preparation and multiplex capture for massive parallel sequencing applications made efficient and easy.

    Directory of Open Access Journals (Sweden)

    Mårten Neiman

Full Text Available In recent years, the rapid development of sequencing technologies and a competitive market have enabled researchers to perform massive sequencing projects at a reasonable cost. As the price for the actual sequencing reactions drops, enabling more samples to be sequenced, the relative price for preparing libraries gets larger and the practical laboratory work becomes complex and tedious. We present a cost-effective strategy for simplified library preparation compatible with both whole genome- and targeted sequencing experiments. An optimized enzyme composition and reaction buffer reduces the number of required clean-up steps and allows for the usage of bulk enzymes, which makes the whole process cheap, efficient, and simple. We also present a two-tagging strategy, which allows for multiplex sequencing of targeted regions. To prove our concept, we have prepared libraries for low-pass sequencing from 100 ng DNA, performed 2-, 4- and 8-plex exome capture and a 96-plex capture of a 500 kb region. In all samples we see a high concordance (>99.4%) of SNP calls when compared to commercially available SNP-chip platforms.

  17. An Automatic Video Meteor Observation Using UFO Capture at the Showa Station

    Science.gov (United States)

    Fujiwara, Y.; Nakamura, T.; Ejiri, M.; Suzuki, H.

    2012-05-01

The goal of our study is to clarify meteor activities in the southern hemisphere by continuous optical observations with video cameras with automatic meteor detection and recording at Syowa Station, Antarctica.

  18. TWO-DIMENSIONAL VIDEO ANALYSIS IS COMPARABLE TO 3D MOTION CAPTURE IN LOWER EXTREMITY MOVEMENT ASSESSMENT.

    Science.gov (United States)

    Schurr, Stacy A; Marshall, Ashley N; Resch, Jacob E; Saliba, Susan A

    2017-04-01

Although 3D motion capture is considered the "gold standard" for recording and analyzing kinematics, 2D video analysis may be a more reasonable, inexpensive, and portable option for kinematic assessment during pre-participation screenings. Few studies have compared quantitative measurements of lower extremity functional tasks between 2D and 3D. To compare kinematic measurements of the trunk and lower extremity in the frontal and sagittal planes between 2D video camera and 3D motion capture analyses obtained concurrently during a single leg squat (SLS). Descriptive laboratory study. Twenty-six healthy, recreationally active adults volunteered to participate. Participants performed three trials of the single leg squat on each limb, which were recorded simultaneously by three 2D video cameras and a 3D motion capture system. Dependent variables analyzed were joint displacement at the trunk, hip, knee, and ankle in the frontal and sagittal planes during the task compared to single leg quiet standing. Dependent variables exhibited moderate to strong correlations between the two measures in the sagittal plane (r = 0.51-0.93), and a poor correlation at the knee in the frontal plane (r = 0.308) (p ≤ 0.05). All other dependent variables revealed non-significant results between the two measures. Bland-Altman plots revealed strong agreement in the average mean difference in the amount of joint displacement between 2D and 3D in the sagittal plane (trunk = 1.68°, hip = 2.60°, knee = 0.74°, and ankle = 3.12°). Agreement in the frontal plane was good (trunk = 7.92°, hip = -8.72°, knee = -6.62°, and ankle = 3.03°). Moderate to strong relationships were observed between 2D video camera and 3D motion capture analyses at all joints in the sagittal plane, and the average mean difference was comparable to the standard error of measure with goniometry. The results suggest that despite the lack of precision and ability to
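The Bland-Altman agreement statistics used above reduce to the mean difference (bias) between the two methods and its 95% limits of agreement. A small sketch on hypothetical data (the ~2.6° bias is modeled on the reported hip value; the samples themselves are synthetic, not the study's raw data):

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement
    between two paired measurement methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)                 # half-width of the limits
    return bias, (bias - loa, bias + loa)

# hypothetical sagittal-plane hip displacement (degrees) from 26 participants
rng = np.random.default_rng(0)
deg_3d = rng.uniform(20, 60, 26)                       # 3D motion capture
deg_2d = deg_3d + 2.6 + rng.normal(0, 1.5, 26)         # 2D video, ~2.6 deg bias

bias, (lo, hi) = bland_altman(deg_2d, deg_3d)
print(f"bias = {bias:.2f} deg, limits of agreement = [{lo:.2f}, {hi:.2f}] deg")
```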

  19. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

Full Text Available CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can strongly vary, going from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, and thus a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. However, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm for both reducing the dynamic range of video sequences and enhancing their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.

  20. Laser capture microdissection microscopy and genome sequencing of the avian malaria parasite, Plasmodium relictum.

    Science.gov (United States)

    Lutz, Holly L; Marra, Nicholas J; Grewe, Felix; Carlson, Jenny S; Palinauskas, Vaidas; Valkiūnas, Gediminas; Stanhope, Michael J

    2016-12-01

    Acquiring genomic material from avian malaria parasites for genome sequencing has proven problematic due to the nucleation of avian erythrocytes, which produces a large ratio of host to parasite DNA (∼1 million to 1 bp). We tested the ability of laser capture microdissection microscopy to isolate parasite cells from individual avian erythrocytes for four avian Plasmodium species, and subsequently applied whole genome amplification and Illumina sequencing methods to Plasmodium relictum (lineage pSGS1) to produce sequence reads of the P. relictum genome. We assembled ∼335 kbp of parasite DNA from this species, but were unable to completely avoid contamination by host DNA and other sources. However, it is clear that laser capture microdissection holds promise for the isolation of genomic material from haemosporidian parasites in intracellular life stages. In particular, laser capture microdissection may prove useful for isolating individual parasite species from co-infected hosts. Although not explicitly tested in this study, laser capture microdissection may also have important applications for isolation of rare parasite lineages and museum specimens for which no fresh material exists.

  1. Video capture and editing as a tool for the storage, distribution, and illustration of morphological characters of nematodes.

    Science.gov (United States)

    De Ley, Paul; Bert, Wim

    2002-12-01

Morphological identification and detailed observation of nematodes usually requires permanent slides, but these are never truly permanent and often prevent the same specimens from being used for other purposes. To efficiently record the morphology of nematodes in a format that allows easy archiving, editing, and distribution, we have assembled two micrographic video capture and editing (VCE) configurations. These assemblies allow production of short video clips that mimic multifocal observation of nematode specimens through a light microscope. Images so obtained can be used for training, management, and online access of "virtual voucher specimens" in taxonomic collections, routine screening of fixed or unfixed specimens, recording of ephemeral staining patterns, or recording of freshly dissected internal organs prior to their decomposition. We provide an overview of the components and operation of both of our systems and evaluate their efficiency and image quality. We conclude that VCE is a highly versatile approach that is likely to become widely used in nematology research and teaching.

  2. Development and evaluation of a panel of filovirus sequence capture probes for pathogen detection by next-generation sequencing.

    Directory of Open Access Journals (Sweden)

    Jeffrey W Koehler

    Full Text Available A detailed understanding of the circulating pathogens in a particular geographic location aids in effectively utilizing targeted, rapid diagnostic assays, thus allowing for appropriate therapeutic and containment procedures. This is especially important in regions prevalent for highly pathogenic viruses co-circulating with other endemic pathogens such as the malaria parasite. The importance of biosurveillance is highlighted by the ongoing Ebola virus disease outbreak in West Africa. For example, a more comprehensive assessment of the regional pathogens could have identified the risk of a filovirus disease outbreak earlier and led to an improved diagnostic and response capacity in the region. In this context, being able to rapidly screen a single sample for multiple pathogens in a single tube reaction could improve both diagnostics as well as pathogen surveillance. Here, probes were designed to capture identifying filovirus sequence for the ebolaviruses Sudan, Ebola, Reston, Taï Forest, and Bundibugyo and the Marburg virus variants Musoke, Ci67, and Angola. These probes were combined into a single probe panel, and the captured filovirus sequence was successfully identified using the MiSeq next-generation sequencing platform. This panel was then used to identify the specific filovirus from nonhuman primates experimentally infected with Ebola virus as well as Bundibugyo virus in human serum samples from the Democratic Republic of the Congo, thus demonstrating the utility for pathogen detection using clinical samples. While not as sensitive or rapid as real-time PCR, this panel, along with incorporating additional sequence capture probe panels, could be used for broad pathogen screening and biosurveillance.

  3. Methylation-capture and Next-Generation Sequencing of free circulating DNA from human plasma.

    Science.gov (United States)

    Warton, Kristina; Lin, Vita; Navin, Tina; Armstrong, Nicola J; Kaplan, Warren; Ying, Kevin; Gloss, Brian; Mangs, Helena; Nair, Shalima S; Hacker, Neville F; Sutherland, Robert L; Clark, Susan J; Samimi, Goli

    2014-06-15

    Free circulating DNA (fcDNA) has many potential clinical applications, due to the non-invasive way in which it is collected. However, because of the low concentration of fcDNA in blood, genome-wide analysis carries many technical challenges that must be overcome before fcDNA studies can reach their full potential. There are currently no definitive standards for fcDNA collection, processing and whole-genome sequencing. We report novel detailed methodology for the capture of high-quality methylated fcDNA, library preparation and downstream genome-wide Next-Generation Sequencing. We also describe the effects of sample storage, processing and scaling on fcDNA recovery and quality. Use of serum versus plasma, and storage of blood prior to separation resulted in genomic DNA contamination, likely due to leukocyte lysis. Methylated fcDNA fragments were isolated from 5 donors using a methyl-binding protein-based protocol and appear as a discrete band of ~180 bases. This discrete band allows minimal sample loss at the size restriction step in library preparation for Next-Generation Sequencing, allowing for high-quality sequencing from minimal amounts of fcDNA. Following sequencing, we obtained 37 × 10⁶ to 86 × 10⁶ unique mappable reads, representing more than 50% of total mappable reads. The methylation status of 9 genomic regions as determined by DNA capture and sequencing was independently validated by clonal bisulphite sequencing. Our optimized methods provide high-quality methylated fcDNA suitable for whole-genome sequencing, and allow good library complexity and accurate sequencing, despite using less than half of the recommended minimum input DNA.

  4. Sequence-Specific Covalent Capture Coupled with High-Contrast Nanopore Detection of a Disease-Derived Nucleic Acid Sequence.

    Science.gov (United States)

    Nejad, Maryam Imani; Shi, Ruicheng; Zhang, Xinyue; Gu, Li-Qun; Gates, Kent S

    2017-07-18

    Hybridization-based methods for the detection of nucleic acid sequences are important in research and medicine. Short probes provide sequence specificity, but do not always provide a durable signal. Sequence-specific covalent crosslink formation can anchor probes to target DNA and might also provide an additional layer of target selectivity. Here, we developed a new crosslinking reaction for the covalent capture of specific nucleic acid sequences. This process involved reaction of an abasic (Ap) site in a probe strand with an adenine residue in the target strand and was used for the detection of a disease-relevant T→A mutation at position 1799 of the human BRAF kinase gene sequence. Ap-containing probes were easily prepared and displayed excellent specificity for the mutant sequence under isothermal assay conditions. It was further shown that nanopore technology provides a high-contrast (in essence, digital) signal that enables sensitive, single-molecule sensing of the cross-linked duplexes. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Three-dimensional fuzzy filter in color video sequence denoising implemented on DSP

    Science.gov (United States)

    Ponomaryov, Volodymyr I.; Montenegro, Hector; Peralta-Fabi, Ricardo

    2013-02-01

    In this paper, we present a fuzzy 3D filter for suppressing impulsive noise in color video sequences. The designed algorithm differs from other state-of-the-art algorithms in that it employs all three RGB bands of the video sequence, analyzes the fuzzy gradient values computed in eight directions, and processes two temporally neighboring frames together. The simulation results have confirmed the noticeably better performance of the novel 3D filter, both in terms of objective metrics (PSNR, MAE, NCD, SSIM) and in subjective perception by human vision on the color sequences. An efficiency analysis of the designed filter and other promising filters has been performed on the Texas Instruments™ DSP TMS320DM642 through MATLAB's Simulink™ module, showing that the 3D filter can be used in real-time processing applications.
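The paper's fuzzy-gradient rule set is more elaborate than can be reproduced from the abstract, but its core idea of pooling a spatial neighbourhood with the two temporally neighboring frames can be sketched with a plain 3D median, a common baseline for impulsive-noise removal (this is a simplification, not the authors' fuzzy filter):

```python
def denoise_pixel(frames, t, y, x):
    """Simplified 3D stand-in for a fuzzy spatiotemporal filter: replace a
    pixel with the median over a 3x3 spatial window in the current frame
    plus the same pixel in the two temporally neighbouring frames.

    `frames` is a list of frames; each frame is a list of rows of scalar
    intensities (one color band). Indices must lie inside the volume.
    """
    # Same pixel in the previous and next frame (temporal neighbours).
    vals = [frames[t + dt][y][x] for dt in (-1, 1)]
    # 3x3 spatial neighbourhood in the current frame.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            vals.append(frames[t][y + dy][x + dx])
    vals.sort()
    return vals[len(vals) // 2]   # median of the 11 collected samples
```

An impulse (e.g. a salt spike of 255 in an otherwise flat region) is an extreme order statistic of this 11-sample set, so the median discards it while leaving flat regions untouched.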

  6. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
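The wagon-wheel illusion the article analyzes follows directly from the Nyquist sampling theorem: a rotation rate above half the frame rate is folded (aliased) into the band from −fps/2 to +fps/2. A minimal sketch of that folding (the function name is illustrative, not from the article):

```python
def apparent_rotation_hz(true_hz: float, fps: float) -> float:
    """Fold a true rotation rate into the [-fps/2, +fps/2) band.

    A negative result means the wheel appears to spin backwards --
    the classic wagon-wheel illusion caused by undersampling.
    """
    return (true_hz + fps / 2) % fps - fps / 2


# A wheel spinning at 22 Hz filmed at 24 fps appears to rotate
# backwards at 2 Hz; at exactly 24 Hz it appears stationary.
print(apparent_rotation_hz(22, 24), apparent_rotation_hz(24, 24))
```

Rates at or below fps/2 pass through unchanged, which is exactly the Nyquist criterion for choosing a sampling time.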

  7. Using video-reflexive ethnography to capture the complexity of leadership enactment in the healthcare workplace.

    Science.gov (United States)

    Gordon, Lisi; Rees, Charlotte; Ker, Jean; Cleland, Jennifer

    2017-12-01

    Current theoretical thinking asserts that leadership should be distributed across many levels of healthcare organisations to improve the patient experience and staff morale. However, much healthcare leadership education focusses on the training and competence of individuals and little attention is paid to the interprofessional workplace and how its inherent complexities might contribute to the emergence of leadership. Underpinned by complexity theory, this research aimed to explore how interprofessional healthcare teams enact leadership at a micro-level through influential acts of organising. A whole (interprofessional) team workplace-based study utilising video-reflexive ethnography occurred in two UK clinical sites. Thematic framework analyses of the video data (video-observation and video-reflexivity sessions) were undertaken, followed by in-depth analyses of human-human and human-material interactions. Data analysis revealed a complex interprofessional environment where leadership is a dynamic process, negotiated and renegotiated in various ways throughout interactions (both formal and informal). Being able to "see" themselves at work gave participants the opportunity to discuss and analyse their everyday leadership practices and challenge some of their sometimes deeply entrenched values, beliefs, practices and assumptions about healthcare leadership. These study findings therefore indicate a need to redefine the way that medical and healthcare educators facilitate leadership development and argue for new approaches to research which shifts the focus from leaders to leadership.

  8. [Identification of APEC genes expressed in vivo by selective capture of transcribed sequences].

    Science.gov (United States)

    Chen, Xiang; Gao, Song; Wang, Xiao-quan; Jiao, Xin-an; Liu, Xiu-fan

    2007-06-01

    Direct screening of bacterial genes expressed during infection in the host is limited, because isolation of bacterial transcripts from host tissues necessitates separation from the abundance of host RNA. Selective capture of transcribed sequences (SCOTS) allows the selective capture of bacterial cDNA derived from infected tissues using hybridization to biotinylated bacterial genomic DNA. Avian pathogenic E. coli strain E037 (serogroup O78) was used in a chicken infection model to identify bacterial genes that are expressed in infected tissues. Three-week-old white leghorn specific-pathogen-free chickens were inoculated into the right thoracic air sac with a 0.1 mL suspension containing 10⁷ CFU of APEC strain E037. Total RNA was isolated from infected tissues (pericardium and air sacs) 6 or 24 h postinfection and converted to cDNAs. By using the SCOTS cDNA selection method with enrichment for pathogen-specific transcripts (i.e., those absent from the non-pathogenic E. coli K-12 strain), pathogen-specific cDNAs were identified. Randomly chosen cDNA clones derived from transcripts in the air sacs or pericardium were selected and sequenced. The clones, termed aec, contained numerous APEC-specific sequences. Among the 31 distinct aec clones, pathogen-specific clones contained sequences homologous to known and novel putative bacterial virulence gene products involved in adherence, iron transport, lipopolysaccharide (LPS) synthesis, plasmid replication and conjugation, putative phage-encoded products, and gene products of unknown function. Overall, the current study provided a means to identify novel pathogen-specific genes expressed in vivo and insight regarding the global gene expression of a pathogenic E. coli strain in a natural animal host during the infectious process.

  9. [Diagnosis of a case with oculocutaneous albinism type Ⅲ with next generation exome capture sequencing].

    Science.gov (United States)

    Lyu, Yuqiang; Huang, Jing; Zhang, Kaihui; Liu, Guohua; Gao, Min; Gai, Zhongtao; Liu, Yi

    2017-02-10

    To explore the clinical and genetic features of a Chinese boy with oculocutaneous albinism, the clinical features of the patient were analyzed. The DNA of the patient and his parents was extracted and sequenced by next-generation exome capture sequencing. The nature and impact of the detected mutations were predicted and validated. The child displayed strabismus, poor vision, nystagmus and brown hair. DNA sequencing showed that the patient carried compound heterozygous mutations of the TYRP1 gene, namely c.1214C>A (p.T405N) and c.1333dupG, which were inherited from his mother and father, respectively. Neither mutation has been reported previously. The child was diagnosed with oculocutaneous albinism type Ⅲ caused by mutations of the TYRP1 gene.

  10. GIFT-Grab: Real-time C++ and Python multi-channel video capture, processing and encoding API

    Directory of Open Access Journals (Sweden)

    Dzhoshkun Ismail Shakir

    2017-10-01

    Full Text Available GIFT-Grab is an open-source API for acquiring, processing and encoding video streams in real time. GIFT-Grab supports video acquisition using various frame-grabber hardware as well as from standard-compliant network streams and video files. The current GIFT-Grab release allows for multi-channel video acquisition and encoding at the maximum frame rate of supported hardware – 60 frames per second (fps). GIFT-Grab builds on well-established highly configurable multimedia libraries including FFmpeg and OpenCV. GIFT-Grab exposes a simplified high-level API, aimed at facilitating integration into client applications with minimal coding effort. The core implementation of GIFT-Grab is in C++11. GIFT-Grab also features a Python API compatible with the widely used scientific computing packages NumPy and SciPy. GIFT-Grab was developed for capturing multiple simultaneous intra-operative video streams from medical imaging devices. Yet due to the ubiquity of video processing in research, GIFT-Grab can be used in many other areas. GIFT-Grab is hosted and managed on the software repository of the Centre for Medical Image Computing (CMIC) at University College London, and is also mirrored on GitHub. In addition it is available for installation from the Python Package Index (PyPI) via the pip installation tool. Funding statement: This work was supported through an Innovative Engineering for Health award by the Wellcome Trust [WT101957], the Engineering and Physical Sciences Research Council (EPSRC) [NS/A000027/1] and a National Institute for Health Research Biomedical Research Centre UCLH/UCL High Impact Initiative. Sébastien Ourselin receives funding from the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278) and the MRC (MR/J01107X/1). Luis C. García-Peraza-Herrera is supported by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1).

  11. Dynamic occlusion detection and inpainting of in situ captured terrestrial laser scanning point clouds sequence

    Science.gov (United States)

    Chen, Chi; Yang, Bisheng

    2016-09-01

    Laser point clouds captured using terrestrial laser scanning (TLS) in uncontrolled urban outdoor or indoor scenes suffer from irregularly shaped data blanks caused by temporary dynamic occlusions, i.e., moving objects such as pedestrians or cars, resulting in losses in the completeness and quality of the scene data. This paper proposes a novel automatic dynamic occlusion detection and inpainting method for sequential TLS point clouds captured from one scan position. In situ collected laser point cloud sequences are indexed by establishing a novel panoramic space partition that assigns a three-dimensional voxel to each laser point according to the scanning setup. Then two stationary background models are constructed at the ray-voxel level using the laser reflectance intensity and geometric attributes of the point set inside each voxel across the TLS sequence. Finally, the background models are combined to detect the points on dynamic objects, and the ray voxels of the detected dynamic points are tracked for further inpainting by replacing the ray voxels with the corresponding background voxels from another scan. The resulting scene is free of dynamic occlusions. Experiments validated the effectiveness of the proposed method for indoor and outdoor TLS point clouds captured by a commercial terrestrial scanner. The proposed method achieves high precision and recall for dynamic occlusion detection and produces clean inpainted point clouds for further processing.
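The per-voxel background modelling idea can be sketched in a heavily simplified, hypothetical form: the paper combines reflectance intensity with geometric attributes, but even a median intensity model per ray voxel already separates a transient occluder from the stationary background and supplies the replacement value for inpainting:

```python
from statistics import median

def detect_dynamic(voxel_intensity_by_scan, scan, thresh=10.0):
    """Toy per-voxel background model: the background intensity of a ray
    voxel is taken as the median across the scan sequence; a voxel whose
    intensity in a given scan deviates from it by more than `thresh` is
    flagged as dynamic and inpainted with the background value.

    Returns (is_dynamic, inpainted_value). Names and the threshold are
    illustrative assumptions, not taken from the paper.
    """
    bg = median(voxel_intensity_by_scan)
    value = voxel_intensity_by_scan[scan]
    is_dynamic = abs(value - bg) > thresh
    return is_dynamic, (bg if is_dynamic else value)
```

Here the "another scan" used for inpainting is summarized by the median; the paper instead tracks the voxel and copies it from a specific occlusion-free scan.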

  12. Sequence analysis of peptide:oligonucleotide heteroconjugates by electron capture dissociation and electron transfer dissociation.

    Science.gov (United States)

    Krivos, Kady L; Limbach, Patrick A

    2010-08-01

    Mass spectrometry analysis of protein-nucleic acid cross-links is challenging due to the dramatically different chemical properties of the two components. Identifying specific sites of attachment between proteins and nucleic acids requires methods that enable sequencing of both the peptide and oligonucleotide component of the heteroconjugate cross-link. While collision-induced dissociation (CID) has previously been used for sequencing such heteroconjugates, CID generates fragmentation along the phosphodiester backbone of the oligonucleotide preferentially. The result is a reduction in peptide fragmentation within the heteroconjugate. In this work, we have examined the effectiveness of electron capture dissociation (ECD) and electron-transfer dissociation (ETD) for sequencing heteroconjugates. Both methods were found to yield preferential fragmentation of the peptide component of a peptide:oligonucleotide heteroconjugate, with minimal differences in sequence coverage between these two electron-induced dissociation methods. Sequence coverage was found to increase with increasing charge state of the heteroconjugate, but decreases with increasing size of the oligonucleotide component. To overcome potential intermolecular interactions between the two components of the heteroconjugate, supplemental activation with ETD was explored. The addition of a supplemental activation step was found to increase peptide sequence coverage over ETD alone, suggesting that electrostatic interactions between the peptide and oligonucleotide components are one limiting factor in sequence coverage by these two approaches. These results show that ECD/ETD methods can be used for the tandem mass spectrometry sequencing of peptide:oligonucleotide heteroconjugates, and these methods are complementary to existing CID methods already used for sequencing of protein-nucleic acid cross-links. Copyright 2010. Published by Elsevier Inc.
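The sequence-coverage trend reported above (rising with charge state, falling with oligonucleotide size) is measured as the fraction of backbone bonds covered by observed fragments; a minimal, hypothetical way to compute such a metric (the function and its 1-based bond-index convention are illustrative, not from the paper):

```python
def sequence_coverage(peptide_length, fragment_sites):
    """Fraction of the peptide's inter-residue bonds covered by at least
    one observed fragment ion. `fragment_sites` lists 1-based backbone
    bond indices (e.g. a c3 or z-type ion localizes one specific bond);
    duplicates and out-of-range sites are ignored.
    """
    bonds = peptide_length - 1
    covered = {s for s in fragment_sites if 1 <= s <= bonds}
    return len(covered) / bonds
```

For a 6-residue peptide with fragments localizing bonds 1, 2 and 5, coverage is 3 of 5 bonds.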

  13. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm is proposed based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). It combines the classic adaptive rood pattern search (ARPS) with a hierarchical (mean-pyramid) search. All motion estimation algorithms were implemented using the MATLAB package and tested on several video sequences. Main Results. The criteria for evaluating the algorithms were speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at comparable error and deviation. The peak signal-to-noise ratio on different video sequences is sometimes better and sometimes worse than that of known algorithms, so it requires further investigation. Practical Relevance. Applying this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it suitable for telecommunication systems for multimedia data storage, transmission and processing.
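The rood-pattern idea underlying ARPS can be illustrated with a small block-matching sketch. Note the simplifications: this version uses a fixed, halving arm length rather than the motion-vector-predicted adaptive arm of ARPS proper, and the helper names are hypothetical:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks,
    the usual matching error in block-based motion estimation."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def rood_search(cost, center, arm):
    """Refine a motion-vector estimate with a rood (plus-shaped) pattern:
    evaluate the centre and the four rood points at distance `arm`, move
    to the cheapest candidate, halve the arm, and stop when it reaches 0.

    `cost` maps a candidate displacement (dx, dy) to its matching error
    (e.g. the SAD between the current block and the displaced reference).
    """
    best = center
    while arm >= 1:
        candidates = [best,
                      (best[0] + arm, best[1]), (best[0] - arm, best[1]),
                      (best[0], best[1] + arm), (best[0], best[1] - arm)]
        best = min(candidates, key=cost)
        arm //= 2
    return best
```

The hierarchical (mean-pyramid) part of HARPS would run this search on downsampled frames first and use the result, scaled up, as the starting `center` at the next finer level.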

  14. Ventilator Data Extraction with a Video Display Image Capture and Processing System.

    Science.gov (United States)

    Wax, David B; Hill, Bryan; Levin, Matthew A

    2017-06-01

    Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.
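The authors recover discrete values by image-processing captured frames of the ventilator's graphical display. As a toy stand-in for the final decoding step, once fixed screen regions have been thresholded into lit/unlit segments, a seven-segment lookup might look like this (the segment layout and all names are illustrative assumptions, not the paper's pipeline):

```python
# Segment order assumed here: top, top-left, top-right, middle,
# bottom-left, bottom-right, bottom (1 = lit, 0 = unlit).
SEGMENT_PATTERNS = {
    (1, 1, 1, 0, 1, 1, 1): 0, (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2, (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4, (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6, (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
}

def decode_digits(lit_segments_per_digit):
    """Map lists of lit/unlit segment flags (one 7-tuple per digit
    position, as produced by thresholding fixed screen regions) to an
    integer reading, e.g. a tidal volume shown on the display."""
    digits = [SEGMENT_PATTERNS[tuple(s)] for s in lit_segments_per_digit]
    return int("".join(str(d) for d in digits))
```

Real displays render anti-aliased fonts rather than seven-segment glyphs, which is why the paper relies on general image processing; the table above only conveys the "pixels to discrete data" idea.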

  15. Fall Detection for Elderly from Partially Observed Depth-Map Video Sequences Based on View-Invariant Human Activity Representation

    Directory of Open Access Journals (Sweden)

    Rami Alazrai

    2017-03-01

    Full Text Available This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using the constructed HBR, we formulate the fall detection problem as a posterior-maximization problem in which the posterior probability for each observed video subsequence is estimated using a multi-class SVM (support vector machine) classifier. Then, we combine the computed posterior probabilities from all of the observed subsequences to obtain an overall class posterior probability for the entire partially-observed depth-map video sequence. To evaluate the performance of the proposed approach, we have utilized the Kinect sensor to record a dataset of depth-map video sequences that simulates four fall-related activities of elderly people: walking, sitting, falling from standing and falling from sitting. Then, using the collected dataset, we developed three evaluation scenarios based on the number of unobserved video subsequences in the testing videos: a fully-observed video sequence scenario, a scenario with a single unobserved video subsequence of random length, and a scenario with two unobserved video subsequences of random lengths. Experimental results show that the proposed approach achieved average recognition accuracies of 93.6%, 77.6% and 65.1% in the first, second and third evaluation scenarios, respectively. These results demonstrate the feasibility of the proposed approach to detect falls from partially-observed videos.
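The fusion step, combining per-subsequence class posteriors into one overall class posterior, can be sketched as a log-domain product under a conditional-independence assumption (a plausible reading of the combination step, not necessarily the authors' exact formula):

```python
import math

def fuse_posteriors(per_subsequence_posteriors):
    """Combine per-subsequence class posteriors (e.g. calibrated SVM
    outputs) into an overall class posterior: sum log-probabilities
    across subsequences, assuming they are conditionally independent
    given the class, then renormalise with a log-sum-exp for stability.
    """
    n_classes = len(per_subsequence_posteriors[0])
    log_scores = [0.0] * n_classes
    for posterior in per_subsequence_posteriors:
        for c, p in enumerate(posterior):
            log_scores[c] += math.log(max(p, 1e-12))  # clamp zeros
    m = max(log_scores)
    unnorm = [math.exp(s - m) for s in log_scores]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

The predicted activity is then simply the argmax of the fused posterior over the four classes.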

  16. Classification of video sequences into chosen generalized use classes of target size and lighting level.

    Science.gov (United States)

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance influencing the ability to achieve a recognition task. They are: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% agreement with the end-users' opinion. The lighting level of an entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, the user interface being very simple and requiring only minimal user interaction.
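A lighting-level GUC classifier can, at its simplest, bucket the mean luminance of a frame. The sketch below is a toy version of that idea; the thresholds and function name are illustrative assumptions, not the group's published method:

```python
def classify_lighting(frame, dark_max=0.25, bright_min=0.6):
    """Toy lighting-level classifier for one frame of normalised
    luminance values in [0, 1]: average the pixels and bucket the mean
    into low / medium / high lighting. A whole sequence could be
    classified by majority vote over its frames."""
    flat = [v for row in frame for v in row]
    mean = sum(flat) / len(flat)
    if mean < dark_max:
        return "low"
    if mean > bright_min:
        return "high"
    return "medium"
```

Real classifiers would also look at the luminance histogram's spread (a dim scene with a few bright lamps should not average out to "medium"), which is one reason the paper reports 93% rather than perfect agreement.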

  17. Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment for Video Sequences

    Directory of Open Access Journals (Sweden)

    Meiyu Liang

    2013-01-01

    Full Text Available In order to improve the spatiotemporal resolution of video sequences, a novel spatiotemporal super-resolution reconstruction model (STSR) based on robust optical flow and Zernike moments is proposed in this paper, which integrates spatial resolution reconstruction and temporal resolution reconstruction into a unified framework. The model does not rely on accurate estimation of subpixel motion and is robust to noise and rotation. Moreover, it can effectively overcome the problems of hole and block artifacts. First, we propose an efficient robust optical flow motion estimation model based on motion detail preservation; then we introduce a biweighted fusion strategy to implement the spatiotemporal motion compensation. Next, combining a self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moments for better STSR with higher efficiency, and the final video sequences with high spatiotemporal resolution are obtained by fusing the complementary and redundant information with nonlocal self-similarity between adjacent video frames. Experimental results demonstrate that the proposed method outperforms existing methods in terms of both subjective visual and objective quantitative evaluations.
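The biweighted fusion step can be caricatured as a weight-normalised pixel average over already-registered frames. In the paper the weights come from the registration and region-correlation stages; the ones below are arbitrary inputs, and all names are illustrative:

```python
def fuse_frames(frames, weights):
    """Weighted fusion of registered frames: each output pixel is the
    weight-normalised sum of the corresponding pixels across frames, a
    simplified version of a biweighted fusion / motion-compensation
    step. `frames` are equal-sized 2D lists; `weights` one scalar each.
    """
    z = sum(weights)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(wt * f[y][x] for wt, f in zip(weights, frames)) / z
             for x in range(w)] for y in range(h)]
```

Giving a higher weight to the frame whose registration is more trusted lets complementary detail accumulate without letting a misregistered frame dominate.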

  18. Anticipatory Eye Movements While Watching Continuous Action Across Shots in Video Sequences: A Developmental Study.

    Science.gov (United States)

    Kirkorian, Heather L; Anderson, Daniel R

    2017-07-01

    Eye movements were recorded as 12-month-olds (n = 15), 4-year-olds (n = 17), and adults (n = 19) watched a 15-min video with sequences of shots conveying continuous motion. The central question was whether, and at what age, viewers anticipate the reappearance of objects following cuts to new shots. Adults were more likely than younger viewers to make anticipatory eye movements. Four-year-olds responded to transitions more slowly and tended to fixate the center of the screen. Infants' eye movement patterns reflected a tendency to react rather than anticipate. Findings are consistent with the hypothesis that adults integrate content across shots and understand how space is represented in edited video. Results are interpreted with respect to a developing understanding of film editing due to experience and cognitive maturation. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  19. Detection and Localization of Anomalous Motion in Video Sequences from Local Histograms of Labeled Affine Flows

    Directory of Open Access Journals (Sweden)

    Juan-Manuel Pérez-Rúa

    2017-05-01

    Full Text Available We propose an original method for detecting and localizing anomalous motion patterns in videos from a camera view-based motion representation perspective. Anomalous motion should be taken in a broad sense, i.e., unexpected, abnormal, singular, irregular, or unusual motion. Identifying distinctive dynamic information at any time point and at any image location in a sequence of images is a key requirement in many situations and applications. The proposed method relies on so-called labeled affine flows (LAF involving both affine velocity vectors and affine motion classes. At every pixel, a motion class is inferred from the affine motion model selected in a set of candidate models estimated over a collection of windows. Then, the image is subdivided in blocks where motion class histograms weighted by the affine motion vector magnitudes are computed. They are compared blockwise to histograms of normal behaviors with a dedicated distance. More specifically, we introduce the local outlier factor (LOF to detect anomalous blocks. LOF is a local flexible measure of the relative density of data points in a feature space, here the space of LAF histograms. By thresholding the LOF value, we can detect an anomalous motion pattern in any block at any time instant of the video sequence. The threshold value is automatically set in each block by means of statistical arguments. We report comparative experiments on several real video datasets, demonstrating that our method is highly competitive for the intricate task of detecting different types of anomalous motion in videos. Specifically, we obtain very competitive results on all the tested datasets: 99.2% AUC for UMN, 82.8% AUC for UCSD, and 95.73% accuracy for PETS 2009, at the frame level.
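The local outlier factor used above is a standard measure and can be computed brute-force for small sets of feature vectors (here these would be the blocks' LAF histograms); a self-contained sketch:

```python
def lof(points, k=2):
    """Local outlier factor, brute force, for small datasets. Scores
    near 1 indicate a point whose local density matches its neighbours';
    scores well above 1 flag points in sparser regions -- here standing
    in for anomalous blocks of flow-histogram features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    n = len(points)
    knn, kdist = [], []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist(points[i], points[j]))
        knn.append(order[:k])                      # k nearest neighbours
        kdist.append(dist(points[i], points[order[k - 1]]))

    def lrd(i):
        # Local reachability density: inverse mean reachability distance.
        reach = [max(kdist[j], dist(points[i], points[j])) for j in knn[i]]
        return k / sum(reach)

    return [sum(lrd(j) for j in knn[i]) / (k * lrd(i)) for i in range(n)]
```

Thresholding these scores per block, as the paper does with statistically chosen thresholds, yields the anomaly decision at each time instant.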

  20. Mutation analysis of Chinese sporadic congenital sideroblastic anemia by targeted capture sequencing.

    Science.gov (United States)

    An, Wenbin; Zhang, Jingliao; Chang, Lixian; Zhang, Yingchi; Wan, Yang; Ren, Yuanyuan; Niu, Deyun; Wu, Jian; Zhu, Xiaofan; Guo, Ye

    2015-05-20

    Congenital sideroblastic anemias (CSAs) comprise a group of heterogeneous genetic diseases that are caused by the mutation of various genes involved in heme biosynthesis, iron-sulfur cluster biogenesis, or mitochondrial solute transport or metabolism. However, approximately 40% of patients with CSA have not been found to have pathogenic gene mutations. In this study, we systematically analyzed the mutation profile in 10 Chinese patients with sporadic CSA. We performed targeted deep sequencing analysis in the ten patients using a panel of 417 genes that included known CSA-related genes. Mitochondrial genomes were analyzed using next-generation sequencing with a mitochondria enrichment kit and the HiSeq2000 sequencing platform. The results were confirmed by Sanger sequencing. An ALAS2 mutation was detected in one patient. SLC25A38 mutations were detected in three patients, including three novel mutations. Mitochondrial DNA deletions were detected in two patients. No disease-causing mutations were detected in four patients. To our knowledge, the pyridoxine-responsive mutation C471Y of ALAS2, the compound heterozygous mutations W87X and I143Pfs146X, and the homozygous mutation R134C of SLC25A38 were found for the first time. Our findings add to the number of reported cases of this rare disease and to the CSA pathogenic mutation database, and expand the phenotypic profile of mitochondrial DNA deletion mutations. This work also demonstrates the application of a congenital blood disease assay and targeted capture sequencing for the genetic screening analysis and diagnosis of heterogeneous genetic CSA.

  1. Characterization of functional methylomes by next-generation capture sequencing identifies novel disease-associated variants

    Science.gov (United States)

    Allum, Fiona; Shao, Xiaojian; Guénard, Frédéric; Simon, Marie-Michelle; Busche, Stephan; Caron, Maxime; Lambourne, John; Lessard, Julie; Tandre, Karolina; Hedman, Åsa K.; Kwan, Tony; Ge, Bing; Rönnblom, Lars; McCarthy, Mark I.; Deloukas, Panos; Richmond, Todd; Burgess, Daniel; Spector, Timothy D.; Tchernof, André; Marceau, Simon; Lathrop, Mark; Vohl, Marie-Claude; Pastinen, Tomi; Grundberg, Elin; Ahmadi, Kourosh R.; Ainali, Chrysanthi; Barrett, Amy; Bataille, Veronique; Bell, Jordana T.; Buil, Alfonso; Dermitzakis, Emmanouil T.; Dimas, Antigone S.; Durbin, Richard; Glass, Daniel; Hassanali, Neelam; Ingle, Catherine; Knowles, David; Krestyaninova, Maria; Lindgren, Cecilia M.; Lowe, Christopher E.; Meduri, Eshwar; di Meglio, Paola; Min, Josine L.; Montgomery, Stephen B.; Nestle, Frank O.; Nica, Alexandra C.; Nisbet, James; O'Rahilly, Stephen; Parts, Leopold; Potter, Simon; Sandling, Johanna; Sekowska, Magdalena; Shin, So-Youn; Small, Kerrin S.; Soranzo, Nicole; Surdulescu, Gabriela; Travers, Mary E.; Tsaprouni, Loukia; Tsoka, Sophia; Wilk, Alicja; Yang, Tsun-Po; Zondervan, Krina T.

    2015-01-01

    Most genome-wide methylation studies (EWAS) of multifactorial disease traits use targeted arrays or enrichment methodologies preferentially covering CpG-dense regions, to characterize sufficiently large samples. To overcome this limitation, we present here a new customizable, cost-effective approach, methylC-capture sequencing (MCC-Seq), for sequencing functional methylomes, while simultaneously providing genetic variation information. To illustrate MCC-Seq, we use whole-genome bisulfite sequencing on adipose tissue (AT) samples and public databases to design AT-specific panels. We establish its efficiency for high-density interrogation of methylome variability by systematic comparisons with other approaches and demonstrate its applicability by identifying novel methylation variation within enhancers strongly correlated to plasma triglyceride and HDL-cholesterol, including at CD36. Our more comprehensive AT panel assesses tissue methylation and genotypes in parallel at ∼4 and ∼3 M sites, respectively. Our study demonstrates that MCC-Seq provides comparable accuracy to alternative approaches but enables more efficient cataloguing of functional and disease-relevant epigenetic and genetic variants for large-scale EWAS. PMID:26021296

  2. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    Science.gov (United States)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures as feature descriptions generated frame by frame for each gesture of the alphabet. The recognition algorithm takes as input a video sequence (a sequence of frames) to be labeled, and either puts each frame of the sequence in correspondence with a gesture from the database or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, consecutive frames labeled with the same gesture are grouped into a single static gesture. We propose a combined method for segmenting a frame using the depth map and the RGB image. The primary segmentation is based on the depth map: it gives information about the hand's position and yields a rough hand border. Then, based on the color image, the border is refined and the shape of the hand is analyzed. A continuous-skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of the American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
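The primary depth-based segmentation step described above can be sketched as a simple nearest-object threshold on the depth map; the fixed 150 mm band and the toy depth values below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def segment_hand(depth_map, band_mm=150):
    """Rough hand segmentation from a depth map, assuming the hand is the
    object closest to the sensor (the band width is illustrative)."""
    valid = depth_map > 0                      # 0 = missing depth reading
    nearest = depth_map[valid].min()           # closest valid depth
    # Keep pixels within a fixed band behind the nearest point.
    mask = valid & (depth_map <= nearest + band_mm)
    return mask

# Toy 4x4 depth map in millimetres (0 = no measurement): the hand occupies
# the near region around 800 mm, the background sits around 2000 mm.
depth = np.array([[800, 810, 2000,    0],
                  [805, 820, 2100, 2050],
                  [2000, 2000, 2000, 2000],
                  [0, 2000, 2000, 2000]])
mask = segment_hand(depth)   # True only on the four near "hand" pixels
```

In the paper's pipeline this rough mask would then be refined against the RGB image before skeleton features are computed.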

  3. Chromosome conformation capture uncovers potential genome-wide interactions between human conserved non-coding sequences.

    Directory of Open Access Journals (Sweden)

    Daniel Robyr

Full Text Available Comparative analyses of various mammalian genomes have identified numerous conserved non-coding (CNC) DNA elements that display striking conservation among species, suggesting that they have maintained specific functions throughout evolution. CNC function remains poorly understood, although recent studies have identified a role in gene regulation. We hypothesized that the identification of genomic loci that interact physically with CNCs would provide information on their functions. We have used circular chromosome conformation capture (4C) to characterize interactions of 10 CNCs from human chromosome 21 in K562 cells. The data provide evidence that CNCs are capable of interacting with loci that are enriched for CNCs. The number of trans interactions varies among CNCs; some show interactions with many loci, while others interact with few. Some of the tested CNCs are capable of driving the expression of a reporter gene in the mouse embryo, and associate with the oligodendrocyte genes OLIG1 and OLIG2. Our results underscore the power of chromosome conformation capture for the identification of targets of functional DNA elements and raise the possibility that CNCs exert their functions by physical association with defined genomic regions enriched in CNCs. These CNC-CNC interactions may in part explain their stringent conservation as a group of regulatory sequences.

  4. Automatic real-time tracking of fetal mouth in fetoscopic video sequence for supporting fetal surgeries

    Science.gov (United States)

    Xu, Rong; Xie, Tianliang; Ohya, Jun; Zhang, Bo; Sato, Yoshinobu; Fujie, Masakatsu G.

    2013-03-01

Recently, a minimally invasive surgery (MIS) called fetoscopic tracheal occlusion (FETO) was developed to treat severe congenital diaphragmatic hernia (CDH) via fetoscopy, in which a detachable balloon is placed into the fetal trachea to prevent pulmonary hypoplasia by increasing the pressure of the chest cavity. This surgery is risky enough that a supporting system for surgical navigation is deemed necessary. In this paper, to guide a surgical tool to be inserted into the fetal trachea, an automatic approach is proposed to detect and track the fetal face and mouth in fetoscopic video sequences. More specifically, the AdaBoost algorithm is utilized as a classifier to detect the fetal face based on Haar-like features, which calculate the difference between the sums of the pixel intensities in adjacent regions at specific locations in a detection window. Then, the CamShift algorithm, based on an iterative search in a color histogram, is applied to track the fetal face, and the fetal mouth is fitted by an ellipse detected via an improved iterative randomized Hough transform. The experimental results demonstrate that the proposed automatic approach can accurately detect and track the fetal face and mouth in real time in a fetoscopic video sequence, as well as provide effective and timely feedback to the robot control system of the surgical tool for FETO surgeries.
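The Haar-like features mentioned above reduce to differences of rectangular pixel sums, which an integral image makes computable in constant time per feature. A minimal sketch (the 4x4 toy image and the horizontal two-rectangle layout are illustrative, not the paper's trained feature set):

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a zero row/column so rectangle
    sums need no edge-case handling."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] via four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Horizontal two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Toy image: bright 2x2 block on the left, dark 2x2 block on the right.
img = np.array([[5, 5, 1, 1],
                [5, 5, 1, 1]])
ii = integral_image(img)
feature = two_rect_feature(ii, 0, 0, 2, 4)   # 20 - 4 = 16
```

AdaBoost then combines many weak classifiers, each thresholding one such feature, into the face detector.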

  5. Sequence capture using RAD probes clarifies phylogenetic relationships and species boundaries in Primula sect. Auricula.

    Science.gov (United States)

    Boucher, F C; Casazza, G; Szövényi, P; Conti, E

    2016-11-01

    Species-rich evolutionary radiations are a common feature of mountain floras worldwide. However, the frequent lack of phylogenetic resolution in species-rich alpine plant groups hampers progress towards clarifying the causes of diversification in mountains. In this study, we use the largest plant group endemic to the European Alpine system, Primula sect. Auricula, as a model system. We employ a newly developed next-generation-sequencing protocol, involving sequence capture with RAD probes, and map reads to the reference genome of Primula veris to obtain DNA matrices with thousands of SNPs. We use these data-rich matrices to infer phylogenetic relationships in Primula sect. Auricula and examine species delimitations in two taxonomically difficult subgroups: the clades formed by the close relatives of P. auricula and P. pedemontana, respectively. Our molecular dataset enables us to resolve most phylogenetic relationships in the group with strong support, and in particular to infer four well-supported clades within sect. Auricula. Our results support existing species delimitations for P. auricula, P. lutea, and P. subpyrenaica, while they suggest that the group formed by P. pedemontana and close relatives might need taxonomic revision. Finally, we discuss preliminary implications of these findings on the biogeographic history of Primula sect. Auricula. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Pulling out the 1%: whole-genome capture for the targeted enrichment of ancient DNA sequencing libraries.

    Science.gov (United States)

    Carpenter, Meredith L; Buenrostro, Jason D; Valdiosera, Cristina; Schroeder, Hannes; Allentoft, Morten E; Sikora, Martin; Rasmussen, Morten; Gravel, Simon; Guillén, Sonia; Nekhrizov, Georgi; Leshtakov, Krasimir; Dimitrova, Diana; Theodossiev, Nikola; Pettener, Davide; Luiselli, Donata; Sandoval, Karla; Moreno-Estrada, Andrés; Li, Yingrui; Wang, Jun; Gilbert, M Thomas P; Willerslev, Eske; Greenleaf, William J; Bustamante, Carlos D

    2013-11-07

Most ancient specimens contain very low levels of endogenous DNA, precluding the shotgun sequencing of many interesting samples because of cost. Ancient DNA (aDNA) libraries often contain <1% endogenous DNA, with the majority of sequencing capacity taken up by environmental DNA. Here we present a capture-based method for enriching the endogenous component of aDNA sequencing libraries. By using biotinylated RNA baits transcribed from genomic DNA libraries, we are able to capture DNA fragments from across the human genome. We demonstrate this method on libraries created from four Iron Age and Bronze Age human teeth from Bulgaria, as well as bone samples from seven Peruvian mummies and a Bronze Age hair sample from Denmark. Prior to capture, shotgun sequencing of these libraries yielded an average of 1.2% of reads mapping to the human genome (including duplicates). After capture, this fraction increased substantially, with up to 59% of reads mapped to human and enrichment ranging from 6- to 159-fold. Furthermore, we maintained coverage of the majority of regions sequenced in the precapture library. Intersection with the 1000 Genomes Project reference panel yielded an average of 50,723 SNPs (range 3,062-147,243) for the postcapture libraries sequenced with 1 million reads, compared with 13,280 SNPs (range 217-73,266) for the precapture libraries, increasing resolution in population genetic analyses. Our whole-genome capture approach makes it less costly to sequence aDNA from specimens containing very low levels of endogenous DNA, enabling the analysis of larger numbers of samples. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
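The enrichment figures above follow from simple arithmetic on the on-target read fractions before and after capture; the sketch below just illustrates that calculation with the reported 1.2% and 59% mapping rates.

```python
def enrichment_fold(pre_frac, post_frac):
    """Fold enrichment of on-target reads: ratio of the post-capture to
    pre-capture fraction of reads mapping to the target genome."""
    return post_frac / pre_frac

# 1.2% human-mapped reads pre-capture vs. 59% post-capture (best case above).
fold = enrichment_fold(0.012, 0.59)   # roughly 49-fold, within the 6-159x range
```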

  7. Predicting human activities in sequences of actions in RGB-D videos

    Science.gov (United States)

    Jardim, David; Nunes, Luís.; Dias, Miguel

    2017-03-01

    In our daily activities we perform prediction or anticipation when interacting with other humans or with objects. Prediction of human activity made by computers has several potential applications: surveillance systems, human computer interfaces, sports video analysis, human-robot-collaboration, games and health-care. We propose a system capable of recognizing and predicting human actions using supervised classifiers trained with automatically labeled data evaluated in our human activity RGB-D dataset (recorded with a Kinect sensor) and using only the position of the main skeleton joints to extract features. Using conditional random fields (CRFs) to model the sequential nature of actions in a sequence has been used before, but where other approaches try to predict an outcome or anticipate ahead in time (seconds), we try to predict what will be the next action of a subject. Our results show an activity prediction accuracy of 89.9% using an automatically labeled dataset.
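As a rough illustration of features computed "using only the position of the main skeleton joints", one common choice (assumed here; the abstract does not specify the exact feature set) is root-relative joint positions plus pairwise joint distances:

```python
import numpy as np

def frame_features(joints):
    """Per-frame feature vector from 3-D skeleton joints: positions relative
    to a root joint (translation-invariant) plus all pairwise distances.
    Assumes joint 0 is the torso/root; this layout is illustrative."""
    joints = np.asarray(joints, dtype=float)
    rel = (joints - joints[0]).ravel()            # root-relative coordinates
    i, j = np.triu_indices(len(joints), k=1)      # all unordered joint pairs
    dists = np.linalg.norm(joints[i] - joints[j], axis=1)
    return np.concatenate([rel, dists])

# Three toy joints: torso at the origin, a hand, and the head.
feats = frame_features([[0, 0, 0], [1, 0, 0], [0, 2, 0]])
# 3 joints x 3 coords + 3 pairwise distances = 12 features per frame
```

Sequences of such per-frame vectors would then feed the CRF that models the action order.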

  8. “First-person view” of pathogen transmission and hand hygiene – use of a new head-mounted video capture and coding tool

    Directory of Open Access Journals (Sweden)

    Lauren Clack

    2017-10-01

Full Text Available Abstract Background Healthcare workers’ hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. Methods A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO ‘Five Moments for Hand Hygiene’. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Results Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW’s own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside the patient zone, i.e. “colonization events”, and 217 from any surface to critical sites, i.e. “infection events”. Hand hygiene occurred 97 times, 14 times (5% adherence) at colonization events and three times (1% adherence) at infection events. On average, hand rubbing lasted 13 ± 9 s. Conclusions The abundance of HSE underscores the central role of hands in the spread of potential pathogens, while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care.

  9. "First-person view" of pathogen transmission and hand hygiene - use of a new head-mounted video capture and coding tool.

    Science.gov (United States)

    Clack, Lauren; Scotoni, Manuela; Wolfensberger, Aline; Sax, Hugo

    2017-01-01

    Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW's own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care that may help to design

  10. Combined hybridization capture and shotgun sequencing for ancient DNA analysis of extinct wild and domestic dromedary camel.

    Science.gov (United States)

    Mohandesan, Elmira; Speller, Camilla F; Peters, Joris; Uerpmann, Hans-Peter; Uerpmann, Margarethe; De Cupere, Bea; Hofreiter, Michael; Burger, Pamela A

    2017-03-01

    The performance of hybridization capture combined with next-generation sequencing (NGS) has seen limited investigation with samples from hot and arid regions until now. We applied hybridization capture and shotgun sequencing to recover DNA sequences from bone specimens of ancient-domestic dromedary (Camelus dromedarius) and its extinct ancestor, the wild dromedary from Jordan, Syria, Turkey and the Arabian Peninsula, respectively. Our results show that hybridization capture increased the percentage of mitochondrial DNA (mtDNA) recovery by an average 187-fold and in some cases yielded virtually complete mitochondrial (mt) genomes at multifold coverage in a single capture experiment. Furthermore, we tested the effect of hybridization temperature and time by using a touchdown approach on a limited number of samples. We observed no significant difference in the number of unique dromedary mtDNA reads retrieved with the standard capture compared to the touchdown method. In total, we obtained 14 partial mitochondrial genomes from ancient-domestic dromedaries with 17-95% length coverage and 1.27-47.1-fold read depths for the covered regions. Using whole-genome shotgun sequencing, we successfully recovered endogenous dromedary nuclear DNA (nuDNA) from domestic and wild dromedary specimens with 1-1.06-fold read depths for covered regions. Our results highlight that despite recent methodological advances, obtaining ancient DNA (aDNA) from specimens recovered from hot, arid environments is still problematic. Hybridization protocols require specific optimization, and samples at the limit of DNA preservation need multiple replications of DNA extraction and hybridization capture as has been shown previously for Middle Pleistocene specimens. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.

  11. The next generation of target capture technologies - large DNA fragment enrichment and sequencing determines regional genomic variation of high complexity.

    Science.gov (United States)

    Dapprich, Johannes; Ferriola, Deborah; Mackiewicz, Kate; Clark, Peter M; Rappaport, Eric; D'Arcy, Monica; Sasson, Ariella; Gai, Xiaowu; Schug, Jonathan; Kaestner, Klaus H; Monos, Dimitri

    2016-07-09

The ability to capture and sequence large contiguous DNA fragments represents a significant advancement towards the comprehensive characterization of complex genomic regions. While emerging sequencing platforms are capable of producing reads several kilobases in length, the fragment sizes generated by current DNA target enrichment technologies remain a limiting factor, producing DNA fragments generally shorter than 1 kbp. The DNA enrichment methodology described herein, Region-Specific Extraction (RSE), produces DNA segments in excess of 20 kbp in length. Coupling this enrichment method to appropriate sequencing platforms will significantly enhance the ability to generate complete and accurate sequence characterization of any genomic region without the need for reference-based assembly. RSE is a long-range DNA target capture methodology that relies on the specific hybridization of short (20-25 base) oligonucleotide primers to selected sequence motifs within the DNA target region. These capture primers are then enzymatically extended on the 3'-end, incorporating biotinylated nucleotides into the DNA. Streptavidin-coated beads are subsequently used to pull down the original, long DNA template molecules via the newly synthesized, biotinylated DNA that is bound to them. We demonstrate the accuracy, simplicity and utility of the RSE method by capturing and sequencing a 4 Mbp stretch of the major histocompatibility complex (MHC). Our results show an average depth of coverage of 164X for the entire MHC. This depth of coverage contributes significantly to a 99.94 % total coverage of the targeted region and to an accuracy that is over 99.99 %. RSE represents a cost-effective target enrichment method capable of producing sequencing templates in excess of 20 kbp in length. Our method has been shown to generate superior coverage across the MHC as compared with other commercially available methodologies, with the added advantage of producing longer sequencing

  12. A multi scale motion saliency method for keyframe extraction from motion capture sequences

    OpenAIRE

    Halit, Cihan

    2010-01-01

Ankara : The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2010. Thesis (Master's) -- Bilkent University, 2010. Includes bibliographical references leaves 47-50. Motion capture is an increasingly popular animation technique; however data acquired by motion capture can become substantial. This makes it difficult to use motion capture data in a number of applications, such as motion editing, motion understanding, automati...

  13. Applications of targeted gene capture and next-generation sequencing technologies in studies of human deafness and other genetic disabilities.

    Science.gov (United States)

    Lin, Xi; Tang, Wenxue; Ahmad, Shoeb; Lu, Jingqiao; Colby, Candice C; Zhu, Jason; Yu, Qing

    2012-06-01

    The goal of sequencing the entire human genome for $1000 is almost in sight. However, the total costs including DNA sequencing, data management, and analysis to yield a clear data interpretation are unlikely to be lowered significantly any time soon to make studies on a population scale and daily clinical uses feasible. Alternatively, the targeted enrichment of specific groups of disease and biological pathway-focused genes and the capture of up to an entire human exome (~1% of the genome) allowing an unbiased investigation of the complete protein-coding regions in the genome are now routine. Targeted gene capture followed by sequencing with massively parallel next-generation sequencing (NGS) has the advantages of 1) significant cost saving, 2) higher sequencing accuracy because of deeper achievable coverage, 3) a significantly shorter turnaround time, and 4) a more feasible data set for a bioinformatic analysis outcome that is functionally interpretable. Gene capture combined with NGS has allowed a much greater number of samples to be examined than is currently practical with whole-genome sequencing. Such an approach promises to bring a paradigm shift to biomedical research of Mendelian disorders and their clinical diagnoses, ultimately enabling personalized medicine based on one's genetic profile. In this review, we describe major methodologies currently used for gene capture and detection of genetic variations by NGS. We will highlight applications of this technology in studies of genetic disorders and discuss issues pertaining to applications of this powerful technology in genetic screening and the discovery of genes implicated in syndromic and non-syndromic hearing loss. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Efficient cross-species capture hybridization and next-generation sequencing of mitochondrial genomes from noninvasively sampled museum specimens

    Science.gov (United States)

    Mason, Victor C.; Li, Gang; Helgen, Kristofer M.; Murphy, William J.

    2011-01-01

    The ability to uncover the phylogenetic history of recently extinct species and other species known only from archived museum material has rapidly improved due to the reduced cost and increased sequence capacity of next-generation sequencing technologies. One limitation of these approaches is the difficulty of isolating and sequencing large, orthologous DNA regions across multiple divergent species, which is exacerbated for museum specimens, where DNA quality varies greatly between samples and contamination levels are often high. Here we describe the use of cross-species DNA capture hybridization techniques and next-generation sequencing to selectively isolate and sequence partial to full-length mitochondrial DNA genomes from the degraded DNA of museum specimens, using probes generated from the DNA of a single extant species. We demonstrate our approach on specimens from an enigmatic gliding mammal, the Sunda colugo, which is widely distributed throughout Southeast Asia. We isolated DNA from 13 colugo specimens collected 47–170 years ago, and successfully captured and sequenced mitochondrial DNA from every specimen, frequently recovering fragments with 10%–13% sequence divergence from the capture probe sequence. Phylogenetic results reveal deep genetic divergence among colugos, both within and between the islands of Borneo and Java, as well as between the Malay Peninsula and different Sundaic islands. Our method is based on noninvasive sampling of minute amounts of soft tissue material from museum specimens, leaving the original specimen essentially undamaged. This approach represents a paradigm shift away from standard PCR-based approaches for accessing population genetic and phylogenomic information from poorly known and difficult-to-study species. PMID:21880778

  15. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    Science.gov (United States)

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  16. Comparison of Exome and Genome Sequencing Technologies for the Complete Capture of Protein-Coding Regions

    NARCIS (Netherlands)

    Lelieveld, S.H.; Spielmann, M.; Mundlos, S.; Veltman, J.A.; Gilissen, C.

    2015-01-01

    For next-generation sequencing technologies, sufficient base-pair coverage is the foremost requirement for the reliable detection of genomic variants. We investigated whether whole-genome sequencing (WGS) platforms offer improved coverage of coding regions compared with whole-exome sequencing (WES)

  17. A Macro-Observation Scheme for Abnormal Event Detection in Daily-Life Video Sequences

    Directory of Open Access Journals (Sweden)

    Chiu Wei-Yao

    2010-01-01

Full Text Available Abstract We propose a macro-observation scheme for abnormal event detection in daily life. The proposed macro-observation representation records the time-space energy of the motions of all moving objects in a scene without segmenting individual object parts. The energy history of each pixel in the scene is instantly updated with exponential weights without explicitly specifying the duration of each activity. Since possible activities in daily life are numerous and distinct from each other, and not all abnormal events can be foreseen, images from a video sequence spanning sufficient repetitions of normal day-to-day activities are first randomly sampled. A constrained clustering model is proposed to partition the sampled images into groups. A newly observed event that is distant from all of the cluster centroids is then classified as an anomaly. The proposed method has been evaluated on the daily work of a laboratory and on the BEHAVE benchmark dataset. The experimental results reveal that it can reliably detect abnormal events such as burglary and fighting as long as they last for a sufficient duration. The proposed method can be used as a support system for scenes that would otherwise require full-time monitoring personnel.
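The per-pixel energy history with exponential weights can be sketched as a simple recursive update over frame differences; the decay factor alpha below is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def update_energy(energy, prev_frame, frame, alpha=0.9):
    """Exponentially weighted per-pixel motion energy: old energy decays by
    alpha, new motion (absolute frame difference) enters with weight 1-alpha,
    so no per-activity duration ever needs to be specified."""
    motion = np.abs(frame.astype(float) - prev_frame.astype(float))
    return alpha * energy + (1.0 - alpha) * motion

# Toy 2x2 frames: one pixel changes by 10 between frames.
f0 = np.zeros((2, 2))
f1 = np.array([[10.0, 0.0], [0.0, 0.0]])
e = np.zeros((2, 2))
e = update_energy(e, f0, f1)   # energy appears where motion occurred
e = update_energy(e, f1, f1)   # no motion: energy decays toward zero
```

Sampled energy maps like `e` would then be clustered, and events far from every centroid flagged as anomalies.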

  18. INTEGRATED GEOREFERENCING OF STEREO IMAGE SEQUENCES CAPTURED WITH A STEREOVISION MOBILE MAPPING SYSTEM – APPROACHES AND PRACTICAL RESULTS

    Directory of Open Access Journals (Sweden)

    H. Eugster

    2012-07-01

Full Text Available Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies imagery can be captured at high data rates resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations – in our case of the imaging sensors – normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  19. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    Science.gov (United States)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies imagery can be captured at high data rates resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  20. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    Science.gov (United States)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural network based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.
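The second method's non-linear regression can be illustrated with a toy fit from an objective metric score to subjective ratings; the synthetic data and the quadratic form below are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

# Synthetic training data: objective metric scores and pretend subjective
# mean opinion scores that happen to follow a quadratic law exactly.
metric = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
mos = metric ** 2

# Non-linear regression via linear least squares on polynomial features:
# mos ≈ c0 + c1*metric + c2*metric^2.
X = np.column_stack([np.ones_like(metric), metric, metric ** 2])
coef, *_ = np.linalg.lstsq(X, mos, rcond=None)

def predict(m):
    """Predicted subjective score for a new objective metric value."""
    return coef[0] + coef[1] * m + coef[2] * m ** 2
```

In a real VQA setting `metric` would be a vector of several HVS-based metrics per video and `mos` the ratings from the subjective-experiment database.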

  1. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

The process of digital capture, editing, and archiving of video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. Techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  2. Hybridization Capture-Based Next-Generation Sequencing to Evaluate Coding Sequence and Deep Intronic Mutations in the NF1 Gene

    Directory of Open Access Journals (Sweden)

    Karin Soares Cunha

    2016-12-01

    Full Text Available Neurofibromatosis 1 (NF1) is one of the most common genetic disorders and is caused by mutations in the NF1 gene. NF1 gene mutational analysis presents a considerable challenge because of its large size, existence of highly homologous pseudogenes located throughout the human genome, absence of mutational hotspots, and diversity of mutation types, including deep intronic splicing mutations. We aimed to evaluate the use of hybridization capture-based next-generation sequencing to screen coding and noncoding NF1 regions. Hybridization capture-based next-generation sequencing, with genomic DNA as starting material, was used to sequence the whole NF1 gene (exons and introns) from 11 unrelated individuals and 1 relative, who all had NF1. All of them met the NF1 clinical diagnostic criteria. We showed a mutation detection rate of 91% (10 out of 11). We identified eight recurrent and two novel mutations, which were all confirmed by Sanger methodology. In the Sanger sequencing confirmation, we also included another three relatives with NF1. Splicing alterations accounted for 50% of the mutations. One of them was caused by a deep intronic mutation (c.1260 + 1604A > G). Frameshift truncation and missense mutations corresponded to 30% and 20% of the pathogenic variants, respectively. In conclusion, we show the use of a simple and fast approach to screen, at once, the entire NF1 gene (exons and introns) for different types of pathogenic variations, including the deep intronic splicing mutations.

  3. Subjective quality of video sequences rendered on LCD with local backlight dimming at different lighting conditions

    DEFF Research Database (Denmark)

    Mantel, Claire; Korhonen, Jari; Pedersen, Jesper Mørkhøj

    2015-01-01

    This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low...

  4. Model-free 3D face shape reconstruction from video sequences

    NARCIS (Netherlands)

    van Dam, C.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    In forensic comparison of facial video data, often only the best quality frontal face frames are selected, and hence much video data is ignored. To improve 2D facial comparison for law enforcement and forensic investigation, we introduce a model-free 3D shape reconstruction algorithm based on 2D

  5. Landmark-based model-free 3D face shape reconstruction from video sequences

    NARCIS (Netherlands)

    van Dam, C.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Broemme, A.; Busch, C.

    2013-01-01

    In forensic comparison of facial video data, often only the best quality frontal face frames are selected, and hence potentially useful video data is ignored. To improve 2D facial comparison for law enforcement and forensic investigation, we introduce a model-free 3D shape reconstruction algorithm

  6. Low cost sequencing of mitogenomes from museum samples using baits capture and Ion Torrent

    NARCIS (Netherlands)

    Kollias, Spyros; Poortvliet, Marloes; Smolina, Irina; Hoarau, Galice

    The development of various target enrichment methods in combination with next generation sequencing techniques has greatly facilitated the use of partially degraded DNA samples in genetic studies. We employed the MYbaits target enrichment system in combination with Ion Torrent sequencing on a broad

  7. Using Next-Generation Sequencing for DNA Barcoding: Capturing Allelic Variation in ITS2.

    Science.gov (United States)

    Batovska, Jana; Cogan, Noel O I; Lynch, Stacey E; Blacket, Mark J

    2017-01-05

    Internal Transcribed Spacer 2 (ITS2) is a popular DNA barcoding marker; however, in some animal species it is hypervariable and therefore difficult to sequence with traditional methods. With next-generation sequencing (NGS) it is possible to sequence all gene variants despite the presence of single nucleotide polymorphisms (SNPs), insertions/deletions (indels), homopolymeric regions, and microsatellites. Our aim was to compare the performance of Sanger sequencing and NGS amplicon sequencing in characterizing ITS2 in 26 mosquito species represented by 88 samples. The suitability of ITS2 as a DNA barcoding marker for mosquitoes, and its allelic diversity in individuals and species, was also assessed. Compared to Sanger sequencing, NGS was able to characterize the ITS2 region to a greater extent, with resolution within and between individuals and species that was previously not possible. A total of 382 unique sequences (alleles) were generated from the 88 mosquito specimens, demonstrating the diversity present that has been overlooked by traditional sequencing methods. Multiple indels and microsatellites were present in the ITS2 alleles, which were often specific to species or genera, causing variation in sequence length. As a barcoding marker, ITS2 was able to separate all of the species, apart from members of the Culex pipiens complex, providing the same resolution as the commonly used Cytochrome Oxidase I (COI). The ability to cost-effectively sequence hypervariable markers makes NGS an invaluable tool with many applications in the DNA barcoding field, and provides insights into the limitations of previous studies and techniques. Copyright © 2017 Batovska et al.

  8. Using Next-Generation Sequencing for DNA Barcoding: Capturing Allelic Variation in ITS2

    Directory of Open Access Journals (Sweden)

    Jana Batovska

    2017-01-01

    Full Text Available Internal Transcribed Spacer 2 (ITS2) is a popular DNA barcoding marker; however, in some animal species it is hypervariable and therefore difficult to sequence with traditional methods. With next-generation sequencing (NGS) it is possible to sequence all gene variants despite the presence of single nucleotide polymorphisms (SNPs), insertions/deletions (indels), homopolymeric regions, and microsatellites. Our aim was to compare the performance of Sanger sequencing and NGS amplicon sequencing in characterizing ITS2 in 26 mosquito species represented by 88 samples. The suitability of ITS2 as a DNA barcoding marker for mosquitoes, and its allelic diversity in individuals and species, was also assessed. Compared to Sanger sequencing, NGS was able to characterize the ITS2 region to a greater extent, with resolution within and between individuals and species that was previously not possible. A total of 382 unique sequences (alleles) were generated from the 88 mosquito specimens, demonstrating the diversity present that has been overlooked by traditional sequencing methods. Multiple indels and microsatellites were present in the ITS2 alleles, which were often specific to species or genera, causing variation in sequence length. As a barcoding marker, ITS2 was able to separate all of the species, apart from members of the Culex pipiens complex, providing the same resolution as the commonly used Cytochrome Oxidase I (COI). The ability to cost-effectively sequence hypervariable markers makes NGS an invaluable tool with many applications in the DNA barcoding field, and provides insights into the limitations of previous studies and techniques.
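    The allele tallying behind figures such as the 382 unique sequences can be sketched as a simple read-collapsing step. The reads below are toy strings, and the singleton filter is an illustrative heuristic, not the authors' pipeline.

```python
# Sketch: collapsing NGS amplicon reads into unique ITS2 alleles per specimen.
from collections import Counter

reads = [
    "ACGTACGTTT", "ACGTACGTTT", "ACGTACG--T",   # toy specimen reads
    "ACGTACGTTT", "ACGAACGTTT",
]

alleles = Counter(reads)
# discard singletons, which are often sequencing errors rather than true alleles
true_alleles = {seq: n for seq, n in alleles.items() if n >= 2}
print(len(true_alleles))  # → 1
```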

  9. Use of Authentic Videos in the Classroom : Capturing Real-life Language from Closed-captioned Films

    OpenAIRE

    角山,照彦

    1997-01-01

    "This paper is intended to examine the use of "authentic" texts in the classroom, focusing on film videos as an example of such texts. After briefly discussing the definition of "authenticity",this paper examines why authentic texts are advantageous and what specific problems they might pose to both teachers and students. Then,feedback from the students,with regard to their material preference,will be presented,which indicated their strong preference for authentic materials and the difficulty...

  10. An innovative approach for assessing the ergonomic risks of lifting tasks using a video motion capture system

    OpenAIRE

    Wilson, Rhoda M.

    2006-01-01

    Human Systems Integration Report Low back pain (LBP) and work-related musculoskeletal disorders (WMSDs) can lead to employee absenteeism, sick leave, and permanent disability. Over the years, much work has been done in examining physical exposure to ergonomic risks. The current research presents a new approach for assessing WMSD risk during lifting related tasks that combines traditional observational methods with video recording methods. One particular application area, the Future Com...

  11. Next-generation sequencing for the diagnosis of hereditary breast and ovarian cancer using genomic capture targeting multiple candidate genes.

    Science.gov (United States)

    Castéra, Laurent; Krieger, Sophie; Rousselin, Antoine; Legros, Angélina; Baumann, Jean-Jacques; Bruet, Olivia; Brault, Baptiste; Fouillet, Robin; Goardon, Nicolas; Letac, Olivier; Baert-Desurmont, Stéphanie; Tinat, Julie; Bera, Odile; Dugast, Catherine; Berthet, Pascaline; Polycarpe, Florence; Layet, Valérie; Hardouin, Agnes; Frébourg, Thierry; Vaur, Dominique

    2014-11-01

    To optimize the molecular diagnosis of hereditary breast and ovarian cancer (HBOC), we developed a next-generation sequencing (NGS)-based screening based on the capture of a panel of genes involved, or suspected to be involved in HBOC, on pooling of indexed DNA and on paired-end sequencing in an Illumina GAIIx platform, followed by confirmation by Sanger sequencing or MLPA/QMPSF. The bioinformatic pipeline included CASAVA, NextGENe, CNVseq and Alamut-HT. We validated this procedure by the analysis of 59 patients' DNAs harbouring SNVs, indels or large genomic rearrangements of BRCA1 or BRCA2. We also conducted a blind study in 168 patients comparing NGS versus Sanger sequencing or MLPA analyses of BRCA1 and BRCA2. All mutations detected by conventional procedures were detected by NGS. We then screened, using three different versions of the capture set, a large series of 708 consecutive patients. We detected in these patients 69 germline deleterious alterations within BRCA1 and BRCA2, and 4 TP53 mutations in 468 patients also tested for this gene. We also found 36 variations inducing either a premature codon stop or a splicing defect among other genes: 5/708 in CHEK2, 3/708 in RAD51C, 1/708 in RAD50, 7/708 in PALB2, 3/708 in MRE11A, 5/708 in ATM, 3/708 in NBS1, 1/708 in CDH1, 3/468 in MSH2, 2/468 in PMS2, 1/708 in BARD1, 1/468 in PMS1 and 1/468 in MLH3. These results demonstrate the efficiency of NGS in performing molecular diagnosis of HBOC. Detection of mutations within other genes than BRCA1 and BRCA2 highlights the genetic heterogeneity of HBOC.

  12. Liquid-phase sequence capture and targeted re-sequencing revealed novel polymorphisms in tomato genes belonging to the MEP carotenoid pathway.

    Science.gov (United States)

    Terracciano, Irma; Cantarella, Concita; Fasano, Carlo; Cardi, Teodoro; Mennella, Giuseppe; D'Agostino, Nunzio

    2017-07-17

    Tomato (Solanum lycopersicum L.) plants are characterized by having a variety of fruit colours that reflect the composition and accumulation of diverse carotenoids in the berries. Carotenoids are extensively studied for their health-promoting effects and this explains the great attention these pigments have received from breeders and researchers worldwide. In this work we applied Agilent's SureSelect liquid-phase sequence capture and Illumina targeted re-sequencing of 34 tomato genes belonging to the methylerythritol phosphate (MEP) carotenoid pathway on a panel of 48 genotypes which differ in carotenoid content calculated as the sum of β-carotene, cis- and trans-lycopene. We targeted 230 kb of genomic regions including all exons and regulatory regions and observed an on-target capture rate of ~40%. We found ample genetic variation among all the genotypes under study and generated an extensive catalog of SNPs/InDels located in both genic and regulatory regions. SNPs/InDels were also classified based on genomic location and putative biological effect. With our work we contributed to the identification of allelic variations possibly underpinning a key agronomic trait in tomato. Results from this study can be exploited for the promotion of novel studies on tomato bio-fortification as well as of breeding programs related to carotenoid accumulation in fruits.

  13. Determination of exterior parameters for video image sequences from helicopter by block adjustment with combined vertical and oblique images

    Science.gov (United States)

    Zhang, Jianqing; Zhang, Yong; Zhang, Zuxun

    2003-09-01

    Determination of image exterior parameters is a key aspect of realizing automatic texture mapping of buildings in the reconstruction of real 3D city models. This paper reports on an application of automatic aerial triangulation to a block with three video image sequences: one vertical image sequence of buildings' roofs and two oblique image sequences of buildings' walls. A new processing procedure is developed to automatically match homologous points between oblique and vertical images. Two strategies are tested. One treats the three strips as independent blocks and executes strip block adjustment for each; the other creates a single block from the three strips, uses the new image matching procedure to extract a large number of tie points, and executes block adjustment. The block adjustment results of the two strategies are also compared.

  14. Isolation and sequencing of active origins of DNA replication by nascent strand capture and release (NSCR

    Directory of Open Access Journals (Sweden)

    Dimiter Kunnev

    2015-11-01

    Full Text Available Nascent strand capture and release (NSCR) is a method for isolating short nascent strands to identify origins of DNA replication. The protocol provided involves isolation of total DNA, denaturation, size fractionation on a sucrose gradient, 5’-biotinylation of the appropriately sized nucleic acids, binding to a streptavidin-coated column or magnetic beads, intensive washing, and specific release of only the RNA-containing chimeric nascent strand DNA using RNase I. The method has been applied to mammalian cells derived from proliferative tissues and cell culture, but could be used for any system where DNA replication is primed by a small RNA resulting in chimeric RNA-DNA molecules.

  15. Organism-specific rRNA capture system for application in next-generation sequencing.

    Directory of Open Access Journals (Sweden)

    Sai-Kam Li

    Full Text Available RNA-sequencing is a powerful tool in studying RNomics. However, highly abundant ribosomal RNA (rRNA) and transfer RNA (tRNA) predominate in the sequencing reads, thereby hindering the study of lowly expressed genes. Therefore, rRNA depletion prior to sequencing is often performed in order to preserve subtle alterations in gene expression, especially those at relatively low expression levels. One of the commercially available methods is to use DNA or RNA probes to hybridize to the target RNAs. However, there is always a concern with non-specific binding and unintended removal of messenger RNA (mRNA) when the same set of probes is applied to different organisms. The degree of such unintended mRNA removal varies among organisms due to organism-specific genomic variation. We developed a computer-based method to design probes to deplete rRNA in an organism-specific manner. Based on the computation results, biotinylated RNA probes were produced by in vitro transcription and were used to perform rRNA depletion with subtractive hybridization. We demonstrated that the designed probes for 16S and 23S rRNAs can efficiently remove rRNAs from Mycobacterium smegmatis. In comparison with a commercial subtractive-hybridization-based rRNA removal kit, using organism-specific probes is better at preserving RNA integrity and abundance. We believe the computer-based design approach can be used as a generic method for preparing RNA of any organism for next-generation sequencing, particularly for the transcriptome analysis of microbes.
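    The organism-specific probe design described above can be sketched as a window-screening step: candidate probe windows along the target rRNA are discarded if they also match the organism's mRNA. The sequences, window length, and exact-substring specificity check below are toy assumptions; a real design would use hybridization thermodynamics and alignment tools.

```python
# Sketch: keep only rRNA probe windows with no match in the mRNA pool.
PROBE_LEN = 8

rrna  = "ACGTACGTTTGGCCAATTGGCC"                  # toy target rRNA
mrnas = ["TTTGGCCAAACG", "GGGGCCCCAAAA"]           # toy transcriptome

def cross_hybridizes(probe, transcripts):
    # crude specificity check: exact substring match against any mRNA
    return any(probe in t for t in transcripts)

probes = []
for i in range(len(rrna) - PROBE_LEN + 1):
    window = rrna[i:i + PROBE_LEN]
    if not cross_hybridizes(window, mrnas):
        probes.append(window)

print(len(probes) > 0)  # → True
```

    Here the window "TTTGGCCA" is rejected because it also occurs in the first toy mRNA, illustrating how organism-specific filtering avoids unintended mRNA removal.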

  16. Persistent Target Tracking Using Likelihood Fusion in Wide-Area and Full Motion Video Sequences

    Science.gov (United States)

    2012-07-01

    problems associated with a moving platform, including gimbal-based stabilization errors, relative motion where sensor and target are both moving, and seams in...

  17. Multiplexed chromosome conformation capture sequencing for rapid genome-scale high-resolution detection of long-range chromatin interactions.

    Science.gov (United States)

    Stadhouders, Ralph; Kolovos, Petros; Brouwer, Rutger; Zuin, Jessica; van den Heuvel, Anita; Kockx, Christel; Palstra, Robert-Jan; Wendt, Kerstin S; Grosveld, Frank; van Ijcken, Wilfred; Soler, Eric

    2013-03-01

    Chromosome conformation capture (3C) technology is a powerful and increasingly popular tool for analyzing the spatial organization of genomes. Several 3C variants have been developed (e.g., 4C, 5C, ChIA-PET, Hi-C), allowing large-scale mapping of long-range genomic interactions. Here we describe multiplexed 3C sequencing (3C-seq), a 4C variant coupled to next-generation sequencing, allowing genome-scale detection of long-range interactions with candidate regions. Compared with several other available techniques, 3C-seq offers a superior resolution (typically single restriction fragment resolution; approximately 1-8 kb on average) and can be applied in a semi-high-throughput fashion. It allows the assessment of long-range interactions of up to 192 genes or regions of interest in parallel by multiplexing library sequencing. This renders multiplexed 3C-seq an inexpensive, quick (total hands-on time of 2 weeks) and efficient method that is ideal for the in-depth analysis of complex genetic loci. The preparation of multiplexed 3C-seq libraries can be performed by any investigator with basic skills in molecular biology techniques. Data analysis requires basic expertise in bioinformatics and in Linux and Python environments. The protocol describes all materials, critical steps and bioinformatics tools required for successful application of 3C-seq technology.

  18. Translational database selection and multiplexed sequence capture for up front filtering of reliable breast cancer biomarker candidates.

    Directory of Open Access Journals (Sweden)

    Patrik L Ståhl

    Full Text Available Biomarker identification is of utmost importance for the development of novel diagnostics and therapeutics. Here we make use of a translational database selection strategy, utilizing data from the Human Protein Atlas (HPA) on differentially expressed protein patterns in healthy and breast cancer tissues as a means to filter out potential biomarkers for underlying genetic causatives of the disease. DNA was isolated from ten breast cancer biopsies, and the protein coding and flanking non-coding genomic regions corresponding to the selected proteins were extracted in a multiplexed format from the samples using a single DNA sequence capture array. Deep sequencing revealed an even enrichment of the multiplexed samples and a great variation of genetic alterations in the tumors of the sampled individuals. Benefiting from the upstream filtering method, the final set of biomarker candidates could be completely verified through bidirectional Sanger sequencing, revealing a 40 percent false positive rate despite high read coverage. Of the variants encountered in translated regions, nine novel non-synonymous variations were identified and verified, two of which were present in more than one of the ten tumor samples.

  19. Extremely low-coverage whole genome sequencing in South Asians captures population genomics information.

    Science.gov (United States)

    Rustagi, Navin; Zhou, Anbo; Watkins, W Scott; Gedvilaite, Erika; Wang, Shuoguo; Ramesh, Naveen; Muzny, Donna; Gibbs, Richard A; Jorde, Lynn B; Yu, Fuli; Xing, Jinchuan

    2017-05-22

    The cost of Whole Genome Sequencing (WGS) has decreased tremendously in recent years due to advances in next-generation sequencing technologies. Nevertheless, the cost of carrying out large-scale cohort studies using WGS is still daunting. Past simulation studies with coverage at ~2x have shown promise for using low-coverage WGS in studies focused on variant discovery, association study replications, and population genomics characterization. However, the performance of low-coverage WGS in populations with a complex history and no reference panel remains to be determined. South Indian populations are known to have a complex population structure and are an example of a major population group that lacks adequate reference panels. To test the performance of extremely low-coverage WGS (EXL-WGS) in populations with a complex history and to provide a reference resource for South Indian populations, we performed EXL-WGS on 185 South Indian individuals from eight populations to ~1.6x coverage. Using two variant discovery pipelines, SNPTools and GATK, we generated a consensus call set that has ~90% sensitivity for identifying common variants (minor allele frequency ≥ 10%). Imputation further improves the sensitivity of our call set. In addition, we obtained high coverage of the whole mitochondrial genome to infer the maternal lineage evolutionary history of the Indian samples. Overall, we demonstrate that EXL-WGS with imputation can be a valuable study design for variant discovery at a dramatically lower cost than standard WGS, even in populations with a complex history and without available reference data. In addition, the South Indian EXL-WGS data generated in this study will provide a valuable resource for future Indian genomic studies.
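    The ~90% sensitivity figure above is the fraction of truth-set common variants recovered by the call set. A minimal sketch of that computation, on toy variant sets (the chromosome/position tuples are invented for illustration):

```python
# Sketch: sensitivity of a low-coverage call set against a truth set of
# common variants (minor allele frequency >= 10%).
truth  = {("chr1", 100), ("chr1", 250), ("chr2", 40), ("chr2", 900), ("chr3", 15)}
called = {("chr1", 100), ("chr1", 250), ("chr2", 40), ("chr3", 15), ("chr3", 999)}

tp = len(truth & called)          # true positives: truth variants recovered
sensitivity = tp / len(truth)
print(sensitivity)  # → 0.8
```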

  20. Evaluation of the DTBird video-system at the Smoela wind-power plant. Detection capabilities for capturing near-turbine avian behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Roel, May; Hamre, Oeyvind; Vang, Roald; Nygaard, Torgeir

    2012-07-01

    Collisions between birds and wind turbines can be a problem at wind-power plants both onshore and offshore, and the presence of endangered bird species or proximity to key functional bird areas can have a major impact on the choice of site or location of wind turbines. There is international consensus that one of the main challenges in the development of measures to reduce bird collisions is the lack of good methods for assessing the efficacy of interventions. In order to be better able to assess the efficacy of mortality-reducing measures, Statkraft wishes to find a system that can be operated under Norwegian conditions and that renders objective and quantitative information on collisions and near-flying birds. DTBird, developed by Liquen Consultoria Ambiental S.L., is such a system, based on video-recording bird flights near turbines during the daylight period (light levels > 200 lux). DTBird is a stand-alone system developed to detect flying birds and to take programmed actions (i.e. warning, dissuasion, collision registration, and turbine stop control) linked to real-time bird detection. This report evaluates how well the DTBird system is able to detect birds in the vicinity of a wind turbine, and assesses to which extent it can be utilized to study near-turbine bird flight behaviour and possible deterrence. The evaluation was based on the video sequences recorded by the DTBird systems installed at turbine 21 and turbine 42 at the Smoela wind-power plant between March 2 2012 and September 30 2012, together with GPS telemetry data on white-tailed eagles and avian radar data. The average number of falsely triggered video sequences (false positive rate) was 1.2 per day, and during daytime the DTBird system recorded between 76% and 96% of all bird flights in the vicinity of the turbines. Visually estimated distances of recorded bird flights in the video sequences were in general assessed to be farther from the turbines compared to the distance settings used within

  1. An innovative experimental sequence on electromagnetic induction and eddy currents based on video analysis and cheap data acquisition

    Science.gov (United States)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2017-11-01

    In this work, we present a coherent sequence of experiments on electromagnetic (EM) induction and eddy currents, appropriate for university undergraduate students, based on a magnet falling through a drilled aluminum disk. The sequence, leveraging on the didactical interplay between the EM and mechanical aspects of the experiments, allows us to exploit the students’ awareness of mechanics to elicit their comprehension of EM phenomena. The proposed experiments feature two kinds of measurements: (i) kinematic measurements (performed by means of high-speed video analysis) give information on the system’s kinematics and, via appropriate numerical data processing, allow us to get dynamic information, in particular on energy dissipation; (ii) induced electromagnetic field (EMF) measurements (by using a homemade multi-coil sensor connected to a cheap data acquisition system) allow us to quantitatively determine the inductive effects of the moving magnet on its neighborhood. The comparison between experimental results and the predictions from an appropriate theoretical model (of the dissipative coupling between the moving magnet and the conducting disk) offers many educational hints on relevant topics related to EM induction, such as Maxwell’s displacement current, magnetic field flux variation, and the conceptual link between induced EMF and induced currents. Moreover, the didactical activity gives students the opportunity to be trained in video analysis, data acquisition and numerical data processing.
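    The numerical processing step, recovering velocity and dissipated energy from frame-by-frame positions of the falling magnet, can be sketched as follows. The trajectory below is synthesized from a simple linear-drag model standing in for tracked video data, and all constants (frame rate, mass, drag coefficient) are illustrative assumptions.

```python
# Sketch: velocity and eddy-current energy dissipation from position data.
import numpy as np

fps = 240.0                       # assumed high-speed camera frame rate
t = np.arange(0, 0.5, 1.0 / fps)
m, g = 0.01, 9.81                 # magnet mass (kg), gravity (m/s^2)

# toy trajectory z(t) for linear drag: m*dv/dt = m*g - k*v
k = 0.2
v_term = m * g / k
z = v_term * (t - (m / k) * (1 - np.exp(-k * t / m)))

v = np.gradient(z, t)             # numerical differentiation of positions
e_kin = 0.5 * m * v ** 2
e_pot = m * g * z
e_dissipated = e_pot - e_kin      # energy lost to eddy currents so far

print(e_dissipated[-1] > 0)       # → True
```

    Comparing the dissipated energy from kinematics with the electrical energy implied by the induced-EMF measurements is exactly the kind of cross-check the didactical interplay above relies on.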

  2. Exome capture sequencing identifies a novel CCM1 mutation in a Chinese family with multiple cerebral cavernous malformations.

    Science.gov (United States)

    Mao, Cheng-Yuan; Yang, Jing; Zhang, Shu-Yu; Luo, Hai-Yang; Song, Bo; Liu, Yu-Tao; Wu, Jun; Sun, Shi-Lei; Yang, Zhi-Hua; Du, Pan; Wang, Yao-He; Shi, Chang-He; Xu, Yu-Ming

    2016-12-01

    Cerebral cavernous malformations (CCMs) are vascular anomalies found predominantly in the central nervous system, but lesions may also occur in other tissues, such as the retina, skin and liver. The main clinical manifestations include seizures, hemorrhage, recurrent headaches and focal neurological deficits. Previous studies of familial CCMs (FCCMs) have mainly been reported in Hispanic and Caucasian cases. Here, we report on FCCMs in a Chinese family, further characterized by a novel CCM1 gene mutation. We investigated the clinical and neuroradiological features of a Chinese family of 30 members. Furthermore, we used exome capture sequencing to identify the causative gene. The CCM1 mRNA expression levels in three patients of the family and 10 wild-type healthy individuals were measured by real-time quantitative polymerase chain reaction (real-time RT-PCR). Brain magnetic resonance imaging demonstrated multiple intracranial lesions in seven members. Clinical manifestations of CCM were found in five of these cases, including recurrent headaches, weakness, hemorrhage and seizures. Moreover, we identified a novel nonsense mutation, c.1159G>T (p.E387*), in the CCM1 gene in the pedigree. Based on the real-time RT-PCR results, we found that the CCM1 mRNA expression level in the three patients was 35% lower than that in the wild-type healthy individuals. Our findings suggest that the novel nonsense mutation c.1159G>T in the CCM1 gene is associated with FCCM, and that CCM1 haploinsufficiency may be the underlying mechanism of CCMs. Furthermore, this study also demonstrates that exome capture sequencing is an efficient and direct diagnostic tool for identifying the causes of genetically heterogeneous diseases.

  3. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and 'walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  4. Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes

    Science.gov (United States)

    Dubreu, Christine; Manzanera, Antoine; Bohain, Eric

    2008-04-01

    As target tracking is arousing more and more interest, the necessity to reliably assess tracking algorithms in any conditions is becoming essential. The evaluation of such algorithms requires a database of sequences representative of the whole range of conditions in which the tracking system is likely to operate, together with its associated ground truth. However, building such a database with real sequences, and collecting the associated ground truth appears to be hardly possible and very time-consuming. Therefore, more and more often, synthetic sequences are generated by complex and heavy simulation platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed using simple synthetic sequences generated without such complex simulation platforms. These sequences are generated from a finite number of discriminating parameters, and are statistically representative, as regards these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used for low-level tracking algorithms evaluation in any operating conditions. The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for evaluation of tracking systems on complex-textured objects, and to show how the number of parameters can be increased to synthesize more elaborated scenes and deal with more complex dynamics, including occlusions and three-dimensional deformations.
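    A minimal example of such parameter-driven, non-photorealistic synthetic sequences is a bright target in constant-velocity motion over background clutter; the parameters and rendering below are illustrative assumptions, not the generator described in the paper.

```python
# Sketch: generating a parameterized synthetic sequence for tracker evaluation,
# with ground truth known by construction (the target's position each frame).
import numpy as np

rng = np.random.default_rng(0)

def make_sequence(n_frames=20, size=64, obj=8, speed=2, noise=0.1):
    frames = []
    for f in range(n_frames):
        img = noise * rng.random((size, size))     # background clutter
        x = (5 + speed * f) % (size - obj)         # constant-velocity motion
        img[20:20 + obj, x:x + obj] = 1.0          # the tracked target
        frames.append(img)
    return np.stack(frames)

seq = make_sequence()
print(seq.shape)  # → (20, 64, 64)
```

    Varying the discriminating parameters (object size, speed, noise level, occlusion schedule) then yields a family of sequences that is statistically representative with respect to those parameters.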

  5. Development and preliminary evaluation of an online educational video about whole-genome sequencing for research participants, patients, and the general public.

    Science.gov (United States)

    Sanderson, Saskia C; Suckiel, Sabrina A; Zweig, Micol; Bottinger, Erwin P; Jabs, Ethylin Wang; Richardson, Lynne D

    2016-05-01

    As whole-genome sequencing (WGS) increases in availability, WGS educational aids are needed for research participants, patients, and the general public. Our aim was therefore to develop an accessible and scalable WGS educational aid. We engaged multiple stakeholders in an iterative process over a 1-year period culminating in the production of a novel 10-minute WGS educational animated video, "Whole Genome Sequencing and You" (https://goo.gl/HV8ezJ). We then presented the animated video to 281 online-survey respondents (the video-information group). There were also two comparison groups: a written-information group (n = 281) and a no-information group (n = 300). In the video-information group, 79% reported the video was easy to understand, satisfaction scores were high (mean 4.00 on a 1-5 scale, where 5 = high satisfaction), and knowledge increased significantly. There were significant differences in knowledge compared with the no-information group but few differences compared with the written-information group. Intention to receive personal results from WGS and decisional conflict in response to a hypothetical scenario did not differ between the three groups. The educational animated video, "Whole Genome Sequencing and You," was well received by this sample of online-survey respondents. Further work is needed to evaluate its utility as an aid to informed decision making about WGS in other populations. Genet Med 18(5): 501-512.

  6. Video Salient Object Detection via Fully Convolutional Networks.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  7. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors, which reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
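
    The spatial-temporal motion LBP and Gabor fusion descriptors are specific to this paper; as a rough illustration of the basic building block only, a plain 8-neighbour LBP texture descriptor can be sketched in NumPy (this is a generic LBP, not the authors' implementation):

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 8-neighbour local binary pattern (radius 1) over a grayscale image."""
    c = img[1:-1, 1:-1]                       # centre pixels
    code = np.zeros_like(c, dtype=np.uint8)
    # neighbour offsets, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram, usable as a per-frame texture descriptor."""
    codes = lbp_8neighbor(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

    In a pipeline like the paper's, such per-frame histograms would be concatenated with other (e.g. Gabor-based) descriptors before classification.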

  8. Detection of distorted frames in retinal video-sequences via machine learning

    Science.gov (United States)

    Kolar, Radim; Liberdova, Ivana; Odstrcilik, Jan; Hracho, Michal; Tornow, Ralf P.

    2017-07-01

    This paper describes the detection of distorted frames in retinal sequences based on a set of global features extracted from each frame. The feature vector is then used in a classification step, in which three types of classifiers are tested. The best classification accuracy, 96%, was achieved with the support vector machine approach.
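
    The abstract does not list the global features used; the following sketch uses invented intensity and gradient statistics and a toy dataset purely to illustrate the feature-vector-plus-SVM classification pipeline:

```python
import numpy as np
from sklearn.svm import SVC

def global_features(frame):
    """Simple global frame descriptors (intensity statistics and gradient
    energy); the actual features used in the paper are not specified here."""
    gy, gx = np.gradient(frame.astype(float))
    return np.array([frame.mean(), frame.std(), (gx**2 + gy**2).mean()])

# synthetic stand-in data: sharp frames vs. flat, "distorted" frames
rng = np.random.default_rng(0)
sharp = [rng.integers(0, 256, (32, 32)) for _ in range(20)]
flat = [np.full((32, 32), 128) + rng.normal(0, 2, (32, 32)) for _ in range(20)]
X = np.array([global_features(f) for f in sharp + flat])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```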

  9. Measuring Sandy Bottom Dynamics by Exploiting Depth from Stereo Video Sequences

    DEFF Research Database (Denmark)

    Musumeci, Rosaria E.; Farinella, Giovanni M.; Foti, Enrico

    2013-01-01

    In this paper an imaging system for measuring sandy bottom dynamics is proposed. The system exploits stereo sequences and projected laser beams to build the 3D shape of the sandy bottom over time. The reconstruction is used by experts in the field to perform accurate measurements and analysis...

  10. Automatic Representation and Segmentation of Video Sequences via a Novel Framework Based on the nD-EVM and Kohonen Networks

    Directory of Open Access Journals (Sweden)

    José-Yovany Luis-García

    2016-01-01

    Full Text Available Recently in the Computer Vision field, video segmentation has become a subject of interest for almost every video application based on scene content. Some of these applications are indexing, surveillance, medical imaging, event analysis, and computer-guided surgery, to name a few. To achieve their goals, these applications need meaningful information about a video sequence in order to understand the events in its corresponding scene. Therefore, we need semantic information, which can be obtained from the objects of interest that are present in the scene. In order to recognize objects, we need to compute features which aid the finding of similarities and dissimilarities, among other characteristics. For this reason, one of the most important tasks for video and image processing is segmentation. The segmentation process consists of separating data into groups that share similar features. Based on this, in this work we propose a novel framework for video representation and segmentation. The main workflow of this framework is the processing of an input frame sequence in order to obtain, as output, a segmented version. For video representation we use the Extreme Vertices Model in the n-Dimensional Space, while we use the Discrete Compactness descriptor as a feature and Kohonen Self-Organizing Maps for segmentation purposes.
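
    The nD-EVM representation and the Discrete Compactness descriptor are specific to this paper, but the Kohonen SOM component can be illustrated generically. A minimal 1-D self-organizing map sketch in NumPy (unit count, schedules and data are all invented for illustration):

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Minimal 1-D Kohonen self-organising map: each input vector pulls its
    best-matching unit (and that unit's neighbours) toward itself, with a
    decaying learning rate and neighbourhood width. A generic SOM sketch,
    not the paper's nD-EVM pipeline."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = max(n_units / 2 * (1 - t / epochs), 0.5)
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)       # grid distance to BMU
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # neighbourhood weight
            w += lr * h[:, None] * (x - w)
    return w

# two well-separated feature clusters; after training, units specialise
data = np.vstack([np.full((10, 2), 0.1), np.full((10, 2), 0.9)])
w = train_som(data)
labels = [int(np.argmin(((w - x) ** 2).sum(axis=1))) for x in data]
print(sorted(set(labels[:10])), sorted(set(labels[10:])))  # the two clusters fall on different units
```

    In a segmentation setting, the per-pixel or per-region feature vectors would replace the toy clusters, and the learned unit index serves as the segment label.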

  11. Real-Time Recognition of Action Sequences Using a Distributed Video Sensor Network

    Directory of Open Access Journals (Sweden)

    Vinod Kulathumani

    2013-07-01

    Full Text Available In this paper, we describe how information obtained from multiple views using a network of cameras can be effectively combined to yield a reliable and fast human activity recognition system. First, we present a score-based fusion technique for combining information from multiple cameras that can handle the arbitrary orientation of the subject with respect to the cameras and that does not rely on a symmetric deployment of the cameras. Second, we describe how longer, variable-duration, interleaved action sequences can be recognized in real time based on multi-camera data that is continuously streaming in. Our framework does not depend on any particular feature extraction technique, and as a result, the proposed system can easily be integrated on top of existing implementations for view-specific classifiers and feature descriptors. For implementation and testing of the proposed system, we have used computationally simple locality-specific motion information extracted from the spatio-temporal shape of a human silhouette as our feature descriptor. This lends itself to an efficient distributed implementation, while maintaining a high frame capture rate. We demonstrate the robustness of our algorithms by implementing them on a portable multi-camera video sensor network testbed and evaluating system performance under different camera network configurations.
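
    The score-based fusion idea can be illustrated with a minimal sketch: each camera emits per-class scores, and the fused decision is the class with the highest summed score (the class names and score values below are invented, not from the paper):

```python
import numpy as np

def fuse_scores(per_camera_scores):
    """Score-level fusion: sum the per-class scores reported by each camera
    and pick the action with the highest combined score."""
    total = np.sum(per_camera_scores, axis=0)
    return int(np.argmax(total)), total

# three cameras, scores over the classes [wave, walk, run]
scores = np.array([
    [0.2, 0.7, 0.1],   # camera 1: side view, "walk" likely
    [0.1, 0.6, 0.3],   # camera 2 agrees
    [0.5, 0.3, 0.2],   # camera 3 has a poor viewpoint
])
label, total = fuse_scores(scores)
print(label)  # 1 -> "walk" wins after fusion despite camera 3's disagreement
```

    Because the fusion operates on classifier outputs rather than raw features, it is agnostic to the per-view feature extraction, which matches the modularity the paper emphasizes.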

  12. Identification of novel BRCA founder mutations in Middle Eastern breast cancer patients using capture and Sanger sequencing analysis.

    Science.gov (United States)

    Bu, Rong; Siraj, Abdul K; Al-Obaisi, Khadija A S; Beg, Shaham; Al Hazmi, Mohsen; Ajarim, Dahish; Tulbah, Asma; Al-Dayel, Fouad; Al-Kuraya, Khawla S

    2016-09-01

    Ethnic differences in breast cancer genomics have prompted us to investigate the spectra of BRCA1 and BRCA2 mutations in different populations. The prevalence and effect of BRCA1 and BRCA2 mutations in the Middle Eastern population is not fully explored. To characterize the prevalence of BRCA mutations in Middle Eastern breast cancer patients, BRCA mutation screening was performed in 818 unselected breast cancer patients using capture and/or Sanger sequencing. 19 short tandem repeat (STR) markers were used for founder mutation analysis. In our study, nine different types of deleterious mutation were identified in 28 (3.4%) cases: 25 (89.3%) cases in BRCA1 and 3 (10.7%) cases in BRCA2. Seven recurrent mutations identified accounted for 92.9% (26/28) of all the mutant cases. Haplotype analysis was performed to confirm c.1140dupG and c.4136_4137delCT as novel putative founder mutations, accounting for 46.4% (13/28) of all BRCA mutant cases and 1.6% (13/818) of all the breast cancer cases, respectively. Moreover, BRCA1 mutation was significantly associated with loss of BRCA1 protein expression (p = 0.0005). Our findings revealed that a substantial number of BRCA mutations were identified in clinically high-risk breast cancer patients from the Middle East region. Identification of the mutation spectrum, prevalence and founder effect in the Middle Eastern population facilitates genetic counseling, risk assessment and the development of a cost-effective screening strategy. © 2016 UICC.

  13. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students from a 5th-semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist...... video, nature of the interactional space, and material and spatial semiotics.

  14. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on an ordinary personal computer. To achieve this, after a careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter that fuses UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
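
    The paper's spatial and temporal coherent filter is not specified in the abstract; a much-simplified stand-in for the underlying idea, discarding feature matches whose displacement disagrees with the platform's predicted inter-frame motion, might look like this (the point coordinates and tolerance are invented):

```python
import numpy as np

def motion_consistent_matches(pts_prev, pts_curr, predicted_shift, tol=5.0):
    """Keep only feature matches whose frame-to-frame displacement agrees with
    the predicted camera motion; `predicted_shift` would come from UAV
    telemetry in practice. A simplified stand-in for the paper's filter."""
    disp = pts_curr - pts_prev
    err = np.linalg.norm(disp - predicted_shift, axis=1)
    return err < tol

prev = np.array([[10.0, 10.0], [50.0, 80.0], [120.0, 40.0]])
# first two points move with the predicted camera shift (+20, +5);
# the third is a mismatched correspondence
curr = np.array([[30.0, 15.0], [70.0, 85.0], [200.0, 90.0]])
keep = motion_consistent_matches(prev, curr, predicted_shift=np.array([20.0, 5.0]))
print(keep)  # [ True  True False]
```

    Pruning outliers before the robust estimation step is what yields the matching speed-up the abstract reports: the expensive geometric verification then runs on far fewer candidate correspondences.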

  15. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture.

    Science.gov (United States)

    Trivedi, Chintan A; Bollmann, Johann H

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  16. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    Directory of Open Access Journals (Sweden)

    Chintan A Trivedi

    2013-05-01

    Full Text Available Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed towards the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  17. A High-Throughput and Low-Complexity H.264/AVC Intra 16×16 Prediction Architecture for HD Video Sequences

    Directory of Open Access Journals (Sweden)

    M. Orlandić

    2014-11-01

    Full Text Available The H.264/AVC compression standard provides tools and solutions for the efficient coding of video sequences of various resolutions. Spatial redundancy in a video frame is removed by use of an intra prediction algorithm. There are three block-wise types of intra prediction: 4×4, 8×8 and 16×16. This paper proposes an efficient, low-complexity architecture for intra 16×16 prediction that provides real-time processing of HD video sequences. All four prediction modes (V, H, DC, Plane) are supported in the implementation. The high-complexity plane mode computes a number of intermediate parameters required for creating prediction pixels. Local memory buffers are used for storing intermediate reconstructed data used as reference pixels in the intra prediction process. High throughput is achieved by 16-pixel parallelism, and the proposed prediction process takes 48 cycles to process one macroblock. The proposed architecture is synthesized and implemented on a Kintex-7 KC705 (XC7K325T) board and requires a 94 MHz clock to encode a video sequence of HD 4k×2k (3840×2160) resolution at 60 fps in real time. This represents a significant improvement compared to the state of the art.
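
    As an illustration of what the V, H and DC modes compute, here is a reference-style sketch in NumPy following the standard H.264 intra 16×16 definitions (the plane mode and the paper's 16-pixel-parallel hardware mapping are omitted for brevity):

```python
import numpy as np

def intra16_predict(top, left, mode):
    """H.264 intra 16x16 prediction from reconstructed neighbour pixels:
    `top` is the row of 16 pixels above the macroblock, `left` the column of
    16 pixels to its left."""
    if mode == "V":                        # copy the row above down all 16 rows
        return np.tile(top, (16, 1))
    if mode == "H":                        # copy the left column across 16 cols
        return np.tile(left[:, None], (1, 16))
    if mode == "DC":                       # rounded mean of all 32 neighbours
        dc = (top.sum() + left.sum() + 16) >> 5
        return np.full((16, 16), dc, dtype=top.dtype)
    raise ValueError(mode)

top = np.arange(16, dtype=np.int32)
left = np.full(16, 100, dtype=np.int32)
assert (intra16_predict(top, left, "V")[5] == top).all()
print(intra16_predict(top, left, "DC")[0, 0])  # (120 + 1600 + 16) >> 5 = 54
```

    The encoder evaluates each mode, subtracts the prediction from the source macroblock, and keeps the mode with the cheapest residual; the hardware architecture in the paper pipelines exactly these computations.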

  18. Interactive segmentation of tongue contours in ultrasound video sequences using quality maps

    Science.gov (United States)

    Ghrenassia, Sarah; Ménard, Lucie; Laporte, Catherine

    2014-03-01

    Ultrasound (US) imaging is an effective and noninvasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis based on US images depends on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who will correct these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided towards potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and is automatically produced during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward in the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the proposed framework, suggesting that it generally reduces user interaction time, with little change in segmentation repeatability.

  19. Cage-based performance capture

    CERN Document Server

    Savoye, Yann

    2014-01-01

    Nowadays, highly detailed animations of live-actor performances are increasingly easy to acquire, and 3D video has attracted considerable attention in visual media production. In this book, we address the problem of extracting or acquiring, and then reusing, non-rigid parametrization for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of dynamic surfaces with a limited number of controllable, flexible and reusable parameters. To solve this challenge, we directly rely on a skin-detached dimension reduction thanks to the well-known cage-based paradigm. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm onto surfaces. Thus, we introduce a cage inversion process with user-specified screen-space constraints. Secondly, we convert non-rigid animated surfaces into a sequence of optimal cage parameters via Cage-based Animation Conversion. Building upon this re...

  20. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could provide a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or categories of temporal changes in the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.

  1. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution – an application in higher education

    NARCIS (Netherlands)

    Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer

    2015-01-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).

  2. Tilted pillar array fabrication by the combination of proton beam writing and soft lithography for microfluidic cell capture Part 2: Image sequence analysis based evaluation and biological application.

    Science.gov (United States)

    Járvás, Gábor; Varga, Tamás; Szigeti, Márton; Hajba, László; Fürjes, Péter; Rajta, István; Guttman, András

    2017-07-17

    As a continuation of our previously published work, this paper presents a detailed evaluation of a microfabricated cell capture device utilizing a doubly tilted micropillar array. The device was fabricated using a novel hybrid technology based on the combination of proton beam writing and conventional lithography techniques. Tilted pillars offer unique flow characteristics and support enhanced fluidic interaction for improved immunoaffinity-based cell capture. The performance of the microdevice was evaluated with an in-house developed single-cell tracking system based on image sequence analysis. Individual cell tracking allowed in-depth analysis of the cell-chip surface interaction mechanism from a hydrodynamic point of view. Simulation results were validated by using the hybrid device and the optimized surface functionalization procedure. Finally, the cell capture capability of this new-generation microdevice was demonstrated by efficiently arresting cells from a HT29 cell-line suspension. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. LCM-Seq: A Method for Spatial Transcriptomic Profiling Using Laser Capture Microdissection Coupled with PolyA-Based RNA Sequencing.

    Science.gov (United States)

    Nichterwitz, Susanne; Benitez, Julio Aguila; Hoogstraaten, Rein; Deng, Qiaolin; Hedlund, Eva

    2018-01-01

    LCM-seq couples laser capture microdissection of cells from frozen tissues with polyA-based RNA sequencing and is applicable to single neurons. The method utilizes off-the-shelf reagents and direct lysis of the cells without RNA purification, making it a simple and relatively cheap method with high reproducibility and sensitivity compared to previous methods. A further advantage of LCM-seq is that tissue sections are kept intact, and thus the positional information of each cell is preserved.

  4. Next-generation DNA barcoding: using next-generation sequencing to enhance and accelerate DNA barcode capture from single specimens.

    Science.gov (United States)

    Shokralla, Shadi; Gibson, Joel F; Nikbakht, Hamid; Janzen, Daniel H; Hallwachs, Winnie; Hajibabaei, Mehrdad

    2014-09-01

    DNA barcoding is an efficient method to identify specimens and to detect undescribed/cryptic species. Sanger sequencing of individual specimens is the standard approach in generating large-scale DNA barcode libraries and identifying unknowns. However, the Sanger sequencing technology is, in some respects, inferior to next-generation sequencers, which are capable of producing millions of sequence reads simultaneously. Additionally, direct Sanger sequencing of DNA barcode amplicons, as practiced in most DNA barcoding procedures, is hampered by the need for relatively high target amplicon yields, coamplification of nuclear mitochondrial pseudogenes, confusion with sequences from intracellular endosymbiotic bacteria (e.g. Wolbachia) and instances of intraindividual variability (i.e. heteroplasmy). Any of these situations can lead to failed Sanger sequencing attempts or ambiguity of the generated DNA barcodes. Here, we demonstrate the potential application of next-generation sequencing platforms for the parallel acquisition of DNA barcode sequences from hundreds of specimens simultaneously. To facilitate retrieval of sequences obtained from individual specimens, we tag individual specimens during PCR amplification using unique 10-mer oligonucleotides attached to the DNA barcoding PCR primers. We employ 454 pyrosequencing to recover full-length DNA barcodes of 190 specimens using 12.5% of the capacity of a 454 sequencing run (i.e. two lanes of a 16-lane run). We obtained an average of 143 sequence reads for each individual specimen. The sequences produced are full-length DNA barcodes for all but one of the included specimens. In a subset of samples, we also detected Wolbachia, nontarget species, and heteroplasmic sequences. Next-generation sequencing is of great value because of its protocol simplicity, greatly reduced cost per barcode read, faster throughput and added information content. © 2014 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
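
    The 10-mer tagging scheme described above can be illustrated with a minimal demultiplexing sketch (the tag sequences, specimen names and reads below are invented for illustration):

```python
def demultiplex(reads, tag_to_specimen, tag_len=10):
    """Bin amplicon reads by the unique 10-mer tag attached to the PCR primer,
    stripping the tag before storing the read."""
    bins = {name: [] for name in tag_to_specimen.values()}
    unassigned = []
    for read in reads:
        specimen = tag_to_specimen.get(read[:tag_len])
        if specimen is None:
            unassigned.append(read)      # tag not recognised
        else:
            bins[specimen].append(read[tag_len:])
    return bins, unassigned

tags = {"ACGTACGTAC": "specimen_1", "TTTTGGGGCC": "specimen_2"}
reads = ["ACGTACGTACAATTCCGG", "TTTTGGGGCCTTAAGGCC", "NNNNNNNNNNAAAA"]
bins, unassigned = demultiplex(reads, tags)
print(len(bins["specimen_1"]), len(unassigned))  # 1 1
```

    A production pipeline would additionally tolerate sequencing errors within the tag (e.g. accept one mismatch), which exact-prefix lookup does not.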

  5. Video Salient Object Detection via Fully Convolutional Networks

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    2018-01-01

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: (1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data, and (2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image datasets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the DAVIS dataset (MAE of .06) and the FBMS dataset (MAE of .07), and do so with much improved speed (2 fps with all steps).

  6. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both the algorithms and the technologies of interactive video, so that businesses in IT and data management, scientists and software engineers in video processing and computer vision, coaches and instructors who use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for the automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for the automatic and semi-automatic analysis and editing of audio-video documents is presented. The third part tackles a more challenging level of automatic video re-structuring: filtering of the video stream by extracting highlights, events, and meaningf...

  7. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  8. Discovery of Chromatin-Associated Proteins via Sequence-Specific Capture and Mass Spectrometric Protein Identification in Saccharomyces cerevisiae

    OpenAIRE

    Kennedy-Darling, Julia; Guillen-Ahlers, Hector; Shortreed, Michael R.; Scalf, Mark; Frey, Brian L.; Kendziorski, Christina; Olivier, Michael; Gasch, Audrey P.; Smith, Lloyd M.

    2014-01-01

    DNA–protein interactions play critical roles in the control of genome expression and other fundamental processes. An essential element in understanding how these systems function is to identify their molecular components. We present here a novel strategy, Hybridization Capture of Chromatin Associated Proteins for Proteomics (HyCCAPP), to identify proteins that are interacting with any given region of the genome. This technology identifies and quantifies the proteins that are specifically inte...

  9. Capturing chloroplast variation for molecular ecology studies: a simple next generation sequencing approach applied to a rainforest tree

    OpenAIRE

    McPherson, Hannah; van der Merwe, Marlien; Delaney, Sven K; Edwards, Mark A; Henry, Robert J; McIntosh, Emma; Rymer, Paul D; Milner, Melita L; Siow, Juelian; Rossetto, Maurizio

    2013-01-01

    Background With high quantity and quality data production and low cost, next generation sequencing has the potential to provide new opportunities for plant phylogeographic studies on single and multiple species. Here we present an approach for in silico chloroplast DNA assembly and single nucleotide polymorphism detection from short-read shotgun sequencing. The approach is simple and effective and can be implemented using standard bioinformatic tools. Results The chloroplast genome of Toona ...

  10. A family-based probabilistic method for capturing de novo mutations from high-throughput short-read sequencing data.

    Science.gov (United States)

    Cartwright, Reed A; Hussin, Julie; Keebler, Jonathan E M; Stone, Eric A; Awadalla, Philip

    2012-01-06

    Recent advances in high-throughput DNA sequencing technologies and associated statistical analyses have enabled in-depth analysis of whole-genome sequences. As this technology is applied to a growing number of individual human genomes, entire families are now being sequenced. Information contained within the pedigree of a sequenced family can be leveraged when inferring the donors' genotypes. The presence of a de novo mutation within the pedigree is indicated by a violation of Mendelian inheritance laws. Here, we present a method for probabilistically inferring genotypes across a pedigree using high-throughput sequencing data and producing the posterior probability of de novo mutation at each genomic site examined. This framework can be used to disentangle the effects of germline and somatic mutational processes and to simultaneously estimate the effect of sequencing error and the initial genetic variation in the population from which the founders of the pedigree arise. This approach is examined in detail through simulations and areas for method improvement are noted. By applying this method to data from members of a well-defined nuclear family with accurate pedigree information, the stage is set to make the most direct estimates of the human mutation rate to date.
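
    The trio-based idea above can be sketched as a toy Bayesian computation for a single biallelic site. Everything here (function names, the simple per-allele mutation model) is our illustrative assumption, not the authors' implementation, which additionally models sequencing error and founder variation:

```python
# Toy posterior probability of a de novo mutation at one biallelic site
# in a mother-father-child trio. Hypothetical sketch, not the paper's code.
from itertools import product

GENOTYPES = ("RR", "RA", "AA")  # R = reference allele, A = alternate allele

def transmission_prob(parent_gt, allele):
    """Probability a parent transmits `allele` under Mendelian inheritance."""
    return parent_gt.count(allele) / 2.0

def child_prior(mother_gt, father_gt, child_gt, mu):
    """P(child genotype | parents), mixing Mendelian transmission with a
    small probability mu that each transmitted allele mutates (flips)."""
    p = 0.0
    for m_allele, f_allele in product("RA", repeat=2):
        pm = transmission_prob(mother_gt, m_allele)
        pf = transmission_prob(father_gt, f_allele)
        for m_obs in "RA":
            for f_obs in "RA":
                qm = (1 - mu) if m_obs == m_allele else mu
                qf = (1 - mu) if f_obs == f_allele else mu
                if "".join(sorted(m_obs + f_obs)) == "".join(sorted(child_gt)):
                    p += pm * pf * qm * qf
    return p

def de_novo_posterior(lik, mu=1e-8):
    """lik: dict person -> dict genotype -> likelihood of the read data.
    Returns the posterior mass on trio configurations that are impossible
    without mutation, i.e. P(de novo | data)."""
    total = 0.0
    de_novo = 0.0
    for mg, fg, cg in product(GENOTYPES, repeat=3):
        joint = (lik["mother"][mg] * lik["father"][fg]
                 * lik["child"][cg] * child_prior(mg, fg, cg, mu))
        total += joint
        if child_prior(mg, fg, cg, 0.0) == 0.0:  # Mendelian violation
            de_novo += joint
    return de_novo / total if total > 0 else 0.0
```

    Feeding it genotype likelihoods that strongly support homozygous-reference parents and a heterozygous child yields a posterior near 1, as expected for a candidate de novo site.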

  11. Capturing chloroplast variation for molecular ecology studies: a simple next generation sequencing approach applied to a rainforest tree.

    Science.gov (United States)

    McPherson, Hannah; van der Merwe, Marlien; Delaney, Sven K; Edwards, Mark A; Henry, Robert J; McIntosh, Emma; Rymer, Paul D; Milner, Melita L; Siow, Juelian; Rossetto, Maurizio

    2013-03-14

    With high quantity and quality data production and low cost, next generation sequencing has the potential to provide new opportunities for plant phylogeographic studies on single and multiple species. Here we present an approach for in silico chloroplast DNA assembly and single nucleotide polymorphism detection from short-read shotgun sequencing. The approach is simple and effective and can be implemented using standard bioinformatic tools. The chloroplast genome of Toona ciliata (Meliaceae), 159,514 base pairs long, was assembled from shotgun sequencing on the Illumina platform using de novo assembly of contigs. To evaluate its practicality, value and quality, we compared the short read assembly with an assembly completed using 454 data obtained after chloroplast DNA isolation. Sanger sequence verifications indicated that the Illumina dataset outperformed the longer read 454 data. Pooling of several individuals during preparation of the shotgun library enabled detection of informative chloroplast SNP markers. Following validation, we used the identified SNPs for a preliminary phylogeographic study of T. ciliata in Australia and to confirm low diversity across the distribution. Our approach provides a simple method for construction of whole chloroplast genomes from shotgun sequencing of whole genomic DNA using short-read data and no available closely related reference genome (e.g. from the same species or genus). The high coverage of Illumina sequence data also renders this method appropriate for multiplexing and SNP discovery and therefore a useful approach for landscape level studies of evolutionary ecology.

  12. Novel mutations in CRB1 gene identified in a Chinese pedigree with retinitis pigmentosa by targeted capture and next generation sequencing

    Science.gov (United States)

    Lo, David; Weng, Jingning; Liu, Xiaohong; Yang, Juhua; He, Fen; Wang, Yun; Liu, Xuyang

    2016-01-01

    PURPOSE To detect the disease-causing gene in a Chinese pedigree with autosomal-recessive retinitis pigmentosa (ARRP). METHODS All subjects in this family underwent a complete ophthalmic examination. Targeted-capture next generation sequencing (NGS) was performed on the proband to detect variants. All variants were verified in the remaining family members by PCR amplification and Sanger sequencing. RESULTS All the affected subjects in this pedigree were diagnosed with retinitis pigmentosa (RP). The compound heterozygous c.138delA (p.Asp47IlefsX24) and c.1841G>T (p.Gly614Val) mutations in the Crumbs homolog 1 (CRB1) gene were identified in all the affected patients but not in the unaffected individuals in this family. These mutations were inherited from the parents, respectively. CONCLUSION Novel compound heterozygous mutations in CRB1 were identified in a Chinese pedigree with ARRP using targeted-capture next generation sequencing. After evaluating their segregation and impaired protein function, the compound heterozygous c.138delA (p.Asp47IlefsX24) and c.1841G>T (p.Gly614Val) mutations are considered the cause of early-onset ARRP in this pedigree. To the best of our knowledge, these compound mutations have not been reported previously. PMID:27806333

  13. An efficient and scalable graph modeling approach for capturing information at different levels in next generation sequencing reads.

    Science.gov (United States)

    Warnke, Julia D; Ali, Hesham H

    2013-01-01

    Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies.
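
    A single coarsening level of the kind described can be illustrated with generic heavy-edge matching, a standard multilevel-graph technique: nodes are reads, weighted edges are overlap strengths, and each node merges with its most strongly overlapping unmatched neighbor. This is a minimal sketch under our own naming, not the authors' code:

```python
def coarsen(edges):
    """One level of heavy-edge matching over a read-overlap graph.
    edges: dict mapping (node_u, node_v) -> overlap weight.
    Returns (matched, coarse): matched maps each node to its supernode
    label, coarse is the reduced edge dict with merged weights summed."""
    # build adjacency so isolated endpoints are still labeled at the end
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    matched = {}
    # visit heaviest edges first so the strongest overlaps merge early
    for (u, v), w in sorted(edges.items(), key=lambda kv: -kv[1]):
        if u not in matched and v not in matched:
            matched[u] = matched[v] = (u, v)  # supernode label
    for n in adj:
        matched.setdefault(n, (n,))  # unmatched nodes become singletons
    # project surviving edges onto supernodes, summing parallel weights
    coarse = {}
    for (u, v), w in edges.items():
        cu, cv = matched[u], matched[v]
        if cu != cv:
            key = tuple(sorted((cu, cv)))
            coarse[key] = coarse.get(key, 0) + w
    return matched, coarse
```

    Applying `coarsen` repeatedly yields the series of graphs at decreasing information granularity that the abstract describes; clusters can be read off the supernode labels at any level.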

  14. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    Science.gov (United States)

    Choi, Inchang; Baek, Seung-Hwan; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  15. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured by a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing motion information extracted from camera video with data from motion-sensor platforms, such as smartphones, carried on human bodies. More specifically, a sequence of motion features extracted from camera video is compared with each of those collected from the accelerometers of smartphones. When strong correlation is detected, identity information transmitted from the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted and achieved impressive performance.
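
    The core matching step, correlating a motion feature series from video tracking against accelerometer series from candidate phones, can be sketched with a plain Pearson correlation. Function names and the acceptance threshold are our assumptions, not the paper's:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def identify(video_motion, phone_streams, threshold=0.7):
    """Label the tracked person with the ID of the phone whose accelerometer
    series correlates most strongly (above threshold) with the video-derived
    motion series; returns None if no phone correlates strongly enough."""
    best_id, best_r = None, threshold
    for phone_id, accel in phone_streams.items():
        r = pearson(video_motion, accel)
        if r > best_r:
            best_id, best_r = phone_id, r
    return best_id
```

    In practice the two series would first be resampled to a common rate and aligned in time; that step is omitted here.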

  16. Capturing Thoughts, Capturing Minds?

    DEFF Research Database (Denmark)

    Nielsen, Janni

    2004-01-01

    Think Aloud is cost-effective, promises access to the user's mind and is a widely applied usability technique. But 'keep talking' is difficult; besides, the multimodal interface is visual, not verbal. Eye-tracking seems to get around the verbalisation problem: it captures the visual focus of attention. However, it is expensive, obtrusive and produces huge amounts of data. Besides, eye-tracking does not give access to the user's mind. Capturing interface/cursor tracking may be cost-effective. It is easy to install, data collection is automatic and unobtrusive and replaying the captured recording to the user...

  17. Targeted capture and next-generation sequencing identifies C9orf75, encoding taperin, as the mutated gene in nonsyndromic deafness DFNB79.

    Science.gov (United States)

    Rehman, Atteeq Ur; Morell, Robert J; Belyantseva, Inna A; Khan, Shahid Y; Boger, Erich T; Shahzad, Mohsin; Ahmed, Zubair M; Riazuddin, Saima; Khan, Shaheen N; Riazuddin, Sheikh; Friedman, Thomas B

    2010-03-12

    Targeted genome capture combined with next-generation sequencing was used to analyze 2.9 Mb of the DFNB79 interval on chromosome 9q34.3, which includes 108 candidate genes. Genomic DNA from an affected member of a consanguineous family segregating recessive, nonsyndromic hearing loss was used to make a library of fragments covering the DFNB79 linkage interval defined by genetic analyses of four pedigrees. Homozygosity for eight previously unreported variants in transcribed sequences was detected by evaluating a library of 402,554 sequencing reads and was later confirmed by Sanger sequencing. Of these variants, six were determined to be polymorphisms in the Pakistani population, and one was in a noncoding gene that was subsequently excluded genetically from the DFNB79 linkage interval. The remaining variant was a nonsense mutation in a predicted gene, C9orf75, renamed TPRN. Evaluation of the other three DFNB79-linked families identified three additional frameshift mutations, for a total of four truncating alleles of this gene. Although TPRN is expressed in many tissues, immunolocalization of the protein product in the mouse cochlea shows prominent expression in the taper region of hair cell stereocilia. Consequently, we named the protein taperin. Copyright 2010 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  18. A Pilot Study of Noninvasive Prenatal Diagnosis of Alpha- and Beta-Thalassemia with Target Capture Sequencing of Cell-Free Fetal DNA in Maternal Blood.

    Science.gov (United States)

    Wang, Wenjuan; Yuan, Yuan; Zheng, Haiqing; Wang, Yaoshen; Zeng, Dan; Yang, Yihua; Yi, Xin; Xia, Yang; Zhu, Chunjiang

    2017-07-01

    Thalassemia is a dangerous hematolytic genetic disease. In south China, ∼24% of Chinese people carry alpha-thalassemia or beta-thalassemia gene mutations. Invasive sampling procedures can only be performed by professionals in experienced centers and carry a risk of miscarriage or infection, so many patients are reluctant to undergo them. As such, a noninvasive and accurate prenatal diagnosis is needed for appropriate genetic counseling for high-risk families. Here we sought to develop capture probes and their companion analysis methods for the noninvasive prenatal detection of deletional and nondeletional thalassemia. Two families diagnosed as carriers of either a beta-thalassemia gene mutation or the Southeast Asian deletional alpha-thalassemia gene mutation were recruited. The maternal plasma and amniotic fluid were collected for prenatal diagnosis. Probes targeting exons of the genes of interest and the highly heterozygous SNPs within the 1Mb flanking region were designed. The target capture sequencing was performed with plasma DNA from the pregnant woman and genomic DNA from the couples and their children. Then the parental haplotype was constructed by the trio-based strategy. The fetal haplotype was deduced from the parental haplotype with a hidden Markov model-based algorithm. The fetal genotypes were successfully deduced in both families noninvasively. The noninvasively constructed haplotypes of both fetuses were identical to the invasive prenatal diagnosis results with an accuracy rate of 100% in the target region. Our study demonstrates that the effective noninvasive prenatal diagnosis of alpha-thalassemia and beta-thalassemia can be achieved with the targeted capture sequencing and the haplotype-assisted analysis method.
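
    The haplotype deduction step relies on a hidden Markov model; a generic Viterbi decoder over two hidden states ("which parental haplotype the fetus inherited") conveys the idea. The toy observation alphabet below is our simplification; the actual method works on sequencing read counts at linked SNPs:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Generic Viterbi decoding of the most likely hidden-state path.
    Here states would be the inherited parental haplotype at each SNP and
    observations a toy symbol per SNP (real data: plasma allele counts)."""
    # V[t][s] = (best path probability ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][ps][0] * trans_p[ps][s] * emit_p[s][obs[t]], ps)
                for ps in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

    With sticky transitions (few recombination-like switches) the decoder favors long runs of the same haplotype, which is exactly what makes haplotype-assisted analysis robust to noisy per-SNP evidence.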

  19. SPECIAL REPORT: Creating Conference Video

    Directory of Open Access Journals (Sweden)

    Noel F. Peden

    2008-12-01

    Full Text Available Capturing video at a conference is easy. Doing it so that the product is useful is another matter. Many subtle problems come into play so that the video and audio obtained can be used to create a final product. This article discusses what the author learned in two years of shooting and editing video for the Code4Lib conference.

  20. Video Analysis of Rolling Cylinders

    Science.gov (United States)

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s⁻¹, and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…
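
    The comparison in the abstract follows directly from rolling dynamics; a short sketch of the textbook prediction that the video data can be checked against (the function name is ours):

```python
from math import sin, radians

def rolling_acceleration(angle_deg, shape, g=9.81):
    """Acceleration of a cylinder rolling without slipping down an incline:
    a = g*sin(theta) / (1 + I/(m*r^2)), where I/(m*r^2) = 1/2 for a solid
    cylinder and 1 for a thin-walled hollow cylinder."""
    k = {"solid": 0.5, "hollow": 1.0}[shape]
    return g * sin(radians(angle_deg)) / (1 + k)
```

    The solid cylinder always accelerates faster than the hollow one at the same angle, which is the qualitative result frame-by-frame video analysis makes visible.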

  1. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network, where Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
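
    Once a network has embedded stills and video frames into a common Euclidean space, recognition reduces to nearest-neighbor search. A minimal sketch with our own helper names; summarizing a clip by its mean frame embedding is one simple choice among several:

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def mean_embedding(vectors):
    """Summarize a video clip by the mean of its frame embeddings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def s2v_match(still_embedding, video_gallery):
    """Still-to-Video matching: return the gallery clip whose summarized
    embedding is nearest to the query still's embedding."""
    return min(video_gallery,
               key=lambda vid: euclidean(still_embedding,
                                         mean_embedding(video_gallery[vid])))
```

    The embeddings themselves would come from the trained network; only the metric-space matching step is shown here.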

  2. Top-Down and Bottom-Up Cues Based Moving Object Detection for Varied Background Video Sequences

    Directory of Open Access Journals (Sweden)

    Chirag I. Patel

    2014-01-01

    there is no need for background formulation and updates as it is background independent. Many bottom-up approaches and one combination of bottom-up and top-down approaches are proposed in the present paper. The proposed approaches seem more efficient due to inessential requirement of learning background model and due to being independent of previous video frames. Results indicate that the proposed approach works even against slight movements in the background and in various outdoor conditions.

  3. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all new, Hollywood style audio techniques to bring your independent film and video productions to the next level.In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chockfull of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  4. Capturing sequence variation among flowering-time regulatory gene homologues in the allopolyploid crop species Brassica napus

    Directory of Open Access Journals (Sweden)

    Sarah Schiessl

    2014-08-01

    Full Text Available Flowering, the transition from the vegetative to the generative phase, is a decisive time point in the lifecycle of a plant. Flowering is controlled by a complex network of transcription factors, photoreceptors, enzymes and miRNAs. In recent years, several studies gave rise to the hypothesis that this network is also strongly involved in the regulation of other important lifecycle processes ranging from germination and seed development through to fundamental developmental and yield-related traits. In the allopolyploid crop species Brassica napus (genome AACC), homoeologous copies of flowering-time regulatory genes are implicated in major phenological variation within the species; however, the extent and control of intraspecific and intergenomic variation among flowering-time regulators is still unclear. To investigate differences among B. napus morphotypes in relation to flowering-time gene variation, we performed targeted deep sequencing of 29 regulatory flowering-time genes in four genetically and phenologically diverse B. napus accessions. The genotype panel included a winter-type oilseed rape, a winter fodder rape, a spring-type oilseed rape (all B. napus ssp. napus) and a swede (B. napus ssp. napobrassica), which show extreme differences in winter-hardiness, vernalization requirement and flowering behaviour. A broad range of genetic variation was detected in the targeted genes for the different morphotypes, including non-synonymous SNPs, copy number variation and presence-absence variation. The results suggest that this broad variation in vernalisation, clock and signaling genes could be a key driver of morphological differentiation for flowering-related traits in this recent allopolyploid crop species.

  5. Segmentation of Environmental Time Lapse Image Sequences for the Determination of Shore Lines Captured by Hand-Held Smartphone Cameras

    Science.gov (United States)

    Kröhnert, M.; Meichsner, R.

    2017-09-01

    The relevance of global environmental issues has gained importance in recent years, with still rising trends. Disastrous floods in particular may cause serious damage within very short times. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus just sparsely installed. Smartphones with inbuilt cameras, powerful processing units and low-cost positioning systems seem to be very suitable wide-spread measurement devices that could be used for geo-crowdsourcing purposes. Thus, we aim for the development of a versatile mobile water level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running water shore lines in smartphone images. Flowing water never appears equally in close-range images even if the extrinsics remain unchanged. Its non-rigid behavior impedes the use of good practices for image segmentation as a prerequisite for water line detection. Consequently, we use a hand-held time lapse image sequence instead of a single image, which provides the time component needed to determine a spatio-temporal texture image. Using a region growing concept, the texture is analyzed for immutable shore and dynamic water areas. Finally, the prevalent shore line is examined by the resultant shapes. For method validation, various study areas are observed from several distances covering urban and rural flowing waters with different characteristics. Future work will provide a transformation of the water line into object space by image-to-geometry intersection.

  6. SEGMENTATION OF ENVIRONMENTAL TIME LAPSE IMAGE SEQUENCES FOR THE DETERMINATION OF SHORE LINES CAPTURED BY HAND-HELD SMARTPHONE CAMERAS

    Directory of Open Access Journals (Sweden)

    M. Kröhnert

    2017-09-01

    Full Text Available The relevance of global environmental issues has gained importance in recent years, with still rising trends. Disastrous floods in particular may cause serious damage within very short times. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus just sparsely installed. Smartphones with inbuilt cameras, powerful processing units and low-cost positioning systems seem to be very suitable wide-spread measurement devices that could be used for geo-crowdsourcing purposes. Thus, we aim for the development of a versatile mobile water level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running water shore lines in smartphone images. Flowing water never appears equally in close-range images even if the extrinsics remain unchanged. Its non-rigid behavior impedes the use of good practices for image segmentation as a prerequisite for water line detection. Consequently, we use a hand-held time lapse image sequence instead of a single image, which provides the time component needed to determine a spatio-temporal texture image. Using a region growing concept, the texture is analyzed for immutable shore and dynamic water areas. Finally, the prevalent shore line is examined by the resultant shapes. For method validation, various study areas are observed from several distances covering urban and rural flowing waters with different characteristics. Future work will provide a transformation of the water line into object space by image-to-geometry intersection.
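
    The spatio-temporal texture idea in the two records above can be illustrated with the simplest possible statistic: per-pixel standard deviation over the time-lapse stack, followed by thresholding into dynamic (water) and static (shore) regions. This is a hedged sketch of the principle, not the published region-growing implementation:

```python
def temporal_texture(stack):
    """Per-pixel temporal standard deviation over a time-lapse stack
    (a list of equally sized 2-D intensity grids). Dynamic water pixels
    vary over time and score high; static shore pixels score near zero."""
    T = len(stack)
    rows, cols = len(stack[0]), len(stack[0][0])
    tex = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [frame[r][c] for frame in stack]
            mean = sum(vals) / T
            tex[r][c] = (sum((v - mean) ** 2 for v in vals) / T) ** 0.5
    return tex

def classify(tex, thresh):
    """Threshold the texture image into water/shore labels; the shore
    line then runs along the boundary between the two label regions."""
    return [["water" if v > thresh else "shore" for v in row] for row in tex]
```

    The threshold here stands in for the paper's region-growing analysis; on real hand-held sequences the frames would first be co-registered to compensate for camera shake.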

  7. Comparison of Target-Capture and Restriction-Site Associated DNA Sequencing for Phylogenomics: A Test in Cardinalid Tanagers (Aves, Genus: Piranga).

    Science.gov (United States)

    Manthey, Joseph D; Campillo, Luke C; Burns, Kevin J; Moyle, Robert G

    2016-07-01

    Restriction-site associated DNA sequencing (RAD-seq) and target capture of specific genomic regions, such as ultraconserved elements (UCEs), are emerging as two of the most popular methods for phylogenomics using reduced-representation genomic data sets. These two methods were designed to target different evolutionary timescales: RAD-seq was designed for population-genomic level questions and UCEs for deeper phylogenetics. The utility of both data sets to infer phylogenies across a variety of taxonomic levels has not been adequately compared within the same taxonomic system. Additionally, the effects of uninformative gene trees on species tree analyses (for target capture data) have not been explored. Here, we utilize RAD-seq and UCE data to infer a phylogeny of the bird genus Piranga. The group has a range of divergence dates (0.5-6 myr), contains 11 recognized species, and lacks a resolved phylogeny. We compared two species tree methods for the RAD-seq data and six species tree methods for the UCE data. Additionally, in the UCE data, we analyzed a complete matrix as well as data sets with only highly informative loci. A complete matrix of 189 UCE loci with 10 or more parsimony informative (PI) sites, and an approximately 80% complete matrix of 1128 PI single-nucleotide polymorphisms (SNPs) (from RAD-seq) yield the same fully resolved phylogeny of Piranga. We inferred non-monophyletic relationships of Piranga lutea individuals, with all other a priori species identified as monophyletic. Finally, we found that species tree analyses that included predominantly uninformative gene trees provided strong support for different topologies, with consistent phylogenetic results when limiting species tree analyses to highly informative loci or only using less informative loci with concatenation or methods meant for SNPs alone. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved.

  8. Robust gait recognition from extremely low frame-rate videos

    OpenAIRE

    Guan, Yu; Li, Chang-Tsun; Choudhury, Sruti Das

    2013-01-01

    In this paper, we propose a gait recognition method for extremely low frame-rate videos. Different from the popular temporal reconstruction-based methods, the proposed method uses the average gait over the whole sequence as the input feature template. Assuming the effects caused by an extremely low frame rate or large gait fluctuations are intra-class variations that the gallery data fails to capture, we build a general model based on the random subspace method. More specifically, a number of weak classi...
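
    The average-gait template can be sketched in a few lines: binary silhouettes are averaged over the whole sequence (in the spirit of a gait energy image), which needs no temporal reconstruction and so tolerates very low frame rates. Names are ours, and the real pipeline would first align and crop the silhouettes:

```python
def average_gait_template(silhouettes):
    """Average a list of equally sized binary silhouette grids over the
    whole sequence. Pixels that are foreground in every frame stay at 1.0;
    limb regions that swing in and out take intermediate values."""
    T = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(s[r][c] for s in silhouettes) / T for c in range(cols)]
            for r in range(rows)]
```

    Recognition would then compare such templates (e.g. by distance in a learned subspace), which is where the random subspace ensemble of the paper comes in.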

  9. Exome Capture and Massively Parallel Sequencing Identifies a Novel HPSE2 Mutation in a Saudi Arabian Child with Ochoa (Urofacial) Syndrome

    Science.gov (United States)

    Al Badr, Wisam; Al Bader, Suha; Otto, Edgar; Hildebrandt, Friedhelm; Ackley, Todd; Peng, Weiping; Xu, Jishu; Li, Jun; Owens, Kailey M.; Bloom, David; Innis, Jeffrey W.

    2011-01-01

    We describe a child of Middle Eastern descent by first-cousin mating with idiopathic neurogenic bladder and high grade vesicoureteral reflux at 1 year of age, whose characteristic facial grimace led to the diagnosis of Ochoa (Urofacial) syndrome at age 5 years. We used homozygosity mapping, exome capture and paired end sequencing to identify the disease causing mutation in the proband. We reviewed the literature with respect to the urologic manifestations of Ochoa syndrome. A large region of marker homozygosity was observed at 10q24, consistent with known autosomal recessive inheritance, family consanguinity and previous genetic mapping in other families with Ochoa syndrome. A homozygous mutation was identified in the proband in HPSE2: c.1374_1378delTGTGC, a deletion of 5 nucleotides in exon 10 that is predicted to lead to a frameshift followed by replacement of 132 C-terminal amino acids with 153 novel amino acids (p.Ala458Alafsdel132ins153). This mutation is novel relative to very recently published mutations in HPSE2 in other families. Early intervention and recognition of Ochoa syndrome with control of risk factors and close surveillance will decrease complications and renal failure. PMID:21450525

  10. Exome capture and massively parallel sequencing identifies a novel HPSE2 mutation in a Saudi Arabian child with Ochoa (urofacial) syndrome.

    Science.gov (United States)

    Al Badr, Wisam; Al Bader, Suha; Otto, Edgar; Hildebrandt, Friedhelm; Ackley, Todd; Peng, Weiping; Xu, Jishu; Li, Jun; Owens, Kailey M; Bloom, David; Innis, Jeffrey W

    2011-10-01

    We describe a child of Middle Eastern descent by first-cousin coupling with idiopathic neurogenic bladder and high-grade vesicoureteral reflux at 1 year of age, whose characteristic facial grimace led to the diagnosis of Ochoa (urofacial) syndrome at age 5 years. We used homozygosity mapping, exome capture and paired-end sequencing to identify the disease causing mutation in the proband. We reviewed the literature with respect to the urologic manifestations of Ochoa syndrome. A large region of marker homozygosity was observed at 10q24, consistent with known autosomal recessive inheritance, family consanguinity and previous genetic mapping in other families with Ochoa syndrome. A homozygous mutation was identified in the proband in HPSE2: c.1374_1378delTGTGC, a deletion of 5 nucleotides in exon 10 that is predicted to lead to a frameshift followed by replacement of 132 C-terminal amino acids with 153 novel amino acids (p.Ala458Alafsdel132ins153). This mutation is novel relative to very recently published mutations in HPSE2 in other families. Early intervention and recognition of Ochoa syndrome with control of risk factors and close surveillance will decrease complications and renal failure. Copyright © 2011 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  11. Two-description distributed video coding for robust transmission

    Directory of Open Access Journals (Sweden)

    Zhao Yao

    2011-01-01

    Full Text Available In this article, a two-description distributed video coding (2D-DVC) scheme is proposed to address robust video transmission from low-power capture devices. Odd/even frame-splitting partitions a video into two sub-sequences to produce two descriptions. Each description consists of two parts, where part 1 is a zero-motion-based H.264-coded bitstream of one sub-sequence and part 2 is a Wyner-Ziv (WZ) coded bitstream of the other sub-sequence. As the redundant part, the WZ-coded bitstream guarantees that the lost sub-sequence is recovered when one description is lost. On the other hand, the redundancy degrades the rate-distortion performance when no loss occurs. A residual 2D-DVC is employed to further improve the rate-distortion performance, where the difference of the two sub-sequences is WZ encoded to generate part 2 in each description. Furthermore, an optimization method is applied to control an appropriate amount of redundancy and therefore facilitate the tuning of the central/side distortion tradeoff. The experimental results show that the proposed schemes achieve better performance than the reference scheme, especially for low-motion videos. Moreover, our schemes still maintain the low-complexity encoding property.
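
    The odd/even splitting that creates the two descriptions is easy to sketch. The Wyner-Ziv redundancy itself is not modeled below; `conceal` stands in for the recovery path with a naive repeat-frame baseline so the data flow is visible (all names are ours):

```python
def split_descriptions(frames):
    """Odd/even frame splitting into two descriptions (the 2D-DVC
    packetization step; WZ coding of the complementary stream is omitted)."""
    return frames[0::2], frames[1::2]

def merge_descriptions(even_desc, odd_desc):
    """Re-interleave the two descriptions when both arrive intact."""
    out = []
    for i in range(max(len(even_desc), len(odd_desc))):
        if i < len(even_desc):
            out.append(even_desc[i])
        if i < len(odd_desc):
            out.append(odd_desc[i])
    return out

def conceal(received_desc):
    """Baseline concealment when one description is lost: repeat each
    received frame to stand in for its missing neighbor. The real scheme
    instead decodes the lost sub-sequence from the WZ-coded bitstream."""
    out = []
    for frame in received_desc:
        out.extend([frame, frame])
    return out
```

    Losing either description thus halves the temporal resolution at worst, which is the side-distortion behavior the central/side tradeoff in the abstract tunes against.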

  12. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    Science.gov (United States)

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes in a dynamic textures (DT) framework. First, we assume that the image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between the sparse codes of adjacent frames in the time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on the transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in DT synthesis and recognition on heavily corrupted data.
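    The transition-matrix step can be sketched as below. This is a simplified illustration that assumes the sparse codes over the learned dictionary are already available as matrix columns; a plain least-squares fit stands in for JVDL's constrained estimation, which additionally enforces stability of the matrix:

```python
import numpy as np


def fit_transition(S):
    """Fit a linear transition matrix A with S[:, t+1] ~= A @ S[:, t].

    S is a (k, T) matrix whose columns are the sparse codes of T
    consecutive frames over a learned dictionary.  Plain least squares
    is a stand-in for JVDL's constrained estimation.
    """
    X, Y = S[:, :-1], S[:, 1:]
    # Solve A X = Y for A; transpose to match lstsq's (A x = b) shape.
    A_t, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    return A_t.T


def predict_next(A, s_t):
    """One-step prediction of the next frame's sparse code."""
    return A @ s_t
```

    With the fitted matrix, a dynamic texture can be rolled forward state by state, which is the basis of DT synthesis.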

  13. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    Science.gov (United States)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

    In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial direction of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional camera operator with normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking camera operators. The potential application areas of the system include medicine, robotics and photogrammetry.

  14. A method for obtaining simian immunodeficiency virus RNA sequences from laser capture microdissected and immune captured CD68+ and CD163+ macrophages from frozen tissue sections of bone marrow and brain.

    Science.gov (United States)

    Mallard, Jaclyn; Papazian, Emily; Soulas, Caroline; Nolan, David J; Salemi, Marco; Williams, Kenneth C

    2017-03-01

    Laser capture microdissection (LCM) is used to extract cells or tissue regions for analysis of RNA, DNA or protein. Several methods of LCM are established for different applications, but a protocol for consistently obtaining lentiviral RNA from LCM-captured immune cell populations has not been described. Obtaining optimal viral RNA for analysis of viral genes from immune-captured cells using immunohistochemistry (IHC) and LCM is challenging. IHC protocols have long antibody incubation times that increase the risk of RNA degradation, and immune capture of specific cell populations such as macrophages, without staining for virus, can result in capturing only a fraction of cells that are productively infected with lentivirus. In this study we sought to obtain simian immunodeficiency virus (SIV) RNA from SIV gp120+ and CD68+ monocyte/macrophages in bone marrow (BM) and CD163+ perivascular macrophages in brain of SIV-infected rhesus macaques. Here, we report an IHC protocol with RNase inhibitors that consistently results in optimal quantity and yield of lentiviral RNA from LCM-captured immune cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. An optimised protocol for isolation of RNA from small sections of laser-capture microdissected FFPE tissue amenable for next-generation sequencing.

    Science.gov (United States)

    Amini, Parisa; Ettlin, Julia; Opitz, Lennart; Clementi, Elena; Malbon, Alexandra; Markkanen, Enni

    2017-08-23

    Formalin-fixed paraffin embedded (FFPE) tissue constitutes a vast treasury of samples for biomedical research. Thus far, however, extraction of RNA from FFPE tissue has proved challenging due to chemical RNA-protein crosslinking and RNA fragmentation, both of which heavily impact RNA quantity and quality for downstream analysis. With very small sample sizes, e.g. when performing laser-capture microdissection (LCM) to isolate specific subpopulations of cells, recovery of sufficient RNA for analysis with reverse-transcription quantitative PCR (RT-qPCR) or next-generation sequencing (NGS) becomes very cumbersome and difficult. We excised matched cancer-associated stroma (CAS) and normal stroma from clinical specimens of FFPE canine mammary tumours using LCM, and compared the commonly used protease-based RNA isolation procedure with an adapted novel technique that additionally incorporates a focused ultrasonication step. We successfully adapted a protocol that uses focused ultrasonication to isolate RNA from small amounts of deparaffinised, stained, clinical LCM samples. Using this approach, we found that total RNA yields could be increased by 8- to 12-fold compared to a commonly used protease-based extraction technique. Surprisingly, RNA extracted using this new approach was qualitatively at least equal if not superior to that from the old approach, as Cq values in RT-qPCR were on average 2.3-fold lower using the new method. Finally, we demonstrate that RNA extracted using the new method performs comparably in NGS as well. We present a successful isolation protocol for extraction of RNA from difficult and limited FFPE tissue samples that enables successful analysis of small sections of clinically relevant specimens. 
The possibility to study gene expression signatures in specific small sections of archival FFPE tissue, which often entail large amounts of highly relevant clinical follow-up data, unlocks a new dimension of hitherto difficult-to-analyse samples which now

  16. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and maximal frame rate of video capturing devices. Achieving further increases in resolution raises numerous challenges. As pixel size shrinks, the amount of light collected per pixel also decreases, increasing the noise level. Moreover, the reduced pixel size makes lens imperfections more pronounced, which especially applies to chromatic aberrations. Even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels increase further at higher frame rates. To reduce the complexity and price of the camera, a single sensor captures all three colors by relying on a Color Filter Array. To obtain a full-resolution color image, missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method, which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly performing the reduction of all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. In order to reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  17. Digital Video Revisited: Storytelling, Conferencing, Remixing

    Science.gov (United States)

    Godwin-Jones, Robert

    2012-01-01

    Five years ago in the February, 2007, issue of LLT, I wrote about developments in digital video of potential interest to language teachers. Since then, there have been major changes in options for video capture, editing, and delivery. One of the most significant has been the rise in popularity of video-based storytelling, enabled largely by…

  18. Research on key technologies in multiview video and interactive multiview video streaming

    OpenAIRE

    Xiu, Xiaoyu

    2011-01-01

    Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which makes it necessary to develop advanced technologies on multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a rate-distortion (RD) optimal manner by exploiting both temporal ...

  19. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  20. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to help clinicians review the abnormal content of a video more effectively. To select the most representative frames from the original video sequence, we formulate gastroscopic video summarization as a dictionary selection problem. Unlike traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of the selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state-of-the-art methods using content consistency, index consistency and content-index consistency with respect to the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated on content consistency, 24 of 30 videos evaluated on index consistency and all videos evaluated on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
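    The similar-inhibition idea can be illustrated with a simplified greedy sketch: pick high-scoring frames, but reject any frame too similar to one already selected. The greedy loop, cosine similarity and threshold here are our own illustrative choices, standing in for the paper's dictionary-selection optimisation:

```python
import numpy as np


def select_key_frames(features, scores, k, sim_threshold=0.9):
    """Greedy key-frame selection with a similar-inhibition constraint.

    features: (n, d) per-frame feature vectors (rows are L2-normalised
    here for cosine similarity); scores: per-frame attention cost,
    higher = more informative.  A frame is selected only if its cosine
    similarity to every already-selected frame stays below the
    threshold, which enforces diversity among the key frames.
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    order = np.argsort(scores)[::-1]          # best-scoring frames first
    selected = []
    for idx in order:
        if len(selected) == k:
            break
        if all(feats[idx] @ feats[j] < sim_threshold for j in selected):
            selected.append(int(idx))
    return sorted(selected)
```

    In the toy case of two near-duplicate frames with high scores plus one distinct frame, only one of the duplicates survives selection.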

  1. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one-stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  2. Markerless video analysis for movement quantification in pediatric epilepsy monitoring.

    Science.gov (United States)

    Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling

    2011-01-01

    This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.

  3. Laser capture microdissection followed by next-generation sequencing identifies disease-related microRNAs in psoriatic skin that reflect systemic microRNA changes in psoriasis

    DEFF Research Database (Denmark)

    Løvendorf, Marianne B; Mitsui, Hiroshi; Zibert, John R

    2015-01-01

    Psoriasis is a systemic disease with cutaneous manifestations. MicroRNAs (miRNAs) are small non-coding RNA molecules that are differentially expressed in psoriatic skin; however, only few cell- and region-specific miRNAs have been identified in psoriatic lesions. We used laser capture microdissec...

  4. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capture by non-professionals often leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos, so that a stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and processed, and the estimated camera trajectory is then optimized to stabilize the video. The optimization step determines the quality of the stabilization. This method has shown good results in terms of stabilization and removes distortion from output videos recorded in different circumstances.
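    A common way to realise the "optimize and stabilize" step is to smooth the estimated camera trajectory and apply the difference as a correction. The sketch below assumes per-frame translations have already been estimated from matched salient points (the feature matching itself is not shown), and a moving average stands in for the paper's optimization:

```python
import numpy as np


def smooth_trajectory(dx, radius=2):
    """Stabilise a jittery camera path by smoothing its trajectory.

    dx: per-frame inter-frame translations, as would be estimated from
    matched salient points.  The cumulative path is smoothed with a
    moving average (edge-padded), and the difference between the
    smoothed and raw paths gives the per-frame correction to apply.
    """
    path = np.cumsum(dx)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, radius, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed - path          # correction per frame
```

    For a perfectly steady pan (constant dx) the interior corrections are zero, since there is no jitter to remove; high-frequency shake shows up as non-zero corrections.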

  5. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web publishing, many popular contents are being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious, and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
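    The signature idea can be illustrated with a small sketch over frame feature vectors. The distance measure, signature size and comparison rule here are our own illustrative choices, not the paper's exact design:

```python
import numpy as np


def video_signature(frames, seeds, m=2):
    """Form a video signature: for each randomly chosen seed image,
    keep the m frames closest to it (Euclidean distance here).

    frames: (n, d) frame features; seeds: (s, d) seed-image features.
    Returns an (s, m, d) array of the selected frame features.
    """
    sig = []
    for seed in seeds:
        dist = np.linalg.norm(frames - seed, axis=1)
        nearest = np.argsort(dist)[:m]
        sig.append(frames[nearest])
    return np.stack(sig)


def signature_distance(sig_a, sig_b):
    """Estimate dissimilarity of two videos by comparing signatures."""
    return float(np.mean(np.linalg.norm(sig_a - sig_b, axis=-1)))
```

    Two copies of the same video produce identical signatures (distance 0), while unrelated content yields a larger signature distance, which is the basis for duplicate detection.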

  6. US Spacesuit Knowledge Capture

    Science.gov (United States)

    Chullen, Cinda; Thomas, Ken; McMann, Joe; Dolan, Kristi; Bitterly, Rose; Lewis, Cathleen

    2011-01-01

    The ability to learn from both the mistakes and successes of the past is vital to assuring success in the future. Due to the close physical interaction between spacesuit systems and human beings as users, spacesuit technology and usage lends itself rather uniquely to the benefits realized from the skillful organization of historical information; its dissemination; the collection and identification of artifacts; and the education of those in the field. The National Aeronautics and Space Administration (NASA), other organizations and individuals have been performing United States (U.S.) Spacesuit Knowledge Capture since the beginning of space exploration. Avenues used to capture the knowledge have included publication of reports; conference presentations; specialized seminars; and classes usually given by veterans in the field. More recently the effort has become more concentrated and formalized: a new avenue of spacesuit knowledge capture has been added to the archives, in which both current and retired specialists in the field are videotaped presenting technical material specifically for education and the preservation of knowledge. With video archiving, all these avenues of learning can now be brought to life with the real experts presenting their wealth of knowledge on screen for future learners to enjoy. Scope and topics of U.S. spacesuit knowledge capture have included lessons learned in spacesuit technology, experience from the Gemini, Apollo, Skylab and Shuttle programs, hardware certification, design, development and other program components, spacesuit evolution and experience, failure analysis and resolution, and aspects of program management. Concurrently, U.S. spacesuit knowledge capture activities have progressed to a level where NASA, the National Air and Space Museum (NASM), Hamilton Sundstrand (HS) and the spacesuit community are now working together to provide a comprehensive closed-looped spacesuit knowledge capture system which includes

  7. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, aka MPEG-4 Part 10 aka Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military versions are about the size of a thumb drive with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters on the fly, without stopping video transmission, thereby trading off video bandwidth (and video quality) along four dimensions of quality. The four dimensions are: 1) spatial, change from 720 pixel x 480 pixel to 320 pixel x 360 pixel to 160 pixel x 180 pixel, 2) temporal, change from 30 frames/sec to 5 frames/sec, 3) transform quality, with a 5 to 1 range, 4) and Group of Pictures (GOP), which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264 that will allow RAVC to limit rate at any point in the communication chain by discarding preselected packets.

  8. Video Analysis: Lessons from Professional Video Editing Practice

    Directory of Open Access Journals (Sweden)

    Eric Laurier

    2008-09-01

    Full Text Available In this paper we join a growing body of studies that learn from vernacular video analysts quite what video analysis as an intelligible course of action might be. Rather than pursuing epistemic questions regarding video as a number of other studies of video analysis have done, our concern here is with the crafts of producing the filmic. As such we examine how audio and video clips are indexed and brought to hand during the logging process, how a first assembly of the film is built at the editing bench and how logics of shot sequencing relate to wider concerns of plotting, genre and so on. In its conclusion we make a number of suggestions about the future directions of studying video and film editors at work. URN: urn:nbn:de:0114-fqs0803378

  9. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advances in computer vision technology and the availability of video capture devices such as surveillance cameras have given rise to new video processing applications. Research in video face recognition is mostly oriented towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance against changes due to illumination, environmental factors, scale, pose and orientation.
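    As an illustration of the tracking stage, here is a minimal 1-D constant-velocity Kalman filter of the kind used to smooth a detected face-centre position over frames. This is a simplified stand-in for the paper's tracker: a real tracker would carry a 2-D state and handle missed detections, and the noise parameters here are illustrative:

```python
import numpy as np


def kalman_track(measurements, q=1e-3, r=1e-1):
    """Filter noisy 1-D position measurements with a constant-velocity
    Kalman filter.  State: [position, velocity]; observed: position.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict step: propagate state and covariance.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0]))
    return estimates
```

    On a face moving at constant speed, the filtered position quickly locks onto the track, and the velocity component lets the filter predict the face location in the next frame before detection runs.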

  10. Video recording in movement disorders: practical issues.

    Science.gov (United States)

    Duker, Andrew P

    2013-10-01

    Video recording can provide a valuable and unique record of the physical examinations of patients with a movement disorder, capturing nuances of movement and supplementing the written medical record. In addition, video is an indispensable tool for education and research in movement disorders. Digital file recording and storage has largely replaced analog tape recording, increasing the ease of editing and storing video records. Practical issues to consider include hardware and software configurations, video format, the security and longevity of file storage, patient consent, and video protocols.

  11. [Video documentation in forensic practice].

    Science.gov (United States)

    Schyma, C; Schyma, P

    1995-01-01

    In part 1 the authors report their experiences with the Canon Ex1 Hi camcorder and the possibilities of documentation with modern video technique. Application examples in legal medicine and criminalistics are described: autopsy, crime scene, reconstruction of crimes, etc. The online video documentation of microscopic sessions makes the discussion of findings easier. The use of video films for instruction has met with a good response. The use of video documentation can be extended by digitizing (part 2). Two frame grabbers are presented, with which we obtained good results in digitizing images captured from video. The best image quality is achieved by online use of an image analysis chain. Corel 5.0 and PicEd Cora 4.0 allow complete image processing and analysis. Digital image processing influences the objectivity of the documentation. The applicability of image libraries is discussed.

  12. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  13. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely because videos can spread so widely. Video in a social situation affects the cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  14. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  15. Distortion Optimized Packet Scheduling and Prioritization of Multiple Video Streams over 802.11e Networks

    Directory of Open Access Journals (Sweden)

    Ilias Politis

    2007-01-01

    Full Text Available This paper presents a generic framework solution for minimizing the video distortion of multiple video streams transmitted over 802.11e wireless networks, including intelligent packet scheduling and channel access differentiation mechanisms. A distortion prediction model designed to capture the multireferenced-frame coding characteristic of H.264/AVC encoded videos is used to predetermine the distortion importance of each video packet in all streams. Two intelligent scheduling algorithms are proposed: “even-loss distribution,” where each video sender experiences the same loss, and “greedy-loss distribution” packet scheduling, where selected packets are dropped over all streams, ensuring that the most significant video stream in terms of picture context and quality characteristics experiences minimum losses. The proposed model has been verified with actual distortion measurements and has been found more accurate than the “additive distortion” model, which omits the correlation among lost frames. The paper includes analytical and simulation results from the comparison of both schemes and from their comparison to the simplified additive model, for different video sequences and channel conditions.
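    The "greedy-loss distribution" idea can be sketched as a simple packet-dropping loop: discard the packets with the least predicted distortion impact across all streams first. Packet counts and distortion values below are illustrative; in the paper, per-packet importance comes from an H.264/AVC distortion-prediction model:

```python
def greedy_loss_schedule(streams, drop_budget):
    """Choose which packets to drop across all streams, greedily.

    streams: {stream_id: [distortion_importance_per_packet]}.
    Returns the set of (stream_id, packet_index) pairs to drop, so that
    the most significant packets, and hence the most important stream,
    experience minimum losses.
    """
    packets = [(importance, sid, i)
               for sid, importances in streams.items()
               for i, importance in enumerate(importances)]
    packets.sort()                       # least important packets first
    return {(sid, i) for _, sid, i in packets[:drop_budget]}
```

    By contrast, an "even-loss distribution" scheduler would spread the same drop budget uniformly across senders regardless of per-packet importance.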

  16. Targeted capture sequencing in whitebark pine reveals range-wide demographic and adaptive patterns despite challenges of a large, repetitive genome

    Directory of Open Access Journals (Sweden)

    John eSyring

    2016-04-01

    Full Text Available Whitebark pine (Pinus albicaulis inhabits an expansive range in western North America, and it is a keystone species of subalpine environments. Whitebark is susceptible to multiple threats – climate change, white pine blister rust, mountain pine beetle, and fire exclusion – and it is suffering significant mortality range-wide, prompting the tree to be listed as ‘globally endangered’ by the International Union for Conservation of Nature (IUCN and ‘endangered’ by the Canadian government. Conservation collections (in situ and ex situ are being initiated to preserve the genetic legacy of the species. Reliable, transferrable, and highly variable genetic markers are essential for quantifying the genetic profiles of seed collections relative to natural stands, and ensuring the completeness of conservation collections. We evaluated the use of hybridization-based target capture to enrich specific genomic regions from the 30+ GB genome of whitebark pine, and to evaluate genetic variation across loci, trees, and geography. Probes were designed to capture 7,849 distinct genes, and screening was performed on 48 trees. Despite the inclusion of repetitive elements in the probe pool, the resulting dataset provided information on 4,452 genes and 32% of targeted positions (528,873 bp, and we were able to identify 12,390 segregating sites from 47 trees. Variations reveal strong geographic trends in heterozygosity and allelic richness, with trees from the southern Cascade and Sierra Range showing the greatest distinctiveness and differentiation. Our results show that even under non-optimal conditions (low enrichment efficiency; inclusion of repetitive elements in baits, targeted enrichment produces high quality, codominant genotypes from large genomes. The resulting data can be readily integrated into management and gene conservation activities for whitebark pine, and have the potential to be applied to other members of 5-needle pine group (Pinus subsect

  17. Validation and Implementation of Targeted Capture and Sequencing for the Detection of Actionable Mutation, Copy Number Variation, and Gene Rearrangement in Clinical Cancer Specimens

    OpenAIRE

    Pritchard, Colin C.; Salipante, Stephen J.; Koehler, Karen; Smith, Christina; Scroggins, Sheena; Wood, Brent; Wu, David; Lee, Ming K.; Dintzis, Suzanne; Adey, Andrew; Liu, Yajuan; Eaton, Keith D.; Martins, Renato; Stricker, Kari; Margolin, Kim A.

    2014-01-01

    Recent years have seen development and implementation of anticancer therapies targeted to particular gene mutations, but methods to assay clinical cancer specimens in a comprehensive way for the critical mutations remain underdeveloped. We have developed UW-OncoPlex, a clinical molecular diagnostic assay to provide simultaneous deep-sequencing information, based on >500× average coverage, for all classes of mutations in 194 clinically relevant genes. To validate UW-OncoPlex, we tested 98 prev...

  18. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
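The benefit of decoder-side side information can be shown with a toy numerical sketch (my own construction, not the paper's pipeline): if the decoder shifts the decoded left view by a disparity estimate, the residual against the right view has far less energy than the right view itself, which is what makes Wyner-Ziv coding of the residual cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
disparity = 5
left = rng.integers(0, 256, 64).astype(float)             # one row of the decoded left view
right = np.roll(left, disparity) + rng.normal(0, 2, 64)   # right view: shifted copy plus sensor noise

side_info = np.roll(left, disparity)                      # decoder-side prediction from disparity
residual = right - side_info                              # what Wyner-Ziv coding must actually convey

print(np.var(right), np.var(residual))                    # residual energy is orders of magnitude smaller
```

In the actual schemes, the disparity maps come from loopy-belief-propagation stereo matching and the residual coefficients are coded with scalar quantization plus LDPC codes.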

  19. Video databases: automatic retrieval based on content.

    Science.gov (United States)

    Bolle, R. M.; Yeo, B.-L.; Yeung, M.

    Digital video databases are becoming more and more pervasive, and finding video of interest in large databases is rapidly becoming a problem. Intelligent means of quick content-based video retrieval and rapid content-based video viewing are, therefore, an important topic of research. Video is a rich source of data: it contains visual and audio information, and in many cases there is text associated with the video. Content-based video retrieval should use all this information in an efficient and effective way. From a human perspective, a video query can be viewed as an iterated sequence of navigating, searching, browsing, and viewing. This paper addresses video search in terms of these phases.

  20. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shao, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  1. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of video has inherent structure such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.

  2. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    Science.gov (United States)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications, from industrial to entertainment, needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is fast, as in vehicle movement, sport biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses 2 to 4 machine vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system has been used in a set of different application fields and demonstrated high accuracy and a high level of automation.
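The core photogrammetric step, recovering the 3D position of a point seen by two synchronized, calibrated cameras, can be sketched with linear (DLT) triangulation. The camera parameters below are hypothetical, not those of the described system.

```python
import numpy as np

# Two synthetic pinhole cameras (illustrative intrinsics): P = K [R | t]
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])  # camera 2 offset by a 0.5 m baseline

def project(P, X):
    """Project a 3D point to pixel coordinates through camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation from two pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([0.2, -0.1, 3.0])    # a marker 3 m in front of the rig
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With synchronized exposures, the same computation per frame yields a full 3D trajectory of each marker.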

  3. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  4. Scene-aware joint global and local homographic video coding

    Science.gov (United States)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
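As background, an eight-parameter homography is a 3x3 matrix (with the last entry fixed to 1) that maps reference-frame pixel coordinates to current-frame coordinates through a projective division. The sketch below, with purely illustrative coefficients, shows the warping step that any homography-based predictor performs per block.

```python
import numpy as np

# A 3x3 homography; 8 free parameters after fixing H[2,2] = 1 (values are illustrative)
H = np.array([[1.02, 0.01, 3.0],
              [-0.01, 1.02, -2.0],
              [1e-5, 0.0, 1.0]])

def warp_points(H, pts):
    """Map Nx2 pixel coordinates through a homography (with projective division)."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Predict where a 4x4 block of reference pixels lands in the current frame
block = np.array([[x, y] for y in range(16, 20) for x in range(16, 20)], float)
pred_positions = warp_points(H, block)
```

The paper's contribution is to replace one global homography with globally coded camera parameters plus three per-block plane parameters, which amounts to a per-block homography induced by each local scene plane.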

  5. AniPaint: interactive painterly animation from video.

    Science.gov (United States)

    O'Donovan, Peter; Hertzmann, Aaron

    2012-03-01

    This paper presents an interactive system for creating painterly animation from video sequences. Previous approaches to painterly animation typically emphasize either purely automatic stroke synthesis or purely manual stroke keyframing. Our system supports a spectrum of interaction between these two approaches which allows the user more direct control over stroke synthesis. We introduce an approach for controlling the results of painterly animation: keyframed Control Strokes can affect the placement, orientation, movement, and color of automatically synthesized strokes. Furthermore, we introduce a new automatic synthesis algorithm that traces strokes through a video sequence in a greedy manner, but, instead of a vector field, uses an objective function to guide placement. This allows the method to capture fine details, respect region boundaries, and achieve greater temporal coherence than previous methods. All editing is performed with a WYSIWYG interface where the user can directly refine the animation. We demonstrate a variety of examples using both automatic and user-guided results, with a variety of styles and source videos.

  6. Validation and implementation of targeted capture and sequencing for the detection of actionable mutation, copy number variation, and gene rearrangement in clinical cancer specimens.

    Science.gov (United States)

    Pritchard, Colin C; Salipante, Stephen J; Koehler, Karen; Smith, Christina; Scroggins, Sheena; Wood, Brent; Wu, David; Lee, Ming K; Dintzis, Suzanne; Adey, Andrew; Liu, Yajuan; Eaton, Keith D; Martins, Renato; Stricker, Kari; Margolin, Kim A; Hoffman, Noah; Churpek, Jane E; Tait, Jonathan F; King, Mary-Claire; Walsh, Tom

    2014-01-01

    Recent years have seen development and implementation of anticancer therapies targeted to particular gene mutations, but methods to assay clinical cancer specimens in a comprehensive way for the critical mutations remain underdeveloped. We have developed UW-OncoPlex, a clinical molecular diagnostic assay to provide simultaneous deep-sequencing information, based on >500× average coverage, for all classes of mutations in 194 clinically relevant genes. To validate UW-OncoPlex, we tested 98 previously characterized clinical tumor specimens from 10 different cancer types, including 41 formalin-fixed paraffin-embedded tissue samples. Mixing studies indicated reliable mutation detection in samples with ≥ 10% tumor cells. In clinical samples with ≥ 10% tumor cells, UW-OncoPlex correctly identified 129 of 130 known mutations [sensitivity 99.2%, (95% CI, 95.8%-99.9%)], including single nucleotide variants, small insertions and deletions, internal tandem duplications, gene copy number gains and amplifications, gene copy losses, chromosomal gains and losses, and actionable genomic rearrangements, including ALK-EML4, ROS1, PML-RARA, and BCR-ABL. In the same samples, the assay also identified actionable point mutations in genes not previously analyzed and novel gene rearrangements of MLL and GRIK4 in melanoma, and of ASXL1, PIK3R1, and SGCZ in acute myeloid leukemia. To best guide existing and emerging treatment regimens and facilitate integration of genomic testing with patient care, we developed a framework for data analysis, decision support, and reporting clinically actionable results. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
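The ≥10% tumor-cell threshold can be given a back-of-envelope rationale (my own illustration, not the authors' validation model): a heterozygous mutation in a sample with 10% tumor cells appears at roughly a 5% variant allele fraction, so at 500× coverage about 25 variant-supporting reads are expected, and the probability of clearing even a modest hypothetical read-count threshold is essentially 1.

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact tail sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

coverage = 500
vaf = 0.05        # heterozygous variant, ~10% tumor cells -> ~5% allele fraction
min_reads = 8     # hypothetical caller threshold, for illustration only
p_detect = prob_at_least(min_reads, coverage, vaf)
print(f"{p_detect:.5f}")   # > 0.999
```

Below 10% tumor content the expected read count shrinks toward the sequencing noise floor, which is why deeper coverage or higher purity is needed.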

  7. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...... artificial sequence containing uncompressible data, all the 4:2:2, 8-bit test video material easily compresses losslessly to a rate below 125 Mbit/s. At this rate, video plus overhead can be contained in a single telecom 4th order PDH channel or a single STM-1 channel. Difficult 4:2:2, 10-bit test material...
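The 125 Mbit/s figure is plausible from first principles. Assuming 625-line SDTV dimensions (720×576 at 25 fps, which the record does not state), raw 4:2:2 8-bit video runs at about 166 Mbit/s, so only a modest lossless compression ratio is needed:

```python
# Raw rate of 4:2:2, 8-bit, 720x576 @ 25 fps: one luma plane + two half-width chroma planes
samples_per_frame = 720 * 576 + 2 * (360 * 576)    # 829,440 samples
raw_mbit_s = samples_per_frame * 8 * 25 / 1e6      # ~165.9 Mbit/s uncompressed
compression_needed = raw_mbit_s / 125              # ~1.33:1 to fit under 125 Mbit/s
print(raw_mbit_s, compression_needed)
```

A lossless ratio of roughly 1.3:1 is routinely achievable on natural video, consistent with the claim that only pathological (uncompressible) material misses the target.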

  8. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has poor quality. Key-frame selection algorithms are flexible to changes in the video, but in these methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the original video, without significant loss of content between the original and received video, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. The best video transmission was also investigated using SEDIM (Sequential Distortion Minimization Method) and without SEDIM. The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
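PSNR, the primary quality metric quoted above, is computed from the mean squared error between the original and received frames. A minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

ref = np.zeros((480, 640), dtype=np.uint8)
noisy = ref + 1                       # every pixel off by one gray level: MSE = 1
print(round(psnr(ref, noisy), 2))
```

A uniform one-level error in 8-bit video gives 10·log10(255²) ≈ 48.13 dB, a handy reference point when reading PSNR figures like those above.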

  9. Internet-based dissemination of educational video presentations: a primer in video podcasting.

    Science.gov (United States)

    Corl, Frank M; Johnson, Pamela T; Rowell, Melissa R; Fishman, Elliot K

    2008-07-01

    Video "podcasting" is an Internet-based publication and syndication technology that is defined as the process of capturing, editing, distributing, and downloading audio, video, and general multimedia productions. The expanded capacity for visual components allows radiologists to view still and animated media. These image-viewing characteristics and the ease of widespread delivery are well suited for radiologic education. This article presents detailed information about how to generate and distribute a video podcast using a Macintosh platform.

  10. Mengolah Data Video Analog menjadi Video Digital Sederhana

    Directory of Open Access Journals (Sweden)

    Nick Soedarso

    2010-10-01

    Full Text Available Nowadays, editing technology has entered the digital age. Processing analog data into digital has become simpler since editing technology has been integrated into all aspects of society. Understanding the technique of processing analog data into digital is important in producing a video. To utilize this technology, an introduction to the equipment is fundamental to understanding its features. The next phase is the capture process, which prepares the footage for scene-to-scene editing, ultimately resulting in a watchable video.

  11. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitization and the internet, however,...... new opportunities and challenges have arisen with regard to communicating and distributing research results to different target groups via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to the subject of study, remain relevant. Both classic and new...... issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...

  12. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  13. 4DCapture/4DPlayer: evolving software packages for capturing, analyzing and displaying two- and three-dimensional motion data

    Science.gov (United States)

    Walton, James S.; Hodgson, Peter N.; Hallamasek, Karen G.

    2007-01-01

    In September 2002, during the 25th Congress on High Speed Photography and Photonics, 4DVideo described a general purpose software application for the PC platform. This software (4DCapture™) is designed to capture, analyze and display multiple video sequences. The application extracts trajectories and other kinematic information from (high-speed) video streams. Since 4DCapture™ was originally described, it has matured, and a second application (4DPlayer™) has been introduced to support the distribution and viewing of video streams and kinematic data acquired by 4DCapture™. 4DPlayer™ is "freeware". It may be redistributed to third parties, but it may not be modified. 4DCapture™ provides a structured environment for experimental data. Cameras are treated as transducers, that is, a source of technical data. The application provides an interface to the cameras for previewing the object-space, calibrating the images, and testing. This application can automatically track multiple landmarks seen from two or more views in two or three dimensions. Trajectories can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated data analysis application. 4DCapture™ also incorporates a simple animation capability and a friendly (FlowStack™) user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture™ employs the AVI 2.0 standard and DirectX technology. 4DPlayer™ can be used to view multiple video sequences simultaneously and perform simple measurements of displacements and angles that vary over time. This application can detect and display the coordinates of landmarks previously identified by 4DCapture™ that have been embedded in the video streams.

  14. Video Playback Modifications for a DSpace Repository

    Directory of Open Access Journals (Sweden)

    Keith Gilbertson

    2016-01-01

    Full Text Available This paper focuses on modifications to an institutional repository system using the open source DSpace software to support playback of digital videos embedded within item pages. The changes were made in response to the formation and quick startup of an event capture group within the library that was charged with creating and editing video recordings of library events and speakers. This paper specifically discusses the selection of video formats, changes to the visual theme of the repository to allow embedded playback and captioning support, and modifications and bug fixes to the file downloading subsystem to enable skip-ahead playback of videos via byte-range requests. This paper also describes workflows for transcoding videos in the required formats, creating captions, and depositing videos into the repository.
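Skip-ahead playback works because the player requests only the byte span it needs via an HTTP `Range` header (RFC 7233). A minimal sketch of the server-side parsing logic, not DSpace's actual implementation:

```python
import re

def parse_range(header, file_size):
    """Parse 'bytes=start-end' and return an inclusive (start, end) pair, per RFC 7233."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or (m.group(1) == "" and m.group(2) == ""):
        return None
    start, end = m.group(1), m.group(2)
    if start == "":                          # suffix range: the last N bytes
        n = int(end)
        return (max(file_size - n, 0), file_size - 1)
    start = int(start)
    end = int(end) if end else file_size - 1  # open-ended range runs to EOF
    if start >= file_size:
        return None                           # would trigger 416 Range Not Satisfiable
    return (start, min(end, file_size - 1))

print(parse_range("bytes=500-999", 10_000))   # → (500, 999)
print(parse_range("bytes=-500", 10_000))      # → (9500, 9999)
```

The server then responds with status 206, a `Content-Range` header, and only the requested slice, which is what lets a video player seek without downloading the whole file.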

  15. Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking

    Science.gov (United States)

    Antonya, C.

    2017-12-01

    Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analyzing sequences of images recorded with video capture devices, using image processing algorithms. The returned data contain mainly point clouds, coordinates of markers, or coordinates of points of interest. These data can be used to retrieve information related to the geometry of the objects, but also to extract parameters for an analytical model of the system, useful in a variety of computer-aided engineering simulations. Parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least squares method was used to fit the data to different geometrical shapes (ellipse, circle, plane) and to obtain the position and orientation of revolute joints.
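For a revolute joint, tracked markers sweep circular arcs, so the fitting step can be sketched with the linear least-squares circle fit (the Kåsa method); the marker trajectory below is synthetic.

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = 2ax + 2by + c for (a, b, c)."""
    pts = np.asarray(points, float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)       # since c = r^2 - a^2 - b^2
    return (a, b), radius

# Synthetic markers on a joint sweeping a half-circle of radius 2 about (1, 3)
theta = np.linspace(0, np.pi, 20)
pts = np.column_stack([1 + 2 * np.cos(theta), 3 + 2 * np.sin(theta)])
center, r = fit_circle(pts)
```

The fitted center locates the joint axis in the image plane; with two camera views the same idea extends to the 3D axis position and orientation.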

  16. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  17. Understanding Motion Capture for Computer Animation

    CERN Document Server

    Menache, Alberto

    2010-01-01

    The power of today's motion capture technology has taken animated characters and special effects to amazing new levels of reality. And with the release of blockbusters like Avatar and Tintin, audiences continually expect more from each new release. To live up to these expectations, film and game makers, particularly technical animators and directors, need to be at the forefront of motion capture technology. In this extensively updated edition of Understanding Motion Capture for Computer Animation and Video Games, an industry insider explains the latest research developments in digital design

  18. Video temporal alignment for object viewpoint

    OpenAIRE

    Papazoglou, Anestis; Del Pero, Luca; Ferrari, Vittorio

    2017-01-01

    We address the problem of temporally aligning semantically similar videos, for example two videos of cars on different tracks. We present an alignment method that establishes frame-to-frame correspondences such that the two cars are seen from a similar viewpoint (e.g. facing right), while also being temporally smooth and visually pleasing. Unlike previous works, we do not assume that the videos show the same scripted sequence of events. We compare against three alternative methods, including ...
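Temporal alignment under varying event speed is classically handled with dynamic time warping (DTW) over per-frame descriptors. The paper's method differs (it optimizes viewpoint similarity and smoothness rather than assuming a shared script), but a generic DTW sketch over 1-D frame features conveys the core idea of frame-to-frame correspondence.

```python
import math

def dtw(a, b):
    """Dynamic time warping cost between two 1-D feature sequences, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: skip in a, skip in b, or match both
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

fast = [0, 1, 2, 3, 4]
slow = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]   # the same motion at half speed
print(dtw(fast, slow))                   # → 0.0
```

Backtracking through the cost matrix `D` yields the actual frame-to-frame correspondences rather than just the alignment cost.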

  19. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.
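Of the enhancement steps mentioned, temporal denoising is the simplest to sketch: averaging N aligned frames of a static scene reduces zero-mean noise by roughly √N. This is a textbook illustration, not the authors' algorithm, and real UAV video additionally requires motion compensation before averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, (48, 64)).astype(float)           # static scene
n_frames = 8
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(n_frames)]

denoised = np.mean(frames, axis=0)          # temporal average over the frame stack

noise_before = np.std(frames[0] - clean)    # ≈ 10 (the injected noise level)
noise_after = np.std(denoised - clean)      # ≈ 10 / sqrt(8) ≈ 3.5
print(noise_before, noise_after)
```

Cleaner input of this kind is what gives downstream detection and tracking algorithms a better chance on low-quality UAV feeds.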

  20. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm that exploits the source statistics at the decoder based on the availability of the Side Information (SI). Stereo sequences are constituted by two views to give the user an illusion of depth. In this paper, we present a DVC decoder...

  1. R-clustering for egocentric video segmentation

    NARCIS (Netherlands)

    Talavera Martínez, Estefanía; Radeva, Petia

    2015-01-01

    In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering

  2. Implementation of multistandard video signals integrator

    Science.gov (United States)

    Zabołotny, Wojciech M.; Pastuszak, Grzegorz; Sokół, Grzegorz; Borowik, Grzegorz; Gąska, Michał; Kasprowicz, Grzegorz H.; Poźniak, Krzysztof T.; Abramowski, Andrzej; Buchowicz, Andrzej; Trochimiuk, Maciej; Frasunek, Przemysław; Jurkiewicz, Rafał; Nalbach-Moszynska, Małgorzata; Wawrzusiak, Radosław; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Paweł; Jewartowski, Błażej; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2017-08-01

    The paper describes the prototype implementation of the Video Signals Integrator (VSI). The function of the system is to integrate video signals from many sources. The VSI is a complex hybrid system consisting of hardware, firmware and software components. Its creation requires the joint effort of experts from different areas. The VSI capture device is a portable hardware device responsible for capturing video signals from different sources and in various formats, and for transmitting them to the server. The NVR server aggregates video and control streams coming from different sources and multiplexes them into logical channels, with each channel representing a single source. From there each channel can be distributed further to the end clients (consoles) for live display via a number of RTSP servers. The end client can, at the same time, inject control messages into a given channel to control the movement of a CCTV camera.

  3. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  4. Video Analytics

    DEFF Research Database (Denmark)

    include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...

  5. Robust Multitask Multiview Tracking in Videos.

    Science.gov (United States)

    Mei, Xue; Hong, Zhibin; Prokhorov, Danil; Tao, Dacheng

    2015-11-01

    Various sparse-representation-based methods have been proposed to solve tracking problems, and most of them employ least squares (LS) criteria to learn the sparse representation. In many tracking scenarios, traditional LS-based methods may not perform well owing to the presence of heavy-tailed noise. In this paper, we present a tracking approach using an approximate least absolute deviation (LAD)-based multitask multiview sparse learning method to enjoy the robustness of LAD and take advantage of multiple types of visual features, such as intensity, color, and texture. The proposed method is integrated in a particle filter framework, where learning the sparse representation for each view of a single particle is regarded as an individual task. The underlying relationship between tasks across different views and different particles is jointly exploited in a unified robust multitask formulation based on LAD. In addition, to capture the frequently emerging outlier tasks, we decompose the representation matrix into two collaborative components that enable a more robust and accurate approximation. We show that the proposed formulation can be effectively approximated by Nesterov's smoothing method and efficiently solved using the accelerated proximal gradient method. The presented tracker is implemented using four types of features and is tested on numerous synthetic sequences and real-world video sequences, including the CVPR2013 tracking benchmark and the ALOV++ data set. Both the qualitative and quantitative results demonstrate the superior performance of the proposed approach compared with several state-of-the-art trackers.
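The robustness argument for LAD over LS is easiest to see in the scalar location case: the LS estimate is the mean, the LAD estimate is the median, and only the latter resists a heavy-tailed outlier.

```python
import statistics

samples = [1.0, 1.1, 0.9, 1.05, 0.95]
with_outlier = samples + [100.0]                 # one heavy-tailed observation

ls_estimate = statistics.mean(with_outlier)      # LS solution: pulled up to 17.5
lad_estimate = statistics.median(with_outlier)   # LAD solution: stays near 1.025
print(ls_estimate, lad_estimate)
```

The paper applies the same principle in a high-dimensional setting, replacing the LS data-fit term of multitask sparse learning with a (smoothed) LAD term.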

  6. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard, developed jointly by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras in a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result, inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers, the first being the full- and sub-pixel motion search, the second being the mode selection process and the third being the repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
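
    The macroblock-level decision described above might be sketched as follows. The homogeneity measure (total spread of the sub-partition motion vectors) and the threshold are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def motion_is_homogeneous(partition_mvs, thresh=1.0):
    """Homogeneity proxy: sum of the standard deviations of dx and dy over the
    sub-partition motion vectors of one macroblock (threshold is illustrative)."""
    mvs = np.asarray(partition_mvs, dtype=float)
    return mvs.std(axis=0).sum() <= thresh

def select_prediction_modes(partition_mvs):
    # skip the costly inter-view search when the macroblock's motion is homogeneous
    if motion_is_homogeneous(partition_mvs):
        return ["temporal"]
    return ["temporal", "inter-view"]
```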

  7. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    Science.gov (United States)

    Marijan, Malisa; Demirkol, Ilker; Maricić, Danijel; Sharma, Gaurav; Ignjatović, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic to the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel-level sigma-delta (Σ∆) image sensor design, which allows investigation of the tradeoff between the bit depth of the captured images and the spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under a power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget.
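
    A toy version of the allocation problem makes the idea concrete. It assumes a hypothetical distortion model in which each module's distortion falls off as 1/power; the paper's actual P-R-D characteristics are more elaborate, so this is a sketch of the search, not of their model.

```python
import numpy as np

def best_allocation(budget, a=1.0, b=1.0, c=1.0, steps=50):
    """Grid-search the power split (sensing ps, compression pc, transmission pt)
    minimizing a hypothetical distortion d = a/ps + b/pc + c/pt with ps+pc+pt <= budget."""
    grid = np.linspace(0.01, budget, steps)
    best_d, best_split = np.inf, None
    for ps in grid:
        for pc in grid:
            pt = budget - ps - pc
            if pt <= 0:
                continue  # infeasible split, exceeds the budget
            d = a / ps + b / pc + c / pt
            if d < best_d:
                best_d, best_split = d, (ps, pc, pt)
    return best_d, best_split
```

    With symmetric module costs the search recovers the expected near-equal split; asymmetric costs shift power toward the module that reduces distortion fastest.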

  8. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. Unconstrained videos are defined as long-duration consumer videos that usually have diverse editing artifacts and significant content complexity. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.
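
    A dictionary of "videography words" is essentially a quantization of per-clip features. The sketch below illustrates that step with a tiny k-means; the farthest-point initialization and the 2-D feature layout are assumptions for the example, not the paper's procedure.

```python
import numpy as np

def build_videography_words(features, k=3, iters=20):
    """Quantize per-clip videography features (N, D) into k 'words' via k-means."""
    f = np.asarray(features, dtype=float)
    # greedy farthest-point initialization keeps seeds in distinct clusters
    centers = [f[0]]
    for _ in range(k - 1):
        d = ((f[:, None] - np.array(centers)) ** 2).sum(-1).min(axis=1)
        centers.append(f[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each clip to its nearest word, then recompute word centers
        labels = ((f[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            pts = f[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels
```

    A video is then represented as the sequence of word labels of its clips, which downstream retrieval or summarization can consume.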

  9. Video time encoding machines.

    Science.gov (United States)

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
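
    The integrate-and-fire mechanism at the heart of the encoder can be sketched in a few lines. Parameter names (kappa for the integration constant, delta for the threshold, bias) follow common IAF time-encoding conventions; the values here are illustrative.

```python
def iaf_encode(u, dt, kappa=1.0, delta=0.05, bias=1.0):
    """Integrate-and-fire time encoding: integrate (u + bias)/kappa over time and
    emit a spike time whenever the integral reaches the threshold delta, then reset."""
    y, spikes = 0.0, []
    for k, uk in enumerate(u):
        y += (uk + bias) * dt / kappa
        if y >= delta:
            spikes.append((k + 1) * dt)  # record the spike time
            y -= delta                   # reset with feedback
    return spikes
```

    For a zero input the positive bias alone drives spiking at a constant rate; a time-varying input modulates the inter-spike intervals, which is exactly the information the decoder inverts.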

  10. Compact video synopsis via global spatiotemporal optimization.

    Science.gov (United States)

    Nie, Yongwei; Xiao, Chunxia; Sun, Hanqiu; Li, Ping

    2013-10-01

    Video synopsis aims at providing condensed representations of video data sets that can nowadays be easily captured with digital cameras, especially daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects if the video is compressed much. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit with the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain the high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results have shown that the compact video synopsis we produced can be browsed quickly, preserves relative spatiotemporal relationships, and avoids motion collisions.

  11. Towards User Experience-Driven Adaptive Uplink Video Transmission for Automotive Applications

    OpenAIRE

    Lottermann, Christian

    2016-01-01

    The focus of this thesis is to enable user experience-driven uplink video streaming from mobile video sources with limited computational capacity and to apply these to resource-constraint automotive environments. The first part investigates perceptual quality-aware encoding of videos, the second part proposes camera context-based estimators of temporal and spatial activities for videos captured by a front-facing camera of a vehicle, and the last part studies the upstreaming of videos from a m...

  12. Integral photography capture and electronic holography display

    Science.gov (United States)

    Ichihashi, Yasuyuki; Yamamoto, Kenji

    2014-06-01

    This paper describes electronic holography output of three-dimensional (3D) video with integral photography as input. A real-time 3D image reconstruction system was implemented by using a 4K (3840×2160) resolution IP camera to capture 3D images and converting them to 8K (7680×4320) resolution holograms. Multiple graphics processing units (GPUs) were used to create 8K holograms from 4K IP images. In addition, higher resolution holograms were created to successfully reconstruct live-scene video having a diagonal size of 6 cm using a large electronic holography display.

  13. Comparison of the Abbott RealTime High-Risk Human Papillomavirus (HPV), Roche Cobas HPV, and Hybrid Capture 2 assays to direct sequencing and genotyping of HPV DNA.

    Science.gov (United States)

    Park, Yongjung; Lee, Eunhee; Choi, Jonghyeon; Jeong, Seri; Kim, Hyon-Suk

    2012-07-01

    Infection with high-risk (HR) human papillomavirus (HPV) genotypes is an important risk factor for cervical cancers. We evaluated the clinical performances of two new real-time PCR assays for detecting HR HPVs compared to that of the Hybrid Capture 2 test (HC2). A total of 356 cervical swab specimens, which had been examined for cervical cytology, were assayed by Abbott RealTime HR and Roche Cobas HPV as well as HC2. Sensitivities and specificities of these assays were determined based on the criteria that concordant results among the three assays were regarded as true-positive or -negative and that the results of genotyping and sequencing were considered true findings when the HPV assays presented discrepant results. The overall concordance rate among the results for the three assays was 82.6%, and RealTime HR and Cobas HPV assays agreed with HC2 in 86.1% and 89.9% of cases, respectively. The two real-time PCR assays agreed with each other for 89.6% of the samples, and the concordance rate between them was equal to or greater than 98.0% for detecting HPV type 16 or 18. HC2 demonstrated a sensitivity of 96.6% with a specificity of 89.1% for detecting HR HPVs, while RealTime HR presented a sensitivity of 78.3% with a specificity of 99.2%. The sensitivity and specificity of Cobas HPV for detecting HR HPVs were 91.7% and 97.0%. The new real-time PCR assays exhibited lower sensitivities for detecting HR HPVs than that of HC2. Nevertheless, the newly introduced assays have an advantage of simultaneously identifying HPV types 16 and 18 from clinical samples.

  14. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  15. Building 3D Event Logs for Video Investigation

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2015-01-01

    In scene investigation, creating a video log captured using a handheld camera is more convenient and more complete than taking photos and notes. By introducing video analysis and computer vision techniques, it is possible to build a spatio-temporal representation of the investigation. Such a

  16. In vivo skin elastography with high-definition optical videos.

    Science.gov (United States)

    Zhang, Yong; Brodell, Robert T; Mostow, Eliot N; Vinyard, Christopher J; Marie, Hazel

    2009-08-01

    Continuous measurements of biomechanical properties of skin provide potentially valuable information to dermatologists for both clinical diagnosis and quantitative assessment of therapy. This paper presents an experimental study on in vivo imaging of skin elastic properties using high-definition optical videos. The objective is to (i) investigate whether skin property abnormalities can be detected in the computed strain elastograms, (ii) quantify property abnormalities with a Relative Strain Index (RSI), so that an objective rating system can be established, (iii) determine whether certain skin diseases are more amenable to optical elastography and (iv) identify factors that may have an adverse impact on the quality of strain elastograms. There are three steps in optical skin elastography: (i) skin deformations are recorded in a video sequence using a high-definition camcorder, (ii) a dense motion field between two adjacent video frames is obtained using a robust optical flow algorithm, with which a cumulative motion field between two frames of a larger interval is derived and (iii) a strain elastogram is computed by applying two weighted gradient filters to the cumulative motion data. Experiments were carried out using videos of 25 patients. In the three cases presented in this article (hypertrophic lichen planus, seborrheic keratosis and psoriasis vulgaris), abnormal tissues associated with the skin diseases were successfully identified in the elastograms. There exists a good correspondence between the shape of property abnormalities and the area of diseased skin. The computed RSI gives a quantitative measure of the magnitude of property abnormalities that is consistent with the skin stiffness observed on clinical examinations. Optical elastography is a promising imaging modality that is capable of capturing disease-induced property changes. Its main advantage is that an elastogram presents a continuous description of the spatial variation of skin properties on
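
    Step (iii) — turning a cumulative motion field into a strain map and an RSI-like score — might be sketched as below. A plain vertical gradient stands in for the paper's weighted gradient filters, and the RSI definition here is an assumption for illustration.

```python
import numpy as np

def strain_elastogram(disp):
    """Axial strain: vertical gradient of the cumulative vertical displacement field (H, W)."""
    return np.gradient(disp, axis=0)

def relative_strain_index(strain, lesion_mask):
    """Illustrative RSI: mean |strain| inside the suspected lesion vs. outside.
    Values well below 1 indicate stiffer, less deforming tissue."""
    return np.abs(strain[lesion_mask]).mean() / np.abs(strain[~lesion_mask]).mean()
```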

  17. Video-based convolutional neural networks for activity recognition from robot-centric videos

    Science.gov (United States)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these different representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
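
    One of the representations mentioned, pooling per-frame CNN descriptors over time, reduces to a single reduction once the descriptors are extracted; the sketch below assumes descriptors are already available as a (T, D) array.

```python
import numpy as np

def temporal_pool(frame_feats, mode="max"):
    """Collapse per-frame CNN descriptors of shape (T, D) into one video
    descriptor of shape (D,) by max- or mean-pooling over time."""
    f = np.asarray(frame_feats, dtype=float)
    return f.max(axis=0) if mode == "max" else f.mean(axis=0)
```

    The pooled descriptor is then fed to a standard classifier; 3-D convolutions and recurrent models replace this fixed pooling with learned temporal aggregation.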

  18. Super-Resolution Still and Video Reconstruction from MPEG Coded Video

    National Research Council Canada - National Science Library

    Altunbasak, Yucel

    2004-01-01

    Transform coding is a popular and effective compression method for both still images and video sequences, as is evident from its widespread use in international media coding standards such as MPEG, H.263 and JPEG...

  19. Simultaneous video stabilization and moving object detection in turbulence.

    Science.gov (United States)

    Oreifej, Omar; Li, Xin; Shah, Mubarak

    2013-02-01

    Turbulence mitigation refers to the stabilization of videos with nonuniform deformations due to the influence of optical turbulence. Typical approaches for turbulence mitigation follow averaging or dewarping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. In this paper, we address the novel problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object. We simplify this extremely difficult problem into a minimization of nuclear norm, Frobenius norm, and l1 norm. Our method is based on two observations: First, the turbulence causes dense and Gaussian noise and therefore can be captured by Frobenius norm, while the moving objects are sparse and thus can be captured by l1 norm. Second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects.
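
    The three norms in the decomposition correspond to three standard proximal operators, which a splitting-type solver would alternate between. This is a generic sketch of those operators, not the paper's full minimization algorithm.

```python
import numpy as np

def prox_nuclear(M, t):
    """Singular-value soft-thresholding: proximal operator of t*||.||_* (low-rank background)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def prox_l1(M, t):
    """Entrywise soft-thresholding: proximal operator of t*||.||_1 (sparse moving objects)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def prox_frobenius_sq(M, t):
    """Proximal operator of (t/2)*||.||_F^2 (dense Gaussian-like turbulence): shrink toward zero."""
    return M / (1.0 + t)
```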

  20. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...

  1. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... NEI YouTube Videos > NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration ... Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: ...

  2. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... YouTube Videos > NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia ... of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: Amblyopia ...

  3. NEI You Tube Videos: Amblyopia

    Science.gov (United States)

    ... YouTube Videos > NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia ... of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: Amblyopia ...

  4. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

  5. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five videos was designed ... Activity Role of Body Weight in Osteoarthritis Educational Videos for Patients Rheumatoid Arthritis Educational Video Series Psoriatic ...

  6. Using underwater video imaging as an assessment tool for coastal condition

    Science.gov (United States)

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  7. Understanding Collective Activities of People from Videos.

    Science.gov (United States)

    Choi, Wongun; Savarese, Silvio

    2014-06-01

    This paper presents a principled framework for analyzing collective activities at different levels of semantic granularity from videos. Our framework is capable of jointly tracking multiple individuals, recognizing activities performed by individuals in isolation (i.e., atomic activities such as walking or standing), recognizing the interactions between pairs of individuals (i.e., interaction activities) as well as understanding the activities of groups of individuals (i.e., collective activities). A key property of our work is that it can coherently combine bottom-up information stemming from detections or fragments of tracks (or tracklets) with top-down evidence. Top-down evidence is provided by a newly proposed descriptor that captures the coherent behavior of groups of individuals in a spatial-temporal neighborhood of the sequence. Top-down evidence provides contextual information for establishing accurate associations between detections or tracklets across frames and, thus, for obtaining more robust tracking results. Bottom-up evidence percolates upwards so as to automatically infer collective activity labels. Experimental results on two challenging data sets demonstrate our theoretical claims and indicate that our model achieves enhanced tracking results and the best collective classification results to date.

  8. Jailed - Video

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual and the students' true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  9. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose a special software program for the subjective assessment of the quality of all the tested video sequences is developed. It was developed in accordance with recommendation ITU-T P.910, since it is suitable for the testing of multimedia applications. The obtained results show that in the proposed selective intra prediction and optimized inter prediction algorithm there is a small difference in picture quality (signal-to-noise ratio) between the decoded original and modified video sequences.

  10. Inpainting for videos with dynamic objects using texture and structure reconstruction

    Science.gov (United States)

    Voronin, V. V.; Marchuk, V. I.; Gapon, N. V.; Zhuravlev, A. V.; Maslennikov, S.; Stradanchenko, S.

    2015-05-01

    This paper describes a novel inpainting approach for removing marked dynamic objects from videos captured with a camera, so long as the objects occlude parts of the scene with a static background. The proposed approach allows removing objects, or restoring missing or tainted regions present in a video sequence, by utilizing spatial and temporal information from neighboring scenes. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by the objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. An image inpainting approach based on the construction of a composite curve for the restoration of the edges of objects in a frame, using the concepts of parametric and geometric continuity, is presented. It is shown that this approach allows restoring curved edges and provides more flexibility for curve design in the damaged frame by interpolating the boundaries of objects with cubic splines. After the edge restoration stage, texture reconstruction using a patch-based method is carried out. We demonstrate the performance of the new approach via several examples, showing the effectiveness of our algorithm compared with state-of-the-art video inpainting methods.

  11. SCALABLE PHOTOGRAMMETRIC MOTION CAPTURE SYSTEM “MOSCA”: DEVELOPMENT AND APPLICATION

    Directory of Open Access Journals (Sweden)

    V. A. Knyaz

    2015-05-01

    Full Text Available A wide variety of applications (from industrial to entertainment) has a need for reliable and accurate 3D information about the motion of an object and its parts. Very often the process of movement is rather fast, as in cases of vehicle movement, sport biomechanics, and animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for obtaining high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system is developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility for accurate calculation of 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.

  12. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  13. VORTEX: video retrieval and tracking from compressed multimedia databases--template matching from MPEG-2 video compression standard

    Science.gov (United States)

    Schonfeld, Dan; Lelescu, Dan

    1998-10-01

    In this paper, a novel visual search engine for video retrieval and tracking from compressed multimedia databases is proposed. Our approach exploits the structure of video compression standards in order to perform object matching directly on the compressed video data. This is achieved by utilizing motion compensation--a critical prediction filter embedded in video compression standards--to estimate and interpolate the data needed for template matching. Motion analysis is used to implement fast tracking of objects of interest on the compressed video data. Being presented with a query in the form of template images of objects, the system operates on the compressed video in order to find the images or video sequences where those objects appear, along with their positions in the image. This in turn enables the retrieval and display of the query-relevant sequences.
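
    The compressed-domain tracking idea can be sketched as propagating a bounding box by the average motion vector of the macroblocks it covers; the macroblock-grid layout and units are assumptions for this example.

```python
import numpy as np

def track_with_motion_vectors(bbox, mv_field):
    """Propagate a bounding box (x, y, w, h), in macroblock units, by the mean
    motion vector of the macroblocks it covers; mv_field has shape (H, W, 2)."""
    x, y, w, h = bbox
    dx, dy = mv_field[y:y + h, x:x + w].reshape(-1, 2).mean(axis=0)
    return (x + dx, y + dy, w, h)
```

    Because the motion vectors are already present in the bitstream, this prediction costs almost nothing compared with decoding the frames and re-running pixel-domain matching.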

  14. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  15. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is tremendously growing and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...

  16. Capture Their Attention: Capturing Lessons Using Screen Capture Software

    Science.gov (United States)

    Drumheller, Kristina; Lawler, Gregg

    2011-01-01

    When students miss classes for university activities such as athletic and academic events, they inevitably miss important class material. Students can get notes from their peers or visit professors to find out what they missed, but when students miss new and challenging material these steps are sometimes not enough. Screen capture and recording…

  17. Metazen - metadata capture for metagenomes.

    Science.gov (United States)

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-01-01

    As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  18. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used to establish perceptual metrics for evaluating video quality. The combination of the perceptual and compressive sensing approaches is outlined based on recent investigations. The performance and complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and its application to the appropriate coding schemes is reviewed.

  19. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show…

  20. Video-assisted segmentation of speech and audio track

    Science.gov (United States)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.

  1. Teaching Knowledge Management by Combining Wikis and Screen Capture Videos

    Science.gov (United States)

    Makkonen, Pekka; Siakas, Kerstin; Vaidya, Shakespeare

    2011-01-01

    Purpose: This paper aims to report on the design and creation of a knowledge management course aimed at facilitating student creation and use of social interactive learning tools for enhanced learning. Design/methodology/approach: The era of social media and web 2.0 has enabled a bottom-up collaborative approach and new ways to publish work on the…

  2. Capturing Undergraduate Experience through Participant-Generated Video

    Science.gov (United States)

    O'Toole, Paddy

    2013-01-01

    The enrolment and attrition rate in science degrees in the Western world is of increasing concern, both nationally and at university level. At the same time, teaching undergraduate science requires universities to invest in laboratories, staff and equipment to meet the initial demand of enrolling students. In this article, I discuss…

  3. Motion statistics at the saccade landing point: attentional capture by spatiotemporal features in a gaze-contingent reference

    Science.gov (United States)

    Belardinelli, Anna; Carbone, Andrea

    2012-06-01

Motion is known to play a fundamental role in attentional capture, yet it is not always included in computational models of visual attention. A wealth of literature in the past years has investigated natural image statistics at the centre of gaze to assess static low-level features accounting for fixation capture on images. A motion counterpart describing which features trigger saccades on dynamic scenes has received less attention, whereas it would provide significant insight into visuomotor behaviour when attending to events instead of less realistic still images. Such knowledge would be paramount to devising active vision systems that can spot interesting or malicious activities and disregard less relevant patterns. In this paper, we present an analysis of spatiotemporal features at the future centre of gaze to extract possible regularities in the fixation distribution to contrast with the feature distribution of non-fixated points. A substantial novelty in the methodology is the evaluation of the features in a gaze-contingent reference. Each video sequence fragment is indeed foveated with respect to the current fixation, while features are collected at the next saccade landing point. This allows us to estimate covertly selected motion cues in a retinotopic fashion. We consider video sequences and eye-tracking data from a recent state-of-the-art dataset and test a bottom-up motion saliency measure against human performance. Obtained results can be used to further tune saliency computational models and to learn to predict human fixations on video sequences or generate meaningful shifts of active sensors in real world scenarios.
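
The core idea of sampling motion features at the future saccade landing point can be sketched in a few lines. The snippet below is an illustrative NumPy stand-in, not the paper's pipeline: it uses a crude temporal-difference energy in a patch around a hypothetical landing coordinate, where richer spatiotemporal features (e.g., optical-flow histograms) would be used in practice.

```python
import numpy as np

def motion_energy_at_landing(prev_frame, next_frame, landing_xy, patch=16):
    """Mean absolute temporal difference in a patch centred on the saccade
    landing point -- a crude stand-in for richer spatiotemporal features."""
    x, y = landing_xy
    h, w = prev_frame.shape
    x0, x1 = max(0, x - patch), min(w, x + patch)
    y0, y1 = max(0, y - patch), min(h, y + patch)
    diff = np.abs(next_frame[y0:y1, x0:x1].astype(float) -
                  prev_frame[y0:y1, x0:x1].astype(float))
    return diff.mean()

# Toy example: a bright moving blob near one landing point, nothing at another.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 10, (120, 160)).astype(np.uint8)
f1 = f0.copy()
f1[50:70, 70:90] += 100          # simulated motion near pixel (80, 60)
moving = motion_energy_at_landing(f0, f1, (80, 60))
static = motion_energy_at_landing(f0, f1, (10, 10))
```

Comparing the two scores illustrates the contrast the paper draws between fixated and non-fixated locations: landing points attracted by motion carry higher motion energy.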

  4. Object Recognition in Videos Utilizing Hierarchical and Temporal Objectness with Deep Neural Networks

    OpenAIRE

    Peng, Liang

    2017-01-01

    This dissertation develops a novel system for object recognition in videos. The input of the system is a set of unconstrained videos containing a known set of objects. The output is the locations and categories for each object in each frame across all videos. Initially, a shot boundary detection algorithm is applied to the videos to divide them into multiple sequences separated by the identified shot boundaries. Since each of these sequences still contains moderate content variations, we furt...

  5. Unsupervised video-based lane detection using location-enhanced topic models

    Science.gov (United States)

    Sun, Hao; Wang, Cheng; Wang, Boliang; El-Sheimy, Naser

    2010-10-01

An unsupervised learning algorithm based on topic models is presented for lane detection in video sequences observed by uncalibrated moving cameras. Our contributions are twofold. First, we introduce the maximally stable extremal region (MSER) detector for lane-marking feature extraction and derive a novel shape descriptor in an affine invariant manner to describe region shapes and a modified scale-invariant feature transform descriptor to capture feature appearance characteristics. MSER features are more stable compared to edge points or line pairs and hence provide robustness to lane-marking variations in scale, lighting, viewpoint, and shadows. Second, we propose a novel location-enhanced probabilistic latent semantic analysis (pLSA) topic model for simultaneous lane recognition and localization. The proposed model overcomes the limitation of a pLSA model for effective topic localization. Experimental results on traffic sequences in various scenarios demonstrate the effectiveness and robustness of the proposed method.
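
The paper's affine-invariant shape descriptor is not specified in this abstract; as one concrete example of the idea, the sketch below computes the classical first affine moment invariant (in the style of Flusser's invariants) from a binary region mask. This is an illustrative substitute, not the authors' descriptor.

```python
import numpy as np

def affine_invariant(mask):
    """First affine moment invariant I1 = (mu20*mu02 - mu11^2) / mu00^4,
    computed from central moments of a binary region mask. It is unchanged
    (up to discretisation error) under any affine transform of the region."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu00 = xs.size
    mu20 = ((xs - x0) ** 2).sum()
    mu02 = ((ys - y0) ** 2).sum()
    mu11 = ((xs - x0) * (ys - y0)).sum()
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4

# A lane-marking-like rectangle and a 2x-scaled copy (scaling is affine):
small = np.zeros((100, 100), bool); small[40:60, 45:55] = True
large = np.zeros((200, 200), bool); large[80:120, 90:110] = True
i_small, i_large = affine_invariant(small), affine_invariant(large)
```

The two values agree to within a percent despite the size change, which is what makes moment invariants attractive for describing lane markings seen from varying viewpoints.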

  6. Human recognition at a distance in video

    CERN Document Server

    Bhanu, Bir

    2010-01-01

    Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are

  7. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

Full Text Available In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation, preprocessing by a detector based on the ranking of pixel values, and the classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.
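
The ranking-based detector step can be illustrated with a small NumPy sketch. The paper does not give its exact ranking rule, so the version below is an assumed one: it scores the middle frame of a motion-compensated stack against the temporal median, normalised by a median absolute deviation, and flags extreme outliers.

```python
import numpy as np

def rank_outliers(stack, k=2):
    """Flag pixels whose value in the middle frame is extreme relative to the
    temporally aligned neighbours. An assumed ranking rule standing in for
    the paper's (unspecified) pixel-ranking detector.
    stack: (T, H, W) array of motion-compensated frames, T odd."""
    t = stack.shape[0] // 2
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0) + 1e-6
    score = np.abs(stack[t] - med) / mad          # robust anomaly score
    return score > k * np.median(score)           # adaptive threshold

# Toy sequence: static background with a blotch in the middle frame only.
frames = np.full((5, 40, 40), 100.0)
frames += np.random.default_rng(1).normal(0, 1, frames.shape)
frames[2, 10:15, 10:15] += 50                     # simulated defect (blotch)
mask = rank_outliers(frames)
```

In the full method, pixels flagged by such a detector would then be passed to the convolutional network for final classification, which is what suppresses the residual false alarms.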

  8. Viz-A-Vis: toward visualizing video through computer vision.

    Science.gov (United States)

    Romero, Mario; Summet, Jay; Stasko, John; Abowd, Gregory

    2008-01-01

    In the established procedural model of information visualization, the first operation is to transform raw data into data tables [1]. The transforms typically include abstractions that aggregate and segment relevant data and are usually defined by a human, user or programmer. The theme of this paper is that for video, data transforms should be supported by low level computer vision. High level reasoning still resides in the human analyst, while part of the low level perception is handled by the computer. To illustrate this approach, we present Viz-A-Vis, an overhead video capture and access system for activity analysis in natural settings over variable periods of time. Overhead video provides rich opportunities for long-term behavioral and occupancy analysis, but it poses considerable challenges. We present initial steps addressing two challenges. First, overhead video generates overwhelmingly large volumes of video impractical to analyze manually. Second, automatic video analysis remains an open problem for computer vision.

  9. Perceived Quality of Full HD Video - Subjective Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2016-01-01

Full Text Available In recent years, interest in multimedia services has become a global trend, and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfection and the efficiency of compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on perceived video quality. The evaluation is done for four full HD sequences whose scenes differ in content; the distinction is based on the Spatial Index (SI) and Temporal Index (TI) of the test sequences. Finally, experimental results show up to 30% bitrate reduction for H.265 and VP9 compared with the reference H.264.
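
The SI and TI content indices used to distinguish the test scenes can be sketched directly. The NumPy implementation below follows the spirit of ITU-T P.910 (SI as the maximum per-frame standard deviation of the Sobel-filtered luma, TI as the maximum standard deviation of frame differences); it is an illustrative sketch, not the authors' measurement code.

```python
import numpy as np

def sobel(frame):
    """Sobel gradient magnitude (spatial activity), pure NumPy."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def si_ti(frames):
    """Spatial (SI) and Temporal (TI) information indices in the spirit of
    ITU-T P.910: max std-dev of the Sobel frame, and of frame differences."""
    si = max(np.std(sobel(f)) for f in frames)
    ti = max(np.std(b.astype(float) - a.astype(float))
             for a, b in zip(frames, frames[1:]))
    return si, ti

# Toy clip: a vertical edge drifting right by 2 px per frame.
frames = [np.tile((np.arange(64) > 20 + 2 * t) * 255.0, (64, 1))
          for t in range(4)]
si, ti = si_ti(frames)
```

A static, textured scene would score high SI and near-zero TI, while fast motion drives TI up; plotting sequences on the SI/TI plane is how test sets like this one are balanced for content diversity.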

  10. Acoustic Neuroma Educational Video

    Medline Plus


  11. Video Games and Citizenship

    National Research Council Canada - National Science Library

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    ... by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new social spaces which emerge in video game culture and how these spaces relate to community building and citizenship...

  12. Videos, Podcasts and Livechats

    Medline Plus


  13. Videos, Podcasts and Livechats

    Medline Plus


  14. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

Is video becoming "the new black" in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well… This raises questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia. The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts…

  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Acoustic Neuroma Educational Video

    Medline Plus


  17. Videos, Podcasts and Livechats

    Medline Plus


  18. Videos, Podcasts and Livechats

    Science.gov (United States)


  19. Videos, Podcasts and Livechats

    Medline Plus


  20. Acoustic Neuroma Educational Video

    Medline Plus


  1. Videos, Podcasts and Livechats

    Medline Plus


  2. High-speed digital video tracking system for generic applications

    Science.gov (United States)

    Walton, James S.; Hallamasek, Karen G.

    2001-04-01

    The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid-bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 X 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.
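
The hardware target-identification step is not detailed in this abstract; one plausible reduction it could perform per frame is a threshold-and-centroid computation, which collapses each image to a single coordinate and so minimizes the data that must be stored. The sketch below is an assumed illustration of that idea, not 4DVideo's algorithm.

```python
import numpy as np

def target_centroid(frame, thresh=128):
    """Locate a single bright marker by the intensity-weighted centroid of
    above-threshold pixels -- the kind of per-frame reduction a hardware
    target detector can perform to avoid storing full images."""
    mask = frame >= thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    w = frame[ys, xs].astype(float)
    return (np.average(xs, weights=w), np.average(ys, weights=w))

# Synthetic high-frame-rate sequence: a marker moving 3 px/frame in x.
track = []
for t in range(5):
    frame = np.zeros((64, 64), np.uint8)
    frame[30:33, 10 + 3 * t:13 + 3 * t] = 255
    track.append(target_centroid(frame))
```

Running this over a 1000 fps sequence yields a two-dimensional point trajectory, the first of the three data products the system is designed to acquire.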

  3. An innovative technique for recording picture-in-picture ultrasound videos.

    Science.gov (United States)

    Rajasekaran, Sathish; Finnoff, Jonathan T

    2013-08-01

    Many ultrasound educational products and ultrasound researchers present diagnostic and interventional ultrasound information using picture-in-picture videos, which simultaneously show the ultrasound image and transducer and patient positions. Traditional techniques for creating picture-in-picture videos are expensive, nonportable, or time-consuming. This article describes an inexpensive, simple, and portable way of creating picture-in-picture ultrasound videos. This technique uses a laptop computer with a video capture device to acquire the ultrasound feed. Simultaneously, a webcam captures a live video feed of the transducer and patient position and live audio. Both sources are streamed onto the computer screen and recorded by screen capture software. This technique makes the process of recording picture-in-picture ultrasound videos more accessible for ultrasound educators and researchers for use in their presentations or publications.

  4. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but they need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  5. Making good physics videos

    Science.gov (United States)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators to post video pre-labs or to flip our classrooms. In this article, I share my advice on creating engaging physics videos.

  6. Desktop video conferencing

    OpenAIRE

    Potter, Ray; Roberts, Deborah

    2007-01-01

    This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...

  7. 47 CFR 79.3 - Video description of video programming.

    Science.gov (United States)

    2010-10-01

47 Telecommunication, Vol. 4 (2010-10-01). Closed Captioning and Video Description of Video Programming, § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply: (1…

  8. COMPARATIVE STUDY OF COMPRESSION TECHNIQUES FOR SYNTHETIC VIDEOS

    OpenAIRE

    Ayman Abdalla; Ahmad Mazhar; Mosa Salah

    2014-01-01

    We evaluate the performance of three state of the art video codecs on synthetic videos. The evaluation is based on both subjective and objective quality metrics. The subjective quality of the compressed video sequences is evaluated using the Double Stimulus Impairment Scale (DSIS) assessment metric while the Peak Signal-to-Noise Ratio (PSNR) is used for the objective evaluation. An extensive number of experiments are conducted to study the effect of frame rate and resolution o...

  9. Object detection in surveillance video from dense trajectories

    OpenAIRE

    Zhai, Mengyao

    2015-01-01

    Detecting objects such as humans or vehicles is a central problem in surveillance video. Myriad standard approaches exist for this problem. At their core, approaches consider either the appearance of people, patterns of their motion, or differences from the background. In this paper we build on dense trajectories, a state-of-the-art approach for describing spatio-temporal patterns in video sequences. We demonstrate an application of dense trajectories to object detection in surveillance video...

  10. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microsecond(s) it can exceed 10,000 frames per second in actual use. The subject under study is strobe- illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash- illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
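
The correction equations described above amount to inverting a channel-mixing model. The sketch below shows the linear-algebra core under an assumed 3×3 crosstalk matrix (the actual coefficients would come from the article's one-time calibration procedure): each pixel's RGB value is treated as a mixture of the three flash-lit scenes, and multiplying by the inverse matrix unmixes them.

```python
import numpy as np

# Hypothetical crosstalk matrix from calibration: entry [i, j] is the
# fraction of flash j's scene that leaks into colour channel i.
M = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.03, 0.07, 0.90]])
C = np.linalg.inv(M)          # correction matrix, computed once

def unmix(rgb_frame):
    """Recover the three flash-lit scenes from one colour video field.
    rgb_frame: (H, W, 3). Returns (H, W, 3) with ghosting removed."""
    h, w, _ = rgb_frame.shape
    flat = rgb_frame.reshape(-1, 3).astype(float)
    return (flat @ C.T).reshape(h, w, 3)

# Simulate: three distinct scenes mixed by M, then unmixed.
scenes = np.stack([np.full((8, 8), v) for v in (50.0, 120.0, 200.0)], axis=-1)
mixed = (scenes.reshape(-1, 3) @ M.T).reshape(8, 8, 3)
recovered = unmix(mixed)
```

Because the correction is a fixed per-pixel matrix multiply, it can run transparently in software on every digitized field, exactly as the article describes.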

  11. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  12. Developing a Promotional Video

    Science.gov (United States)

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  13. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field of view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
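
The Levenberg–Marquardt step can be sketched on a deliberately simplified version of the problem: known intrinsics, identity orientation, and only the camera position unknown (the paper estimates orientation and field of view as well). All parameter values below are illustrative assumptions.

```python
import numpy as np

f, cx, cy = 800.0, 320.0, 240.0           # assumed pinhole intrinsics

def project(points, cam_pos):
    """Pinhole projection with identity orientation (a simplification of
    the paper's full position + orientation + FOV camera model)."""
    rel = points - cam_pos
    return np.column_stack((f * rel[:, 0] / rel[:, 2] + cx,
                            f * rel[:, 1] / rel[:, 2] + cy))

def estimate_position(points, pixels, guess, iters=50, lam=1e-3):
    """Levenberg-Marquardt on the reprojection error, with a finite-difference
    Jacobian. points: (N, 3) geographic locations (from orthophoto + DEM);
    pixels: (N, 2) matched video frame coordinates."""
    t = np.asarray(guess, float)
    for _ in range(iters):
        r = (project(points, t) - pixels).ravel()
        J = np.empty((r.size, 3))
        for k in range(3):
            dt = np.zeros(3); dt[k] = 1e-6
            J[:, k] = ((project(points, t + dt) - pixels).ravel() - r) / 1e-6
        A = J.T @ J + lam * np.eye(3)      # LM-damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        t = t + step
        if np.linalg.norm(step) < 1e-9:
            break
    return t

# Synthetic check: recover a known camera position from 6 matched points.
rng = np.random.default_rng(2)
pts = rng.uniform([-50, -50, 80], [50, 50, 150], (6, 3))
true_pos = np.array([3.0, -2.0, 1.0])
obs = project(pts, true_pos)
est = estimate_position(pts, obs, guess=[0.0, 0.0, 0.0])
```

With noiseless correspondences the optimizer recovers the position essentially exactly; with real matched features the same residual structure is minimized in a least-squares sense.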

  14. Carbon Capture and Storage

    NARCIS (Netherlands)

    Benson, S.M.; Bennaceur, K.; Cook, P.; Davison, J.; Coninck, H. de; Farhat, K.; Ramirez, C.A.; Simbeck, D.; Surles, T.; Verma, P.; Wright, I.

    2012-01-01

Emissions of carbon dioxide, the most important long-lived anthropogenic greenhouse gas, can be reduced by Carbon Capture and Storage (CCS). CCS involves the integration of four elements: CO2 capture, compression of the CO2 from a gas to a liquid or a denser gas, transportation of pressurized CO2

  15. CAPTURED India Country Evaluation

    NARCIS (Netherlands)

    O'Donoghue, R.; Brouwers, J.H.A.M.

    2012-01-01

This report provides the findings of the India Country Evaluation and is produced as part of the overall CAPTURED End Evaluation. After five years of support by the CAPTURED project, the End Evaluation has assessed the results as commendable. I-AIM was able to design an approach in which health

  16. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
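
The transform from a 3D video stream to a 1D curve, followed by scene-change detection, can be sketched as follows. This NumPy version is illustrative only: it computes the frame-difference curve from decoded pixels, whereas the paper derives an equivalent signal from macroblock features taken directly from the MPEG compressed stream.

```python
import numpy as np

def frame_difference_curve(frames):
    """Collapse a video into a 1-D curve of mean absolute frame differences."""
    return np.array([np.mean(np.abs(b.astype(float) - a.astype(float)))
                     for a, b in zip(frames, frames[1:])])

def detect_cuts(curve, k=2.0):
    """Flag indices where the curve exceeds mean + k*std: a simple adaptive
    threshold for abrupt scene changes."""
    return np.nonzero(curve > curve.mean() + k * curve.std())[0]

# Toy stream: two static 'scenes' with a hard cut between frames 4 and 5.
frames = [np.full((32, 32), 40, np.uint8)] * 5 + \
         [np.full((32, 32), 200, np.uint8)] * 5
cuts = detect_cuts(frame_difference_curve(frames))
```

The appeal of working in the compressed domain is that the same 1D curve can be obtained without full decoding, which is what makes the algorithm real-time.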

  17. PROTOTIPE VIDEO EDITOR DENGAN MENGGUNAKAN DIRECT X DAN DIRECT SHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

Full Text Available Technology development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn gives rise to the need for an editor application. To address this problem, this paper describes the process of making a simple application for video editing needs. The application development uses programming techniques often applied in multimedia applications, especially video. The first part of the application begins with video file compression and decompression; it then steps into the editing of the digital video file. Furthermore, the application is also equipped with the facilities needed for the editing process. The application is made with Microsoft Visual C++ and DirectX technology, particularly DirectShow. It provides basic facilities that help the editing of a digital video file and produces an AVI format file after the editing process is finished. Testing of this application shows its ability to 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats. The 'cut' and 'insert' processes can only be done in static order. Further, the application also provides an effects facility for the transition process in each clip. Lastly, the newly edited video file is saved in AVI format from the application. Abstract in Bahasa Indonesia (translated): Technological development has given society the opportunity to capture important moments on video. Producing a good digital video requires a good editing process as well, and a digital video editing process requires an editor program. Based on the above problem, this research builds a simple editor prototype for digital video. The application is built using programming techniques from the multimedia field, especially video.
Planning of the application begins with the formation of

  18. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  19. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  20. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.
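Among the simplest of the surveyed VBR traffic models is a first-order autoregressive (AR(1)) process for per-frame sizes. The sketch below generates synthetic frame sizes under that model; the mean, correlation coefficient, and noise scale are illustrative assumptions, not values from the survey.

```python
import numpy as np

def ar1_frame_sizes(n, mean=15000, rho=0.9, sigma=2000, seed=0):
    """Generate n synthetic VBR frame sizes (bytes) with an AR(1) model:
    size[t] = mean + rho * (size[t-1] - mean) + noise.
    Parameters are illustrative, not fitted to any real trace."""
    rng = np.random.default_rng(seed)
    sizes = np.empty(n)
    sizes[0] = mean
    for t in range(1, n):
        sizes[t] = mean + rho * (sizes[t - 1] - mean) + rng.normal(0, sigma)
    return np.maximum(sizes, 0)  # frame sizes cannot be negative
```

The generated sequence can then be fed to a network simulator as a synthetic source; rho controls how strongly consecutive frame sizes correlate.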

  1. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  2. Making Sure What You See Is What You Get: Digital Video Technology and the Preparation of Teachers of Elementary Science

    Science.gov (United States)

    Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.

    2010-01-01

    Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…

  3. Comparison of the Abbott RealTime High-Risk Human Papillomavirus (HPV), Roche Cobas HPV, and Hybrid Capture 2 Assays to Direct Sequencing and Genotyping of HPV DNA

    OpenAIRE

    Park, Yongjung; Lee, Eunhee; Choi, Jonghyeon; Jeong, Seri; Kim, Hyon-Suk

    2012-01-01

    Infection with high-risk (HR) human papillomavirus (HPV) genotypes is an important risk factor for cervical cancers. We evaluated the clinical performances of two new real-time PCR assays for detecting HR HPVs compared to that of the Hybrid Capture 2 test (HC2). A total of 356 cervical swab specimens, which had been examined for cervical cytology, were assayed by Abbott RealTime HR and Roche Cobas HPV as well as HC2. Sensitivities and specificities of these assays were determined based on the...

  4. Preparing to Capture Carbon

    National Research Council Canada - National Science Library

    Daniel P. Schrag

    2007-01-01

    .... Scientific and economic challenges still exist, but none are serious enough to suggest that carbon capture and storage will not work at the scale required to offset trillions of tons of carbon...

  5. Marine turtle capture data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — To estimate abundance, growth, and survival rate and to collect tissue samples, marine turtles are captured at nesting beaches and foraging grounds through various...

  6. Semisupervised feature selection via spline regression for video semantic recognition.

    Science.gov (United States)

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

    To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S(2)FS(2)R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S(2)FS(2)R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S(2)FS(2)R achieves better performance compared with the state-of-the-art methods.
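The row-sparsity idea behind the l2,1-norm regularizer can be illustrated in a few lines: features whose rows in the learned transformation matrix have near-zero norm contribute little and are discarded. This toy sketch shows only the norm and the selection step, with a made-up matrix W; the paper's iterative optimization is not reproduced.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the Euclidean norms of the rows of W."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def select_features(W, k):
    """Rank features by the l2-norm of their row in the transformation
    matrix W and keep the top-k; rows driven to zero by the l2,1
    regularizer mark irrelevant features."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

# Toy example: 5 features projected to 3 dimensions (values invented).
W = np.array([[0.90, 0.10, 0.00],
              [0.00, 0.00, 0.00],   # irrelevant feature: zero row
              [0.50, 0.40, 0.30],
              [0.01, 0.00, 0.02],   # nearly irrelevant feature
              [0.70, 0.20, 0.10]])
```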

  7. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.
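The spatio-temporal decomposition step can be sketched as one level of separable filtering along the temporal and both spatial axes of a clip, yielding eight subbands. Haar filters stand in here for whatever filter bank the paper actually uses, so this is illustrative only.

```python
import numpy as np

def haar_split(x, axis):
    """One-level orthonormal Haar analysis along one axis (even length)."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def spatiotemporal_decompose(frames):
    """One level of separable 3-D decomposition of a (T, H, W) clip:
    temporal split, then vertical, then horizontal. Returns 8 subbands;
    subband 0 is the all-lowpass approximation that EBCOT-style coding
    would spend most bits on."""
    subbands = [frames.astype(np.float64)]
    for axis in (0, 1, 2):
        subbands = [part for band in subbands
                    for part in haar_split(band, axis)]
    return subbands
```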

  8. Understanding Video Games

    DEFF Research Database (Denmark)

    Heide Smith, Jonas; Tosca, Susana Pajares; Egenfeldt-Nielsen, Simon

    From Pong to PlayStation 3 and beyond, Understanding Video Games is the first general introduction to the exciting new field of video game studies. This textbook traces the history of video games, introduces the major theories used to analyze games such as ludology and narratology, reviews...... the economics of the game industry, examines the aesthetics of game design, surveys the broad range of game genres, explores player culture, and addresses the major debates surrounding the medium, from educational benefits to the effects of violence. Throughout the book, the authors ask readers to consider...... larger questions about the medium: * What defines a video game? * Who plays games? * Why do we play games? * How do games affect the player? Extensively illustrated, Understanding Video Games is an indispensable and comprehensive resource for those interested in the ways video games are reshaping...

  9. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  10. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    Full Text Available As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  11. Cost-effective solution to synchronized audio-visual capture using multiple sensors

    NARCIS (Netherlands)

    Lichtenauer, Jeroen; Valstar, Michel; Shen, Jie; Pantic, Maja

    2009-01-01

    Applications such as surveillance and human motion capture require high-bandwidth recording from multiple cameras. Furthermore, the recent increase in research on sensor fusion has raised the demand on synchronization accuracy between video, audio and other sensor modalities. Previously, capturing

  12. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating effective policy and applying useful methods to the retrieval of additional evidence are becoming increasingly important. However, surveillance video has its failings, namely footage captured at low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori estimation, and projection onto convex sets to develop a super-resolution reconstruction method that improves the quality of surveillance video. This method makes optimal use of the information contained in the LR video images while keeping image edges sharp and ensuring convergence of the algorithm. Finally, we suggest how to adjust the algorithm's adaptability by analyzing prior information about the target image.
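The projection-onto-convex-sets component can be illustrated in isolation: each iteration projects the current high-resolution estimate onto the set of images consistent with the observed low-resolution frame. This single-frame sketch (the `pocs_superres` helper and its block-average observation model are assumptions) omits the registration and MAP prior stages the paper combines with it.

```python
import numpy as np

def pocs_superres(lr, scale=2, iters=20):
    """Minimal POCS-style super-resolution sketch for one frame.
    The data-consistency projection forces the block-averaged HR image
    to match the observed LR image; real systems fuse many registered
    LR frames and add a prior."""
    lr = np.asarray(lr, dtype=float)
    h, w = lr.shape
    hr = np.kron(lr, np.ones((scale, scale)))  # nearest-neighbour init
    for _ in range(iters):
        # simulate the observation: average each scale x scale block
        sim = hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
        err = lr - sim
        # project: distribute the residual back over each block
        hr += np.kron(err, np.ones((scale, scale)))
    return hr
```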

  13. Quality-aware Content Adaptation in Digital Video Streaming

    OpenAIRE

    Wilk, Stefan

    2016-01-01

    User-generated video has attracted a lot of attention due to the success of Video Sharing Sites such as YouTube and Online Social Networks. Recently, a shift towards live consumption of these videos is observable. The content is captured and instantly shared over the Internet using smart mobile devices such as smartphones. Large-scale platforms arise such as YouTube.Live, YouNow or Facebook.Live which enable the smartphones of users to livestream to the public. These platforms achieve the dis...

  14. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
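The first algorithm's core idea, recovering global camera motion from the macroblock motion vectors already present in the MPEG-2 stream, can be sketched with a robust median over the vector field. The threshold and the motion labels below are illustrative assumptions, not the paper's classifier.

```python
import numpy as np

def estimate_camera_motion(mv_field, threshold=1.0):
    """Estimate global (camera) motion from a compressed-domain motion
    vector field of shape (H_blocks, W_blocks, 2) holding (dx, dy) per
    macroblock. The median is robust to foreground-object outliers."""
    vectors = mv_field.reshape(-1, 2)
    gm = np.median(vectors, axis=0)
    dx, dy = gm
    if np.hypot(dx, dy) < threshold:
        return "static", gm
    if abs(dx) >= abs(dy):
        return ("pan_right" if dx > 0 else "pan_left"), gm
    return ("tilt_down" if dy > 0 else "tilt_up"), gm
```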

  15. Learning Latent Super-Events to Detect Multiple Activities in Videos

    OpenAIRE

    Piergiovanni, AJ; Ryoo, Michael S.

    2017-01-01

    In this paper, we introduce the concept of learning latent \\emph{super-events} from activity videos, and present how it benefits activity detection in continuous videos. We define a super-event as a set of multiple events occurring together in videos with a particular temporal organization; it is the opposite concept of sub-events. Real-world videos contain multiple activities and are rarely segmented (e.g., surveillance videos), and learning latent super-events allows the model to capture ho...

  16. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    Full Text Available This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Adhoc NETwork using the WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  17. Low-complexity 2D to 3D video conversion

    Science.gov (United States)

    Chen, Ying; Zhang, Rong; Karczewicz, Marta

    2011-03-01

    3D film and 3D TV are becoming reality. More facilities and devices are now 3D capable. Compared to capturing 3D video content directly, 2D to 3D video conversion is a low-cost, backward-compatible alternative. There also exists a tremendous amount of monoscopic 2D video content that is of high interest for display on 3D devices with noticeable immersiveness. 2D to 3D video conversion has therefore drawn a lot of attention recently. In this paper, a low-complexity 2D to 3D conversion algorithm is presented. The conversion generates stereo video pairs by 3D warping based on estimated per-pixel depth maps. The depth maps are estimated jointly from motion and color cues. Subjective tests show that the proposed algorithm achieves 3D perception with acceptable artifacts.
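The 3D-warping step can be sketched as depth-image-based rendering: each pixel is shifted horizontally by a disparity proportional to its estimated depth, producing a left/right pair. The disparity range, grayscale-only handling, and the naive row-wise hole filling are illustrative assumptions, not the paper's renderer.

```python
import numpy as np

def dibr_stereo_pair(image, depth, max_disparity=8):
    """Generate a stereo pair from a grayscale 2D frame and a per-pixel
    depth map in [0, 1] (1 = near, large shift; 0 = far, no shift)."""
    h, w = image.shape
    disparity = (depth * max_disparity).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if x - d >= 0:
                left[y, x - d] = image[y, x]
            if x + d < w:
                right[y, x + d] = image[y, x]
    # naive hole filling: propagate the previous pixel along each row
    for view in (left, right):
        for y in range(h):
            for x in range(1, w):
                if view[y, x] == 0:
                    view[y, x] = view[y, x - 1]
    return left, right
```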

  18. Video super-resolution using simultaneous motion and intensity calculations

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In this paper we propose an energy based algorithm for motion compensated video super-resolution (VSR) targeted on upscaling of standard definition (SD) video to high definition (HD) video. Since the motion (flow field) of the image sequence is generally unknown, we introduce a formulation...... for super-resolved sequences. Computing super-resolved flows has to our knowledge not been done before. Most advanced super-resolution (SR) methods found in literature cannot be applied to general video with arbitrary scene content and/or arbitrary optical flows, as it is possible with our simultaneous VSR...... method. Series of experiments show that our method outperforms other VSR methods when dealing with general video input and that it continues to provide good results even for large scaling factors, up to 8×8....

  19. Green Power Partnership Videos

    Science.gov (United States)

    The Green Power Partnership develops videos on a regular basis that explore a variety of topics including the Green Power Partnership, green power purchasing, and renewable energy certificates, among others.

  20. Research on Agricultural Surveillance Video of Intelligent Tracking

    Science.gov (United States)

    Cai, Lecai; Xu, Jijia; Liangping, Jin; He, Zhiyong

    Intelligent video tracking technology is an important application field of digital video processing and analysis, with wide use in both civilian and military defense settings. This paper presents a systematic study of intelligent tracking in agricultural surveillance video, focusing in particular on target detection and tracking: a detection and tracking algorithm for moving targets in video sequences with a static background, a rapid detection and tracking algorithm for agricultural production targets, and a Mean Shift-based algorithm for tracking targets under translation and rotation. Experimental results show that the system can effectively and accurately track targets in surveillance video. The study of intelligent tracking in agricultural video surveillance is therefore meaningful from the perspectives of environmental protection, public security, and economic efficiency alike.
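The Mean Shift update at the heart of such trackers repeatedly moves a window to the weighted centroid of the samples under it until it stops moving. A one-dimensional sketch makes the iteration explicit (the tracker itself operates on 2-D color-histogram similarity surfaces; window size and tolerance here are assumptions):

```python
import numpy as np

def mean_shift(weights, start, win=10, iters=30, tol=1e-3):
    """Track the mode of a 1-D weight profile: shift the window centre
    to the weighted centroid of samples inside the window until the
    shift falls below tol."""
    x = float(start)
    for _ in range(iters):
        lo, hi = int(x - win), int(x + win) + 1
        idx = np.arange(max(lo, 0), min(hi, len(weights)))
        w = weights[idx]
        if w.sum() == 0:
            break  # window fell on empty support
        new_x = float((idx * w).sum() / w.sum())
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x
```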

  1. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming

    National Research Council Canada - National Science Library

    Rosenberg, Michael; Thornton, Ashleigh L; Lay, Brendan S; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    ... movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS...

  2. Development of a 3D Flash LADAR Video Camera for Entry, Descent and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera capable of a 30 Hz frame rate. Because Flash LADAR captures an...

  3. Development of a 3D Flash LADAR Video Camera for Entry, Descent, and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera which produces 3-D point clouds at 30 Hz. Flash LADAR captures...

  4. CARVE: In-flight Videos from the CARVE Aircraft, Alaska, 2012-2015

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains videos captured by a camera mounted on the CARVE aircraft during airborne campaigns over the Alaskan and Canadian Arctic for the Carbon in...

  5. Particle capture in ciliary filter-feeding gymnolaemate and phylactolaemate bryozoans - a comparative study

    DEFF Research Database (Denmark)

    Riisgård, Hans Ulrik; Okamura, Beth; Funch, Peter

    2010-01-01

    We studied particle capture using video-microscopy in two gymnolaemates, the marine cheilostome Electra pilosa and the freshwater ctenostome Paludicella articulata, and three phylactolaemates, Fredericella sultana with a circular funnel-shaped lophophore, and Cristatella mucedo and Lophophus...... crystallinus, both with a horseshoe-shaped lophophore. The video-microscope observations along with studies of lophophore morphology and ultrastructure indicated that phylactolaemate and gymnolaemate bryozoans with a diversity of lophophore shapes rely on the same basic structures and mechanisms for particle...... capture. Our study also demonstrates that essential features of the particle capture process resemble one another in bryozoans, brachiopods and phoronids....

  6. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

    Full Text Available Video inpainting or completion is a vital video improvement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method makes it possible to remove dynamic objects or restore missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used for detection of scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace parts of the frame occupied by objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method allows restoring missing blocks and removing text from scenes in videos.

  7. Moving Shadow Detection in Video Using Cepstrum

    Directory of Open Access Journals (Sweden)

    Fuat Cogun

    2013-01-01

    Full Text Available Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum-based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using well-known benchmark test sets. To show the improvements over previous approaches, quantitative metrics are introduced and comparisons based on these metrics are made.

  8. Revisiting video game ratings: Shift from content-centric to parent-centric approach

    Directory of Open Access Journals (Sweden)

    Jiow Hee Jhee

    2017-01-01

    Full Text Available The rapid adoption of video gaming among children has placed tremendous strain on parents' ability to manage their children's consumption. While parents refer to online video game ratings (VGR) information to support their mediation efforts, there are many difficulties associated with such practice. This paper explores the popular VGR sites and highlights the inadequacy of VGRs in capturing parents' concerns, such as time displacement, social interactions, financial spending and various video game effects, beyond the widespread panic over content issues, which is subjective, ever-changing and irrelevant. As such, this paper argues for a shift from a content-centric to a parent-centric approach in VGRs, one that captures the evolving nature of video gaming and supports parents, the main users of VGRs, in managing their young video-gaming children. This paper proposes a Video Games Repository for Parents to represent that shift.

  9. A Survey of Advances in Vision-Based Human Motion Capture and Analysis

    DEFF Research Database (Denmark)

    Moeslund, Thomas B.; Hilton, Adrian; Krüger, Volker

    2006-01-01

    This survey reviews advances in human motion capture and analysis from 2000 to 2006, following a previous survey of papers up to 2000. Human motion capture continues to be an increasingly active research area in computer vision with over 350 publications over this period. A number of significant...... actions and behavior. This survey reviews recent trends in video based human capture and analysis, as well as discussing open problems for future research to achieve automatic visual analysis of human movement....

  10. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  11. Video supported performance feedback to nursing students after simulated practice events.

    OpenAIRE

    Monger, Eloise; Weal, Mark J.; Gobbi, Mary; Michaelides, Danius; Shepherd, Matthew; Wilson, Matthew; Barnard, Thomas

    2008-01-01

    Within the field of health care education, simulation is used increasingly to provide students with opportunities to develop their clinical skills (Alnier, 2006), often occurring in specially designed facilities with audio-video capture of student performance. The video capture enables analysis and assessment of student performance and or competence, the analysis of events (DiGiacomo et al, 1997), processes (Ram et al, 1999), and Objective Clinical Examinations (Humphris and Kaney, 2000 ; Viv...

  12. Acoustic Neuroma Educational Video

    Medline Plus


  13. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  14. Digital Video Editing

    Science.gov (United States)

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing, along with the cables, storage issues, and the computer system and software involved, is described.

  15. AudioMove Video

    DEFF Research Database (Denmark)

    2012-01-01

    Live drawing video experimenting with low tech techniques in the field of sketching and visual sense making. In collaboration with Rune Wehner and Teater Katapult.

  16. Making Good Physics Videos

    Science.gov (United States)

    Lincoln, James

    2017-01-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators…

  17. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen image creation depending on the musical form and the lyrics of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  18. Acoustic Neuroma Educational Video

    Medline Plus


  19. Personal Digital Video Stories

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Henningsen, Birgitte Sølbeck; Louw, Arnt Vestergaard

    2016-01-01

    agenda focusing on video productions in combination with digital storytelling, followed by a presentation of the digital storytelling features. The paper concludes with a suggestion to initiate research in what is identified as Personal Digital Video (PDV) Stories within longitudinal settings, while...

  20. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  1. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos ... member of our patient care team.

  2. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  3. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

    This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  4. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... questions Clinical Studies Publications Catalog Photos and Images Spanish Language Information Grants and Funding Extramural Research Division ... Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video ...

  5. Social video content delivery

    CERN Document Server

    Wang, Zhi; Zhu, Wenwu

    2016-01-01

    This brief presents new architecture and strategies for distribution of social video content. A primary framework for socially-aware video delivery and a thorough overview of the possible approaches is provided. The book identifies the unique characteristics of socially-aware video access and social content propagation, revealing the design and integration of individual modules that are aimed at enhancing user experience in the social network context. The change in video content generation, propagation, and consumption for online social networks, has significantly challenged the traditional video delivery paradigm. Given the massive amount of user-generated content shared in online social networks, users are now engaged as active participants in the social ecosystem rather than as passive receivers of media content. This revolution is being driven further by the deep penetration of 3G/4G wireless networks and smart mobile devices that are seamlessly integrated with online social networking and media-sharing s...

  6. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers...... side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...... information for (LDPCA) decoding. The proposed decoder side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4dB improvement...

  7. RST-Resilient Video Watermarking Using Scene-Based Feature Extraction

    OpenAIRE

    Jung Han-Seung; Lee Young-Yoon; Lee Sang Uk

    2004-01-01

    Watermarking for video sequences should consider additional attacks, such as frame averaging, frame-rate change, frame shuffling or collusion attacks, as well as those of still images. Also, since video is a sequence of analogous images, video watermarking is subject to interframe collusion. In order to cope with these attacks, we propose a scene-based temporal watermarking algorithm. In each scene, segmented by scene-change detection schemes, a watermark is embedded temporally to one-dimens...

  8. Uses of Video in Understanding and Improving Mathematical Thinking and Teaching

    Science.gov (United States)

    Schoenfeld, Alan H.

    2017-01-01

    This article characterizes my use of video as a tool for research, design and development. I argue that videos, while a potentially overwhelming source of data, provide the kind of large bandwidth that enables one to capture phenomena that one might otherwise miss; and that although the act of taping is in itself an act of selection, there is…

  9. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    video sequences. For the video sequences, different filters are applied to luminance (Y) and chrominance (U,V) components. The performance of the proposed method has been compared against several other methods by using different objective quality metrics and a subjective comparison study. Both objective...

  10. TEM Video Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-01

One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions.
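The coded-aperture idea above has a compact forward model: each camera readout is the sum of several mask-modulated sub-frames. A minimal NumPy sketch of that measurement step follows; the random masks and scene data are stand-ins, not from the paper, and recovering the sub-frames would additionally require a compressive sensing inversion such as the statistical method the authors cite.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W = 8, 32, 32                    # sub-frames coded into one readout
subframes = rng.random((T, H, W))      # stand-in for the dynamic scene
masks = rng.integers(0, 2, (T, H, W))  # random binary coded-aperture patterns

# Forward model: one camera frame integrates T mask-modulated sub-frames,
# so a single readout carries T time steps of information.
coded_frame = (masks * subframes).sum(axis=0)
print(coded_frame.shape)  # → (32, 32)
```

This is why the approach can speed up any camera: the extra temporal information is multiplexed optically before readout, so the sensor itself needs no modification.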

  11. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view, including a consideration of how well current measurement methods can be used with presence capture cameras.

  12. Video Shot Boundary Detection based on Multifractal Analysis

    Directory of Open Access Journals (Sweden)

    B. D. Reljin

    2011-11-01

    Full Text Available Extracting video shots is an essential preprocessing step for almost all video analysis, indexing, and other content-based operations. This process is equivalent to detecting the shot boundaries in a video. In this paper we present video Shot Boundary Detection (SBD) based on Multifractal Analysis (MA). Low-level features (color and texture features) are extracted from each frame in the video sequence. Features are concatenated into feature vectors (FVs) and stored in a feature matrix. Matrix rows correspond to the FVs of frames from the video sequence, while columns are time series of a particular FV component. Multifractal analysis is applied to the FV component time series, and shot boundaries are detected as high singularities of the time series above a predefined threshold. The proposed SBD method is tested on a real video sequence with 64 shots, with manually labeled shot boundaries. Detection accuracy depends on the number of FV components used. For only one FV component, detection accuracy lies in the range 76-92% (depending on the selected threshold), while by combining two FV components all shots are detected completely (accuracy of 100%).
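The pipeline in the abstract (per-frame feature vectors, a time series per component, boundaries flagged where the series changes sharply) can be illustrated with a much simpler stand-in detector. The sketch below uses a plain color-histogram difference with a fixed threshold in place of multifractal analysis; function names and parameter values are illustrative only, not from the paper.

```python
import numpy as np

def shot_boundaries(frames, bins=16, thresh=0.5):
    """Flag frame indices where the color-histogram feature vector
    jumps (L1 distance between consecutive FVs above thresh)."""
    fvs = np.array([np.histogram(f, bins=bins, range=(0, 256))[0] / f.size
                    for f in frames])
    dist = np.abs(np.diff(fvs, axis=0)).sum(axis=1)
    return (np.where(dist > thresh)[0] + 1).tolist()

# Two synthetic "shots": five dark frames, then five bright frames.
rng = np.random.default_rng(1)
shot1 = [rng.integers(0, 60, (24, 24)) for _ in range(5)]
shot2 = [rng.integers(180, 255, (24, 24)) for _ in range(5)]
print(shot_boundaries(shot1 + shot2))  # → [5]
```

The multifractal approach in the paper replaces the simple frame-to-frame distance with singularity analysis of each FV-component time series, which is what lets it reach 100% detection with only two components.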

  13. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g. cars, airplanes, etc.). Although MMS can provide an accurate mapping solution for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, in particular the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system will automatically extract the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
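The automatic extraction of "highly overlapping frames out of the video" can be sketched as a simple rule: keep a frame once the accumulated image motion since the last kept frame exceeds a threshold, so consecutive keyframes stay overlapping but not redundant. This is a hypothetical illustration, not the authors' algorithm; in practice the per-frame displacements would come from feature tracking between frames.

```python
def select_frames(displacements, max_shift=20.0):
    """Pick video frames for mapping: keep frame i once the accumulated
    image motion since the last kept frame reaches max_shift pixels."""
    kept, acc = [0], 0.0
    for i, d in enumerate(displacements, start=1):
        acc += d
        if acc >= max_shift:
            kept.append(i)
            acc = 0.0
    return kept

# Per-frame motion magnitudes (pixels), e.g. from feature tracking:
print(select_frames([5, 5, 5, 12, 2, 25, 1, 1]))  # → [0, 4, 6]
```

Faster camera motion thus yields denser keyframe sampling automatically, which is the property that removes the need for user intervention.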

  14. Deriving video content type from HEVC bitstream semantics

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
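The "weighted average of the depth at which the coding unit quadtree is split" can be illustrated with a toy computation. Weighting each coding unit by its pixel area is an assumption made here for the sketch, and the numbers are invented; the point is only that deeper splits (smaller CUs) pull the average up, signalling more spatial detail.

```python
def weighted_mean_depth(cu_list):
    """Approximate spatial complexity as the area-weighted mean depth at
    which the coding-unit quadtree was split (deeper = more detail).
    cu_list holds (depth, size_in_pixels) pairs parsed from the bitstream."""
    total = sum(size for _, size in cu_list)
    return sum(depth * size for depth, size in cu_list) / total

# One 64x64 CTU: one 32x32 CU at depth 1 and twelve 16x16 CUs at depth 2.
ctu = [(1, 32 * 32)] + [(2, 16 * 16)] * 12
print(round(weighted_mean_depth(ctu), 3))  # → 1.75
```

Because depths and partition sizes are available from high-level bitstream syntax, this kind of statistic can be computed without full decoding, which is the efficiency argument the paper makes.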

  15. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

"What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.

  16. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  17. Video narratives: creativity and growth in teacher education

    NARCIS (Netherlands)

    Admiraal, W.; Boesenkool, F.; van Duin, G.; van de Kamp, M.-T.; Montane, M.; Salazar, J.

    2010-01-01

Portfolios are widely used as instruments in initial teacher education in order to assess teacher competences. Video footage provides the opportunity to capture the richness and complexity of work practices. This means that not only a larger variety of teacher competences can be demonstrated, but

  18. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Science.gov (United States)

    Aghdasi, Hadi S.; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-01-01

Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, has fostered the development of low-cost, resource-limited Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved on the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality. PMID:27873772

  19. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 lens (focal length 12 mm, f/0.8) running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It is significantly more performant than a previous system used by the author during the Perseids 2010 (a DMK 21AF04.AS camera by The Imaging Source, a CCTV lens of focal length 2.8 mm, and UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences between the two software packages (MetRec, UFO Capture), according to the author's experience, are discussed, along with a small guide to video meteor hardware.

  20. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how...... they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can, however, possibly be changed through a rethinking of how the player interprets audio....

  1. Brains on video games

    OpenAIRE

    Bavelier, Daphne; Green, C. Shawn; Han, Doug Hyun; Renshaw, Perry F.; Merzenich, Michael M.; Gentile, Douglas A.

    2011-01-01

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games ‘damage the brain’ or ‘boost brain power’ do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affe...

  2. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  3. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  4. Analysis of unstructured video based on camera motion

    Science.gov (United States)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

Although considerable work has been done in the management of "structured" video, such as movies, sports, and television programs that have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, and in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior and the subjective importance of the information in each segment, and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames, keyframes, for the scenes are determined and aggregated to summarize the video sequence.
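The temporal segmentation step described above (using motion displacement vectors to split the video into segments of distinct camera behavior) can be caricatured as thresholding the mean motion magnitude per frame and merging runs of identical labels. A hypothetical sketch, not the authors' method; real systems would also distinguish pan from tilt and zoom using the vector directions.

```python
def segment_by_motion(mean_displacements, pan_thresh=4.0):
    """Label each frame 'pan' or 'static' from the mean motion-vector
    magnitude, then merge runs of equal labels into temporal segments,
    returned as (start_frame, end_frame, label) tuples."""
    labels = ['pan' if m > pan_thresh else 'static' for m in mean_displacements]
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i - 1, labels[start]))
            start = i
    return segments

motion = [0.5, 0.8, 6.0, 7.2, 6.5, 0.4, 0.3]
print(segment_by_motion(motion))
# → [(0, 1, 'static'), (2, 4, 'pan'), (5, 6, 'static')]
```

Keyframes could then be drawn from the static segments, where the camera lingers and the content is presumed most interesting.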

  5. Perceptual learning during action video game playing.

    Science.gov (United States)

    Green, C Shawn; Li, Renjie; Bavelier, Daphne

    2010-04-01

    Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.

  6. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Care Disease Types FAQ Handout for Patients and Families Is It Right for You How to Get ... For the Media For Clinicians For Policymakers For Family Caregivers Glossary Menu In this section Links Videos ...

  7. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Donate Search Search What Is It Definition Pediatric Palliative Care Disease Types FAQ Handout for Patients and Families ... Policymakers For Family Caregivers Glossary Resources Browse our palliative care resources below: Links Videos Podcasts Webinars For the ...

  8. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Click to learn more... LOGIN CALENDAR DONATE NEWS Home Learn Back Learn about acoustic neuroma AN Facts ... Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video Ronson and Kerri Albany Support ...

  9. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Donate Search Search What Is It Definition Pediatric Palliative Care Disease Types FAQ Handout for Patients and ... Policymakers For Family Caregivers Glossary Resources Browse our palliative care resources below: Links Videos Podcasts Webinars For ...

  10. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Click to learn more... LOGIN CALENDAR DONATE NEWS Home Learn Back Learn about acoustic neuroma AN Facts ... Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video Howard of NJ Gloria hiking ...

  11. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Mission, Vision & Values Shop ANA Leadership & Staff Annual Reports Acoustic Neuroma Association 600 Peachtree Parkway Suite 108 ... About ANA Mission, Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English ...

  12. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Disease Types Stories FAQ Handout for Patients and Families Is It Right for You How to Get ... For the Media For Clinicians For Policymakers For Family Caregivers Glossary Menu In this section Links Videos ...

  13. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Search Search What Is It Definition Pediatric Palliative Care Disease Types FAQ Handout for Patients and Families ... For Family Caregivers Glossary Resources Browse our palliative care resources below: Links Videos Podcasts Webinars For the ...

  14. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Educational Video Scott at the Grand Canyon Proton Center load more hold SHIFT key to load all load all Stay Connected with ANA Newly Diagnosed Living with AN Healthcare Providers Acoustic Neuroma Association Donate Now Newly Diagnosed ...

  15. The video violence debate.

    Science.gov (United States)

    Lande, R G

    1993-04-01

    Some researchers and theorists are convinced that graphic scenes of violence on television and in movies are inextricably linked to human aggression. Others insist that a link has not been conclusively established. This paper summarizes scientific studies that have informed these two perspectives. Although many instances of children and adults imitating video violence have been documented, no court has imposed liability for harm allegedly resulting from a video program, an indication that considerable doubt still exists about the role of video violence in stimulating human aggression. The author suggests that a small group of vulnerable viewers are probably more impressionable and therefore more likely to suffer deleterious effects from violent programming. He proposes that research on video violence be narrowed to identifying and describing the vulnerable viewer.

  16. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... a patient kit Keywords Join/Renew Programs Back Support Groups Is a support group for me? Find ... Events Video Library Photo Gallery One-on-One Support ANetwork Peer Support Program Community Connections Overview Find ...

  17. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Click to learn more... LOGIN CALENDAR DONATE NEWS Home Learn Back Learn about acoustic neuroma AN Facts ... Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English Arabic Catalan Chinese ( ...

  18. Video i VIA

    DEFF Research Database (Denmark)

    2012-01-01

    The article describes a development project in which 13 groups of teachers across subjects and programmes produced video for use in teaching. Different approaches and applications are described, as well as the learning that took place in the project...

  19. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... to your Doctor Find a Provider Meet the Team Blog Articles & Stories News Resources Links Videos Podcasts ... to your Doctor Find a Provider Meet the Team Blog Articles & Stories News Provider Directory Donate Resources ...

  20. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Click to learn more... LOGIN CALENDAR DONATE NEWS Home Learn Back Learn about acoustic neuroma AN Facts ... Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video Keck Medicine of USC ANWarriors ...

  1. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... illness: Toby’s palliative care story Access the Provider Directory Handout for Patients and Families Is it Right ... Provider Meet the Team Blog Articles News Provider Directory Donate Resources Links Videos Podcasts Webinars For the ...

  2. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Click to learn more... LOGIN EVENTS DONATE NEWS Home Learn Back Learn about acoustic neuroma AN Facts ... Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video Scott at the Grand Canyon ...

  3. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Is a support group for me? Find a Group Upcoming Events Video Library Photo Gallery One-on-One Support ANetwork Peer Support Program Community Connections Overview Find a Meeting ...

  4. Photos and Videos

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Observers are required to take photos and/or videos of all incidentally caught sea turtles, marine mammals, seabirds and unusual or rare fish. On the first 3...

  5. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... All rights reserved. GetPalliativeCare.org does not provide medical advice, diagnosis or treatment. ... the Team Blog Articles & Stories News Provider Directory Donate Resources Links Videos ...

  6. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  7. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although it requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions, which can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprising a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
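    The feedback-driven parity adaptation described above can be sketched in a few lines. This is an illustrative sketch only: the function name `choose_parity`, the codeword length n = 255, and the safety margin are assumptions for the example, not details taken from the paper.

    ```python
    # Sketch of feedback-driven Reed-Solomon parity adaptation: the client
    # reports an observed symbol-loss rate, and the sender picks a number of
    # parity symbols 2t so that t (the correctable symbol count) covers the
    # expected losses per n-symbol codeword, scaled by a safety margin.

    def choose_parity(loss_rate, n=255, margin=2.0, max_parity=64):
        """Return an even parity-symbol count 2t for an n-symbol codeword."""
        expected_losses = loss_rate * n
        t = int(expected_losses * margin) + 1
        return min(2 * t, max_parity)

    # A clean channel needs little protection; a lossy one needs more.
    print(choose_parity(0.0))    # -> 2
    print(choose_parity(0.01))   # -> 12  (255 * 0.01 * 2 = 5.1 -> t = 6 -> 12)
    print(choose_parity(0.5))    # -> 64  (capped at max_parity)
    ```

    In a real system the loss rate would arrive in periodic receiver reports, and the sender would re-encode subsequent codewords with the new parity count.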

  8. Online coupled camera pose estimation and dense reconstruction from video

    Science.gov (United States)

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update the 3D model following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update.
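    The projection-consistency test implied above can be illustrated with a toy pinhole model: project candidate 3D model points under an assumed pose and keep the image/model correspondences whose reprojection error is small. All names, intrinsics, and data here are invented for illustration; the actual procedure (pose search, multiple candidate matches) is more involved.

    ```python
    # Toy consistency check: keep 2D/3D correspondences whose pinhole
    # reprojection lands within a pixel tolerance of the observed point.

    def project(point3d, f=500.0, cx=320.0, cy=240.0):
        """Pinhole projection of a camera-frame 3D point (x, y, z), z > 0."""
        x, y, z = point3d
        return (f * x / z + cx, f * y / z + cy)

    def consistent_matches(pairs, tol=2.0):
        """pairs: list of (observed_2d, model_3d). Return the inlier subset."""
        inliers = []
        for (u, v), p3d in pairs:
            pu, pv = project(p3d)
            if (pu - u) ** 2 + (pv - v) ** 2 <= tol ** 2:
                inliers.append(((u, v), p3d))
        return inliers

    pairs = [
        ((820.0, 240.0), (1.0, 0.0, 1.0)),   # projects exactly to (820, 240)
        ((320.0, 740.0), (0.0, 1.0, 1.0)),   # projects exactly to (320, 740)
        ((100.0, 100.0), (1.0, 1.0, 1.0)),   # projects to (820, 740): outlier
    ]
    print(len(consistent_matches(pairs)))    # -> 2
    ```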

  9. NEI You Tube Videos: Amblyopia

    Medline Plus

  10. NEI You Tube Videos: Amblyopia

    Medline Plus

  11. Studenterproduceret video til eksamen

    DEFF Research Database (Denmark)

    Jensen, Kristian Nøhr; Hansen, Kenneth

    2016-01-01

    The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for examinations in higher education. The article takes its starting point in a problem where educational institutions must handle and coordinate... gives the subject-specialist and media-specialist teachers a tool to focus and coordinate their efforts toward the goal of having students produce and use video for the examination.

  12. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are the requirements for the system and a proposed configuration, including the SGI VideoLab Integrator, the VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.

  13. Video Games and Citizenship

    OpenAIRE

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    In their article "Video Games and Citizenship" Jeroen Bourgonjon and Ronald Soetaert argue that digitization problematizes and broadens our perspective on culture and popular media, and that this has important ramifications for our understanding of citizenship. Bourgonjon and Soetaert respond to the call of Gert Biesta for the contextualized study of young people's practices by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new so...

  14. Android Video Streaming

    Science.gov (United States)

    2014-05-01

    be processed by a nearby high-performance computing asset and returned to a squad of Soldiers with annotations indicating the location of friendly and...is to change the resolution, bitrate, and/or framerate of the video being transmitted to the client, reducing the bandwidth requirements of the...video. This solution is typically not viable because a progressive download is required to have a constant resolution, bitrate, and framerate because

  15. Networked telepresence system using web browsers and omni-directional video streams

    Science.gov (United States)

    Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu

    2005-03-01

    In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers on web browsers and allows the user to look around the omni-directional video contents on the web browsers. The omni-directional video viewer is implemented as an ActiveX program, so the viewer is installed automatically when the user opens the web site containing the omni-directional video contents. The system allows many users at different sites to look around the scene, just like an interactive TV, using a multicast protocol without increasing the network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capturing, and we can look around high-resolution, high-quality video contents. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video streams surrounding the car while it runs in an outdoor environment. The acquired video streams are transferred to the remote site through the wireless and wired network using the multicast protocol, and we can view the live video contents freely in an arbitrary direction. In both experiments, we have implemented view-dependent presentation with a head-mounted display (HMD) and a gyro sensor for realizing a richer sense of presence.

  16. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. The system incorporates both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organs' image with a synthesized one, generated by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. Tracking the motion of the face in a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques with the translated voice synthesis technique, automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing into other languages.
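    Template matching of the kind used for the face-tracking step can be sketched as a sum-of-absolute-differences (SAD) search; the toy grayscale arrays and function names below are invented for illustration, and real trackers add pyramid search and rotation handling.

    ```python
    # Exhaustive SAD template matching: slide a small template over a larger
    # grayscale image and return the (row, col) offset with the lowest
    # sum of absolute pixel differences.

    def sad(image, template, dy, dx):
        th, tw = len(template), len(template[0])
        return sum(abs(image[dy + i][dx + j] - template[i][j])
                   for i in range(th) for j in range(tw))

    def match(image, template):
        ih, iw = len(image), len(image[0])
        th, tw = len(template), len(template[0])
        offsets = [(dy, dx) for dy in range(ih - th + 1)
                            for dx in range(iw - tw + 1)]
        return min(offsets, key=lambda o: sad(image, template, *o))

    image = [[0, 0, 0, 0],
             [0, 9, 8, 0],
             [0, 7, 6, 0],
             [0, 0, 0, 0]]
    template = [[9, 8],
                [7, 6]]
    print(match(image, template))   # -> (1, 1)
    ```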

  17. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.
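    In the spirit of the alteration-tolerant matching described (this is a sketch, not the authors' actual model), per-frame signatures can be z-normalized to undo contrast and brightness changes, and linearly resampled to a common length to undo frame-rate changes, before comparison:

    ```python
    import math

    def znorm(seq):
        """Zero-mean, unit-variance normalization (undoes contrast/brightness)."""
        m = sum(seq) / len(seq)
        sd = (sum((v - m) ** 2 for v in seq) / len(seq)) ** 0.5 or 1.0
        return [(v - m) / sd for v in seq]

    def resample(seq, n):
        """Linear interpolation of seq onto n evenly spaced points."""
        out = []
        for i in range(n):
            x = i * (len(seq) - 1) / (n - 1)
            lo = int(x)
            hi = min(lo + 1, len(seq) - 1)
            out.append(seq[lo] + (x - lo) * (seq[hi] - seq[lo]))
        return out

    def distance(a, b, n=32):
        """Mean squared distance between normalized, resampled signatures."""
        a, b = znorm(resample(a, n)), znorm(resample(b, n))
        return sum((x - y) ** 2 for x, y in zip(a, b)) / n

    # Smooth per-frame brightness signature, and a "copy" with doubled
    # contrast, raised brightness, and half the frame rate.
    original = [math.sin(4 * math.pi * i / 99) for i in range(100)]
    copy = [2.0 * math.sin(4 * math.pi * j / 49) + 30.0 for j in range(50)]
    print(distance(original, copy) < 0.1)   # -> True
    ```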

  18. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations of foreground objects, as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity, into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT flow with interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  19. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

    Full Text Available Mobile video streaming is one of the multimedia services that have developed most rapidly. Recently, bandwidth utilization for wireless transmission has been the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as the most attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The ITU (International Telecommunication Union) standard Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in its use of bit-rate capacity at a given layer.

  20. SVC VIDEO STREAM ALLOCATION AND ADAPTATION IN HETEROGENEOUS NETWORK

    Directory of Open Access Journals (Sweden)

    E. A. Pakulova

    2016-07-01

    Full Text Available The paper deals with video data transmission in the H.264/SVC standard format under QoS requirements. The Sender-Side Path Scheduling (SSPS) algorithm and the Sender-Side Video Adaptation (SSVA) algorithm were developed. The SSPS algorithm makes it possible to allocate video traffic among several interfaces, while the SSVA algorithm dynamically changes the quality of the video sequence in relation to QoS requirements. It was shown that using the two developed algorithms together makes it possible to aggregate the throughput of access networks, improve Quality of Experience parameters and decrease losses in comparison with the Round Robin algorithm. For evaluation of the proposed solution, a test set-up was built. Trace files with the throughput of existing public networks were used in the experiments; based on this information, the throughputs of the networks were limited and losses for the paths were set. The results of this research may be used for the study and transmission of video data in heterogeneous wireless networks.
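    A greedy sketch of layer-to-path allocation in the spirit of SSPS (all names and numbers are invented for the example; the published algorithm is more sophisticated): scalable-video layers are assigned in priority order, base layer first, to whichever interface still has spare throughput, and lower-priority enhancement layers are dropped when capacity runs out.

    ```python
    # Greedy layer-to-interface allocation sketch for scalable video.

    def allocate(layers, paths):
        """layers: [(name, bitrate_kbps)] in priority order.
        paths: {path: capacity_kbps}. Returns {layer_name: path} for the
        layers that fit; the rest are dropped (quality adaptation)."""
        spare = dict(paths)
        assignment = {}
        for name, rate in layers:
            best = max(spare, key=spare.get)   # interface with most headroom
            if spare[best] >= rate:
                assignment[name] = best
                spare[best] -= rate
        return assignment

    layers = [("base", 500), ("enh1", 700), ("enh2", 900)]
    paths = {"wifi": 1000, "lte": 800}
    print(allocate(layers, paths))   # -> {'base': 'wifi', 'enh1': 'lte'}
    ```

    Note that "enh2" is dropped: neither interface has 900 kbps left, which mirrors the quality-versus-loss trade-off the abstract describes.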

  1. Using content models to build audio-video summaries

    Science.gov (United States)

    Saarela, Janne; Merialdo, Bernard

    1998-12-01

    The amount of digitized video in archives is becoming so huge that easier access and content browsing tools are desperately needed. Also, video is no longer one big piece of data, but a collection of useful smaller building blocks, which can be accessed and used independently of the original context of presentation. In this paper, we demonstrate a content model for audio-video sequences, with the purpose of enabling the automatic generation of video summaries. The model is based on descriptors, which indicate various properties and relations of audio and video segments. In practice, these descriptors could either be generated automatically by methods of analysis, or produced manually (or computer-assisted) by the content provider. We analyze the requirements and characteristics of the different data segments with respect to the problem of summarization, and we define our model as a set of constraints which allow good-quality summaries to be produced.

  2. Analisis Pengembangan Media Pembelajaran Pengolah Angka (Spreadsheet) Berbasis Video Screencast

    Directory of Open Access Journals (Sweden)

    Muhammad Munir

    2013-09-01

    Full Text Available The objectives of this study were to develop screencast-based learning media for the course on spreadsheets and to investigate its performance. This study utilised a research and development approach consisting of: (1) Preparation, which includes preparing the tools and materials; (2) Recording, which includes selecting the capture area, recording in screencast mode, and adjusting the audio settings on the recording device; (3) Editing, which includes adding drawing and callout elements, editing the timeline, and adding zooming effects, animation effects, and audio support for the introduction, background music, and narration; (4) Publishing, which includes publishing the edited video into a single unit and converting the video format into mp4 with Format Factory; (5) Finishing, which includes making quizzes and then merging the videos and quizzes into a unified media package with the .exe extension. The performance of the video media achieved the determined plan: when it is run, an autoplay menu appears for selecting screencast.exe.

  3. Capturing the Future: Direct and Indirect Probes of Neutron Capture

    Energy Technology Data Exchange (ETDEWEB)

    Couture, Aaron Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-31

    This report documents aspects of direct and indirect neutron capture. The importance of neutron capture rates and methods to determine them are presented. The following conclusions are drawn: direct neutron capture measurements remain a backbone of experimental study; work is being done to take increased advantage of indirect methods for neutron capture; both instrumentation and facilities are making new measurements possible; more work is needed on the nuclear theory side to understand what is needed furthest from stability.

  4. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.

    Science.gov (United States)

    Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-10-17

    Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall are crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stand as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7) by 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff via a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities.

  5. Moving Shadow Detection in Video Using Cepstrum Regular Paper

    OpenAIRE

    Cogun, Fuat; Cetin, Ahmet Enis

    2013-01-01

    Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum‐based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using ...

  6. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can...... as processing of subjective data. We also suggest some general guidelines for researchers to make comparison studies of objective video quality metrics more reliable and useful for the practitioners in the field....

  7. Lunar Sulfur Capture System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lunar Sulfur Capture System (LSCS) is an innovative method to capture greater than 90 percent of sulfur gases evolved during thermal treatment of lunar soils....

  8. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... shape layer is processed by a novel video shape coder. In intra mode, the DSLSC binary image coder presented in is used. This is extended here with an intermode utilizing temporal redundancies in shape image sequences. Then the opaque layer is compressed by a newly designed scheme which models...

  9. Frame Rate versus Spatial Quality: Which Video Characteristics Do Matter?

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Ukhanova, Ann

    2013-01-01

    and temporal quality levels. We also propose simple yet powerful metrics for characterizing spatial and temporal properties of a video sequence, and demonstrate how these metrics can be applied for evaluating the relative impact of spatial and temporal quality on the perceived overall quality.......Several studies have shown that the relationship between perceived video quality and frame rate is dependent on the video content. In this paper, we have analyzed the content characteristics and compared them against the subjective results derived from preference decisions between different spatial...

  10. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper researches an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, using 26 original video sequences as research material. It first extracts classification features from the training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, then tests whether there is suspicious information in a video sample. The experimental results show that the VSA based on LSB matching is practical for detecting secret information embedded across all frames of a carrier video as well as in individual frames. In addition, the VSA operates frame by frame, which gives it strong robustness against attacks in the corresponding time domain.
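    As a toy illustration of the feature-extraction step only: LSB-matching embedding tends to flatten the least-significant-bit plane, so the fraction of 1-bits in a frame's LSB plane is a crude per-frame feature one could feed to an SVM. The feature below is invented for illustration and is not the paper's feature set.

    ```python
    # Crude per-frame steganalysis feature: the fraction of pixels whose
    # least significant bit is 1. Clean natural frames are often biased;
    # LSB-matching embedding pushes the fraction toward 0.5.

    def lsb_feature(frame_bytes):
        """Fraction of bytes with LSB set, for one frame of raw pixel data."""
        ones = sum(b & 1 for b in frame_bytes)
        return ones / len(frame_bytes)

    clean = bytes([10, 12, 14, 16] * 64)      # all LSBs are 0
    suspect = bytes([10, 11, 14, 15] * 64)    # half the LSBs are 1
    print(lsb_feature(clean))    # -> 0.0
    print(lsb_feature(suspect))  # -> 0.5
    ```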

  11. Modelling retinal pulsatile blood flow from video data.

    Science.gov (United States)

    Betz-Stablein, Brigid; Hazelton, Martin L; Morgan, William H

    2016-09-01

    Modern-day datasets continue to increase in both size and diversity. One example of such 'big data' is video data. Within the medical arena, more disciplines are using video as a diagnostic tool. Given the large amount of data stored within a video image, it is one of the most time-consuming types of data to process and analyse. Therefore, it is desirable to have automated techniques to extract, process and analyse data from video images. While many methods have been developed for extracting and processing video data, statistical modelling to analyse the outputted data has rarely been employed. We develop a method to take a video sequence of periodic nature, extract the RGB data and model the changes occurring across the contiguous images. We employ harmonic regression to model periodicity, with autoregressive terms accounting for the error process associated with the time series nature of the data. A linear spline is included to account for movement between frames. We apply this model to video sequences of retinal vessel pulsation, which is the pulsatile component of blood flow. Slope and amplitude are calculated for the curves generated from the application of the harmonic model, providing clinical insight into the location of obstruction within the retinal vessels. The method can be applied to individual vessels, or to smaller segments such as 2 × 2 pixels, which can then be interpreted easily as a heat map. © The Author(s) 2016.
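    The harmonic-regression core can be sketched for a single periodic 1-D signal (e.g. a vessel segment's mean intensity across frames). When the samples cover whole periods, the least-squares coefficients of y ≈ a + b·cos(ωt) + c·sin(ωt) reduce to the discrete Fourier sums below; the autoregressive error terms and the linear spline from the paper are omitted from this sketch.

    ```python
    import math

    # Minimal harmonic regression: fit a + b*cos(wt) + c*sin(wt) to a
    # signal sampled over an integer number of periods, where the
    # least-squares solution reduces to discrete Fourier sums.

    def fit_harmonic(y, cycles):
        """Return (baseline, amplitude) over `cycles` full periods of y."""
        n = len(y)
        a = sum(y) / n
        b = 2 / n * sum(v * math.cos(2 * math.pi * cycles * i / n)
                        for i, v in enumerate(y))
        c = 2 / n * sum(v * math.sin(2 * math.pi * cycles * i / n)
                        for i, v in enumerate(y))
        return a, math.hypot(b, c)

    # Synthetic pulsation: baseline 100, amplitude 5, three full cycles.
    y = [100 + 5 * math.sin(2 * math.pi * 3 * i / 120) for i in range(120)]
    a, amp = fit_harmonic(y, cycles=3)
    print(round(a, 6), round(amp, 6))   # -> 100.0 5.0
    ```

    The recovered amplitude is the quantity the paper relates to pulsatile blood flow; slope would come from the spline term not shown here.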

  12. Consumer-based technology for distribution of surgical videos for objective evaluation.

    Science.gov (United States)

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is a validated metric used to grade laparoscopic skills, and it has been utilized to score recorded operative videos. To facilitate viewing of these recordings, we are developing novel distribution techniques. The objective of this study is to determine the feasibility of utilizing widespread, current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded by connecting the camera processor's S-video output through a hub to a standard laptop computer via a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format and, depending on the size of the file, the videos were scaled down (compressed), converted to another format (using a standard video editing program), or sliced into multiple videos. Standard consumer-based programs were utilized to convert the video into a format suitable for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and to video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in all formats. Our preliminary results show promise that, using consumer-based technology, videos can be easily distributed to surgeons for GOALS grading via various methods. Easy accessibility may make evaluation of resident videos less complicated and cumbersome.

  13. An Introduction to Recording, Editing, and Streaming Picture-in-Picture Ultrasound Videos.

    Science.gov (United States)

    Rajasekaran, Sathish; Hall, Mederic M; Finnoff, Jonathan T

    2016-08-01

    This paper describes the process by which high-definition resolution (up to 1920 × 1080 pixels) ultrasound video can be captured in conjunction with high-definition video of the transducer position (picture-in-picture). In addition, we describe how to edit the recorded video feeds to combine both feeds, and to crop, resize, split, stitch, cut, annotate videos, and also change the frame rate, insert pictures, edit the audio feed, and use chroma keying. We also describe how to stream a picture-in-picture ultrasound feed during a videoconference. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  14. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  15. Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.

    Science.gov (United States)

    Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David

    2017-04-12

    Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
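    The sparse-coding step underlying such a dictionary coder can be illustrated with plain matching pursuit over a toy orthonormal dictionary (the hierarchical prediction and online training described above are beyond this sketch, and all data here are invented):

    ```python
    # Matching pursuit: greedily pick the dictionary atom most correlated
    # with the residual and subtract its projection, yielding a sparse code.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def matching_pursuit(signal, dictionary, n_atoms):
        """Sparse approximation of `signal` with up to n_atoms unit-norm atoms."""
        residual = list(signal)
        coeffs = {}
        for _ in range(n_atoms):
            k = max(range(len(dictionary)),
                    key=lambda j: abs(dot(residual, dictionary[j])))
            c = dot(residual, dictionary[k])
            coeffs[k] = coeffs.get(k, 0.0) + c
            residual = [r - c * d for r, d in zip(residual, dictionary[k])]
        return coeffs, residual

    # Orthonormal toy dictionary: the signal is exactly 2*atom0 + 3*atom2.
    D = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    coeffs, residual = matching_pursuit((2.0, 0.0, 3.0), D, n_atoms=2)
    print(coeffs)                           # -> {2: 3.0, 0: 2.0}
    print(max(abs(r) for r in residual))    # -> 0.0
    ```

    A video codec would run this per block with a learned, overcomplete dictionary and entropy-code the resulting (index, coefficient) pairs.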

  16. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro-Electro-Mechanical Systems (MEMS) and wireless communications, along with the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture spans three layers of the communication protocol stack and considers the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved on the receiver side. The application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed, respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  17. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones, and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, resource-limited Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with present sensor network protocols. In this paper, a comprehensive architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture spans three layers of the communication protocol stack and takes into account the constraints of wireless video sensor nodes, such as limited processing and energy resources, while preserving video quality at the receiver side. A compression protocol, a transport protocol, and a routing protocol are proposed at the application, transport, and network layers respectively; a dropping scheme is also presented at the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  18. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video Intervention/Prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We're Killing the Kids, and Driving Mum and Dad Mad all use video as a prominent element of not only the audiovisual spectacle of reality television but also the interactional therapy, counselling, coaching and/or instruction intrinsic to these programmes. Thus, talk-on-video is used to intervene...

  19. Metazen – metadata capture for metagenomes

    Science.gov (United States)

    2014-01-01

    Background As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusions Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility. PMID:25780508
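
A mandatory-field check of the kind a tool like Metazen performs can be sketched in a few lines. The field names below are hypothetical examples, not Metazen's actual templates; the point is the split between a fixed mandatory set and optional extra fields.

```python
# Hypothetical mandatory fields; Metazen's actual templates differ.
MANDATORY = {"project_name", "sample_id", "collection_date", "latitude", "longitude"}

def validate_metadata(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = [f"missing mandatory field: {f}" for f in sorted(MANDATORY - record.keys())]
    # extra, user-defined fields are allowed (flexibility), but no field may be empty
    for key, value in record.items():
        if value in ("", None):
            problems.append(f"empty value for field: {key}")
    return problems

record = {"project_name": "gut_survey", "sample_id": "S-017",
          "collection_date": "2014-06-01", "latitude": 51.5, "longitude": -0.1}
print(validate_metadata(record))  # []
```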

  20. Capturing the Daylight Dividend

    Energy Technology Data Exchange (ETDEWEB)

    Peter Boyce; Claudia Hunter; Owen Howlett

    2006-04-30

    Capturing the Daylight Dividend conducted activities to build market demand for daylight as a means of improving indoor environmental quality, overcoming technological barriers to effective daylighting, and informing and assisting state and regional market transformation and resource acquisition program implementation efforts. The program clarified the benefits of daylight by examining whole building systems energy interactions between windows, lighting, heating, and air conditioning in daylit buildings, and daylighting's effect on the human circadian system and productivity. The project undertook work to advance photosensors, dimming systems, and ballasts, and provided technical training in specifying and operating daylighting controls in buildings. Future daylighting work is recommended in metric development, technology development, testing, training, education, and outreach.

  1. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications and, in particular, the individual characteristics that underlie adaptive thinking.

  2. A reduced-reference perceptual image and video quality metric based on edge preservation

    Science.gov (United States)

    Martini, Maria G.; Villarini, Barbara; Fiorucci, Federico

    2012-12-01

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence, prior to compression and transmission, is not usually available at the receiver side, so it is important to rely there on an objective video quality metric that needs no reference, or minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information of an image underpins the proposal of our reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.
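
The idea of comparing edge information between the reference and the distorted image can be sketched as follows. This uses a plain Sobel edge detector and a simple edge-overlap ratio; the paper's actual RR metric is more elaborate and transmits only reduced side information, not the full reference.

```python
import numpy as np

def sobel_edges(img, thresh=0.2):
    """Binary edge map from Sobel gradient magnitude (threshold relative to max)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                 # correlate with the 3x3 kernels
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def edge_preservation_score(reference_edges, distorted):
    """Fraction of reference edge pixels still present in the distorted image:
    1.0 = edges intact, lower = edges degraded."""
    de = sobel_edges(distorted)
    return (reference_edges & de).sum() / max(reference_edges.sum(), 1)

rng = np.random.default_rng(1)
ref = np.zeros((64, 64)); ref[:, 32:] = 1.0      # a vertical step edge
score_clean = edge_preservation_score(sobel_edges(ref), ref)
score_noisy = edge_preservation_score(sobel_edges(ref), ref + 0.5 * rng.standard_normal(ref.shape))
print(score_clean, score_noisy)                   # distortion lowers the score
```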

  3. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos.

    Science.gov (United States)

    Huang, Jidong; Kornfield, Rachel; Emery, Sherry L

    2016-03-18

    The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos' overall presence on the platform. To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform's impact on consumer attitudes and behaviors and inform regulations. Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. YouTube is a major information-sharing platform for electronic cigarettes.

  4. Optimizing Training Set Construction for Video Semantic Classification

    Directory of Open Access Journals (Sweden)

    Xiuqing Wu

    2007-12-01

    Full Text Available We exploit the criteria to optimize training set construction for the large-scale video semantic classification. Due to the large gap between low-level features and higher-level semantics, as well as the high diversity of video data, it is difficult to represent the prototypes of semantic concepts by a training set of limited size. In video semantic classification, most of the learning-based approaches require a large training set to achieve good generalization capacity, in which large amounts of labor-intensive manual labeling are ineluctable. However, it is observed that the generalization capacity of a classifier highly depends on the geometrical distribution of the training data rather than the size. We argue that a training set which includes most temporal and spatial distribution information of the whole data will achieve a good performance even if the size of training set is limited. In order to capture the geometrical distribution characteristics of a given video collection, we propose four metrics for constructing/selecting an optimal training set, including salience, temporal dispersiveness, spatial dispersiveness, and diversity. Furthermore, based on these metrics, we propose a set of optimization rules to capture the most distribution information of the whole data using a training set with a given size. Experimental results demonstrate these rules are effective for training set construction in video semantic classification, and significantly outperform random training set selection.
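
The diversity criterion can be illustrated with greedy farthest-point sampling, a common stand-in for selecting a geometrically spread training set; the paper's four metrics (salience, temporal dispersiveness, spatial dispersiveness, diversity) are richer than this sketch, and all names here are illustrative.

```python
import numpy as np

def select_diverse_subset(features, budget):
    """Greedy farthest-point selection: repeatedly pick the sample that
    maximises the minimum distance to the already-selected training set."""
    selected = [0]                            # seed with the first sample
    dist = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < budget:
        nxt = int(np.argmax(dist))            # farthest from current selection
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

rng = np.random.default_rng(2)
# two well-separated clusters of (synthetic) shot features
feats = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
subset = select_diverse_subset(feats, budget=4)
print(subset)   # a diverse subset covers both clusters
```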

  5. Action video game playing is associated with improved visual sensitivity, but not alterations in visual sensory memory

    National Research Council Canada - National Science Library

    Appelbaum, L Gregory; Cain, Matthew S; Darling, Elise F; Mitroff, Stephen R

    2013-01-01

    .... These benefits are captured through a wide range of psychometric tasks and have led to the proposition that action video game experience may promote the ability to extract statistical evidence from sensory stimuli...

  6. Video y desarrollo rural

    Directory of Open Access Journals (Sweden)

    Fraser Colin

    2015-01-01

    Full Text Available The first rural video experiences took place in Peru and Mexico. The Peruvian project is known as CESPAC (Centro de Servicios de Pedagogía Audiovisual para la Capacitación). It was launched in the 1970s with external funding from the FAO. The Mexican project was named PRODERITH (Programa de Desarrollo Rural Integrado del Trópico Húmedo). Its rural video component was particularly successful at the grassroots level. The evaluation concluded that rural video, as a social communication system for development, is excellent and low-cost.

  7. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative inquiry, such as video ethnography, ethnovideo, performance documentation, anthropology and multimodal interaction analysis. That is why we put forward, half-jokingly at first, a Big Video manifesto to spur innovation in the Digital Humanities.

  8. Online video examination

    DEFF Research Database (Denmark)

    Qvist, Palle

    The Master programme in Problem-Based Learning in Engineering and Science, MPBL (www.mpbl.aau.dk), at Aalborg University, is an international programme offering formalized staff development. The programme is also offered in smaller parts as single subject courses (SSC). Passed single subject courses are accredited to the master programme. The programme is online, worldwide and on demand. It recruits students from all over the world. The programme is organized in accordance with the principles of the problem-based and project-based learning method used at Aalborg University, where students have large influence on their own teaching, learning and curriculum. The programme offers streamed videos in combination with other learning resources. It is a concept which offers video as pure presentation - video lectures - but also as an instructional tool which gives the students the possibility...

  9. Brains on video games.

    Science.gov (United States)

    Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A

    2011-11-18

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward.

  10. A low-light-level video recursive filtering technology based on the three-dimensional coefficients

    Science.gov (United States)

    Fu, Rongguo; Feng, Shu; Shen, Tianyu; Luo, Hao; Wei, Yifang; Yang, Qi

    2017-08-01

    Low-light-level video is an important means of observation under low-illumination conditions, but its SNR is low and the resulting image quality is poor, so noise reduction must be carried out. Low-light-level video noise mainly includes Gaussian noise, Poisson noise, impulse noise, fixed-pattern noise and dark-current noise. In order to remove the noise in low-light-level video effectively and improve its quality, this paper presents an improved time-domain recursive filtering algorithm with three-dimensional filtering coefficients. The algorithm exploits the temporal correlation of the video sequence. Using motion estimation, it adaptively adjusts the local-window filtering coefficients in space and time, applying different weighting coefficients to different pixels within the same frame. This reduces image trailing while maintaining a good noise reduction effect. Before noise reduction, a pretreatment based on a box filter is applied to reduce the complexity of the algorithm and improve its speed. To enhance the visual effect of low-light-level video, an image enhancement algorithm based on a guided image filter is used to sharpen edge details. Experimental results show that the hybrid algorithm can remove the noise of low-light-level video effectively, enhance edge features and improve the visual quality of the video.
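
A bare-bones version of motion-adaptive temporal recursive filtering might look like this. The threshold-based motion mask stands in for the paper's motion estimation, and all parameter values are illustrative.

```python
import numpy as np

def recursive_denoise(frames, alpha_static=0.9, alpha_motion=0.2, motion_thresh=25):
    """Temporal recursive filter with a per-pixel blending coefficient:
    strong averaging where the scene is static, weak averaging where the
    frame difference suggests motion (to avoid trailing). A simplified
    stand-in for a 3D-coefficient, motion-estimated filter."""
    out = [frames[0].astype(float)]
    for frame in frames[1:]:
        f = frame.astype(float)
        moving = np.abs(f - out[-1]) > motion_thresh        # crude motion mask
        alpha = np.where(moving, alpha_motion, alpha_static)
        out.append(alpha * out[-1] + (1 - alpha) * f)       # y_t = a*y_{t-1} + (1-a)*x_t
    return out

rng = np.random.default_rng(3)
clean = np.full((32, 32), 100.0)
noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(20)]
den = recursive_denoise(noisy)
print(np.std(noisy[-1] - clean), np.std(den[-1] - clean))  # noise std drops after filtering
```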

  11. Investigation of linguistic comprehension processing by capture software

    Directory of Open Access Journals (Sweden)

    Vera Wannmacher Pereira

    2015-01-01

    Full Text Available Among the many computer tools available for research, especially research on language, capture software is important for studying cognitive processes during activities performed with the computer as an electronic support. This article presents one such tool - the capture software SnagIt - which records video of the user's mouse movements during the linguistic comprehension process, enabling analysis of and reflection on the user's journey and thereby their cognitive processing. Two psycholinguistic studies developed at the Reference Center for Language Development (CELIN/FALE/PUCRS) used this capture software to examine the linguistic comprehension strategies applied by the subjects. These studies are presented for demonstration and explanation.

  12. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  13. User aware video streaming

    Science.gov (United States)

    Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy

    2015-03-01

    We describe the design of a video streaming system using adaptation to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the resolution sufficient under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming or MPEG-DASH. The client rate selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods which do not exploit viewing conditions.
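
The core calculation, deriving a sufficient resolution from viewing distance, can be sketched with simple geometry. The acuity constant and function names are assumptions for illustration, not the paper's visual model.

```python
import math

def sufficient_width(distance_m, display_width_m, acuity_cpd=30.0):
    """Smallest horizontal pixel count that still meets the eye's acuity
    limit at a given viewing distance. acuity_cpd is an assumed peak
    sensitivity (~30 cycles/degree for 20/20 vision)."""
    # visual angle subtended by the display, in degrees
    angle = 2 * math.degrees(math.atan(display_width_m / (2 * distance_m)))
    # Nyquist: 2 pixels per cycle
    return int(math.ceil(angle * acuity_cpd * 2))

# a 0.11 m wide phone screen: viewing from farther away needs fewer pixels,
# so the client can safely select a lower-resolution representation
for d in (0.2, 0.4, 0.8):
    print(d, sufficient_width(d, 0.11))
```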

  14. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding the behavior of a single target and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms that analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  15. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication engineering.

  16. CERN Video News

    CERN Multimedia

    2003-01-01

    From Monday you can see on the web the new edition of CERN's Video News. Thanks to a collaboration between the audiovisual teams at CERN and Fermilab, you can see a report made by the American laboratory. The clip concerns the LHC magnets that are being constructed at Fermilab. Also in the programme: the spectacular rotation of one of the ATLAS coils, the arrival at CERN of the first American magnet made at Brookhaven, the story of the discovery 20 years ago of the W and Z bosons at CERN. http://www.cern.ch/video or Bulletin web page.

  17. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Transmission Control Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
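
The benefit of layer-aware transport can be illustrated with a toy sender that fills the available path capacity with SVC layers in order of importance. Real CMT-SCTP involves sockets, streams and retransmission policy, none of which is modelled here; all numbers are illustrative.

```python
def schedule(layers, capacity):
    """Send lower (more important) SVC layers first; drop higher enhancement
    layers when the path capacity is exhausted - roughly the policy that
    per-stream transport makes possible, sketched without real sockets."""
    sent, used = [], 0
    for layer in sorted(layers, key=lambda l: l["id"]):   # base layer has id 0
        if used + layer["size"] <= capacity:
            sent.append(layer["id"])
            used += layer["size"]
    return sent

# hypothetical per-GOP layer sizes in kilobits
layers = [{"id": 0, "size": 300}, {"id": 1, "size": 400}, {"id": 2, "size": 600}]
print(schedule(layers, capacity=800))   # [0, 1] - the top layer is dropped
```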

  18. Video special effects editing in MPEG-2 compressed video

    OpenAIRE

    Fernando, WAC; Canagarajah, CN; Bull, David

    2000-01-01

    With the increase of digital technology in video production, several types of complex video special effects editing have begun to appear in video clips. In this paper we consider fade-out and fade-in special effects editing in MPEG-2 compressed video without full frame decompression and motion estimation. We estimated the DCT coefficients and use these coefficients together with the existing motion vectors to produce these special effects editing in compressed domain. Results show that both o...

  19. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  20. A chromosome conformation capture ordered sequence of the barley genome

    Czech Academy of Sciences Publication Activity Database

    Mascher, M.; Gundlach, H.; Himmelbach, A.; Beier, S.; Twardziok, S. O.; Wicker, T.; Šimková, Hana; Staňková, Helena; Vrána, Jan; Chan, S.; Munoz-Amatrian, M.; Houben, A.; Doležel, Jaroslav; Ayling, S.; Lonardi, S.; Mayer, K.F.X.; Zhang, G.; Braumann, I.; Spannagl, M.; Li, C.; Waugh, R.; Stein, N.

    2017-01-01

    Roč. 544, č. 7651 (2017), s. 427-433 ISSN 0028-0836 R&D Projects: GA MŠk(CZ) LO1204 Institutional support: RVO:61389030 Keywords : bacterial artificial chromosomes * inverted-repeat elements * complex-plant genomes * hi-c * environmental adaptation * ltr retrotransposons * structural variation * maize genome * software * database Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 40.137, year: 2016

  1. Outward-looking circular motion analysis of large image sequences.

    Science.gov (United States)

    Jiang, Guang; Wei, Yichen; Quan, Long; Tsui, Hung-tat; Shum, Heung Yeung

    2005-02-01

    This paper presents a novel and simple method of analyzing the motion of a large image sequence captured by a calibrated outward-looking video camera moving on a circular trajectory for large-scale environment applications. Previous circular motion algorithms mainly focus on inward-looking turntable-like setups. They are not suitable for outward-looking motion where the conic trajectory of corresponding points degenerates to straight lines. The circular motion of a calibrated camera essentially has only one unknown rotation angle for each frame. The motion recovery for the entire sequence computes only one fundamental matrix of a pair of frames to extract the angular motion of the pair using Laguerre's formula and then propagates the computation of the unknown rotation angles to the other frames by tracking one point over at least three frames. Finally, a maximum-likelihood estimation is developed for the optimization of the whole sequence. Extensive experiments demonstrate the validity of the method and the feasibility of the application in image-based rendering.

  2. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  3. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323-standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
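
Chroma-key foreground extraction of the kind used to facilitate object manipulation can be sketched as a nearest-colour rule. Production systems usually compute soft mattes in YUV or HSV rather than this hard RGB threshold; the key colour and tolerance are illustrative.

```python
import numpy as np

def chroma_key_mask(frame_rgb, key=(0, 255, 0), tol=80):
    """Boolean foreground mask: pixels far (in RGB distance) from the key
    colour are kept as foreground; pixels near it are treated as backdrop."""
    dist = np.linalg.norm(frame_rgb.astype(float) - np.array(key, float), axis=-1)
    return dist > tol

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[...] = (0, 255, 0)          # green backdrop
frame[1:3, 1:3] = (200, 50, 50)   # a red-ish foreground object
mask = chroma_key_mask(frame)
print(mask.sum())                  # 4 foreground pixels
```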

  4. Keys to Successful Interactive Storytelling: A Study of the Booming "Choose-Your-Own-Adventure" Video Game Industry

    Science.gov (United States)

    Tyndale, Eric; Ramsoomair, Franklin

    2016-01-01

    Video gaming has become a multi-billion dollar industry that continues to capture the hearts, minds and pocketbooks of millions of gamers who span all ages. Narrative and interactive games form part of this market. The popularity of tablet computers and the technological advances of video games have led to a renaissance in the genre for both youth…

  5. Fingerprint multicast in secure video streaming.

    Science.gov (United States)

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amounts of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes their performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of the video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.
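
Spread-spectrum fingerprint embedding and detection, the building block such multicast schemes assume, can be sketched as follows. This is a generic non-blind correlation detector (the distributor knows the host signal), not the paper's joint design/distribution scheme; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_coeffs, alpha = 8, 4096, 1.0

host = rng.normal(0, 10, n_coeffs)                            # host signal, e.g. DCT coefficients
fingerprints = rng.choice([-1.0, 1.0], (n_users, n_coeffs))   # one spreading sequence per user

marked = host + alpha * fingerprints[5]        # copy distributed to user 5
leaked = marked + rng.normal(0, 2, n_coeffs)   # redistributed copy with added noise

# non-blind correlation detector: subtract the known host, correlate with
# every user's sequence, and accuse the user with the largest correlation
corr = fingerprints @ (leaked - host) / n_coeffs
print(int(np.argmax(corr)))   # identifies user 5
```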

  6. Adaptive subband coding of full motion video

    Science.gov (United States)

    Sharifi, Kamran; Xiao, Leping; Leon-Garcia, Alberto

    1993-10-01

    In this paper a new algorithm for digital video coding is presented that is suitable for digital storage and video transmission applications in the range of 5 to 10 Mbps. The scheme is based on frame differencing and, unlike recent proposals, does not employ motion estimation and compensation. A novel adaptive grouping structure is used to segment the video sequence into groups of frames of variable sizes. Within each group, the frame difference is taken in a closed loop Differential Pulse Code Modulation (DPCM) structure and then decomposed into different frequency subbands. The important subbands are transformed using the Discrete Cosine Transform (DCT) and the resulting coefficients are adaptively quantized and runlength coded. The adaptation is based on the variance of sample values in each subband. To reduce the computation load, a very simple and efficient way has been used to estimate the variance of the subbands. It is shown that for many types of sequences, the performance of the proposed coder is comparable to that of coding methods which use motion parameters.
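
The closed-loop DPCM-plus-transform step can be sketched on a single 8x8 block. The paper additionally splits the difference signal into subbands and adapts quantization to subband variance; that part is omitted here, and the step size is illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis for transforming n x n blocks."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def code_frame(frame, prev_recon, step=8):
    """One closed-loop DPCM step: transform the frame difference, quantize,
    and reconstruct the way the decoder would, so encoder and decoder stay
    in sync. Single 8x8 block for brevity; no motion compensation."""
    C = dct_matrix()
    diff = frame - prev_recon
    coeffs = C @ diff @ C.T
    q = np.round(coeffs / step)                    # uniform quantizer indices
    recon = prev_recon + C.T @ (q * step) @ C      # decoder-side reconstruction
    return q, recon

rng = np.random.default_rng(5)
prev = rng.uniform(0, 255, (8, 8))
cur = prev + rng.normal(0, 3, (8, 8))              # small temporal change
q, recon = code_frame(cur, prev)
print(np.linalg.norm(recon - cur))   # quantization error energy, bounded by step/2 per coefficient
```

Because the transform is orthonormal, the reconstruction error energy equals the quantization error energy in the coefficient domain.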

  7. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... Grants and Funding Extramural Research Division of Extramural Science Programs Division of Extramural Activities Extramural Contacts NEI ... Amaurosis Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded ...

  8. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five ... was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis ...

  9. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Our Staff Rheumatology Specialty Centers You are here: Home / Patient Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video ... to take a more active role in your care. The information in these videos should not take ...

  10. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... will allow you to take a more active role in your care. The information in these videos ... Stategies to Increase your Level of Physical Activity Role of Body Weight in Osteoarthritis Educational Videos for ...

  11. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... here. Will You Support the Education of Arthritis Patients? Each year, over 1 million people visit this ... of Body Weight in Osteoarthritis Educational Videos for Patients Rheumatoid Arthritis Educational Video Series Psoriatic Arthritis 101 ...

  12. Videos & Tools: MedlinePlus

    Science.gov (United States)

    ... of this page: https://medlineplus.gov/videosandcooltools.html Videos & Tools To use the sharing features on this page, please enable JavaScript. Watch health videos on topics such as anatomy, body systems, and ...

  13. Health Videos: MedlinePlus

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/anatomyvideos.html.htm Health Videos To use the sharing features on this page, please enable JavaScript. These animated videos show the anatomy of body parts and organ ...

  14. Scanning laser video camera/microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  15. Astronomy Video Contest

    Science.gov (United States)

    McFarland, John

    2008-05-01

    During Galileo's lifetime his staunchest supporter was Johannes Kepler, Imperial Mathematician to the Holy Roman Emperor. Johannes Kepler will be in St. Louis to personally offer a tribute to Galileo. Set Galileo's astronomy discoveries to music and you get the newest song by the well-known a cappella group, THE CHROMATICS. The song, entitled "Shoulders of Giants", was written specifically for IYA-2009 and will be debuted at this conference. The song will also be used as a base to create a music video by synchronizing a person's own images to the song's lyrics and tempo. Thousands of people already do this for fun and post their videos on YouTube and other sites. The ASTRONOMY VIDEO CONTEST will be launched as a vehicle to excite, enthuse, and educate people about astronomy and science. It will be an annual event administered by the Johannes Kepler Project and will continue to foster the goals of IYA-2009 for years to come. During this presentation the basic categories, rules, and prizes for the Astronomy Video Contest will be covered, and finally the new song "Shoulders of Giants" by THE CHROMATICS will be unveiled.

  16. Provocative Video Scenarios

    DEFF Research Database (Denmark)

    Caglio, Agnese

    This paper presents the use of "provocative videos" as a tool to support and deepen findings from an ethnographic investigation on the theme of remote video communication. The videos acted as a resource to also investigate the potential for novel technologies supporting continuous connection between...

  17. Video Content Foraging

    NARCIS (Netherlands)

    van Houten, Ynze; Schuurman, Jan Gerrit; Verhagen, Pleunes Willem; Enser, Peter; Kompatsiaris, Yiannis; O’Connor, Noel E.; Smeaton, Alan F.; Smeulders, Arnold W.M.

    2004-01-01

    With information systems, the real design problem is not increased access to information, but greater efficiency in finding useful information. In our approach to video content browsing, we try to match the browsing environment with human information processing structures by applying ideas from

  18. Internet video search

    NARCIS (Netherlands)

    Snoek, C.G.M.; Smeulders, A.W.M.

    2011-01-01

    In this tutorial, we focus on the challenges in internet video search, present methods how to achieve state-of-the-art performance while maintaining efficient execution, and indicate how to obtain improvements in the near future. Moreover, we give an overview of the latest developments and future

  19. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Donate Resources Links Videos Podcasts Webinars For the Media For Clinicians For Policymakers For Family Caregivers Glossary Sign Up for Our Blog Subscribe to Blog Enter your email address to subscribe to this blog and receive notifications of new posts by email. Email Address CLOSE Home About ...

  20. Scalable Video Coding

    NARCIS (Netherlands)

    Choupani, R.

    2017-01-01

    With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when

  1. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  2. Video narrativer i sygeplejerskeuddannelsen

    DEFF Research Database (Denmark)

    Jensen, Inger

    2009-01-01

    The article offers some suggestions for how video narratives can be used in the nursing education programme as triggers that open up discussions and the development of meaningful attitudes towards other people. It also examines how teachers, in their didactic considerations, can draw on elements from theory about...

  3. Streaming-video produktion

    DEFF Research Database (Denmark)

    Grønkjær, Poul

    2004-01-01

    In connection with the research project Virtuelle Læringsformer og Læringsmiljøer (Virtual Learning Forms and Learning Environments), E-learning Lab at Aalborg University has carried out a series of practical experiments with streaming-video productions. The purpose of this article is to pass on these experiences. The article describes the entire production process, from idea to finished product, covering different types of presentations, dramaturgical considerations, and a concept sketch. Streaming-video technology is now so mature, and its audiovisual quality so satisfactory, that we can begin to focus on which content is well suited to being made available independently of time and place. The article closes with a list of references, including an overview of the streaming-video productions on which it is based.

  4. Characteristics of Instructional Videos

    Science.gov (United States)

    Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih

    2018-01-01

    Nowadays, video plays a significant role in education in terms of its integration into traditional classes, the principal delivery system of information in classes particularly in online courses as well as serving as a foundation of many blended classes. Hence, education is adopting a modern approach of instruction with the target of moving away…

  5. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available Home About Donate Search Search What Is It Definition Pediatric Palliative Care Disease Types FAQ Handout for Patients and Families Is It Right for You How to Get It Talk to your Doctor Find a Provider Meet the Team Blog Articles & Stories News Resources Links Videos Podcasts ...

  6. Mobiele video voor bedrijfscommunicatie

    NARCIS (Netherlands)

    Niamut, O.A.; Weerdt, C.A. van der; Havekes, A.

    2009-01-01

    The Penta Mobilé project ran from June to November 2009 and aimed to map the possibilities of mobile video for business-communication applications. The research was carried out together with five ('Penta') partners: Business Tales, Condor Digital, European Communication Projects

  7. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Surgery What is acoustic neuroma Diagnosing Symptoms Side effects ... Groups Is a support group for me? Find a Group Upcoming Events Video Library Photo Gallery One-on-One Support ANetwork Peer Support Program Community Connections Overview Find a Meeting ...

  8. Clinical sequencing: is WGS the better WES?

    Science.gov (United States)

    Meienberg, Janine; Bruggmann, Rémy; Oexle, Konrad; Matyas, Gabor

    2016-03-01

    Current clinical next-generation sequencing is done by using gene panels and exome analysis, both of which involve selective capturing of target regions. However, capturing has limitations in sufficiently covering coding exons, especially GC-rich regions. We compared whole exome sequencing (WES) with the most recent PCR-free whole genome sequencing (WGS), showing that only the latter is able to provide hitherto unprecedented complete coverage of the coding region of the genome. Thus, from a clinical/technical point of view, WGS is the better WES so that capturing is no longer necessary for the most comprehensive genomic testing of Mendelian disorders.
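
    The coverage comparison at the heart of this study reduces to a simple per-base completeness metric; a minimal sketch (the 20x depth threshold is a common illustrative choice, not necessarily the one used in the study):

```python
def covered_fraction(depths, min_depth=20):
    # Fraction of positions in a target region (e.g. the coding exons of a
    # gene) whose read depth meets the threshold -- the kind of metric used
    # to compare WES and WGS completeness.
    if not depths:
        return 0.0
    return sum(d >= min_depth for d in depths) / len(depths)
```

    Capture-based WES tends to show dips in this fraction over GC-rich exons, while PCR-free WGS keeps it near 1.0 across the coding region, which is the study's central finding.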

  9. Inland capture fisheries.

    Science.gov (United States)

    Welcomme, Robin L; Cowx, Ian G; Coates, David; Béné, Christophe; Funge-Smith, Simon; Halls, Ashley; Lorenzen, Kai

    2010-09-27

    The reported annual yield from inland capture fisheries in 2008 was over 10 million tonnes, although real catches are probably considerably higher than this. Inland fisheries are extremely complex, and in many cases poorly understood. The numerous water bodies and small rivers are inhabited by a wide range of species and several types of fisher community with diversified livelihood strategies for whom inland fisheries are extremely important. Many drivers affect the fisheries, including internal fisheries management practices. There are also many drivers from outside the fishery that influence the state and functioning of the environment as well as the social and economic framework within which the fishery is pursued. The drivers affecting the various types of inland water, rivers, lakes, reservoirs and wetlands may differ, particularly with regard to ecosystem function. Many of these depend on land-use practices and demand for water which conflict with the sustainability of the fishery. Climate change is also exacerbating many of these factors. The future of inland fisheries varies between continents. In Asia and Africa the resources are very intensely exploited and there is probably little room for expansion; it is here that resources are most at risk. Inland fisheries are less heavily exploited in South and Central America, and in the North and South temperate zones inland fisheries are mostly oriented to recreation rather than food production.

  10. Capture-recapture methodology

    Science.gov (United States)

    Gould, William R.; Kendall, William L.

    2013-01-01

    Capture-recapture methods were initially developed to estimate human population abundance, but since that time have seen widespread use for fish and wildlife populations to estimate and model various parameters of population, metapopulation, and disease dynamics. Repeated sampling of marked animals provides information for estimating abundance and tracking the fate of individuals in the face of imperfect detection. Mark types have evolved from clipping or tagging to use of noninvasive methods such as photography of natural markings and DNA collection from feces. Survival estimation has been emphasized more recently as have transition probabilities between life history states and/or geographical locations, even where some states are unobservable or uncertain. Sophisticated software has been developed to handle highly parameterized models, including environmental and individual covariates, to conduct model selection, and to employ various estimation approaches such as maximum likelihood and Bayesian approaches. With these user-friendly tools, complex statistical models for studying population dynamics have been made available to ecologists. The future will include a continuing trend toward integrating data types, both for tagged and untagged individuals, to produce more precise and robust population models.
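
    A worked example of the core abundance estimate behind two-sample capture-recapture: Chapman's bias-corrected form of the classic Lincoln-Petersen estimator (a standard textbook formula; variable names are ours):

```python
def chapman_estimate(n1, n2, m):
    # Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    # n1 animals are marked in the first sample, n2 are caught in the
    # second sample, and m of those were already marked (recaptures).
    if m < 0 or m > min(n1, n2):
        raise ValueError("recaptures must satisfy 0 <= m <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

    For example, marking 100 animals and recapturing 20 of them in a second sample of 100 gives an estimate of roughly 485 animals. The sophisticated software mentioned in the abstract generalizes this idea to many samples, imperfect detection, and covariates.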

  12. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM
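
    For orientation, the simplest form of the techniques this field builds on is least-significant-bit (LSB) embedding applied per frame; a minimal sketch (a textbook baseline, not the toolkit the paper describes):

```python
def embed_bits(pixels, bits):
    # Textbook LSB embedding: overwrite each pixel's least-significant
    # bit with one payload bit. Changes each pixel value by at most 1,
    # which is visually imperceptible.
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | (b & 1)
    return out

def extract_bits(pixels, n):
    # Recover the first n hidden bits from the stego pixels.
    return [p & 1 for p in pixels[:n]]
```

    Practical video steganography systems must additionally survive lossy compression, which plain LSB embedding does not; that robustness gap is one of the key issues such a toolkit has to address.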

  13. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos

    Science.gov (United States)

    2016-01-01

    Background The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos’ overall presence on the platform. Objective To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform’s impact on consumer attitudes and behaviors and inform regulations. Methods Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. Results As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. Conclusions YouTube is a major
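
    The keyword-rule content coding of video metadata described above can be sketched as follows (the keyword lists here are hypothetical placeholders for illustration, not the study's actual coding scheme):

```python
# Hypothetical keyword rules; each category fires if any of its
# keywords appears in the video's title/description text.
CATEGORY_KEYWORDS = {
    "health":    ["health", "lungs", "safer"],
    "cessation": ["quit smoking", "stop smoking", "cessation"],
    "promotion": ["discount", "coupon", "free shipping"],
}

def code_metadata(text):
    # Assign content categories to one video's metadata text.
    lowered = text.lower()
    return {cat for cat, kws in CATEGORY_KEYWORDS.items()
            if any(kw in lowered for kw in kws)}
```

    Running such rules over the metadata of every crawled video yields the category frequencies (health, safety, cessation, brand mentions, and so on) that the study reports.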

  14. Functionalization of Gold-plasmonic Devices for Protein Capture

    KAUST Repository

    Battista, E.

    2017-07-13

    Here we propose a straightforward method to functionalize gold nanostructures by using an appropriate peptide sequence already selected toward gold surfaces and derivatized with another sequence for the capture of a molecular target. Large scale 3D-plasmonic devices with different nanostructures were fabricated by means of direct nanoimprint technique. The present work is aimed to address different innovative aspects related to the fabrication of large-area 3D plasmonic arrays, their direct and easy functionalization with capture elements, and their spectroscopic verifications through enhanced Raman and enhanced fluorescence techniques.

  15. Chromosome Conformation Capture Carbon Copy (5C) in Budding Yeast.

    Science.gov (United States)

    Belton, Jon-Matthew; Dekker, Job

    2015-06-01

    Chromosome conformation capture carbon copy (5C) is a high-throughput method for detecting ligation products of interest in a chromosome conformation capture (3C) library. 5C uses ligation-mediated amplification (LMA) to generate carbon copies of 3C ligation product junctions using single-stranded oligonucleotide probes. This procedure produces a 5C library of short DNA molecules which represent the interactions between the corresponding restriction fragments. The 5C library can be amplified using universal primers containing the Illumina paired-end adaptor sequences for subsequent high-throughput sequencing. © 2015 Cold Spring Harbor Laboratory Press.

  16. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.
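
    The chunk-by-chunk bitrate request described above is often driven by a simple throughput rule; a minimal sketch (the ladder values and safety margin are illustrative assumptions, not any provider's actual algorithm):

```python
def choose_bitrate(ladder_kbps, est_throughput_kbps, safety=0.8):
    # Rate-based rule: pick the highest rung of the bitrate ladder that
    # fits within a safety margin of the estimated network throughput;
    # fall back to the lowest rung (risking a visible quality drop)
    # rather than stall and rebuffer.
    budget = safety * est_throughput_kbps
    affordable = [b for b in sorted(ladder_kbps) if b <= budget]
    return affordable[-1] if affordable else min(ladder_kbps)
```

    The QoE trade-off the study measures lives inside this rule: a conservative safety margin avoids rebuffering (which subjects always found unpleasant) at the cost of more frequent or lower bitrate selections (which subjects tolerated on low-complexity content).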

  17. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.
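
    The vertical-histogram partitioning step can be sketched as follows (a simplified stand-in: the `min_ratio` threshold and the split rule are our assumptions, not the authors' exact procedure):

```python
def vertical_histogram(mask):
    # Column-wise count of foreground pixels in a binary object mask.
    return [sum(col) for col in zip(*mask)]

def partition_columns(mask, min_ratio=0.3):
    # Split the object's column range into segments wherever the vertical
    # histogram drops below min_ratio of its peak. Each resulting segment
    # can then be validated separately, e.g. by checking whether its
    # orientation points toward a light source (a shadow cue at night).
    hist = vertical_histogram(mask)
    peak = max(hist, default=0)
    segments, start = [], None
    for x, h in enumerate(hist):
        if peak and h >= min_ratio * peak:
            if start is None:
                start = x
        elif start is not None:
            segments.append((start, x - 1))
            start = None
    if start is not None:
        segments.append((start, len(hist) - 1))
    return segments
```

    The intuition is that a person and their attached cast shadow form one connected blob, but the shadow's columns contain far fewer foreground pixels, so the histogram dip marks where to cut.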

  18. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... NEI YouTube Videos: Amblyopia NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia Animations Blindness Cataract Convergence Insufficiency Diabetic Eye Disease Dilated Eye Exam Dry Eye For Kids Glaucoma ...

  19. CERN Video News on line

    CERN Multimedia

    2003-01-01

    The latest CERN video news is online. In this issue: an interview with the Director General and reports on the new home for the DELPHI barrel and the CERN firemen's spectacular training programme. There's also a vintage video news clip from 1954. See: www.cern.ch/video or the Bulletin web page.

  20. We All Stream for Video

    Science.gov (United States)

    Technology & Learning, 2008

    2008-01-01

    More than ever, teachers are using digital video to enhance their lessons. In fact, the number of schools using video streaming increased from 30 percent to 45 percent between 2004 and 2006, according to Market Data Retrieval. Why the popularity? For starters, video-streaming products are easy to use. They allow teachers to punctuate lessons with…