WorldWideScience

Sample records for video processing applications

  1. Memory-centric video processing

    NARCIS (Netherlands)

    Beric, A.; Meerbergen, van J.; Haan, de G.; Sethuraman, R.

    2008-01-01

    This work presents a domain-specific memory subsystem based on a two-level memory hierarchy. It targets the application domain of video post-processing applications including video enhancement and format conversion. These applications are based on motion compensation and/or broad class of content

  2. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
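
    The topic-based publish/subscribe routing described above can be pictured with a short sketch. This is a minimal illustration only; the class and method names (MessageBroker, subscribe, publish) and the topic strings are assumptions, not the authors' actual API.

        # Minimal topic-based publish/subscribe sketch (illustrative only; names are
        # assumptions, not the architecture's real API).
        from collections import defaultdict
        from typing import Any, Callable


        class MessageBroker:
            """Routes messages from publishers to subscribers interested in a topic."""

            def __init__(self) -> None:
                self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

            def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
                # Register a callback for every message published to `topic`.
                self._subscribers[topic].append(callback)

            def publish(self, topic: str, message: Any) -> None:
                # Deliver the message only to subscribers of this topic.
                for callback in self._subscribers[topic]:
                    callback(message)


        if __name__ == "__main__":
            broker = MessageBroker()
            broker.subscribe("frames.acquired", lambda m: print("processing", m))
            broker.publish("frames.acquired", {"frame_id": 1})   # routed to the subscriber
            broker.publish("frames.dropped", {"frame_id": 2})    # no subscriber, ignored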

  3. Multimedia applications in nursing curriculum: the process of producing streaming videos for medication administration skills.

    Science.gov (United States)

    Sowan, Azizeh K

    2014-07-01

    Streaming videos (SVs) are commonly used multimedia applications in clinical health education. However, there are several negative aspects related to the production and delivery of SVs. Only a few published studies have included sufficient descriptions of the videos, the production process and the design innovations. This paper describes the production of innovative SVs for medication administration skills for undergraduate nursing students at a public university in Jordan and focuses on the ethical and cultural issues in producing this type of learning resource. The curriculum development committee approved the modification of educational techniques for medication administration procedures to include SVs within an interactive web-based learning environment. The production process of the videos adhered to established principles for "protecting patients' rights when filming and recording" and included preproduction, production and postproduction phases. Medication administration skills were videotaped in a skills laboratory, where they are usually taught to students, and also in a hospital setting with real patients. The lab videos included critical points and do's and don'ts, and the hospital videos fostered real-world practices. The videos were kept to a reasonable length to avoid technical difficulties in access. Eight SVs were produced, covering different types of medication administration skills. The production of the SVs required the collaborative efforts of experts in IT and multimedia, nursing and informatics educators, and nursing care providers. Results showed that the videos were well received by students and by the instructors who taught the course. The process of producing the videos in this project can be used as a valuable framework for schools considering utilizing multimedia applications in teaching. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Video motion detection for physical security applications

    International Nuclear Information System (INIS)

    Matter, J.C.

    1990-01-01

    Physical security specialists have been attracted to the concept of video motion detection for several years. Claimed potential advantages included additional benefit from existing video surveillance systems, automatic detection, improved performance compared to human observers, and cost-effectiveness. In recent years, significant advances in dedicated image-processing hardware and in image analysis algorithms and software have accelerated the successful application of video motion detection systems to a variety of physical security applications. Early video motion detectors (VMDs) were useful for interior applications of volumetric sensing. Success depended on having a relatively well-controlled environment. Attempts to use these systems outdoors frequently resulted in an unacceptable number of nuisance alarms. Currently, Sandia National Laboratories (SNL) is developing several advanced systems that employ image-processing techniques for a broader set of safeguards and security applications. The Target Cueing and Tracking System (TCATS), the Video Imaging System for Detection, Tracking, and Assessment (VISDTA), the Linear Infrared Scanning Array (LISA), the Mobile Intrusion Detection and Assessment System (MIDAS), and the Visual Artificially Intelligent Surveillance (VAIS) systems are described briefly.

  5. Adaptive modeling of sky for video processing and coding applications

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    Video content analysis for still and moving images can be used for various applications, such as high-level semantic-driven operations or pixel-level content-dependent image manipulation. Within video content analysis, sky regions of an image form visually important objects, for which interesting

  6. Current and Emerging Topics in Sports Video Processing

    NARCIS (Netherlands)

    Xinguo, Yu; Farin, D.S.

    2005-01-01

    Sports video processing is an interesting research topic, since the clearly defined game rules in sports provide rich domain knowledge for analysis. Moreover, many specialized applications for sports video processing are emerging. This paper gives an overview of

  7. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  8. Video sensor architecture for surveillance applications.

    Science.gov (United States)

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  9. Video Sensor Architecture for Surveillance Applications

    Directory of Open Access Journals (Sweden)

    José E. Simó

    2012-02-01

    Full Text Available This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  10. Video watermarking for mobile phone applications

    Science.gov (United States)

    Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.

    2005-08-01

    Nowadays, alongside the traditional voice signal, music, video, and 3D characters tend to become common data to be run, stored and/or processed on mobile phones. Hence, protecting the related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are turned into practice when adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low-rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.

  11. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. The processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing and comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat.
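
    Two of the analog operations listed above, image ratioing and contrast stretching, have direct digital counterparts. The NumPy sketch below is a modern, hypothetical illustration of both; it is not part of the 1976 video system and the percentile choices are arbitrary.

        # Digital counterparts of two operations the analog video system performed:
        # band ratioing and a linear contrast stretch (illustrative sketch only).
        import numpy as np


        def band_ratio(band_a: np.ndarray, band_b: np.ndarray, eps: float = 1e-6) -> np.ndarray:
            """Pixel-wise ratio of two co-registered image bands."""
            return band_a.astype(np.float64) / (band_b.astype(np.float64) + eps)


        def contrast_stretch(img: np.ndarray, low_pct: float = 2, high_pct: float = 98) -> np.ndarray:
            """Linearly stretch intensities between two percentiles to the 0-255 range."""
            lo, hi = np.percentile(img, [low_pct, high_pct])
            stretched = np.clip((img - lo) / (hi - lo + 1e-6), 0.0, 1.0)
            return (stretched * 255).astype(np.uint8)


        if __name__ == "__main__":
            red = np.random.randint(0, 256, (64, 64))
            nir = np.random.randint(1, 256, (64, 64))
            ratio = band_ratio(nir, red)             # e.g. a vegetation-sensitive ratio
            print(contrast_stretch(ratio).dtype)     # uint8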

  12. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and the compression standards. Accurate knowledge about the characteristics of the display and the video signal can be utilized to develop advanced...... algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems; first, designing algorithms that improve the visual quality of perceived image and video and reduce power consumption...... on local LED-LCD backlight. Second, removing the digital video codec artifacts such as blocking and ringing artifacts by post-processing algorithms. A novel algorithm based on image features with optimal balance between visual quality and power consumption was developed. In addition, to remove flickering......

  13. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output which can easily be accessed by everyone via a web interface (with the browser of your choice) and can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  14. Intelligent control for scalable video processing

    NARCIS (Netherlands)

    Wüst, C.C.

    2006-01-01

    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This

  15. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.
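
    As a rough illustration of the foveation-filtering idea only (not the authors' extended foveation filter), the sketch below blends each frame with a blurred copy according to a saliency map, so that salient regions stay sharp while non-salient regions are smoothed. The saliency map, blur strength and sizes are all assumptions.

        # Saliency-guided smoothing in the spirit of foveation filtering
        # (illustrative only; not the paper's filter).
        import numpy as np
        from scipy.ndimage import gaussian_filter


        def foveate(frame: np.ndarray, saliency: np.ndarray, sigma: float = 3.0) -> np.ndarray:
            """Blend original and blurred frame; saliency in [0, 1], 1 = keep sharp."""
            blurred = gaussian_filter(frame.astype(np.float64), sigma=sigma)
            out = saliency * frame + (1.0 - saliency) * blurred
            return np.clip(out, 0, 255).astype(np.uint8)


        if __name__ == "__main__":
            frame = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
            saliency = np.zeros((240, 320))
            saliency[80:160, 100:220] = 1.0        # hypothetical salient region
            print(foveate(frame, saliency).shape)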

  16. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly in the direction of intelligent applications, and the complexity of the algorithms poses a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture is used to integrate image defogging, image stabilization and image enhancement into an organic whole, with good real-time behaviour and superior performance. It overcomes the defects of traditional video processing systems, such as simple and single-purpose functionality, and addresses video applications such as security monitoring. The system can give full play to the effectiveness of video monitoring and improve enterprise economic benefits.

  17. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  18. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced, complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.
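
    The digitize-compress-authenticate sequence described above can be sketched as follows. JPEG compression and an HMAC-SHA-256 tag are assumptions chosen for illustration; the paper does not specify these particular algorithms or the key handling shown.

        # Sketch of the compress-then-authenticate idea: the compressed frame is
        # smaller, so computing and verifying the authentication tag is cheaper.
        # JPEG and HMAC-SHA-256 are illustrative choices, not the paper's algorithms.
        import hashlib
        import hmac
        import io

        from PIL import Image  # pip install pillow


        def compress_frame(frame: Image.Image, quality: int = 75) -> bytes:
            buf = io.BytesIO()
            frame.save(buf, format="JPEG", quality=quality)  # lossy compression step
            return buf.getvalue()


        def authenticate(data: bytes, key: bytes) -> bytes:
            return hmac.new(key, data, hashlib.sha256).digest()  # authentication tag


        def verify(data: bytes, key: bytes, tag: bytes) -> bool:
            return hmac.compare_digest(authenticate(data, key), tag)


        if __name__ == "__main__":
            key = b"shared-surveillance-key"                  # hypothetical shared secret
            frame = Image.new("RGB", (320, 240), color=(30, 60, 90))
            compressed = compress_frame(frame)
            tag = authenticate(compressed, key)
            print(len(compressed), verify(compressed, key, tag))  # size in bytes, True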

  19. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its indexing feature. As the use of video cameras has greatly increased in recent years, face recognition is a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system is very efficient in processing real-life videos, and it is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
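
    As a rough sketch of the subspace-reconstruction idea, the code below uses ordinary principal component analysis rather than the paper's fuzzy PCA, projecting a partially occluded face vector onto a face subspace and mapping it back. The training data, sizes and component count are hypothetical; a full method would estimate the coefficients from the unoccluded pixels only.

        # Simplified illustration of subspace-based face reconstruction. The paper
        # uses fuzzy PCA (FPCA); plain PCA is substituted here to keep it short.
        import numpy as np


        def fit_pca(faces: np.ndarray, n_components: int = 20):
            """faces: (n_samples, n_pixels) matrix of vectorized training faces."""
            mean = faces.mean(axis=0)
            centered = faces - mean
            # Eigenfaces = right singular vectors of the centered data matrix.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return mean, vt[:n_components]


        def reconstruct(face: np.ndarray, mean: np.ndarray, components: np.ndarray) -> np.ndarray:
            """Project an (occluded) face onto the subspace and map it back."""
            coeffs = components @ (face - mean)
            return mean + components.T @ coeffs


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            train = rng.normal(size=(100, 32 * 32))          # hypothetical training faces
            mean, comps = fit_pca(train)
            occluded = train[0].copy()
            occluded[:200] = 0.0                             # simulate an occluded region
            restored = reconstruct(occluded, mean, comps)
            print(restored.shape)                            # (1024,)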

  20. DAVID: A new video motion sensor for outdoor perimeter applications

    International Nuclear Information System (INIS)

    Alexander, J.C.

    1986-01-01

    To be effective, a perimeter intrusion detection system must comprise both sensor and rapid assessment components. The use of closed circuit television (CCTV) to provide the rapid assessment capability makes possible the use of video motion detection (VMD) processing as a system sensor component. Despite its conceptual appeal, video motion detection has not been widely used in outdoor perimeter systems because of an inability to discriminate between genuine intrusions and numerous environmental effects such as cloud shadows, wind motion, reflections, precipitation, etc. The result has been an unacceptably high false alarm rate and operator workload. DAVID (Digital Automatic Video Intrusion Detector) utilizes new digital signal processing techniques to achieve a dramatic improvement in discrimination performance, thereby making video motion detection practical for outdoor applications. This paper begins with a discussion of the key considerations in implementing an outdoor video intrusion detection system, followed by a description of the DAVID design in light of these considerations.
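
    The discrimination problem starts from simple frame differencing of the kind sketched below; DAVID's actual digital signal processing techniques are not described in the abstract, so the thresholds and pixel-count filter shown are purely illustrative.

        # Basic frame-differencing video motion detection (illustrative only; it
        # does not reproduce DAVID's discrimination techniques).
        import numpy as np


        def motion_mask(prev: np.ndarray, curr: np.ndarray, threshold: int = 25) -> np.ndarray:
            """Boolean mask of pixels whose greyscale change exceeds the threshold."""
            diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
            return diff > threshold


        def alarm(prev: np.ndarray, curr: np.ndarray, min_pixels: int = 500) -> bool:
            """Alarm only if enough pixels changed, a crude nuisance-alarm filter."""
            return int(motion_mask(prev, curr).sum()) >= min_pixels


        if __name__ == "__main__":
            a = np.zeros((240, 320), dtype=np.uint8)
            b = a.copy()
            b[100:140, 150:200] = 200        # simulated intruder region
            print(alarm(a, b))               # True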

  1. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Full Text Available Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is comprehensive and well-established open source software that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.
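
    A minimal sketch of programmatic, batch use of FFmpeg's command line from Python is shown below. The folder names and the codec/quality settings are hypothetical, and the ffmpeg binary is assumed to be installed and on the PATH.

        # Batch-transcode a folder of videos with FFmpeg's command line interface.
        # Paths, codec and quality choices are illustrative assumptions.
        import subprocess
        from pathlib import Path


        def transcode(src: Path, dst: Path) -> None:
            cmd = [
                "ffmpeg", "-y",            # overwrite output if it exists
                "-i", str(src),            # input file
                "-c:v", "libx264",         # H.264 video
                "-crf", "23",              # constant-quality encoding
                "-c:a", "aac",             # AAC audio
                str(dst),
            ]
            subprocess.run(cmd, check=True)


        if __name__ == "__main__":
            in_dir, out_dir = Path("masters"), Path("access_copies")
            out_dir.mkdir(exist_ok=True)
            for src in in_dir.glob("*.mov"):
                transcode(src, out_dir / (src.stem + ".mp4"))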

  2. 47 CFR 76.41 - Franchise application process.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Franchise application process. 76.41 Section 76... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Franchise Applications § 76.41 Franchise application process. (a) Definition. Competitive franchise applicant. For the purpose of this section, an applicant...

  3. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters....

  4. Improving Video Generation for Multi-functional Applications

    OpenAIRE

    Kratzwald, Bernhard; Huang, Zhiwu; Paudel, Danda Pani; Dinesh, Acharya; Van Gool, Luc

    2017-01-01

    In this paper, we aim to improve the state-of-the-art video generative adversarial networks (GANs) with a view towards multi-functional applications. Our improved video GAN model does not separate foreground from background nor dynamic from static patterns, but learns to generate the entire video clip conjointly. Our model can thus be trained to generate - and learn from - a broad set of videos with no restriction. This is achieved by designing a robust one-stream video generation architectur...

  5. Handbook of video databases design and applications

    CERN Document Server

    Furht, Borko

    2003-01-01

    INTRODUCTION: Introduction to Video Databases (Oge Marques and Borko Furht). VIDEO MODELING AND REPRESENTATION: Modeling Video Using Input/Output Markov Models with Application to Multi-Modal Event Detection (Ashutosh Garg, Milind R. Naphade, and Thomas S. Huang); Statistical Models of Video Structure and Semantics (Nuno Vasconcelos); Flavor: A Language for Media Representation (Alexandros Eleftheriadis and Danny Hong); Integrating Domain Knowledge and Visual Evidence to Support Highlight Detection in Sports Videos (Juergen Assfalg, Marco Bertini, Carlo Colombo, and Alberto Del Bimbo); A Generic Event Model and Sports Vid

  6. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  7. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator, used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOEs) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as the binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the probability of assessment (PA) can be determined as a function of a variety of conditions or assumptions. The PA used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.

  8. Video over DSL with LDGM Codes for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2016-05-01

    Full Text Available Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
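
    At its core, an LDGM code produces each parity packet as the XOR of a small, sparse subset of source packets, which is what keeps encoding computationally cheap. The toy sketch below illustrates only that encoding step; the packet sizes, degree and seeding scheme are arbitrary assumptions and no decoder is shown.

        # Toy LDGM-style encoder: each parity packet is the XOR of a small random
        # subset of source packets (sparse generator matrix). Sizes are arbitrary.
        import random


        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))


        def ldgm_encode(source: list[bytes], n_parity: int, degree: int = 3, seed: int = 1):
            """Return parity packets plus, for each, the indices of the XORed sources."""
            rng = random.Random(seed)      # receiver can rebuild the same matrix from the seed
            parity, rows = [], []
            for _ in range(n_parity):
                row = rng.sample(range(len(source)), degree)
                acc = source[row[0]]
                for idx in row[1:]:
                    acc = xor_bytes(acc, source[idx])
                parity.append(acc)
                rows.append(row)
            return parity, rows


        if __name__ == "__main__":
            packets = [bytes([i]) * 8 for i in range(10)]   # ten 8-byte source packets
            parity, rows = ldgm_encode(packets, n_parity=4)
            print(rows[0], parity[0].hex())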

  9. PROTOTYPE VIDEO EDITOR USING DIRECTX AND DIRECTSHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Full Text Available Technological development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn gives rise to the need for an editor application. To address this problem, this paper describes the process of building a simple application for video editing. The development uses programming techniques often applied in multimedia applications, especially video. The first part of the application deals with video file compression and decompression, followed by the editing of the digital video file. Furthermore, the application is equipped with the facilities needed for the editing process. The application is built with Microsoft Visual C++ and DirectX technology, particularly DirectShow. It provides basic facilities that support the editing of a digital video file and produces an AVI format file once editing is finished. Testing showed that the application can 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats, although the 'cut' and 'insert' operations can only be performed in a fixed order. The application also provides transition effects for each clip, and finally saves the newly edited video file in AVI format. Abstract in Bahasa Indonesia: Technological developments have given people the opportunity to capture important moments on video. Producing good digital video also requires a good editing process, and editing digital video requires an editor program. Based on this problem, this research builds a simple editor prototype for digital video. The application is built using programming techniques from the multimedia field, particularly video. The design of the application begins with the formation

  10. Video Recording and the Research Process

    Science.gov (United States)

    Leung, Constant; Hawkins, Margaret R.

    2011-01-01

    This is a two-part discussion. Part 1 is entitled "English Language Learning in Subject Lessons", and Part 2 is titled "Video as a Research Tool/Counterpoint". Working with different research concerns, the authors attempt to draw attention to a set of methodological and theoretical issues that have emerged in the research process using video data.…

  11. Providing Memory Management Abstraction for Self-Reconfigurable Video Processing Platforms

    Directory of Open Access Journals (Sweden)

    Kurt Franz Ackermann

    2009-01-01

    Full Text Available This paper presents a concept for an SDRAM controller targeting video processing platforms with dynamically reconfigurable processing units (RPUs. A priority-arbitration algorithm provides the required QoS and supports high bit-rate data streaming of multiple clients. Conforming to common video data structures the controller organizes the memory in partitions, frames, lines, and pixels. The raised level of abstraction drastically reduces the complexity of clients' addressing logic. Its uniform interface structure facilitates instantiations in systems with various clients. In addition to SDRAM controllers for regular applications, special demands of reconfigurable platforms have to be satisfied. The aim of this work is to minimize the number of required bus macros leading to relaxed place and route constraints and reducing the number of critical design paths. A suitable interface protocol is presented, and fundamental implementation issues are outlined.
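
    The partition/frame/line/pixel organization can be pictured as a simple address computation. The field sizes and bytes-per-pixel below are hypothetical defaults chosen for illustration, not the parameters of the paper's controller.

        # Hypothetical address computation for a memory organized in partitions,
        # frames, lines and pixels (the real controller's parameters differ).
        def pixel_address(partition: int, frame: int, line: int, pixel: int,
                          frames_per_partition: int = 4,
                          lines_per_frame: int = 576,
                          pixels_per_line: int = 720,
                          bytes_per_pixel: int = 2) -> int:
            frame_bytes = lines_per_frame * pixels_per_line * bytes_per_pixel
            partition_bytes = frames_per_partition * frame_bytes
            return (partition * partition_bytes
                    + frame * frame_bytes
                    + line * pixels_per_line * bytes_per_pixel
                    + pixel * bytes_per_pixel)


        if __name__ == "__main__":
            # Clients address video symbolically; the controller derives the SDRAM address.
            print(hex(pixel_address(partition=1, frame=2, line=10, pixel=5)))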

  12. Possibilities of Applying Video Surveillance and other ICT Tools and Services in the Production Process

    Directory of Open Access Journals (Sweden)

    Adis Rahmanović

    2018-02-01

    Full Text Available The paper presents the possibilities of applying video surveillance and other ICT tools and services in the production process. The first part of the paper presents the video surveillance control system and the opportunity to apply video surveillance for the security of employees and assets. In the second part of the paper, an analysis of the system for controlling production is given, followed by video surveillance of a working excavator. The next part of the paper presents the integration of video surveillance with the accompanying tools. At the end of the paper, suggestions are also given for further work in the field of data protection and cryptography in the use of video surveillance.

  13. Researching on the process of remote sensing video imagery

    Science.gov (United States)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Unmanned air vehicle remotely-sensed imagery at low altitude has the advantages of higher resolution, easy shooting, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of the limitations of the shooting conditions, the video images are unstable, the targets move fast, and the shooting background is complex, so it is difficult to process the video images in this situation. In other fields, especially in the field of computer vision, research on video images is more extensive, which is very helpful for processing remotely-sensed imagery at low altitude. Based on this, this paper analyzes and summarizes a large amount of video image processing work in different fields, including research purposes, data sources, and the pros and cons of the technology. Meanwhile, this paper explores the technical methods most suitable for low-altitude video image processing in remote sensing.

  14. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  15. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  16. Impact of different cloud deployments on real-time video applications for mobile video cloud users

    Science.gov (United States)

    Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos

    2015-02-01

    The latest trend to access mobile cloud services through wireless network connectivity has amplified globally among both entrepreneurs and home end users. Although existing public cloud service vendors such as Google, Microsoft Azure etc. are providing on-demand cloud services at affordable cost for mobile users, there are still a number of challenges to achieving high-quality mobile cloud based video applications, especially due to the bandwidth-constrained and error-prone mobile network connectivity, which is the communication bottleneck for end-to-end video delivery. In addition, existing accessible cloud networking architectures are different in terms of their implementation, services, resources, storage, pricing, support and so on, and these differences have a varied impact on the performance of cloud-based real-time video applications. Nevertheless, these challenges and impacts have not been thoroughly investigated in the literature. In our previous work, we implemented a mobile cloud network model that integrates localized and decentralized cloudlets (mini-clouds) and wireless mesh networks. In this paper, we deploy a real-time framework consisting of various existing Internet cloud networking architectures (Google Cloud, Microsoft Azure and Eucalyptus Cloud) and a cloudlet based on Ubuntu Enterprise Cloud over wireless mesh networking technology for mobile cloud end users. It is noted that the increasing trend to access real-time video streaming over HTTP/HTTPS is gaining popularity among both research and industrial communities to leverage the existing web services and HTTP infrastructure in the Internet. To study the performance under different deployments using different public and private cloud service providers, we employ real-time video streaming over the HTTP/HTTPS standard, and conduct experimental evaluation and in-depth comparative analysis of the impact of different deployments on the quality of service for mobile video cloud users. Empirical

  17. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm^-1. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)
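
    The "integrated video image becoming gradually clearer" corresponds to accumulating successive scintillation frames over time. The running-sum sketch below is a hypothetical illustration of that accumulation step only; the frame sizes and event statistics are made up.

        # Running accumulation of scintillation frames so the integrated image
        # becomes gradually clearer as events add up (illustrative sketch).
        import numpy as np


        def integrate(frames) -> np.ndarray:
            """Sum a sequence of greyscale frames into one integrated image."""
            acc = None
            for frame in frames:
                f = frame.astype(np.float64)
                acc = f if acc is None else acc + f
            return acc


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Sparse, noisy scintillation events accumulate into a visible distribution.
            frames = (rng.poisson(0.05, size=(128, 128)) for _ in range(500))
            integrated = integrate(frames)
            print(integrated.max())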

  18. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data information is taken in by the camera and then sent out to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx™ FPGA. The FPGA is directly connected to the video data stream and outputs data to a low bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  19. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    This paper presents a novel approach to fast motion detection in H.264/MPEG-4 advanced video coding (AVC) compressed video streams for IP video surveillance systems. The goal is to develop algorithms which may be useful in a real-life industrial perspective by facilitating the processing of large...... on motion vectors embedded in the video stream without requiring a full decoding and reconstruction of video frames. To improve the robustness to noise, a confidence measure based on temporal and spatial clues is introduced to increase the probability of correct detection. The algorithm was tested on indoor...
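
    The essence of compressed-domain detection is to threshold the motion vectors already present in the H.264 stream instead of decoding pixels. The sketch below assumes the per-macroblock motion vectors have already been extracted by a bitstream parser, and its neighbour-count confidence step is only a simplified stand-in for the paper's temporal and spatial clues.

        # Motion detection from H.264 motion vectors (assumed already parsed from
        # the bitstream). The spatial-confidence step is a simplified illustration.
        import numpy as np


        def motion_blocks(mvx: np.ndarray, mvy: np.ndarray, mag_thresh: float = 2.0) -> np.ndarray:
            """Flag macroblocks whose motion-vector magnitude exceeds a threshold."""
            return np.hypot(mvx, mvy) > mag_thresh


        def confident_motion(flags: np.ndarray, min_neighbors: int = 2) -> np.ndarray:
            """Keep a flagged block only if enough 4-neighbours are also flagged."""
            padded = np.pad(flags, 1)
            neighbors = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
                         + padded[1:-1, :-2] + padded[1:-1, 2:])
            return flags & (neighbors >= min_neighbors)


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            mvx = rng.normal(0, 0.5, (36, 45))   # 36x45 macroblock grid of a 720x576 frame
            mvy = rng.normal(0, 0.5, (36, 45))
            mvx[10:14, 20:24] = 6.0              # simulated moving object
            print(confident_motion(motion_blocks(mvx, mvy)).sum())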

  20. Demonstrations of video processing of image data for uranium resource assessments

    International Nuclear Information System (INIS)

    Marrs, R.W.; King, J.K.

    1978-01-01

    Video processing of LANDSAT imagery was performed for nine areas in the western United States to demonstrate the applicability of such analyses for regional uranium resource assessment. The results of these tests, in areas of diverse geology, topography, and vegetation, were mixed. The best success was achieved in arid areas because vegetation cover is extremely limiting in any analysis dealing primarily with rocks and soils. Surface alteration patterns of large areal extent, involving transformation or redistribution of iron oxides, and reflectance contrasts were the only type of alteration consistently detected by video processing of LANDSAT imagery. Alteration often provided the only direct indication of mineralization. Other exploration guides, such as lithologic changes, can often be detected, even in heavily vegetated regions. Structural interpretation of the imagery proved far more successful than spectral analyses as an indicator of regions of possible uranium enrichment

  1. Real-time heterogeneous video transcoding for low-power applications

    CERN Document Server

    Elarabi, Tarek; Bayoumi, Magdy

    2014-01-01

    This book introduces a novel transcoding algorithm for real-time video applications, designed to overcome inter-operability problems between MPEG-2 and H.264/AVC. The new algorithm achieves a 92.8% reduction in the transcoding run time at the price of an acceptable Peak Signal-to-Noise Ratio (PSNR) degradation, enabling readers to use it for real-time video applications. The algorithm described is evaluated through simulation and experimental results. In addition, the authors present a hardware implementation of the new algorithm using a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC).   • Describes a novel transcoding algorithm for real time video applications, designed to overcome inter-operability problems between H.264/AVC and MPEG-2; • Implements the algorithm presented using a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC); • Demonstrates the solution to real problems, with verification through simulation and experimental result...

  2. Task-oriented quality assessment and adaptation in real-time mission critical video streaming applications

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2015-02-01

    In recent years video traffic has become the dominant application on the Internet, with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily reducing hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Some of the use cases for these applications include areas such as public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Towards optimal performance, the design or optimisation of each of these applications should be context-aware and task-oriented, with the characteristics of the video stream (frame rate, spatial resolution, bandwidth, etc.) chosen to match the use case requirements. For example, in the security domain, a task-oriented consideration may be that higher resolution video would be required to identify an intruder than to simply detect his presence. In the same case, contextual factors such as the requirement to transmit over a resource-limited wireless link may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting video streams during transmission to ensure that the task can be successfully completed. Firstly, we define two principal classes of tasks: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of task-oriented attributes (minimum spatial resolution, minimum frame rate, etc.). For example, in the detection class

  3. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WIMAX) is a good candidate for delivering video signals because through WIMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user really cares about is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WIMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WIMAX networks. In addition to the QoP parameters, the video characteristics, such as the picture activity and the video mobility, also affect the QoS significantly.
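
    The mapping-table idea can be illustrated with a small lookup that translates a requested presentation quality (resolution, frame rate) into network-level QoS parameters. All profile names and numbers below are hypothetical placeholders, not the values measured in the paper.

        # Hypothetical QoP-to-QoS mapping table: presentation-level requests are
        # translated into network-level parameters (the figures are placeholders).
        QOP_TO_QOS = {
            # (resolution, frames per second): (min bandwidth kbit/s, max delay ms)
            ("QCIF", 15): (128, 300),
            ("CIF", 30): (512, 200),
            ("SD", 30): (2000, 150),
        }


        def qos_for(resolution: str, fps: int):
            try:
                return QOP_TO_QOS[(resolution, fps)]
            except KeyError:
                raise ValueError(f"no QoS profile for {resolution}@{fps}fps") from None


        if __name__ == "__main__":
            bandwidth_kbps, delay_ms = qos_for("CIF", 30)
            print(bandwidth_kbps, delay_ms)   # values handed to the WIMAX QoS configuration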

  4. Behavioral System Level Power Consumption Modeling of Mobile Video Streaming applications

    OpenAIRE

    Benmoussa, Yahia; Boukhobza, Jalil; Hadjadj-Aoul, Yassine; Lagadec, Loïc; Benazzouz, Djamel

    2012-01-01

    Nowadays, the use of mobile applications and terminals faces fundamental challenges related to energy constraint. This is due to the limited battery lifetime as compared to the increasing hardware evolution. Video streaming is one of the most energy consuming applications in a mobile system because of its intensive use of bandwidth, memory and processing power. In this work, we aim to propose a methodology for building and validating a high level global power consumption mo...

  5. Video game players show more precise multisensory temporal processing abilities.

    Science.gov (United States)

    Donohue, Sarah E; Woldorff, Marty G; Mitroff, Stephen R

    2010-05-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. In the present study, we examined whether video game players' benefits generalize beyond vision to multisensory processing by presenting auditory and visual stimuli within a short temporal window to video game players and non-video game players. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.

  6. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancements in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance against changes due to illumination, environmental factors, scale, pose and orientation.
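
    The detection-plus-tracking part of such a pipeline can be sketched with OpenCV's stock Haar cascade and Kalman filter, as below. The Gabor-feature recognition and correlation-matching stages of the paper are omitted, the input file name is an assumption, and the filter parameters are illustrative.

        # Face detection with a Haar cascade and smoothing of the face centre with
        # a Kalman filter (recognition/matching stages of the paper are omitted).
        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        # Constant-velocity Kalman filter over the face centre (x, y, dx, dy).
        kf = cv2.KalmanFilter(4, 2)
        kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                        [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

        cap = cv2.VideoCapture("input.avi")          # hypothetical input file
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            prediction = kf.predict()                # predicted centre even without a detection
            for (x, y, w, h) in faces[:1]:           # track the first detected face only
                centre = np.array([[x + w / 2], [y + h / 2]], np.float32)
                kf.correct(centre)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("tracking", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()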

  7. Designing with video focusing the user-centred design process

    CERN Document Server

    Ylirisku, Salu Pekka

    2007-01-01

    Digital video for user-centered co-design is an emerging field of design, gaining increasing interest in both industry and academia. It merges the techniques and approaches of design ethnography, participatory design, interaction analysis, scenario-based design, and usability studies. This book covers the complete user-centered design project. It illustrates in detail how digital video can be utilized throughout the design process, from early user studies to making sense of video content and envisioning the future with video scenarios to provoking change with video artifacts. The text includes

  8. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
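
    The bandwidth and latency figures quoted above follow from simple arithmetic, reproduced below as a check; the frame size and frame rate are the values stated in the abstract.

        # Back-of-the-envelope check of the data rates quoted above.
        FRAME_SIZE_MB = 11.9      # megabytes per video frame (from the abstract)
        FRAME_RATE = 30           # frames per second

        throughput_mb_s = FRAME_SIZE_MB * FRAME_RATE      # ~357 MB/s, i.e. ~360 MB/s
        throughput_gbit_s = throughput_mb_s * 8 / 1000    # ~2.9 Gbit/s of network capacity
        latency_budget_ms = 1000 / FRAME_RATE             # ~33 ms to process and return a frame

        print(f"{throughput_mb_s:.0f} MB/s, {throughput_gbit_s:.1f} Gbit/s, "
              f"{latency_budget_ms:.1f} ms/frame")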

  9. Video Game Players Show More Precise Multisensory Temporal Processing Abilities

    OpenAIRE

    Donohue, Sarah E.; Woldorff, Marty G.; Mitroff, Stephen R.

    2010-01-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. The current study examined whether video game players’ benefits generalize beyond vision to multisensory processing by presenting video game players and non-video game players auditory and visual stim...

  10. Designing a large-scale video chat application

    OpenAIRE

    Scholl, Jeremiah; Parnes, Peter; McCarthy, John D.; Sasse, Angela

    2005-01-01

    Studies of video conferencing systems generally focus on scenarios where users communicate using an audio channel. However, text chat serves users in a wide variety of contexts, and is commonly included in multimedia conferencing systems as a complement to the audio channel. This paper introduces a prototype application which integrates video and text communication, and describes a formative evaluation of the prototype with 53 users in a social setting. We focus the evaluation on bandwidth an...

  11. Theory and practice of perceptual video processing in broadcast encoders for cable, IPTV, satellite, and internet distribution

    Science.gov (United States)

    McCarthy, S.

    2014-02-01

    This paper describes the theory and application of a perceptually-inspired video processing technology that was recently incorporated into professional video encoders now being used by major cable, IPTV, satellite, and internet video service providers. We will present data showing that this perceptual video processing (PVP) technology can improve video compression efficiency by up to 50% for MPEG-2, H.264, and High Efficiency Video Coding (HEVC). The PVP technology described in this paper works by forming predicted eye-tracking attractor maps that indicate how likely it might be that a freely viewing person would look at a particular area of an image or video. We will introduce in this paper the novel model and supporting theory used to calculate the eye-tracking attractor maps. We will show how the underlying perceptual model was inspired by electrophysiological studies of the vertebrate retina, and will explain how the model incorporates statistical expectations about natural scenes as well as a novel method for predicting error in signal estimation tasks. Finally, we will describe how the eye-tracking attractor maps are created in real time and used to modify video prior to encoding so that it is more compressible but not noticeably different from the original unmodified video.
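
    The retina-inspired attractor-map model is specific to the paper; as a loosely related, hedged illustration of the general idea of perceptual pre-filtering, the sketch below uses local contrast as a stand-in saliency measure and blurs low-saliency regions so they cost fewer bits in the encoder (all parameters are assumptions):

```python
import cv2
import numpy as np

def prefilter_for_encoding(frame_bgr, blur_ksize=9, thresh=0.25):
    """Blur low-saliency regions of a frame before it is passed to an encoder."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Crude saliency proxy: smoothed magnitude of local contrast (Laplacian).
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=3))
    saliency = cv2.GaussianBlur(contrast, (31, 31), 0)
    saliency /= saliency.max() + 1e-8
    # Soft mask: 1 where salient (keep sharp), 0 where not (allow blurring).
    mask = np.clip(saliency / thresh, 0.0, 1.0)[..., None]
    blurred = cv2.GaussianBlur(frame_bgr, (blur_ksize, blur_ksize), 0)
    out = mask * frame_bgr.astype(np.float32) + (1.0 - mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```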

  12. Advances in heuristic signal processing and applications

    CERN Document Server

    Chatterjee, Amitava; Siarry, Patrick

    2013-01-01

    There have been significant developments in the design and application of algorithms for both one-dimensional signal processing and multidimensional signal processing, namely image and video processing, with the recent focus changing from a step-by-step procedure of designing the algorithm first and following up with in-depth analysis and performance improvement to instead applying heuristic-based methods to solve signal-processing problems. In this book the contributing authors demonstrate both general-purpose algorithms and those aimed at solving specialized application problems, with a spec

  13. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...
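
    For orientation, a minimal sketch of one classic local-backlight strategy (per-zone maximum with pixel compensation), which is much simpler than the optimization-based algorithms developed in the thesis; the zone grid and gamma value are assumptions:

```python
import numpy as np

def local_backlight(frame_gray, zones=(8, 12), gamma=2.2):
    """Per-zone backlight levels and compensated LCD transmittance values.

    frame_gray: 2D array of pixel values in [0, 1]; its height and width are
    assumed to be divisible by the zone grid for simplicity.
    """
    h, w = frame_gray.shape
    zh, zw = zones
    linear = frame_gray ** gamma  # approximate linear luminance
    backlight = np.zeros(zones)
    for i in range(zh):
        for j in range(zw):
            block = linear[i * h // zh:(i + 1) * h // zh,
                           j * w // zw:(j + 1) * w // zw]
            backlight[i, j] = block.max()  # zone driven by its brightest pixel
    # Nearest-neighbour upsampling; a display model would simulate light
    # diffusion between zones instead of this hard grid.
    bl_full = np.kron(backlight, np.ones((h // zh, w // zw)))
    compensated = np.clip(linear / np.maximum(bl_full, 1e-6), 0.0, 1.0) ** (1.0 / gamma)
    return backlight, compensated
```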

  14. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    International Nuclear Information System (INIS)

    Wright, R.; Zander, M.; Brown, S.; Sandoval, D.; Gilpatrick, D.; Gibson, H.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) is discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. (Author) (3 figs., 4 refs.)
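
    Beam profiles of this kind are commonly obtained by integrating the background-subtracted light intensity along one image axis; a minimal sketch of that step (not the imagetool algorithms themselves):

```python
import numpy as np

def beam_profiles(image, background=None):
    """Horizontal/vertical beam profiles plus centroid and rms width in x."""
    img = image.astype(np.float64)
    if background is not None:
        img -= background          # simple background subtraction
        img[img < 0] = 0.0
    profile_x = img.sum(axis=0)    # integrate over rows -> horizontal profile
    profile_y = img.sum(axis=1)    # integrate over columns -> vertical profile

    x = np.arange(profile_x.size)
    total = profile_x.sum()
    centroid = (x * profile_x).sum() / total
    rms = np.sqrt(((x - centroid) ** 2 * profile_x).sum() / total)
    return profile_x, profile_y, centroid, rms
```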

  15. Recognising safety critical events: can automatic video processing improve naturalistic data analyses?

    Science.gov (United States)

    Dozza, Marco; González, Nieves Pañeda

    2013-11-01

    applications for NDS video processing. As new NDS such as SHRP2 are now providing the equivalent of five years of one vehicle data each day, the development of new methods, such as the one proposed in this paper, seems necessary to guarantee that these data can actually be analysed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. System and Analysis for Low Latency Video Processing using Microservices

    OpenAIRE

    VASUKI BALASUBRAMANIAM, KARTHIKEYAN

    2017-01-01

    The evolution of big data processing and analysis has led to data-parallel frameworks such as Hadoop, MapReduce, Spark, and Hive, which are capable of analyzing large streams of data such as server logs, web transactions, and user reviews. Videos are one of the biggest sources of data and dominate the Internet traffic. Video processing on a large scale is critical and challenging as videos possess spatial and temporal features, which are not taken into account by the existing data-parallel fr...

  17. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super high quality TV camera, developed for X-radiography, viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, so many test samples can be sequentially observed when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR type reactor fuels and for the investigation of moving objects.
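
    The digitization described above (16 grey levels at 240 x 256 pixels) is easy to mimic on a modern frame, which can be useful when comparing archival and current imagery; a purely illustrative sketch:

```python
import cv2

def digitize_like_1983(frame_bgr):
    """Downsample a frame to 240 x 256 pixels and quantize it to 16 grey levels."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (256, 240), interpolation=cv2.INTER_AREA)  # (width, height)
    levels = small // 16          # 256 input values -> 16 levels (0..15)
    return levels * 17            # rescale to 0..255 for display
```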

  18. MoViE: Experiences and attitudes—Learning with a mobile social video application

    Directory of Open Access Journals (Sweden)

    Pauliina Tuomi

    2010-10-01

    Full Text Available Digital media is increasingly finding its way into classroom discussions. Particular interest is placed on mobile learning: the learning and teaching practices carried out with or via different mobile devices. Learning with the help of mobile devices is increasingly common and is considered one of the 21st-century skills that children should adopt early on in school. The article presents both a qualitative and a quantitative study of the mobile social video application MoViE as a part of teaching biology and geography in the 8th and 9th grades. The multidisciplinary data were processed to answer the following question: How did the use of mobile videos promote learning? The research question is, however, twofold: on the one hand, it studies the use of mobile videos in mobile learning; on the other hand, it investigates the implementation of mobile video sharing as a part of teaching and learning activities.

  19. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  20. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files are files that store moving pictures and sound, much as they occur in real life. In today's world, the need for automated processing of information in video files is increasing. Automated processing of information has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detection and tracking of object movement in video files plays an important role. This article describes methods for detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.

  1. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum)-like image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI that are focused on the color information of the images. In addition, extensions and applications of the FRQI representation, such as a multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption, have also been suggested. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.

  2. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  3. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    Science.gov (United States)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows a minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities facing the processing of pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked during sequences, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe-detection-association process was proposed and evaluated. The tracker results are used to implement an automatic tool for pedestrians data collection when crossing the street based on video processing. The variations in the instantaneous speed allowed the detection of the street crossing phases (approach, waiting, and crossing). These were addressed for the first time in the pedestrian road security analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the procedures proposed have significant potential to automate the data collection process.
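
    A hedged sketch of how crossing phases might be labelled from a tracked trajectory using instantaneous speed, in the spirit of the approach described above; the speed threshold, smoothing window and the assumed approach-waiting-crossing ordering are illustrative simplifications, not the paper's algorithm:

```python
import numpy as np

def label_crossing_phases(xy, fps=25.0, wait_speed=0.3, window=5):
    """Label trajectory samples as 'approach', 'waiting' or 'crossing'.

    xy: (N, 2) array of pedestrian positions in metres, one row per frame.
    Simplification: assumes the pedestrian approaches, possibly waits at the
    kerb, and then crosses, in that order.
    """
    velocity = np.diff(xy, axis=0) * fps                 # m/s between frames
    speed = np.linalg.norm(velocity, axis=1)
    speed = np.convolve(speed, np.ones(window) / window, mode="same")  # de-jitter

    labels, waited, crossing = [], False, False
    for s in speed:
        if crossing:
            labels.append("crossing")
        elif s < wait_speed:
            waited = True
            labels.append("waiting")
        elif waited:                                      # moving again after a stop
            crossing = True
            labels.append("crossing")
        else:
            labels.append("approach")
    return labels
```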

  4. Digital video image processing applications to two phase flow measurements

    International Nuclear Information System (INIS)

    Biscos, Y.; Bismes, F.; Hebrard, P.; Lavergne, G.

    1987-01-01

    Liquid spraying is common in various fields (combustion, cooling of hot surfaces, spray drying, ...). For two-phase flow modeling, it is necessary to test elementary laws (vaporizing drops, the equation of motion of drops or bubbles, heat transfer, ...). For example, knowledge of the laws governing the behavior of a vaporizing liquid drop in a hot airstream and of drops impinging on a hot surface is important for two-phase flow modeling. In order to test these different laws in elementary cases, the authors developed different measurement techniques associating video and microcomputers. The test section (built in perspex or glass) is illuminated with a thin sheet of light generated by a 15 mW He-Ne laser and an appropriate optical arrangement. Drops, bubbles or liquid films are observed at a right angle by a video camera synchronised with a microcomputer, either directly or through an optical device (lens, telescope, microscope) providing sufficient magnification. Digitizing the video picture in real time, combined with appropriate numerical treatment, allows a great deal of information on the atomization and vaporization to be obtained in a non-interfering way as a function of space and time (drop size distribution; Sauter mean diameter as a function of the main flow parameters: air velocity, surface tension, temperature; isoconcentration curves; size evolution of vaporizing drops; thickness evolution of a film spreading on a hot surface...)

  5. Applications of Scalable Multipoint Video and Audio Using the Public Internet

    Directory of Open Access Journals (Sweden)

    Robert D. Gaglianello

    2000-01-01

    Full Text Available This paper describes a scalable multipoint video system, designed for efficient generation and display of high quality, multiple resolution, multiple compressed video streams over IP-based networks. We present our experiences using the system over the public Internet for several “real-world” applications, including distance learning, virtual theater, and virtual collaboration. The trials were a combined effort of Bell Laboratories and the Gertrude Stein Repertory Theatre (TGSRT). We also present current advances in the conferencing system since the trials, new areas for application and future applications.

  6. A Retrieval Optimized Surveillance Video Storage System for Campus Application Scenarios

    Directory of Open Access Journals (Sweden)

    Shengcheng Ma

    2018-01-01

    Full Text Available This paper investigates and analyzes the characteristics of video data and puts forward a campus surveillance video storage system, with the university campus as the specific application environment. To address the challenge that content-based video retrieval response times are too long, a key-frame index subsystem is designed. The key frames of a video reflect its main content. Extracted from the video, key frames are associated with the metadata information to establish the storage index. The key-frame index is used in lookup operations while querying. This method can greatly reduce the amount of video data read and effectively improves query efficiency. Building on the above, we model the storage system with a stochastic Petri net (SPN) and verify the improvement in query performance by quantitative analysis.
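
    Key-frame extraction for such an index is often based on a simple shot-change measure, for example the colour-histogram distance between consecutive frames; a minimal sketch (the bin count and threshold are assumptions, and the paper's own extraction method may differ):

```python
import cv2

def extract_key_frames(path, bins=32, threshold=0.4):
    """Return (frame_index, frame) pairs where the colour histogram changes sharply."""
    cap = cv2.VideoCapture(path)
    key_frames, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None or \
           cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            key_frames.append((idx, frame))   # large distance -> new key frame
        prev_hist = hist
        idx += 1
    cap.release()
    return key_frames
```

    Each extracted key frame would then be stored alongside its metadata record so that queries hit the index rather than the raw video.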

  7. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    Science.gov (United States)

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
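
    For orientation only, the core arithmetic of a brute-force point-source Fresnel CGH, i.e. the computation that look-up-table and motion-compensation schemes such as the one above are designed to accelerate; the wavelength, pixel pitch and object points are assumptions:

```python
import numpy as np

def fresnel_cgh(points, width=1920, height=1080, pitch=8e-6, wavelength=532e-9):
    """Accumulate a Fresnel hologram from (x, y, z, amplitude) object points.

    Brute-force reference only; real-time methods replace the per-point loop
    with pre-computed look-up tables evaluated on the GPU.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    u = (xs - width / 2.0) * pitch   # hologram-plane coordinates in metres
    v = (ys - height / 2.0) * pitch
    hologram = np.zeros((height, width))
    for x, y, z, amp in points:
        r2 = (u - x) ** 2 + (v - y) ** 2
        hologram += amp * np.cos(np.pi * r2 / (wavelength * z))  # Fresnel zone term
    return hologram

# Example: three object points 0.2-0.3 m behind the hologram plane.
pts = [(0.0, 0.0, 0.25, 1.0), (1e-3, 0.0, 0.20, 0.8), (0.0, -1e-3, 0.30, 0.6)]
hologram = fresnel_cgh(pts)
```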

  8. Innovative Second Language Speaking Practice with Interactive Videos in a Rich Internet Application Environment

    Science.gov (United States)

    Pereira, Juan A.; Sanz-Santamaría, Silvia; Montero, Raúl; Gutiérrez, Julián

    2012-01-01

    Attaining a satisfactory level of oral communication in a second language is a laborious process. In this action research paper we describe a new method applied through the use of interactive videos and the Babelium Project Rich Internet Application (RIA), which allows students to practice speaking skills through a variety of exercises. We present…

  9. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  10. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Raquel Perez Leal

    2007-12-01

    Full Text Available New convergent services are becoming possible, thanks to the expansion of IP networks based on the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive subject overview as several technologies are involved, such as medium access protocol in IEEE802.11, H.264 advanced video coding standards, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model of conversational video over wireless LAN. WLAN is addressed under the optimal network throughput and the perspective of video quality. The maximum number of simultaneous users resulting from throughput is limited by the collisions taking place in the shared medium with the statistical contention protocol. The video quality is conditioned by the packet loss in the contention protocol. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, to conclude that conversational video dimensioning based on network throughput is not enough to ensure a satisfactory user experience, and video quality has to be taken also into account. Finally, the proposed model has been applied to a real-office scenario.
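
    A rough illustration of the throughput-only side of such a dimensioning exercise; the numbers below are assumptions for illustration, and, as the paper argues, this bound alone ignores contention collisions and packet loss, so it overestimates capacity:

```python
# Crude upper bound on simultaneous conversational-video users in one WLAN cell,
# based on throughput alone (contention overhead and packet loss ignored).
effective_wlan_throughput_mbps = 22.0  # assumed usable share of a 54 Mbit/s 802.11g cell
video_bitrate_kbps = 384.0             # assumed H.264 conversational video stream
audio_bitrate_kbps = 64.0              # assumed companion audio stream

per_user_kbps = 2 * (video_bitrate_kbps + audio_bitrate_kbps)  # uplink + downlink
max_users = int(effective_wlan_throughput_mbps * 1000 // per_user_kbps)
print(f"throughput-based upper bound: {max_users} simultaneous users")
```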

  11. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Alonso JoséI

    2008-01-01

    Full Text Available Abstract New convergent services are becoming possible, thanks to the expansion of IP networks based on the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive subject overview as several technologies are involved, such as medium access protocol in IEEE802.11, H.264 advanced video coding standards, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model of conversational video over wireless LAN. WLAN is addressed under the optimal network throughput and the perspective of video quality. The maximum number of simultaneous users resulting from throughput is limited by the collisions taking place in the shared medium with the statistical contention protocol. The video quality is conditioned by the packet loss in the contention protocol. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, to conclude that conversational video dimensioning based on network throughput is not enough to ensure a satisfactory user experience, and video quality has to be taken also into account. Finally, the proposed model has been applied to a real-office scenario.

  12. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    Full Text Available One of the key requirements for mobile devices is to provide high-performance computing at lower power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to the system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up the memory access and data transfer among memories. Moreover, an efficient data reuse design is proposed for the deblocking filter video processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder due to the proposed techniques.

  13. Video technical characteristics and recommendations for optical surveillance

    International Nuclear Information System (INIS)

    Wilson, G.L.; Whichello, J.V.

    1991-01-01

    The application of new video surveillance electronics to safeguards has introduced an urgent need to formulate and adopt video standards that will ensure the highest possible video quality and the orderly introduction of data insertion. Standards will provide guidance in the application of image processing and digital techniques. Realistic and practical standards are a benefit to the IAEA, Member States, Support Programme equipment developers and facility operators, as they assist in the efficient utilisation of available resources. Moreover, standards shall provide a clear path for orderly introduction of newer technologies, whilst ensuring authentication and verification of the original image through the video process. Standards emerging from IAEA are an outcome of experience based on current knowledge, both within the safeguards arena and the video parent industry which comprises commercial and professional television. This paper provides a brief synopsis of recent developments which have highlighted the need for a surveillance based video standard together with a brief outline of these standards

  14. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROVs), Autonomous Underwater Vehicles (AUVs), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed world-wide scientific community to collaboratively annotate videos anywhere at any time. It has several features fully implemented, among which are: • User login system for fine grained permission and access control • Video watching • Video search using keywords, geographic position, depth and time range and any combination thereof • Video annotation organised in themes (tracks) such as biology and geology among others, in standard or full screen mode • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets • Download of products for scientific use. This unique web application system helps make costly ROV videos available online (estimated cost range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantaneously available and valuable knowledge to otherwise uncharted

  15. Mono or 3D video production for scientific dissemination of nuclear energy applications

    International Nuclear Information System (INIS)

    Freitas, Victor Goncalves G.; Mol, Antonio Carlos A.; Biermann, Bruna; Jorge, Carlos Alexandre F.; Araujo, Tawein

    2011-01-01

    This work presents the results of developing educational videos, mono or stereo, for the scientific dissemination of nuclear energy applications. Nuclear energy spans many important applications for society, ranging from electrical power generation to nuclear medicine, among others. Thus, the purpose is to disseminate this information to the general public and especially to students. Educational videos are a good approach for this purpose, due to the involvement of the public they provide, more than simple text or oral exposition, or even the presentation of static images. Stereo videos result in even more involvement of the public, besides immersion, the latter due to the realism that 3D views provide. The video developed in this work explains electrical power generation, including nuclear reactor operation, shows the share of nuclear power in electricity generation around the world, and also explains nuclear energy applications in medicine. It is expected that all these characteristics provided by the use of video or virtual reality techniques will achieve the purpose of disseminating such important information regarding the benefits of nuclear energy to society. (author)

  16. Mono or 3D video production for scientific dissemination of nuclear energy applications

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Victor Goncalves G.; Mol, Antonio Carlos A.; Biermann, Bruna; Jorge, Carlos Alexandre F., E-mail: mol@ien.gov.b, E-mail: vgoncalves@ien.gov.b, E-mail: calexandre@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Araujo, Tawein [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola de Belas Artes; Legey, Ana Paula [Universidade Gama Filho (UGF), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    This work presents the results of developing educational videos, mono or stereo, for the scientific dissemination of nuclear energy applications. Nuclear energy spans many important applications for society, ranging from electrical power generation to nuclear medicine, among others. Thus, the purpose is to disseminate this information to the general public and especially to students. Educational videos are a good approach for this purpose, due to the involvement of the public they provide, more than simple text or oral exposition, or even the presentation of static images. Stereo videos result in even more involvement of the public, besides immersion, the latter due to the realism that 3D views provide. The video developed in this work explains electrical power generation, including nuclear reactor operation, shows the share of nuclear power in electricity generation around the world, and also explains nuclear energy applications in medicine. It is expected that all these characteristics provided by the use of video or virtual reality techniques will achieve the purpose of disseminating such important information regarding the benefits of nuclear energy to society. (author)

  17. Development and application of traffic flow information collecting and analysis system based on multi-type video

    Science.gov (United States)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, intelligent transportation systems (ITS) have become the new direction of transportation development. Traffic data, as a fundamental part of an intelligent transportation system, play an increasingly crucial role. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still many problems in the process of collecting this information, such as low precision and high cost. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of acquiring video data, namely aerial photography, fixed cameras and handheld cameras, we develop intelligent analysis software that can be used to extract macroscopic and microscopic traffic flow information from the video, and this information can be used for traffic analysis and transportation planning. For road intersections, the system uses a frame-difference method to extract traffic information; for freeway sections, it uses an optical-flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.
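
    A hedged sketch of the two detection modes mentioned above, frame differencing for intersections and dense optical flow for freeway sections, using standard OpenCV calls; the thresholds and flow parameters are illustrative, not the system's actual settings:

```python
import cv2
import numpy as np

def moving_mask_by_frame_difference(prev_gray, gray, thresh=25):
    """Binary mask of moving pixels between two consecutive grey frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

def mean_motion_by_optical_flow(prev_gray, gray):
    """Average dense-flow magnitude (pixels/frame), a proxy for section speed."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    return float(magnitude.mean())
```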

  18. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades the quality. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2 books) was to introduce the problem of speckle in ultrasound image and video as well as the theoretical background, algorithmic steps, and the MatlabTM for the following group of despeckle filters:

  19. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications in Multimedia Engineering, taken by Bachelor's degree students. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  20. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches, namely frame differences and blind signal separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)
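
    A minimal illustration of the frame-difference style of background subtraction mentioned above (the blind-signal-separation approach is not shown); the running-average rate, threshold, blob-size cut-off and file name are assumptions:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("argonauta_room.avi")   # hypothetical camera recording
background = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if background is None:
        background = gray.astype(np.float32)
        continue
    cv2.accumulateWeighted(gray, background, 0.02)   # slowly adapting background
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 1500:                # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("people", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

    The bounding boxes produced this way would then feed the position-to-dose-rate mapping described in the record.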

  1. People detection in nuclear plants by video processing for safety purpose

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Seixas, Jose M.; Silva, Eduardo Antonio B.; Cota, Raphael E.; Ramos, Bruno L.

    2011-01-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches, namely frame differences and blind signal separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)

  2. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Tominaga Shoji

    2008-01-01

    Full Text Available Abstract The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  3. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Plataniotis

    2008-05-01

    Full Text Available The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  4. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor

    2015-01-01

    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  5. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    Science.gov (United States)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
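
    The feature-correspondence criterion used in the evaluation can be approximated with any off-the-shelf detector and matcher; a sketch using ORB with a ratio test (the detector choice and parameters are assumptions, not necessarily those used in the study):

```python
import cv2

def count_correspondences(frame_a, frame_b, ratio=0.75):
    """Count ORB feature matches between two consecutive (tone-mapped) frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)
```

    A higher, more stable correspondence count across illumination changes is the kind of evidence used to argue for the logarithmic HDR sensor over a conventional LDR one.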

  6. Implementing Network Video for Traditional Security and Innovative Applications: Best Practices and Uses for Network Video in K-12 Schools

    Science.gov (United States)

    Wren, Andrew

    2008-01-01

    Administrators are constantly seeking ways to cost-effectively and adequately increase security and improve efficiency in K-12 schools. While video is not a new tool to schools, the shift from analog to network technology has increased the accessibility and usability in a variety of applications. Properly installed and used, video is a powerful…

  7. Maps in video games – range of applications

    Directory of Open Access Journals (Sweden)

    Chądzyńska Dominika

    2015-09-01

    Full Text Available The authors discuss the role of the map in various game genres, specifically in video games. The presented examples illustrate widespread map usage, in various ways and forms, by the authors of games, both classic and video. The article takes a closer look at the classification and development of video games within the last few decades. Presently, video games use advanced geospatial models and data resources, and users are keen on a detailed representation of the real world. Game authors use advanced visualization technologies, which are often innovative and very attractive. Joint efforts of cartographers, geo-information specialists and game producers can bring interesting effects in the future. Although games are mainly made for entertainment, they are increasingly used for other purposes. There is a growing need for data reliability as well as for effective means of transmitting cartographic content. This opens up a new area of both scientific and implementation activity for cartographers. There is no universally accessible data on the role of cartographers in game production, but apparently it is quite limited at the moment. However, a wider application of cartographic methodology would have a positive effect on the development of games and, conversely, methods and technologies applied by game makers can influence the development of cartography.

  8. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
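
    The HIP index itself is defined in the paper; as a rough, hedged approximation of the underlying idea, the sketch below computes a per-frame curve of mean patch entropy (patch size, bin count and sampling step are assumptions):

```python
import cv2
import numpy as np

def mean_patch_entropy(gray, patch=16, bins=32):
    """Mean Shannon entropy of non-overlapping patches in one grey frame."""
    h, w = gray.shape
    entropies = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = gray[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            p = hist[hist > 0] / hist.sum()
            entropies.append(float(-np.sum(p * np.log2(p))))
    return float(np.mean(entropies))

def heterogeneity_curve(path, step=5):
    """One heterogeneity value per sampled frame, analogous to a HIP curve."""
    cap, curve, idx = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            curve.append(mean_patch_entropy(gray))
        idx += 1
    cap.release()
    return curve
```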

  9. Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes

    Science.gov (United States)

    Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan

    Based on our observations of Austrian video game software development (VGSD) practices we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge in enabling a comprehensive view of game development projects. Lessons learned from software engineering practices can help game developers improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper presents (a) first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results showed (a) a trend towards highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application.

  10. Action video games and improved attentional control: Disentangling selection- and response-based processes.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-10-01

    Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus-response processes that impact human performance.

  11. If a Picture Is Worth a Thousand Words Is Video Worth a Million? Differences in Affective and Cognitive Processing of Video and Text Cases

    Science.gov (United States)

    Yadav, Aman; Phillips, Michael M.; Lundeberg, Mary A.; Koehler, Matthew J.; Hilden, Katherine; Dirkin, Kathryn H.

    2011-01-01

    In this investigation we assessed whether different formats of media (video, text, and video + text) influenced participants' engagement, cognitive processing and recall of non-fiction cases of people diagnosed with HIV/AIDS. For each of the cases used in the study, we designed three informationally-equivalent versions: video, text, and video +…

  12. Video Retrieval Berdasarkan Teks dan Gambar

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Abstract Video retrieval is used to search for a video based on a query entered by the user, which may be text or an image. This system can improve search capability when browsing videos and is expected to reduce video retrieval time. The purpose of the research was to design and create a software application for video retrieval based on the text and images in the video. The indexing process for text consists of tokenizing and filtering (stopword removal, stemming). The results of stemming are saved in a text index table. The indexing process for images creates an image color histogram and computes the mean and standard deviation of each primary color, red, green and blue (RGB), for each image. The results of feature extraction are stored in an image table. The video retrieval process uses a text query, an image query, or both. For a text query, the system processes the query by looking it up in the text index table; if the text query is present in the index table, the system displays the video information corresponding to that query. For an image query, the system processes the query by computing the extracted feature values: the means of red, green and blue and the standard deviations of red, green and blue. If the six extracted features of the query image are found in the image index table, the system displays the video information corresponding to the image query. For a combined text and image query, the system displays the video information only if the text query and the image query are related, that is, if they refer to the same film title.   Keywords—  video, index, retrieval, text, image
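
    A minimal sketch of the image-side indexing step described above, i.e. a normalised colour histogram plus the per-channel mean and standard deviation used as the six-value feature vector; the function names and bin count are illustrative:

```python
import cv2
import numpy as np

def rgb_features(image_bgr, bins=16):
    """Return (histogram, [mean_r, mean_g, mean_b, std_r, std_g, std_b])."""
    b, g, r = cv2.split(image_bgr)
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum()                   # normalised colour histogram
    feats = [float(r.mean()), float(g.mean()), float(b.mean()),
             float(r.std()), float(g.std()), float(b.std())]
    return hist, feats

def feature_distance(f1, f2):
    """Euclidean distance between two six-value mean/std feature vectors."""
    return float(np.linalg.norm(np.array(f1) - np.array(f2)))
```

    At query time the same features would be computed for the query image and compared against the stored image table to find matching videos.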

  13. Effects of video-game play on information processing: a meta-analytic investigation.

    Science.gov (United States)

    Powers, Kasey L; Brooks, Patricia J; Aldrich, Naomi J; Palladino, Melissa A; Alfieri, Louis

    2013-12-01

    Do video games enhance cognitive functioning? We conducted two meta-analyses based on different research designs to investigate how video games impact information-processing skills (auditory processing, executive functions, motor skills, spatial imagery, and visual processing). Quasi-experimental studies (72 studies, 318 comparisons) compare habitual gamers with controls; true experiments (46 studies, 251 comparisons) use commercial video games in training. Using random-effects models, video games led to improved information processing in both the quasi-experimental studies, d = 0.61, 95% CI [0.50, 0.73], and the true experiments, d = 0.48, 95% CI [0.35, 0.60]. Whereas the quasi-experimental studies yielded small to large effect sizes across domains, the true experiments yielded negligible effects for executive functions, which contrasted with the small to medium effect sizes in other domains. The quasi-experimental studies appeared more susceptible to bias than were the true experiments, with larger effects being reported in higher-tier than in lower-tier journals, and larger effects reported by the most active research groups in comparison with other labs. The results are further discussed with respect to other moderators and limitations in the extant literature.

  14. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024times1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  15. Spatial data processing for the purpose of video games

    Directory of Open Access Journals (Sweden)

    Chądzyńska Dominika

    2016-03-01

    Full Text Available Advanced terrain models are now commonly used in many video/computer games. Professional GIS technologies, existing spatial datasets and cartographic methodology are increasingly used in their development, which allows a realistic model of the world to be achieved. On the other hand, so-called game engines have a very high capability for spatial data visualization. Preparing terrain models for video games requires the knowledge and experience of GIS specialists and cartographers, although it is also accessible to non-professionals. The authors point out the widespread and varied use of terrain models in video games and the existence of a series of ready-made, advanced tools and procedures for creating terrain models. Finally, the authors describe an experiment in data modeling for the “Condor Soar Simulator”.

  16. Visual analysis of trash bin processing on garbage trucks in low resolution video

    Science.gov (United States)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera which is mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, and meanshift tracking and low level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% at fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
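
    The HOG-plus-mean-shift pipeline described above can be outlined with OpenCV; the snippet below is only an illustrative sketch (the paper's trash-can HOG models are not public, so a hard-coded detection window and a hypothetical input file stand in for them):

```python
import cv2

cap = cv2.VideoCapture("garbage_truck.mp4")       # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 100, 150, 60, 120                     # stand-in for a HOG detection of a trash can
roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi_hsv], [0], None, [32], [0, 180])  # hue histogram of the detected can
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(back_proj, window, criteria)  # track the can across frames
cap.release()
```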

  17. Understanding Behaviors in Videos through Behavior-Specific Dictionaries

    DEFF Research Database (Denmark)

    Ren, Huamin; Liu, Weifeng; Olsen, Søren Ingvor

    2018-01-01

    Understanding behaviors is the core of video content analysis, which is highly related to two important applications: abnormal event detection and action recognition. Dictionary learning, as one of the mid-level representations, is an important step to process a video. It has achieved state...
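
    As a rough illustration of the mid-level dictionary-learning step mentioned above (a generic scikit-learn sketch, not the authors' behavior-specific method), a dictionary can be learned from vectorized video patches; random data stands in for real patches here:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

patches = np.random.rand(2000, 8 * 8)      # stand-in for 8x8 patches extracted from video frames
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=200, random_state=0)
dico.fit(patches)                           # learn a 64-atom dictionary
codes = dico.transform(patches)             # sparse codes used as the mid-level representation
print(dico.components_.shape, codes.shape)  # (64, 64) atoms and (2000, 64) codes
```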

  18. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  19. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used automatically to archive the surveillance pictures. The design of the surveillance system is described with examples of its operation

  20. GIFT-Grab: Real-time C++ and Python multi-channel video capture, processing and encoding API

    Directory of Open Access Journals (Sweden)

    Dzhoshkun Ismail Shakir

    2017-10-01

    Full Text Available GIFT-Grab is an open-source API for acquiring, processing and encoding video streams in real time. GIFT-Grab supports video acquisition using various frame-grabber hardware as well as from standard-compliant network streams and video files. The current GIFT-Grab release allows for multi-channel video acquisition and encoding at the maximum frame rate of supported hardware – 60 frames per second (fps). GIFT-Grab builds on well-established highly configurable multimedia libraries including FFmpeg and OpenCV. GIFT-Grab exposes a simplified high-level API, aimed at facilitating integration into client applications with minimal coding effort. The core implementation of GIFT-Grab is in C++11. GIFT-Grab also features a Python API compatible with the widely used scientific computing packages NumPy and SciPy. GIFT-Grab was developed for capturing multiple simultaneous intra-operative video streams from medical imaging devices. Yet due to the ubiquity of video processing in research, GIFT-Grab can be used in many other areas. GIFT-Grab is hosted and managed on the software repository of the Centre for Medical Image Computing (CMIC) at University College London, and is also mirrored on GitHub. In addition it is available for installation from the Python Package Index (PyPI) via the pip installation tool. Funding statement: This work was supported through an Innovative Engineering for Health award by the Wellcome Trust [WT101957], the Engineering and Physical Sciences Research Council (EPSRC) [NS/A000027/1] and a National Institute for Health Research Biomedical Research Centre UCLH/UCL High Impact Initiative. Sébastien Ourselin receives funding from the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278) and the MRC (MR/J01107X/1). Luis C. García-Peraza-Herrera is supported by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1).

  1. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...
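
    For reference, the Slepian–Wolf theorem cited above states that two correlated sources $X$ and $Y$ can be encoded separately and decoded jointly without loss whenever the rates satisfy

    $$R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),$$

    i.e. the same total rate as joint encoding; Wyner–Ziv extends this to lossy coding of $X$ with side information $Y$ available only at the decoder.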

  2. The effects of an action video game on visual and affective information processing.

    Science.gov (United States)

    Bailey, Kira; West, Robert

    2013-04-04

    Playing action video games can have beneficial effects on visuospatial cognition and negative effects on social information processing. However, these two effects have not been demonstrated in the same individuals in a single study. The current study used event-related brain potentials (ERPs) to examine the effects of playing an action or non-action video game on the processing of emotion in facial expression. The data revealed that 10h of playing an action or non-action video game had differential effects on the ERPs relative to a no-contact control group. Playing an action game resulted in two effects: one that reflected an increase in the amplitude of the ERPs following training over the right frontal and posterior regions that was similar for angry, happy, and neutral faces; and one that reflected a reduction in the allocation of attention to happy faces. In contrast, playing a non-action game resulted in changes in slow wave activity over the central-parietal and frontal regions that were greater for targets (i.e., angry and happy faces) than for non-targets (i.e., neutral faces). These data demonstrate that the contrasting effects of action video games on visuospatial and emotion processing occur in the same individuals following the same level of gaming experience. This observation leads to the suggestion that caution should be exercised when using action video games to modify visual processing, as this experience could also have unintended effects on emotion processing. Published by Elsevier B.V.

  3. Visual Speed of Processing and Publically Observable Feedback in Video-Game Players

    OpenAIRE

    Patten, James William

    2016-01-01

    Time spent playing action-oriented video-games has been proposed to improve the functioning of visual attention and perception in a number of areas. These benefits are not always consistently reported, however. It was hypothesized that an improvement to visual Speed of Processing (SOP) in action-oriented Video-Game Players (VGPs) underlies many of the benefits of action video-game play, and furthermore the expression of this improvement was modulated by a Hawthorne effect (individuals behavin...

  4. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60 th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  5. The ASDEX upgrade digital video processing system for real-time machine protection

    Energy Technology Data Exchange (ETDEWEB)

    Drube, Reinhard, E-mail: reinhard.drube@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Neu, Gregor [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard H.; Lüddecke, Klaus [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, 82393 Iffeldorf (Germany); Lunt, Tilmann; Herrmann, Albrecht [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2013-11-15

    Highlights: • We present the Real-Time Video diagnostic system of ASDEX Upgrade. • We show the implemented image processing algorithms for machine protection. • The way to achieve a robust operating multi-threading Real-Time system is described. -- Abstract: This paper describes the design, implementation, and operation of the Video Real-Time (VRT) diagnostic system of the ASDEX Upgrade plasma experiment and its integration with the ASDEX Upgrade Discharge Control System (DCS). Hot spots produced by heating systems erroneously or accidentally hitting the vessel walls, or from objects in the vessel reaching into the plasma outer border, show up as bright areas in the videos during and after the reaction. A system to prevent damage to the machine by allowing for intervention in a running discharge of the experiment was proposed and implemented. The VRT was implemented on a multi-core real-time Linux system. Up to 16 analog video channels (color and b/w) are acquired and multiple regions of interest (ROI) are processed on each video frame. Detected critical states can be used to initiate appropriate reactions – e.g. gracefully terminate the discharge. The system has been in routine operation since 2007.
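
    A greatly simplified sketch of the kind of per-ROI hot-spot check described above (the actual VRT algorithms and thresholds are not reproduced here; the threshold, ROI and file name below are arbitrary placeholders):

```python
import cv2
import numpy as np

def roi_hotspot_alarm(gray_frame, roi, intensity_thresh=230, min_pixels=50):
    """Return True if a region of interest contains a suspiciously bright area."""
    x, y, w, h = roi
    patch = gray_frame[y:y+h, x:x+w]
    hot = cv2.threshold(patch, intensity_thresh, 255, cv2.THRESH_BINARY)[1]
    return int(np.count_nonzero(hot)) >= min_pixels

# frame = cv2.imread("divertor_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical camera frame
# if roi_hotspot_alarm(frame, (200, 120, 80, 80)):
#     request_discharge_termination()                            # placeholder for a DCS reaction
```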

  6. Dual-Layer Video Encryption using RSA Algorithm

    Science.gov (United States)

    Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.

    2015-04-01

    This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques, an efficient system has been put forth that is invulnerable to security breaches and attacks and offers favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption which is accomplished by using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video.
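
    The PN layer of such a scheme amounts to XOR-ing the media bytes with a pseudo-noise keystream; a minimal sketch using a linear-feedback shift register (the 16-bit register, taps and seed are illustrative assumptions, and the RSA layer is omitted):

```python
def lfsr_bytes(seed, n, taps=(16, 14, 13, 11)):
    """Generate n pseudo-noise bytes from a 16-bit LFSR."""
    state, out = seed & 0xFFFF, bytearray()
    for _ in range(n):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:
                bit ^= (state >> (t - 1)) & 1          # XOR of the tapped bits
            state = ((state << 1) | bit) & 0xFFFF      # shift and feed the bit back in
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def pn_encrypt(data, seed=0xACE1):
    """XOR data with the PN keystream; applying it twice restores the original."""
    stream = lfsr_bytes(seed, len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

frame = b"example video frame payload"
assert pn_encrypt(pn_encrypt(frame)) == frame
```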

  7. Improved embedded non-linear processing of video for camera surveillance

    NARCIS (Netherlands)

    Cvetkovic, S.D.; With, de P.H.N.

    2009-01-01

    For a real time imaging in surveillance applications, image fidelity is of primary importance to ensure customer confidence. The fidelity is obtained amongst others via dynamic range expansion and video signal enhancement. The dynamic range of the signal needs adaptation, because the sensor signal

  8. Video Content Search System for Better Students Engagement in the Learning Process

    Directory of Open Access Journals (Sweden)

    Alanoud Alotaibi

    2014-12-01

    Full Text Available As a component of the e-learning educational process, content plays an essential role. Increasingly, the video-recorded lectures in e-learning systems are becoming more important to learners. In most cases, a single video-recorded lecture contains more than one topic or sub-topic. Therefore, to enable learners to find the desired topic and reduce learning time, e-learning systems need to provide a search capability for searching within the video content. This can be accomplished by enabling learners to identify the video or portion that contains a keyword they are looking for. This research aims to develop a Video Content Search (VCS) system to facilitate searching in educational videos and their contents. A preliminary experiment was conducted on a selected university course. All students needed a system to avoid the time-wasting problem of watching long videos with no significant benefit. The statistics showed that the number of learners increased during the experiment. Future work will include studying the impact of the VCS system on students’ performance and satisfaction.

  9. Learning Electron Transport Chain Process in Photosynthesis Using Video and Serious Game

    Science.gov (United States)

    Espinoza Morales, Cecilia

    This research investigates students' learning about the electron transport chain (ETC) process in photosynthesis by watching a video followed by playing a serious board game-Electron Chute- that models the ETC process. To accomplish this goal, several learning outcomes regarding the misconceptions students' hold about photosynthesis and the ETC process in photosynthesis were defined. Middle school students need opportunities to develop cohesive models that explain the mechanistic processes of biological systems to support their learning. A six-week curriculum on photosynthesis included a one day learning activity using an ETC video and the Electron Chute game to model the ETC process. The ETC model explained how sunlight energy was converted to chemical energy (ATP) at the molecular level involving a flow of electrons. The learning outcomes and the experiences were developed based on the Indiana Academic Standards for biology and the Next Generation Science Standards (NGSS) for the life sciences. Participants were 120 eighth grade science students from an urban public school. The participants were organized into six classes based on their level of academic readiness, regular and challenge, by the school corporation. Four classes were identified as regular classes and two of them as challenge classes. Students in challenge classes had the opportunity to be challenged with more difficult content knowledge and required higher level thinking skills. The regular classes were the mainstream at school. A quasi-experimental design known as non-equivalent group design (NEGD) was used in this study. This experimental design consisted of a pretest-posttest experiment in two similar groups to begin with-the video only and video+game treatments. Intact classes were distributed into the treatments. The video only watched the ETC video and the video+game treatment watched the ETC video and played the Electron Chute game. The instrument (knowledge test) consisted of a multiple

  10. Voice and Video Telephony Services in Smartphone

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available Multimedia telephony is a delay-sensitive application. Packet losses, relatively less critical than delay, are allowed up to a certain threshold. They represent the QoS constraints that have to be respected to guarantee the operation of the telephony service and user satisfaction. In this work we introduce a new smartphone architecture characterized by two process levels called application processor (AP) and mobile termination (MT), respectively. Here, they communicate through a serial channel. Moreover, we focus our attention on two very important UMTS services: voice and video telephony. Through a simulation study, the impact of voice and video telephony is evaluated on the considered structure, using the protocols currently known to realize voice and video telephony.

  11. Occupational Therapy and Video Modeling for Children with Autism

    Science.gov (United States)

    Becker, Emily Ann; Watry-Christian, Meghan; Simmons, Amanda; Van Eperen, Ashleigh

    2016-01-01

    This review explores the evidence in support of using video modeling for teaching children with autism. The process of implementing video modeling, the use of various perspectives, and a wide range of target skills are addressed. Additionally, several helpful clinician resources including handheld device applications, books, and websites are…

  12. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book starts with a summary of digital image processing and the personal computer, and then covers the classification of personal computer image processing systems, digital image processing, the development of personal computers and image processing, image processing systems, basic image processing methods such as color image processing and video processing, software and interfaces, computer graphics, video images and video processing, and application cases of image processing such as satellite image processing, high-speed color transformation in image processing, and a portrait work system.

  13. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

    Full Text Available CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can strongly vary, going from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, and thus a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. Nevertheless, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm for both reducing the dynamic range of video sequences and enhancing their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.
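
    As a point of reference for the dynamic-range reduction step (this is a generic global Reinhard-style operator, not the algorithm proposed in the paper), mapping an HDR luminance frame to 8 bits can look like this:

```python
import numpy as np

def tonemap_reinhard(hdr_luma, key=0.18, eps=1e-6):
    """Map an HDR luminance image (float, arbitrary range) to an 8-bit output."""
    log_avg = np.exp(np.mean(np.log(hdr_luma + eps)))  # scene's log-average luminance
    scaled = key * hdr_luma / (log_avg + eps)          # normalize to a mid-grey "key"
    mapped = scaled / (1.0 + scaled)                   # compress highlights smoothly
    return np.clip(mapped * 255.0, 0, 255).astype(np.uint8)

# hdr = np.random.rand(480, 640) * 4000.0              # stand-in for a 12/16-bit sensor frame
# ldr = tonemap_reinhard(hdr)
```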

  14. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters both in the physical and application layer over Universal Mobile Telecommunication Systems (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
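
    A toy version of the non-linear regression variant (the functional form, coefficients and data below are illustrative assumptions, not the model fitted in the paper) could be set up with SciPy as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def mos_model(X, a, b, c, d):
    """Hypothetical non-linear MOS predictor from bitrate, frame rate and packet loss."""
    bitrate, framerate, loss = X
    return a + b * np.log(bitrate) + c * np.log(framerate) - d * loss

# Synthetic training samples: (bitrate kbit/s, frame rate fps, packet loss %), observed MOS.
bitrate = np.array([100, 250, 500, 750, 1000, 1500], dtype=float)
framerate = np.array([10, 15, 25, 25, 30, 30], dtype=float)
loss = np.array([5, 2, 1, 1, 0.5, 0], dtype=float)
mos = np.array([2.1, 2.9, 3.6, 3.9, 4.2, 4.5])

params, _ = curve_fit(mos_model, (bitrate, framerate, loss), mos)
prediction = mos_model((np.array([800.0]), np.array([25.0]), np.array([1.0])), *params)
print(prediction)  # predicted MOS for an unseen (bitrate, frame rate, loss) triple
```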

  15. Implications of the law on video recording in clinical practice.

    Science.gov (United States)

    Henken, Kirsten R; Jansen, Frank Willem; Klein, Jan; Stassen, Laurents P S; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-10-01

    Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health care practice. Jurisprudence was searched to exemplify legislation on video recording in health care. In addition, legislation was translated for different applications of video in health care found in the literature. Three principles in Western law are relevant for video recording in health care practice: (1) regulations on privacy regarding personal data, which apply to the gathering and processing of video data in health care settings; (2) the patient record, in which video data can be stored; and (3) professional secrecy, which protects the privacy of patients including video data. Practical implementation of these principles in video recording in health care does not exist. Practical regulations on video recording in health care for different specifically defined purposes are needed. Innovations in video capture technology that enable video data to be made anonymous automatically can contribute to protection for the privacy of all the people involved.

  16. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all problems can be solved automatically; for such applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Actually, the idea of computer interactive systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with the ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  17. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost, limited-resources Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers wireless video sensor node constraints like limited processing and energy resources while video quality is preserved at the receiver side. Application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  18. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost limited resources Wireless Video-based Sensor Networks (WVSN). With regards to the constraints of videobased sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of communication protocol stack and considers wireless video sensor nodes constraints like limited process and energy resources while video quality is preserved in the receiver side. Application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed respectively, also a dropping scheme is presented in network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  19. Automated video surveillance: teaching an old dog new tricks

    Science.gov (United States)

    McLeod, Alastair

    1993-12-01

    The automated video surveillance market is booming with new players, new systems, new hardware and software, and an extended range of applications. This paper reviews available technology, and describes the features required for a good automated surveillance system. Both hardware and software are discussed. An overview of typical applications is also given. A shift towards PC-based hybrid systems, use of parallel processing, neural networks, and exploitation of modern telecomms are introduced, highlighting the evolution of modern video surveillance systems.

  20. A review of video security training and assessment-systems and their applications

    International Nuclear Information System (INIS)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response force training, vulnerability assessment, force-on-force exercises and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that influence user acceptance include: frequency of use, the demands placed on security personnel, fear of punishment, user training requirements and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing and data reduction should bring about a wider range of security applications and supply the opportunity for significant cost sharing with other user groups.

  1. Reduced complexity MPEG2 video post-processing for HD display

    DEFF Research Database (Denmark)

    Virk, Kamran; Li, Huiying; Forchhammer, Søren

    2008-01-01

    implementation. The enhanced deringing combined with the deblocking achieves PSNR improvements on average of 0.5 dB over the basic deblocking and deringing on SDTV and HDTV test sequences. The deblocking and deringing models described in the paper are generic and applicable to a wide variety of common (8×8) DCT-block based real-time video schemes....

  2. Moving Shadow Detection in Video Using Cepstrum

    Directory of Open Access Journals (Sweden)

    Fuat Cogun

    2013-01-01

    Full Text Available Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum-based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using well-known benchmark test sets. To show the improvements over previous approaches, quantitative metrics are introduced and comparisons based on these metrics are made.
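
    The cepstrum underlying the method above is the inverse Fourier transform of the log-magnitude spectrum; a minimal 2-D version for an image block (a generic definition, not the authors' full shadow classifier) is:

```python
import numpy as np

def cepstrum_2d(block, eps=1e-8):
    """Real 2-D cepstrum of an image block (e.g. a candidate shadow region)."""
    spectrum = np.fft.fft2(block.astype(np.float64))
    log_mag = np.log(np.abs(spectrum) + eps)   # log-magnitude spectrum
    return np.real(np.fft.ifft2(log_mag))      # back to the "quefrency" domain

# block = frame_gray[y0:y0+16, x0:x0+16]       # hypothetical 16x16 candidate region
# features = cepstrum_2d(block).ravel()[:32]   # e.g. keep low-quefrency coefficients
```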

  3. XbD Video 3, The SEEing process of qualitative data analysis

    DEFF Research Database (Denmark)

    2013-01-01

    This is the third video in the Experience-based Designing series. It presents a live classroom demonstration of a nine step qualitative data analysis process called SEEing: The process is useful for uncovering or discovering deeper layers of 'meaning' and meaning structures in an experience...

  4. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing of video medical records. Two crucial issues to be decided on are a video compression format and a video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats being compared are DICOM as a format representative for medical applications, both MPEGs, and several new formats targeted for IP networking. The comparison includes transmission rates supported, compression rates, and also options for controlling a compression process. The second part of the paper presents the ISDN technique as a solution for provisioning of tele-consultation services between medical parties that are accessing resources uploaded to a digital video library. There are several backbone techniques (like corporate LANs/WANs, leased lines or even radio/satellite links) available; however, the availability of network resources for hospitals was the prevailing choice criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes possibilities of both wireless and cellular networks' data transmission services to be used as a medical video server transport layer. For the cellular network-based solution two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  5. Action video games do not improve the speed of information processing in simple perceptual tasks.

    Science.gov (United States)

    van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan

    2014-10-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.

  6. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by untrained persons do not contain the proper information that is required by a physician. As compared to standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm to evaluate the ultrasound video based on the content present. This will guide the semi-skilled person to acquire representative data from the patient. Advancements in smartphone technology allow us to perform advanced medical image processing on a smartphone. In this paper we have developed an Application (APP) for a smartphone which can automatically detect the valid frames (which consist of clear organ visibility) in an ultrasound video, ignore the invalid frames (which consist of no organ visibility), and produce a compressed, smaller video. This is done by extracting the GIST features from the Region of Interest (ROI) of the frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
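
    The final classification step (GIST features of the ROI fed to an SVM with a quadratic kernel) can be sketched with scikit-learn; the feature vectors here are random placeholders for real GIST descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 512))          # stand-in for 512-D GIST descriptors of ROIs
y_train = rng.integers(0, 2, size=200)         # 1 = valid frame (organ visible), 0 = invalid

clf = SVC(kernel="poly", degree=2, coef0=1.0)  # polynomial kernel of degree 2 ("quadratic")
clf.fit(X_train, y_train)

X_new = rng.normal(size=(5, 512))              # descriptors of incoming ultrasound frames
print(clf.predict(X_new))                      # keep the frames predicted as valid
```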

  7. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain through Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.

  8. Application of MPEG-7 descriptors for content-based indexing of sports videos

    Science.gov (United States)

    Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer

    2003-06-01

    The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time consuming, an automatic solution is appreciated. We review recent approaches to content-based indexing and annotation of videos for different kind of sports, and present our application for the automatic annotation of equestrian sports videos. Thereby, we especially concentrate on MPEG-7 based feature extraction and content description. We apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Having determined single shot positions as well as the visual highlights, the information is jointly stored together with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end in order to provide content-based access to the video stream, but further content-based queries and navigation on a video-on-demand streaming server.

  9. Mobile, portable lightweight wireless video recording solutions for homeland security, defense, and law enforcement applications

    Science.gov (United States)

    Sandy, Matt; Goldburt, Tim; Carapezza, Edward M.

    2015-05-01

    It is desirable for executive officers of law enforcement agencies and other executive officers in homeland security and defense, as well as first responders, to have some basic information about the latest trend on mobile, portable lightweight wireless video recording solutions available on the market. This paper reviews and discusses a number of studies on the use and effectiveness of wireless video recording solutions. It provides insights into the features of wearable video recording devices that offer excellent applications for the category of security agencies listed in this paper. It also provides answers to key questions such as: how to determine the type of video recording solutions most suitable for the needs of your agency, the essential features to look for when selecting a device for your video needs, and the privacy issues involved with wearable video recording devices.

  10. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  11. Industrial-Strength Streaming Video.

    Science.gov (United States)

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  12. Problem with multi-video format M-learning applications

    CSIR Research Space (South Africa)

    Adeyeye, MO

    2014-01-01

    Full Text Available in conjunction with the technical aspects of video display in browsers, when varying media formats are used. The <video> tag used in this work renders videos from two sources with different MIME types. Feeds from the video sources, namely YouTube and UCT...

  13. Gaming to see: action video gaming is associated with enhanced processing of masked stimuli.

    Science.gov (United States)

    Pohl, Carsten; Kunde, Wilfried; Ganz, Thomas; Conzelmann, Annette; Pauli, Paul; Kiesel, Andrea

    2014-01-01

    Recent research revealed that action video game players outperform non-players in a wide range of attentional, perceptual and cognitive tasks. Here we tested if expertise in action video games is related to differences regarding the potential of shortly presented stimuli to bias behavior. In a response priming paradigm, participants classified four animal pictures functioning as targets as being smaller or larger than a reference frame. Before each target, one of the same four animal pictures was presented as a masked prime to influence participants' responses in a congruent or incongruent way. Masked primes induced congruence effects, that is, faster responses for congruent compared to incongruent conditions, indicating processing of hardly visible primes. Results also suggested that action video game players showed a larger congruence effect than non-players for 20 ms primes, whereas there was no group difference for 60 ms primes. In addition, there was a tendency for action video game players to detect masked primes for some prime durations better than non-players. Thus, action video game expertise may be accompanied by faster and more efficient processing of shortly presented visual stimuli.

  14. Gaming to see: Action Video Gaming is associated with enhanced processing of masked stimuli

    Directory of Open Access Journals (Sweden)

    Carsten ePohl

    2014-02-01

    Full Text Available Recent research revealed that action video game players outperform non-players in a wide range of attentional, perceptual and cognitive tasks. Here we tested if expertise in action video games is related to differences regarding the potential of shortly presented stimuli to bias behaviour. In a response priming paradigm, participants classified four animal pictures functioning as targets as being smaller or larger than a reference frame. Before each target, one of the same four animal pictures was presented as a masked prime to influence participants’ responses in a congruent or incongruent way. Masked primes induced congruence effects, that is, faster responses for congruent compared to incongruent conditions, indicating processing of hardly visible primes. Results also suggested that action video game players showed a larger congruence effect than non-players for 20 ms primes, whereas there was no group difference for 60 ms primes. In addition, there was a tendency for action video game players to detect masked primes for some prime durations better than non-players. Thus, action video game expertise may be accompanied by faster and more efficient processing of shortly presented visual stimuli.

  15. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
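
    A minimal example of the dense optical-flow step mentioned above, using OpenCV's Farnebäck estimator on two consecutive frames (generic usage, not the JET-specific or MPEG-compressed-domain implementation; the file names are placeholders):

```python
import cv2
import numpy as np

prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical consecutive frames
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (pixels/frame):", float(np.mean(magnitude)))
```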

  16. Overview of image processing tools to extract physical information from JET videos

    International Nuclear Information System (INIS)

    Craciunescu, T; Tiseanu, I; Zoita, V; Murari, A; Gelfusa, M

    2014-01-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  17. Design and develop a video conferencing framework for real-time telemedicine applications using secure group-based communication architecture.

    Science.gov (United States)

    Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil

    2014-10-01

    One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption algorithm insignificantly increases the video conferencing computation time.
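
    For the symmetric part of such a design, encrypting a frame payload before it is packetized might look like the following sketch (AES-GCM via the Python cryptography package is used purely for illustration; the paper's exact RSA/AES key exchange and RTP framing are not reproduced):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)        # session key; in practice exchanged via RSA
aead = AESGCM(key)

frame_payload = b"compressed video frame bytes"  # placeholder for an encoded frame
nonce = os.urandom(12)                           # must be unique per packet
ciphertext = aead.encrypt(nonce, frame_payload, None)

# Receiver side: the nonce travels with the packet, the key does not.
assert aead.decrypt(nonce, ciphertext, None) == frame_payload
```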

  18. Non-intrusive telemetry applications in the oilsands: from visible light and x-ray video to acoustic imaging and spectroscopy

    Science.gov (United States)

    Shaw, John M.

    2013-06-01

    While the production, transport and refining of oils from the oilsands of Alberta, and comparable resources elsewhere is performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases, self-organizing constituents at the microscale (liquid crystals) and the nano scale. There are no quantitative measures available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process control or process monitoring applications. Operation examples include automated large object and poor quality ore during mining, and monitoring the thickness and location of oil water interfacial zones within separation vessels. These applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate organic phases present within a vessel, and to detect individual phase volumes, densities and elemental compositions. This is an enabling technology that provides phase equilibrium and phase composition data for production and refining process development, and fluid property myth debunking. A high-resolution two-dimensional acoustic mapping technique now at the proof of concept stage is expected to provide simultaneous fluid flow and fluid composition data within porous inorganic media. Again this is an enabling technology targeting visualization of diverse oil production process fundamentals at the pore scale. Far infrared spectroscopy coupled with detailed quantum mechanical calculations, may provide characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process

  19. 3-D discrete shearlet transform and video processing.

    Science.gov (United States)

    Negi, Pooran Singh; Labate, Demetrio

    2012-06-01

    In this paper, we introduce a digital implementation of the 3-D shearlet transform and illustrate its application to problems of video denoising and enhancement. The shearlet representation is a multiscale pyramid of well-localized waveforms defined at various locations and orientations, which was introduced to overcome the limitations of traditional multiscale systems in dealing with multidimensional data. While the shearlet approach shares the general philosophy of curvelets and surfacelets, it is based on a very different mathematical framework, which is derived from the theory of affine systems and uses shearing matrices rather than rotations. This allows a natural transition from the continuous setting to the digital setting and a more flexible mathematical structure. The 3-D digital shearlet transform algorithm presented in this paper consists in a cascade of a multiscale decomposition and a directional filtering stage. The filters employed in this decomposition are implemented as finite-length filters, and this ensures that the transform is local and numerically efficient. To illustrate its performance, the 3-D discrete shearlet transform is applied to problems of video denoising and enhancement, and compared against other state-of-the-art multiscale techniques, including curvelets and surfacelets.
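
    For orientation, the continuous 2-D shearlet system that such a discrete transform samples is usually written (standard textbook form; the 3-D case uses analogous block matrices) as

    $$\psi_{a,s,t}(x) = a^{-3/4}\,\psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right), \qquad A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \quad S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},$$

    where $a > 0$ is the scale, $s$ the shear (orientation) parameter and $t$ the translation.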

  20. Real-time video quality monitoring

    Science.gov (United States)

    Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey

    2011-12-01

    The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure the video quality. However, this model was originally designed as an offline quality planning tool. It cannot be directly used for quality monitoring since the above three input parameters are not readily available within a network or at the decoder. There is also considerable room for improving the performance of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that it can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an example emerging application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
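
    A rough sketch of how the three G.1070 inputs could be estimated at a monitoring point from per-packet records (a generic illustration with hypothetical packet fields, not the estimation scheme proposed in the article):

```python
def estimate_g1070_inputs(packets, window_seconds):
    """packets: list of dicts with 'bytes', 'seq', 'is_new_frame' for one measurement window."""
    total_bits = 8 * sum(p["bytes"] for p in packets)
    bitrate_kbps = total_bits / window_seconds / 1000.0

    frames = sum(1 for p in packets if p["is_new_frame"])
    frame_rate = frames / window_seconds

    seqs = sorted(p["seq"] for p in packets)
    expected = seqs[-1] - seqs[0] + 1 if seqs else 0
    loss_rate = (expected - len(seqs)) / expected if expected else 0.0
    return bitrate_kbps, frame_rate, loss_rate

# Synthetic one-second window: 100 packets with sequence number 37 lost.
pkts = [{"bytes": 1200, "seq": i, "is_new_frame": (i % 4 == 0)} for i in range(100) if i != 37]
print(estimate_g1070_inputs(pkts, window_seconds=1.0))
```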

  1. Linear and non-linear video and TV applications using IPv6 and IPv6 multicast

    CERN Document Server

    Minoli, Daniel

    2012-01-01

    Provides options for implementing IPv6 and IPv6 multicast in service provider networks New technologies, viewing paradigms, and content distribution approaches are taking the TV/video services industry by storm. Linear and Nonlinear Video and TV Applications: Using IPv6 and IPv6 Multicast identifies five emerging trends in next-generation delivery of entertainment-quality video. These trends are observable and can be capitalized upon by progressive service providers, telcos, cable operators, and ISPs. This comprehensive guide explores these evolving directions in the TV/v

  2. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  3. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, issues of their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges in video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework, which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the various current algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  4. Video interpretability rating scale under network impairments

    Science.gov (United States)

    Kreitmair, Thomas; Coman, Cristian

    2014-01-01

    This paper presents the results of a study of the impact of network transmission channel parameters on the quality of streaming video data. A common practice for estimating the interpretability of video information is to use the Motion Imagery Quality Equation (MIQE). MIQE combines a few technical features of video images (such as: ground sampling distance, relative edge response, modulation transfer function, gain and signal-to-noise ratio) to estimate the interpretability level. One observation of this study is that the MIQE does not fully account for video-specific parameters such as spatial and temporal encoding, which are relevant to appreciating degradations caused by the streaming process. In streaming applications the main artifacts impacting the interpretability level are related to distortions in the image caused by lossy decompression of video data (due to loss of information and in some cases lossy re-encoding by the streaming server). One parameter in MIQE that is influenced by network transmission errors is the Relative Edge Response (RER). The automated calculation of RER includes the selection of the best edge in the frame, which in case of network errors may be incorrectly associated with a blocked region (e.g. low resolution areas caused by loss of information). A solution is discussed in this document to address this inconsistency by removing corrupted regions from the image analysis process. Furthermore, a recommendation is made on how to account for network impairments in the MIQE, such that a more realistic interpretability level is estimated in case of streaming applications.
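    A hedged sketch of the kind of correction discussed above: masking out flat, likely corrupted blocks before selecting the best edge used for the Relative Edge Response. The block size, variance threshold and gradient-based sharpness score are illustrative assumptions, not the MIQE implementation used in the study.

```python
# Sketch: exclude low-variance (likely corrupted) blocks from the frame before
# picking the "best edge" used to compute the Relative Edge Response (RER).
import numpy as np

def best_edge_block(frame, block=16, var_threshold=25.0):
    """frame: 2-D grayscale array. Returns (row, col) of the sharpest block
    among blocks whose local variance suggests they are not corrupted."""
    h, w = frame.shape
    best, best_score = None, -1.0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = frame[r:r + block, c:c + block].astype(np.float64)
            if tile.var() < var_threshold:
                continue  # flat/blocky region, likely a decoding artefact
            gy, gx = np.gradient(tile)
            score = np.hypot(gx, gy).mean()  # simple sharpness proxy
            if score > best_score:
                best, best_score = (r, c), score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 255, (128, 128)).astype(np.float64)
    frame[32:64, 32:64] = 128.0  # simulate a corrupted, flat block
    print(best_edge_block(frame))
```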

  5. Bridging Inter-flow and Intra-flow Network Coding for Video Applications

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2013-01-01

    transmission approach to decide how much and when to send redundancy in the network, and a minimalistic feedback mechanism to guarantee delivery of generations of the different flows. Given the delay constraints of video applications, we proposed a simple yet effective coding mechanism, Block Coding On The Fly...

  6. Marketing Strategy Implementation Process in the Creative Industry of Video Games

    Directory of Open Access Journals (Sweden)

    Maryangela Drumond de Abreu Negrão

    2013-06-01

    Full Text Available This article contributes to the understanding of the marketing strategy process by presenting the organizational and human factors that support the processes of implementation, identified in a qualitative study conducted in the creative industry of video game development. The research, a case study of four video and computer game companies, was based on the Sashittal and Jassawalla (2001) marketing strategy model and on the concepts of creative behavior and innovation in organizations proposed by Amabile (1997). The analysis suggests that marketing strategy implementation is anchored in innovative administrative processes, creative skills and the adoption of modern control technologies. It was observed that a vision that associates production, process, market orientation and the delivery of added value is essential for the implementation of strategies in creative and innovative organizational structures. The research contributes to marketing strategy implementation studies in creative and innovative environments under the approach of smaller organizations. It also contributes to marketing strategy theory by suggesting that analysis of the process, control and management skills be included as categories in the theoretical model in future investigations.

  7. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  8. Turbulent structure of concentration plumes through application of video imaging

    Energy Technology Data Exchange (ETDEWEB)

    Dabberdt, W.F.; Martin, C. [National Center for Atmospheric Research, Boulder, CO (United States); Hoydysh, W.G.; Holynskyj, O. [Environmental Science & Services Corp., Long Island City, NY (United States)

    1994-12-31

    Turbulent flows and dispersion in the presence of building wakes and terrain-induced local circulations are particularly difficult to simulate with numerical models or measure with conventional fluid modeling and ambient measurement techniques. The problem stems from the complexity of the kinematics and the difficulty in making representative concentration measurements. New laboratory video imaging techniques are able to overcome many of these limitations and are being applied to study a range of difficult problems. Here the authors apply "tomographic" video imaging techniques to the study of the turbulent structure of an ideal elevated plume and the relationship of short-period peak concentrations to long-period average values. A companion paper extends application of the technique to characterization of turbulent plume-concentration fields in the wake of a complex building configuration.

  9. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users. T...

  10. Efficient Execution of Video Applications on Heterogeneous Multi- and Many-Core Processors

    NARCIS (Netherlands)

    Pereira de Azevedo Filho, A.

    2011-01-01

    In this dissertation we present methodologies and evaluations aiming at increasing the efficiency of video coding applications for heterogeneous many-core processors composed of SIMD-only, scratchpad memory based cores. Our contributions are spread in three different fronts: thread-level parallelism

  11. Hierarchical event selection for video storyboards with a case study on snooker video visualization.

    Science.gov (United States)

    Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min

    2011-12-01

    Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE

  12. Improved people detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Carvalho, Paulo Victor R., E-mail: calexandre@ien.gov.br, E-mail: mol@ien.gov.br, E-mail: paulov@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.br, E-mail: eduardo@smt.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Eletrica; Waintraub, Fabio, E-mail: fabiowaintraub@hotmail.com [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola Politecnica. Departamento de Engenharia Eletronica e de Computacao

    2013-07-01

    This work describes improvements in a surveillance system for safety purposes in nuclear plants. The objective is to track people online in video, in order to estimate the dose received by personnel, during working tasks executed in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a nuclear research reactor, Argonauta. Cameras have been installed within Argonauta room, supplying the data needed. Video processing methods were combined for detecting and tracking people in video. More specifically, segmentation, performed by background subtraction, was combined with a tracking method based on color distribution. The use of both methods improved the overall results. An alternative approach was also evaluated, by means of blind source signal separation. Results are commented, along with perspectives. (author)
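    One plausible realization of the combination described above (background subtraction for detection, colour-distribution tracking for following the detected person) is sketched below with OpenCV building blocks; the input file name, blob-size threshold and histogram settings are illustrative assumptions rather than the authors' exact pipeline.

```python
# Sketch: background subtraction (detection) combined with colour-distribution
# tracking (CamShift), in the spirit of the pipeline described above.
import cv2
import numpy as np

cap = cv2.VideoCapture("argonauta_room.avi")   # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
track_win, roi_hist = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.medianBlur(subtractor.apply(frame), 5)
    if track_win is None:
        # detection stage: the largest foreground blob initialises the tracker
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            if w * h > 500:
                track_win = (x, y, w, h)
                hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
                roi_hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
                cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # tracking stage: follow the colour distribution of the detected person
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_win = cv2.CamShift(backproj, track_win, term)
        x, y, w, h = track_win
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
```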

  13. Improved people detection in nuclear plants by video processing for safety purpose

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Carvalho, Paulo Victor R.; Seixas, Jose M.; Silva, Eduardo Antonio B.; Waintraub, Fabio

    2013-01-01

    This work describes improvements in a surveillance system for safety purposes in nuclear plants. The objective is to track people online in video, in order to estimate the dose received by personnel, during working tasks executed in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a nuclear research reactor, Argonauta. Cameras have been installed within Argonauta room, supplying the data needed. Video processing methods were combined for detecting and tracking people in video. More specifically, segmentation, performed by background subtraction, was combined with a tracking method based on color distribution. The use of both methods improved the overall results. An alternative approach was also evaluated, by means of blind source signal separation. Results are commented, along with perspectives. (author)

  14. 360-degree interactive video application for Cultural Heritage Education

    OpenAIRE

    Argyriou, L.; Economou, D.; Bouki, V.

    2017-01-01

    There is a growing interest nowadays of using immersive technologies to promote Cultural Heritage (CH), engage and educate visitors, tourists and citizens. Such examples refer mainly to the use of Virtual Reality (VR) technology or focus on the enhancement of the real world by superimposing digital artefacts, so called Augmented Reality (AR) applications. A new medium that has been introduced lately as an innovative form of experiencing immersion is the 360-degree video, imposing further rese...

  15. Visual hashing of digital video : applications and techniques

    NARCIS (Netherlands)

    Oostveen, J.; Kalker, A.A.C.M.; Haitsma, J.A.; Tescher, A.G.

    2001-01-01

    This paper presents the concept of robust video hashing as a tool for video identification. We present considerations and a technique for (i) extracting essential perceptual features from a moving image sequence and (ii) for identifying any sufficiently long unknown video segment by efficiently

  16. Learning Process and Learning Outcomes of Video Podcasts Including the Instructor and PPT Slides: A Chinese Case

    Science.gov (United States)

    Pi, Zhongling; Hong, Jianzhong

    2016-01-01

    Video podcasts have become one of the fastest developing trends in learning and teaching. The study explored the effect of the presenting mode of educational video podcasts on the learning process and learning outcomes. Prior to viewing a video podcast, the 94 Chinese undergraduates participating in the study completed a demographic questionnaire…

  17. Application aware approach to compression and transmission of H.264 encoded video for automated and centralized transportation surveillance.

    Science.gov (United States)

    2012-10-01

    In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channe...

  18. Learning to diagnose using patient video case in paediatrics: perceptive and cognitive processes

    OpenAIRE

    Balslev, Thomas

    2012-01-01

    Thomas Balslev, a paediatric neurologist and educational researcher, defended his thesis on 24 November 2011. The thesis included five published papers, and investigated learning with authentic, brief patient video cases. When a video case was analysed in a small group, learning processes and the sharing of knowledge were intensely stimulated. Small group discussion and subsequent listening to an expert’s think-aloud were particularly effective approaches to enhance diagnostic accuracy among non-ex...

  19. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
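    A much-simplified stand-in for the CIV data model is sketched below: the common frame is taken as the temporal median of the segment and the innovative frames as soft-thresholded residuals. The paper instead estimates both components jointly with compressed-sensing techniques; the threshold and the synthetic segment are illustrative assumptions.

```python
# Simplified stand-in for the common/innovative decomposition: the common frame
# is the temporal median of the segment and the innovative frames are the
# sparse (soft-thresholded) residuals. This only illustrates the data model.
import numpy as np

def civ_decompose(segment, threshold=15.0):
    """segment: array (frames, height, width). Returns (common, innovative)."""
    common = np.median(segment, axis=0)
    residual = segment - common[None, ...]
    innovative = np.sign(residual) * np.maximum(np.abs(residual) - threshold, 0.0)
    return common, innovative

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    background = rng.uniform(0, 255, (72, 96))
    segment = np.repeat(background[None, ...], 20, axis=0)
    segment[5:15, 30:40, 40:50] += 120.0          # a moving "object"
    common, innovative = civ_decompose(segment)
    print(common.shape, float(np.mean(innovative != 0)))
```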

  20. Application results for an augmented video tracker

    Science.gov (United States)

    Pierce, Bill

    1991-08-01

    The Relay Mirror Experiment (RME) is a research program to determine the pointing accuracy and stability levels achieved when a laser beam is reflected by the RME satellite from one ground station to another. This paper reports the results of using a video tracker augmented with a quad cell signal to improve the RME ground station tracking system performance. The video tracker controls a mirror to acquire the RME satellite, and provides a robust low bandwidth tracking loop to remove line of sight (LOS) jitter. The high-passed, high-gain quad cell signal is added to the low bandwidth, low-gain video tracker signal to increase the effective tracking loop bandwidth, and significantly improves LOS disturbance rejection. The quad cell augmented video tracking system is analyzed, and the math model for the tracker is developed. A MATLAB model is then developed from this, and performance as a function of bandwidth and disturbances is given. Improvements in performance due to the addition of the video tracker and the augmentation with the quad cell are provided. Actual satellite test results are then presented and compared with the simulated results.
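    The augmentation described above, adding a high-passed, high-gain quad cell signal to a low-bandwidth video tracker signal, can be sketched as a simple discrete-time blend. The sample rate, cutoff frequency and gains below are illustrative assumptions, not the RME ground station parameters.

```python
# Sketch: blending a low-bandwidth video-tracker LOS estimate with a
# high-passed, high-gain quad-cell signal, as described above.
import numpy as np

def augmented_los(video_track, quad_cell, fs=1000.0, fc=5.0, quad_gain=4.0):
    """video_track, quad_cell: 1-D LOS error samples of equal length."""
    dt = 1.0 / fs
    alpha = (2.0 * np.pi * fc * dt) / (1.0 + 2.0 * np.pi * fc * dt)  # one-pole low-pass
    low = 0.0
    out = np.empty(len(quad_cell), dtype=np.float64)
    for i, q in enumerate(quad_cell):
        low += alpha * (q - low)       # low-pass of the quad-cell signal
        high_passed = q - low          # keep only the fast jitter component
        out[i] = video_track[i] + quad_gain * high_passed
    return out

if __name__ == "__main__":
    t = np.arange(0.0, 1.0, 1e-3)
    drift = 0.5 * np.sin(2 * np.pi * 0.5 * t)      # slow pointing drift
    jitter = 0.05 * np.sin(2 * np.pi * 40 * t)     # fast LOS disturbance
    video_track = drift                            # tracker sees the drift only
    quad_cell = drift + jitter                     # quad cell sees drift plus jitter
    print(np.round(augmented_los(video_track, quad_cell)[:5], 4))
```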

  1. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are challenging due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern as devices integrate complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines the decoder control with the decoded data, by classifying the data into different partition profiles according to their characteristics. Third, it introduces utility theoretical analysis during the resource allocation process, so as to maximize the resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  2. Big Data Analytics: Challenges And Applications For Text, Audio, Video, And Social Media Data

    OpenAIRE

    Jai Prakash Verma; Smita Agrawal; Bankim Patel; Atul Patel

    2016-01-01

    All types of automated machine systems are generating large amounts of data in different forms, such as statistical, text, audio, video, sensor, and bio-metric data, which has given rise to the term Big Data. In this paper we discuss the issues, challenges, and applications of these types of Big Data with consideration of the big data dimensions. Here we discuss social media data analytics, content-based analytics, text data analytics, and audio and video data analytics, their issues, and expected applica...

  3. Using the Periscope Live Video-Streaming Application for Global Pathology Education: A Brief Introduction.

    Science.gov (United States)

    Fuller, Maren Y; Mukhopadhyay, Sanjay; Gardner, Jerad M

    2016-07-21

    Periscope is a live video-streaming smartphone application (app) that allows any individual with a smartphone to broadcast live video simultaneously to multiple smartphone users around the world. The aim of this review is to describe the potential of this emerging technology for global pathology education. To our knowledge, since the launch of the Periscope app (2015), only a handful of educational presentations by pathologists have been streamed as live video via Periscope. This review includes links to these initial attempts, a step-by-step guide for those interested in using the app for pathology education, and a summary of the pros and cons, including ethical/legal issues. We hope that pathologists will appreciate the potential of Periscope for sharing their knowledge, expertise, and research with a live (and potentially large) audience without the barriers associated with traditional video equipment and standard classroom/conference settings.

  4. Application of Video Recognition Technology in Landslide Monitoring System

    Directory of Open Access Journals (Sweden)

    Qingjia Meng

    2018-01-01

    Full Text Available Video recognition technology is applied to a landslide emergency remote monitoring system, which identifies the trajectories of the landslide. The geological disaster monitoring system combines the analysis of landslide monitoring data with video recognition technology. The landslide video monitoring system transmits video image information, time stamps, network signal strength and power supply status to the server over a 4G network. The data are comprehensively analysed through the remote man-machine interface; when a threshold is reached, or under manual control, the front-end video surveillance system is triggered. The system performs intelligent identification of the target landslide in the video. The algorithm is embedded in the intelligent analysis module, where video frames are identified, detected, analysed, filtered and morphologically processed. An algorithm based on artificial intelligence and pattern recognition is used to mark the target landslide on the video screen and to confirm whether the landslide is behaving normally. The landslide video monitoring system enables remote monitoring and control from the mobile side, and provides a quick and easy monitoring technology.
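    A minimal sketch of the kind of frame-level processing listed above (detection, filtering and morphological treatment of video frames to flag slope movement) is given below; the file name, difference threshold, kernel size and alarm fraction are illustrative assumptions, not the system's actual algorithm.

```python
# Sketch: flagging movement on a monitored slope by frame differencing,
# filtering and morphological processing, in the spirit of the pipeline above.
import cv2

cap = cv2.VideoCapture("landslide_site.mp4")        # hypothetical 4G feed dump
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if prev is not None:
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
        # morphological opening/closing removes speckle and fills small holes
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        moving_fraction = cv2.countNonZero(mask) / mask.size
        if moving_fraction > 0.02:                  # crude alarm threshold
            print("possible slope movement, fraction =", round(moving_fraction, 3))
    prev = gray
cap.release()
```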

  5. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  6. Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks.

    Directory of Open Access Journals (Sweden)

    Zobeida Jezabel Guzman-Zavaleta

    Full Text Available Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness. Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes
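    A generic baseline in the spirit of the binary global fingerprints described above is sketched below: each frame is reduced to a coarse grid and the signs of neighbouring block differences give the bits, with matching by Hamming distance. The grid size and the synthetic test data are illustrative assumptions, not the combined global/local fingerprint proposed in the paper.

```python
# Minimal sketch of a binary global video fingerprint with Hamming matching.
import numpy as np

def frame_bits(frame, grid=8):
    h, w = frame.shape
    small = frame[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (np.diff(small, axis=1) > 0).ravel()      # signs of block differences

def video_fingerprint(frames, grid=8):
    return np.concatenate([frame_bits(f, grid) for f in frames])

def hamming(a, b):
    n = min(len(a), len(b))
    return np.count_nonzero(a[:n] != b[:n]) / n

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    original = rng.uniform(0, 255, (12, 120, 160))
    copy = np.clip(original * 0.8 + 20 + rng.normal(0, 5, original.shape), 0, 255)
    unrelated = rng.uniform(0, 255, (12, 120, 160))
    fp = video_fingerprint(original)
    print("copy distance     :", round(hamming(fp, video_fingerprint(copy)), 3))
    print("unrelated distance:", round(hamming(fp, video_fingerprint(unrelated)), 3))
```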

  7. Digital Light Processing update: status and future applications

    Science.gov (United States)

    Hornbeck, Larry J.

    1999-05-01

    Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high-process-speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.

  8. Relacije umetnosti i video igara / Relations of Art and Video Games

    OpenAIRE

    Manojlo Maravić

    2012-01-01

    When discussing the art of video games, three different contexts need to be considered: the 'high' art (video games and the art); commercial video games (video games as the art) and the fan art. Video games are a legitimate artistic medium subject to modifications and recontextualisations in the process of creating a specific experience of the player/user/audience and political action by referring to particular social problems. They represent a high technological medium that increases, with p...

  9. Secured web-based video repository for multicenter studies.

    Science.gov (United States)

    Yan, Ling; Hicks, Matt; Winslow, Korey; Comella, Cynthia; Ludlow, Christy; Jinnah, H A; Rosen, Ami R; Wright, Laura; Galpern, Wendy R; Perlmutter, Joel S

    2015-04-01

    We developed a novel secured web-based dystonia video repository for the Dystonia Coalition, part of the Rare Disease Clinical Research network funded by the Office of Rare Diseases Research and the National Institute of Neurological Disorders and Stroke. A critical component of phenotypic data collection for all projects of the Dystonia Coalition includes a standardized video of each participant. We now describe our method for collecting, serving and securing these videos that is widely applicable to other studies. Each recruiting site uploads standardized videos to a centralized secured server for processing to permit website posting. The streaming technology used to view the videos from the website does not allow downloading of video files. With appropriate institutional review board approval and agreement with the hosting institution, users can search and view selected videos on the website using customizable, permissions-based access that maintains security yet facilitates research and quality control. This approach provides a convenient platform for researchers across institutions to evaluate and analyze shared video data. We have applied this methodology for quality control, confirmation of diagnoses, validation of rating scales, and implementation of new research projects. We believe our system can be a model for similar projects that require access to common video resources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    Science.gov (United States)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
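    One concrete optical-flow realization, with the window size playing the role of the smoothness-related parameter whose tuning the paper's global optimization framework addresses, is sketched below using OpenCV's Farnebäck dense flow. The synthetic frames and parameter values are illustrative assumptions.

```python
# Sketch: dense optical flow between two consecutive (here synthetic) frames.
import cv2
import numpy as np

# synthetic stand-ins for two consecutive ultrasound frames (replace with real
# frames, e.g. cv2.imread("carotid_frame_000.png", cv2.IMREAD_GRAYSCALE))
rng = np.random.default_rng(0)
prev = cv2.GaussianBlur(rng.integers(0, 255, (256, 256), dtype=np.uint8), (9, 9), 3)
curr = np.roll(prev, shift=(2, 1), axis=(0, 1))     # simulated tissue motion

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None, 0.5, 3,   # pyramid scale and number of levels
    21,                         # window size: larger -> smoother, less detailed flow
    5, 7, 1.5, 0)               # iterations, poly_n, poly_sigma, flags

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("median motion (pixels/frame):", round(float(np.median(magnitude)), 2))
```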

  11. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  12. Developing web application for video chat

    OpenAIRE

    ŠTEBE, NEJC

    2016-01-01

    The goal of this thesis is to develop a web application for video chat and file transfer. Users gather in rooms in which they can establish video streaming, exchange text messages or send files. As part of the thesis, a web application was developed that consists of a part running in the client's browser and a part running on the server. The implementation uses the latest web development technologies and the interfaces of the open-source projec...

  13. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... findings: 1) They are based on a collaborative approach. 2) The sketches act as a mean to externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and shows how the factors relate to steps, where...... the participants: shape, record, review and edit their work, leading the participants to new insights about their work....

  14. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Full Text Available Video surveillance systems are based on video and image processing research areas in the scope of computer science. Video processing covers various methods which are used to track the changes in an existing scene for a specific video. Nowadays, video processing is one of the important areas of computer science. Two-dimensional videos are used to apply various segmentation and object detection and tracking processes which exist in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking. In the literature, there exist similar methods for this issue. In this research study, it is proposed to provide a more efficient method which is an addition to existing methods. Based on a model produced using adaptive background subtraction (ABS), an object detection and tracking system’s software is implemented in a computer environment. The performance of the developed system is tested via experimental works with related video datasets. The experimental results and discussion are given in the study.

  15. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  16. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
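    As a flavour of the algorithms covered, a plain software reference of the classic three step search mentioned above is sketched below (the book's focus is the corresponding VLSI architectures); the block size, initial step and synthetic test frames are illustrative assumptions.

```python
# Reference sketch of the three step search (TSS): the search step halves three
# times around the best candidate, using the sum of absolute differences (SAD).
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(ref, cur, br, bc, block=16, step=4):
    """Motion vector (dy, dx) of the current-frame block at (br, bc) w.r.t. ref."""
    h, w = ref.shape
    target = cur[br:br + block, bc:bc + block]
    best_dy = best_dx = 0
    best_cost = sad(ref[br:br + block, bc:bc + block], target)
    while step >= 1:
        center_dy, center_dx = best_dy, best_dx
        for dy in (-step, 0, step):            # 9 candidate positions per step
            for dx in (-step, 0, step):
                y, x = br + center_dy + dy, bc + center_dx + dx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cost = sad(ref[y:y + block, x:x + block], target)
                    if cost < best_cost:
                        best_cost = cost
                        best_dy, best_dx = center_dy + dy, center_dx + dx
        step //= 2                              # halve the step around the best point
    return best_dy, best_dx

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    ref = (np.sin(yy / 5.0) * np.cos(xx / 7.0) * 120 + 128).astype(np.uint8)
    cur = np.roll(ref, shift=(3, -2), axis=(0, 1))    # simulated global motion
    print(three_step_search(ref, cur, 16, 16))        # expected: close to (-3, 2)
```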

  17. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in a real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the possible security threats that exist for IP-based applications may also threaten the video surveillance application. As a result, cybercrime, illegal video access, video mishandling and so on may increase. Hence, in this paper an intelligent model is proposed to provide security for the video surveillance system, ensuring safety and providing secured access to video.

  18. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  19. APPLICABILITY ANALYSIS OF THE PHASE CORRELATION ALGORITHM FOR STABILIZATION OF VIDEO FRAMES SEQUENCES FOR CAPILLARY BLOOD FLOW

    Directory of Open Access Journals (Sweden)

    K. A. Karimov

    2016-05-01

    Full Text Available Videocapillaroscopy is a convenient and non-invasive method for recovering blood flow parameters in the capillaries. Capillary positions can vary across recorded video sequences due to the way capillary blood flow is registered. A stabilization algorithm for capillary blood flow video based on phase correlation is proposed and studied. This algorithm is compared to known video frame stabilization algorithms based on full-frame superposition and on key points. Programs based on the discussed algorithms are compared by processing experimentally recorded video sequences of human capillaries and computer-simulated sequences of video frames with a specified offset. The full-frame superposition algorithm provides high stabilization quality; however, the program based on this algorithm requires significant computational resources. The software implementation of the key-point-based algorithm is characterized by good performance, but provides low stabilization quality for capillary blood flow video sequences. The algorithm based on the phase correlation method provides high stabilization quality, and its program realization requires minimal computational resources. It is shown that the phase correlation algorithm is the most useful for stabilization of capillary blood flow video sequences. The obtained findings can be used in software for biomedical diagnostics.
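    The core of the phase correlation approach, locating the peak of the inverse FFT of the normalized cross-power spectrum of two frames, can be sketched as follows; windowing and sub-pixel refinement are omitted, and the synthetic frames are illustrative assumptions.

```python
# Minimal sketch of phase-correlation frame registration: the normalized
# cross-power spectrum of two frames has an inverse FFT that peaks at the
# translation between them.
import numpy as np

def phase_correlation_shift(ref, frame):
    """Return the integer (dy, dx) to apply (e.g. with np.roll) to `frame`
    so that it aligns with `ref`."""
    F1 = np.fft.fft2(ref.astype(np.float64))
    F2 = np.fft.fft2(frame.astype(np.float64))
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    ref = rng.uniform(0, 255, (128, 128))
    shifted = np.roll(ref, shift=(6, -9), axis=(0, 1))  # simulated frame drift
    print(phase_correlation_shift(ref, shifted))        # -> (-6, 9)
```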

  20. Design and Implementation of Video Shot Detection on Field Programmable Gate Arrays

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-09-01

    Full Text Available Video has become an interactive medium of communication in everyday life. The sheer volume of video makes it extremely difficult to browse through and find the required data. Hence extraction of key frames from the video, which represent the abstract of the entire video, becomes necessary. The aim of video shot detection is to find the position of the shot boundaries, so that key frames can be selected from each shot for subsequent processing such as video summarization, indexing etc. For most surveillance applications, such as video summarization, face recognition etc., hardware (real-time) implementation of these algorithms becomes necessary. Here in this paper we present the architecture for simultaneous accessing of consecutive frames, which are then used for the implementation of various Video Shot Detection algorithms. We also present the real-time implementation of three video shot detection algorithms using the above mentioned architecture on FPGAs (Field Programmable Gate Arrays).
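    A plain software reference for one common class of shot detection algorithms (histogram difference between consecutive frames) is sketched below; the paper's contribution is the FPGA architecture, and the file name, bin count and threshold here are illustrative assumptions.

```python
# Software reference sketch of histogram-difference shot detection: a boundary
# is declared where the normalized grey-level histogram of consecutive frames
# changes by more than a threshold.
import cv2
import numpy as np

def detect_shots(path, threshold=0.4, bins=64):
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
        hist /= hist.sum() + 1e-9
        if prev_hist is not None:
            if 0.5 * np.abs(hist - prev_hist).sum() > threshold:
                boundaries.append(index)          # first frame of the new shot
        prev_hist, index = hist, index + 1
    cap.release()
    return boundaries

if __name__ == "__main__":
    print(detect_shots("surveillance_clip.mp4"))   # hypothetical input video
```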

  1. Distributed Real-Time Embedded Video Processing

    National Research Council Canada - National Science Library

    Lv, Tiehan

    2004-01-01

    .... A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computations, in order to meet performance and power requirements...

  2. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game play.

  3. Video fluoroscopic techniques for the study of Oral Food Processing

    Science.gov (United States)

    Matsuo, Koichiro; Palmer, Jeffrey B.

    2016-01-01

    Food oral processing and pharyngeal food passage cannot be observed directly from the outside of the body without instrumental methods. Videofluoroscopy (x-ray video recording) reveals the movement of oropharyngeal anatomical structures in two dimensions. By adding a radiopaque contrast medium, the motion and shape of the food bolus can be also visualized, providing critical information about the mechanisms of eating, drinking, and swallowing. For quantitative analysis of the kinematics of oral food processing, radiopaque markers are attached to the teeth, tongue or soft palate. This approach permits kinematic analysis with a variety of textures and consistencies, both solid and liquid. Fundamental mechanisms of food oral processing are clearly observed with videofluoroscopy in lateral and anteroposterior projections. PMID:27213138

  4. A semi-automatic annotation tool for cooking video

    Science.gov (United States)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their diet profiles and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error-free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  5. Markerless registration for image guided surgery. Preoperative image, intraoperative video image, and patient

    International Nuclear Information System (INIS)

    Kihara, Tomohiko; Tanaka, Yuko

    1998-01-01

    Real-time and volumetric acquisition of X-ray CT, MR, and SPECT is the latest trend in medical imaging devices. A clinical challenge is to use this multi-modality volumetric information on the patient in a complementary way throughout the diagnostic and surgical processes. Intraoperative image and patient integration intends to establish a common reference frame, defined by images, across the diagnostic and surgical processes. This provides a quantitative measure during surgery, for which we have relied mostly on doctors' skills and experience. Intraoperative image and patient integration involves various technologies; however, we think one of the most important elements is the development of markerless registration, which should be efficient and applicable to the preoperative multi-modality data sets, the intraoperative image, and the patient. We developed a registration system which integrates preoperative multi-modality images, the intraoperative video image, and the patient. It consists of real-time registration of a video camera for intraoperative use, markerless surface-sampling matching of patient and image, our previous work on markerless multi-modality image registration of X-ray CT, MR, and SPECT, and image synthesis on the video image. We think these techniques can be used in many applications which involve video-camera-like devices such as video cameras, microscopes, and image intensifiers. (author)

  6. First International Conference Multimedia Processing, Communication and Computing Applications

    CERN Document Server

    Guru, Devanur

    2013-01-01

    ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications and the theme of the Conference is chosen as ‘Multimedia Processing and its Applications’. Multimedia processing has been an active research area contributing in many frontiers of today’s science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments that are taking place in various fields of multimedia processing and is widely used in many disciplines such as Medical Diagnosis, Digital Forensic, Object Recognition, Image and Video Analysis, Robotics, Military, Automotive Industries, Surveillance and Security, Quality Inspection, etc. The book will assist the research community to get the insight of the overlapping works which are being carried out across the globe at many medical hospitals and instit...

  7. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; the video comes from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture of 30 frames per second is feasible.
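    A CPU reference sketch of the registration pipeline the paper accelerates on the GPU (SIFT features, RANSAC homography estimation, warping) is given below; blending is reduced to a simple overwrite and the synthetic overlapping frames are illustrative assumptions.

```python
# Sketch: SIFT matching, homography estimation and warping of one frame into
# the coordinate system of the other, as in the mosaicking pipeline above.
import cv2
import numpy as np

# synthetic stand-ins for two overlapping UAS frames (replace with real frames,
# e.g. cv2.imread("uas_frame_000.png")): two shifted crops of one textured scene
rng = np.random.default_rng(0)
scene = cv2.GaussianBlur(rng.integers(0, 255, (600, 900), dtype=np.uint8), (5, 5), 2)
frame_a = scene[50:450, 50:650]
frame_b = scene[60:460, 250:850]        # overlaps frame_a, shifted right and down

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(frame_a, None)
kp_b, des_b = sift.detectAndCompute(frame_b, None)

# Lowe ratio test on SIFT matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_b, des_a, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# warp frame_b onto frame_a's plane; blending is reduced to a simple overwrite
h, w = frame_a.shape[:2]
mosaic = cv2.warpPerspective(frame_b, H, (w + 300, h))
mosaic[0:h, 0:w] = frame_a
cv2.imwrite("mosaic.png", mosaic)
print("matches used:", len(good))
```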

  8. A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications

    Science.gov (United States)

    2006-01-01

    segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive...multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial

  9. Video: useful tool for delivering family planning messages.

    Science.gov (United States)

    Sumarsono, S K

    1985-10-01

    In 1969, the Government of Indonesia declared that the population explosion was a national problem. The National Family Planning Program was consequently launched to encourage adoption of the ideal of a small, happy and prosperous family norm. Micro-approach messages are composed of the following: physiology of menstruation; reproductive process; healthy pregnancy; rational family planning; rational application of contraceptives; infant and child care; nutrition improvement; increase in breastfeeding; increase in family income; education in family life; family health; and deferred marriage age. Macro-approach messages include: the population problem and its impact on socioeconomic aspects; efforts to cope with the population problem; and improvement of women's lot. In utilizing the media and communication channels, the program encourages the implementation of units and working units of IEC to produce IEC materials; utilizes all possible existing media and IEC channels; maintains the consistent linkage between the activity of mass media and the IEC activities in the field; and encourages the private sector to participate in the production of IEC media and materials. A media production center was set up and carries out the following activities: producing video cassettes for tv broadcasts of family planning drama, family planning news, and tv spots; producing duplicates of the video cassettes for distribution to provinces in support of the video network; producing teaching materials for family planning workers; and transfering family planning films into video cassettes. A video network was developed and includes video monitors in family planning service points such as hospitals, family planning clinics and public places like bus stations. In 1985, the program will be expanded by 50 mobile information units equipped with video monitors. Video has potentials to increase the productivity and effectiveness of the family planning program. The video production process is

  10. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) and augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process from the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications such as free-viewpoint television (FTV) or spherical panoramic video require different approaches to content management and quality assessment. International standardization on FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified on spherical panoramic images, demonstrating good correlation with subjective quality estimation performed by a group of experts.
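    One common objective approach for equirectangular panoramic frames, a latitude-weighted PSNR in the style of WS-PSNR, is sketched below for orientation; it is not necessarily the quality estimation method proposed in the paper, and the frame sizes and noise level are illustrative assumptions.

```python
# Sketch of a latitude-weighted PSNR for equirectangular panoramic frames:
# pixels near the poles are over-represented by the projection, so their
# squared errors are down-weighted by cos(latitude).
import numpy as np

def ws_psnr(reference, distorted, max_value=255.0):
    """reference, distorted: 2-D equirectangular luma frames of equal shape."""
    h, w = reference.shape
    rows = np.arange(h)
    weights = np.cos((rows - h / 2.0 + 0.5) * np.pi / h)       # per-row weight
    weight_map = np.tile(weights[:, None], (1, w))
    err = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
    wmse = np.sum(err * weight_map) / np.sum(weight_map)
    return 10.0 * np.log10(max_value ** 2 / max(wmse, 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    ref = rng.uniform(0, 255, (180, 360))
    dist = np.clip(ref + rng.normal(0, 4, ref.shape), 0, 255)
    print(round(ws_psnr(ref, dist), 2))
```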

  11. Analysis of two dimensional charged particle scintillation using video image processing techniques

    International Nuclear Information System (INIS)

    Sinha, A.; Bhave, B.D.; Singh, B.; Panchal, C.G.; Joshi, V.M.; Shyam, A.; Srinivasan, M.

    1993-01-01

    A novel method for video recording of individual charged particle scintillation images and their offline analysis using digital image processing techniques for obtaining position, time and energy information is presented. Results of an exploratory experiment conducted using 241Am and 239Pu alpha sources are presented. (author). 3 figs., 4 tabs

  12. A low-cost system for graphical process monitoring with colour video symbol display units

    International Nuclear Information System (INIS)

    Grauer, H.; Jarsch, V.; Mueller, W.

    1977-01-01

    A system for computer-controlled graphical process supervision using colour symbol video displays is described. It has the following characteristics: a compact unit with no external memory for image storage; a problem-oriented, simple descriptive link to the process program; no restriction on the graphical representation of process variables; and independence from the computer and display, achieved by implementing colours and parameterized code generation for the display. (WB) [de

  13. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  14. Video Liveness for Citizen Journalism: Attacks and Defenses

    OpenAIRE

    Rahman, Mahmudur; Azimpourkivi, Mozhgan; Topkara, Umut; Carbunar, Bogdan

    2017-01-01

    The impact of citizen journalism raises important video integrity and credibility issues. In this article, we introduce Vamos, the first user transparent video "liveness" verification solution based on video motion, that accommodates the full range of camera movements, and supports videos of arbitrary length. Vamos uses the agreement between video motion and camera movement to corroborate the video authenticity. Vamos can be integrated into any mobile video capture application without requiri...

  15. Efficient genre-specific semantic video indexing

    NARCIS (Netherlands)

    Wu, J.; Worring, M.

    2012-01-01

    Large video collections such as YouTube contain many different video genres, while in many applications the user might be interested in only one or two specific video genres. Thus, when users are querying the system with a specific semantic concept like AnchorPerson or MovieStars, they are likely

  16. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that, after applying inVideo to existing video material, student-student and student-faculty interactions increased significantly across 24 sections program-wide.

  17. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using content consistency, index consistency and content-index consistency with respect to the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
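
    The abstract formulates summarization as dictionary selection with a similar-inhibition constraint but does not give the objective in closed form. The following is a minimal sketch, assuming a greedy selection rule that scores each candidate frame by its average similarity to the whole sequence minus a penalty for similarity to frames already chosen; the feature representation, scoring rule and `inhibition` weight are illustrative assumptions, not the published model (which also uses gaze-based attention cost, quality filtering and segmentation).

```python
import numpy as np

def greedy_keyframe_selection(frames, k, inhibition=0.5):
    """Illustrative greedy key-frame (dictionary) selection.

    frames: (N, D) array of frame feature vectors (e.g. colour histograms).
    k: number of key frames to select.
    inhibition: weight of the penalty that discourages frames similar to
    those already selected (a stand-in for the similar-inhibition term).
    """
    x = frames / (np.linalg.norm(frames, axis=1, keepdims=True) + 1e-12)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(x.shape[0]):
            if i in selected:
                continue
            rep = float(np.mean(x @ x[i]))                     # representativeness
            red = max((float(x[j] @ x[i]) for j in selected),  # redundancy
                      default=0.0)
            score = rep - inhibition * red
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

feats = np.random.rand(200, 64)          # toy per-frame features
print(greedy_keyframe_selection(feats, k=5))
```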

  18. A method of mobile video transmission based on J2ee

    Science.gov (United States)

    Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang

    2013-03-01

    As 3G (3rd-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the Internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2ME, presenting the video compression approach, describing the video compression standard, and then describing the software design. The proposed mobile video method based on J2EE is a typical mobile multimedia application, which has high availability and a wide range of applications. Users can access the video through terminal devices such as phones.

  19. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose a special software program for the subjective assessment of quality of all the tested video sequences is developed. It was developed in accordance with recommendation ITU-T P.910, since it is suitable for the testing of multimedia applications. The obtained results show that in the proposed selective intra prediction and optimized inter prediction algorithm there is a small difference in picture quality (signal-to-noise ratio) between decoded original and modified video sequences.

  20. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in the HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents them from running at the full frame resolution in real time. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.
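
    As a rough illustration of the two processing stages named above (background registration to compensate UAV motion, then frame differencing), here is a CPU-only OpenCV sketch; the heterogeneous CPU-GPU partitioning that gives the paper its 52.16 ms/frame figure is not modelled, and the feature counts and thresholds are arbitrary assumptions.

```python
import cv2
import numpy as np

def detect_moving_targets(prev_gray, curr_gray, diff_thresh=25):
    """Register the previous frame onto the current one, then difference.

    prev_gray, curr_gray: consecutive grayscale frames (uint8).
    Returns a binary mask of candidate moving targets.
    """
    # sparse features + optical flow give point correspondences
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    # a homography models the dominant (background) motion caused by the UAV
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    registered_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    # residual differences after registration are candidate moving targets
    diff = cv2.absdiff(curr_gray, registered_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```

    In a full pipeline, `prev_gray` and `curr_gray` would come from consecutive frames of a `cv2.VideoCapture` stream converted to grayscale, and the resulting mask would typically be post-processed (e.g. connected components) to obtain target bounding boxes.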

  1. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a very crucial component in safeguard and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities like better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguard requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguard surveillance systems. Today's safeguard systems can incorporate intelligent motion detection with a very low rate of false alarms and less archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating storage and transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools like automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, gesture recognition, etc. makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional, all-weather, day-and-night surveillance a reality

  2. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a proliferation of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, in some cases with a higher measurement rate than video cameras. Several such specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  3. Identifying hidden voice and video streams

    Science.gov (United States)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

    Given the rising popularity of voice and video services over the Internet, accurately identifying the voice and video traffic that traverses their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, that detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features, and subsequently constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
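
    The abstract states that flow fingerprints are built from a power spectral density (PSD) analysis of extracted features but does not specify the features. The sketch below is a hedged stand-in: it treats a per-interval byte-count series as the feature, builds a normalised Welch PSD fingerprint, and classifies a new flow by nearest fingerprint; the feature choice, L1 distance and sampling rate are assumptions, not the VVS-I design.

```python
import numpy as np
from scipy.signal import welch

def psd_fingerprint(series, fs=50.0, nperseg=256):
    """Normalised Welch PSD of a per-interval byte-count series
    (a stand-in for the flow features the paper extracts)."""
    _, psd = welch(series, fs=fs, nperseg=nperseg)
    return psd / (np.sum(psd) + 1e-12)

def classify(series, fingerprints):
    """Assign the flow to the training fingerprint with the smallest
    L1 distance between normalised PSDs."""
    psd = psd_fingerprint(series)
    return min(fingerprints,
               key=lambda name: np.sum(np.abs(psd - fingerprints[name])))

# toy example: a strongly periodic "voice-like" flow vs. a bursty bulk flow
t = np.arange(2048)
voice = 160 + 20 * np.sin(2 * np.pi * t / 2.5)
bulk = np.random.poisson(200, size=2048).astype(float)
fingerprints = {"voice": psd_fingerprint(voice), "bulk": psd_fingerprint(bulk)}
probe = 160 + 20 * np.sin(2 * np.pi * t / 2.5 + 0.3)
print(classify(probe, fingerprints))                      # expected: "voice"
```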

  4. Real-time embedded system for stereo video processing for multiview displays

    Science.gov (United States)

    Berretty, R.-P. M.; Riemens, A. K.; Machado, P. F.

    2007-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview auto-stereoscopic displays are entering the market. Such displays offer various views at the same time. Depending on their positions, the viewers' eyes see different images. Hence, the viewer's left eye receives a signal that is different from what the right eye gets; this gives, provided the signals have been properly processed, the impression of depth. New auto-stereoscopic products use an image-plus-depth interface. On the other hand, a growing number of 3D productions from the entertainment industry use a stereo format. In this paper, we show how to compute depth from the stereo signal to comply with the display interface format. Furthermore, we present a realisation suitable for a real-time cost-effective implementation on an embedded media processor.

  5. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  6. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  7. Probabilistic Approaches to Video Retrieval

    NARCIS (Netherlands)

    Ianeva, Tzvetanka; Boldareva, L.; Westerveld, T.H.W.; Cornacchia, Roberto; Hiemstra, Djoerd; de Vries, A.P.

    Our experiments for TRECVID 2004 further investigate the applicability of the so-called "Generative Probabilistic Models to video retrieval". TRECVID 2003 results demonstrated that mixture models computed from video shot sequences improve the precision of "query by examples" results when

  8. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable for wireless image transmission in capsule endoscopes. This encoding method is based on Wyner-Ziv theory, in which side information available at a transmitter is treated as side information at its receiver. Therefore complex processes in video encoding, such as estimation of the motion vector, are moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process is only to decimate coded original data through channel coding. We provide a performance evaluation for a low-density parity check (LDPC) coding method in the AWGN channel.

  9. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  10. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure.

    Directory of Open Access Journals (Sweden)

    Yanling Liu

    Full Text Available Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces.

  11. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure.

    Science.gov (United States)

    Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong

    2017-01-01

    Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces.

  12. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure

    Science.gov (United States)

    Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong

    2017-01-01

    Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces. PMID:28249033

  13. Accelerating wavelet-based video coding on graphics hardware using CUDA

    NARCIS (Netherlands)

    Laan, van der W.J.; Roerdink, J.B.T.M.; Jalba, A.C.; Zinterhof, P.; Loncaric, S.; Uhl, A.; Carini, A.

    2009-01-01

    The DiscreteWavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. This transform, by means of the lifting scheme, can be performed in a memory and computation efficient way on modern, programmable GPUs, which can be regarded as massively

  14. Accelerating Wavelet-Based Video Coding on Graphics Hardware using CUDA

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Roerdink, Jos B.T.M.; Jalba, Andrei C.; Zinterhof, P; Loncaric, S; Uhl, A; Carini, A

    2009-01-01

    The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. This transform, by means of the lifting scheme, can be performed in a memory and computation efficient way on modern, programmable GPUs, which can be regarded as massively
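
    Both records above describe computing the DWT via the lifting scheme. As a concrete reference for what the lifting steps do, here is a minimal NumPy sketch of one level of the reversible CDF 5/3 (LeGall) transform along one dimension; this is the kind of per-row kernel one would map onto GPU threads, but the CUDA mapping itself is not shown and the edge-replication boundary handling is a simplifying assumption.

```python
import numpy as np

def cdf53_forward_1d(x):
    """One level of the reversible CDF 5/3 transform via lifting.

    x: 1-D integer signal of even length. Returns (low, high) subbands.
    Simple edge replication is used as the boundary extension.
    """
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()            # even / odd samples
    s_next = np.concatenate([s[1:], s[-1:]])          # replicate last even sample
    d -= (s + s_next) // 2                            # predict (high-pass)
    d_prev = np.concatenate([d[:1], d[:-1]])          # replicate first detail
    s += (d_prev + d + 2) // 4                        # update (low-pass)
    return s, d

def cdf53_inverse_1d(s, d):
    """Inverse lifting: undo the update, undo the predict, interleave."""
    s, d = s.copy(), d.copy()
    d_prev = np.concatenate([d[:1], d[:-1]])
    s -= (d_prev + d + 2) // 4
    s_next = np.concatenate([s[1:], s[-1:]])
    d += (s + s_next) // 2
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

sig = np.random.randint(0, 256, 16)
low, high = cdf53_forward_1d(sig)
assert np.array_equal(cdf53_inverse_1d(low, high), sig)   # perfect reconstruction
print(low, high)
```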

  15. Video Conferencing for Opening Classroom Doors in Initial Teacher Education: Sociocultural Processes of Mimicking and Improvisation

    Directory of Open Access Journals (Sweden)

    Rolf Wiesemes

    2010-11-01

    Full Text Available In this article, we present an alternative framework for conceptualising video-conferencing uses in initial teacher education and in Higher Education (HE) more generally. This alternative framework takes into account the existing models in the field, but – based on a set of interviews conducted with teacher trainees and wider analysis of the related literature – we suggest that there is a need to add to existing models the notions of 'mimicking' (copying practice) and improvisation (unplanned and spontaneous personal learning moments). These two notions are considered to be vital, as they remain valid throughout teachers' careers and constitute key affordances of video-conferencing uses in HE. In particular, we argue that improvisational processes can be considered as key for developing professional practice and lifelong learning and that video-conferencing uses in initial teacher education can contribute to an understanding of training and learning processes. Current conceptualisations of video conferencing as suggested by Coyle (2004) and Marsh et al. (2009) remain valid, but are also limited in their scope, focusing predominantly on pragmatic and instrumental teacher-training issues. Our article suggests that the theoretical conceptualisations of video conferencing should be expanded to include elements of mimicking and ultimately improvisation. This allows us to consider not just etic aspects of practice, but equally emic practices and related personal professional development. We locate these arguments more widely in a sociocultural-theory framework, as it enables us to describe interactions in dialectical rather than dichotomous terms (Lantolf & Poehner, 2008).

  16. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  17. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.
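
    The abstract describes allocating a shared computational budget across multiple live channels with a complexity-distortion model, without giving the model itself. As a hedged illustration, the sketch below assumes a simple per-channel model D_i(c_i) = a_i / c_i, for which minimising total distortion under a fixed cycle budget has the closed-form allocation c_i proportional to sqrt(a_i); the model, the activity values and the clipping floor are assumptions, not the paper's formulation.

```python
import numpy as np

def allocate_cycles(activity, total_cycles, c_min=1.0):
    """Split a shared cycle budget across channels for the illustrative
    model D_i(c_i) = a_i / c_i: minimising total distortion under
    sum(c_i) = C gives c_i proportional to sqrt(a_i)."""
    a = np.asarray(activity, dtype=float)
    c = np.sqrt(a) / np.sum(np.sqrt(a)) * total_cycles
    c = np.maximum(c, c_min)                  # keep every channel alive
    return c * (total_cycles / np.sum(c))     # renormalise after clipping

channels = [4.0, 1.0, 0.25]                   # per-channel content activity a_i
cycles = allocate_cycles(channels, total_cycles=1000.0)
for i, (a_i, c_i) in enumerate(zip(channels, cycles)):
    print("channel %d: %6.1f cycle units, predicted distortion %.4f"
          % (i, c_i, a_i / c_i))
```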

  18. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications. This vol

  19. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  20. Mode extraction on wind turbine blades via phase-based video motion estimation

    Science.gov (United States)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact and full-field measurement method, image processing still has a long way to go to outperform conventional sensing instruments (i.e. accelerometers, strain gauges, laser vibrometers, etc.). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient regarding computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. Phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The phase-based video motion estimation approach is demonstrated through processing data on a full-scale commercial structure (i.e. a wind turbine blade) with complex geometry and properties, and the results obtained have a good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
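
    The core idea of phase-based motion estimation is that a spatial shift appears as a phase change of band-pass (or Fourier) coefficients. The sketch below reduces this to a global 1-D case: it estimates a sub-pixel shift between two signals from the weighted slope of their spectral phase difference. The full method referenced above works per pixel and per scale with complex steerable filters to obtain full-field operating deflection shapes; the `kmax` cutoff and magnitude weighting here are illustrative assumptions.

```python
import numpy as np

def phase_shift_1d(sig_ref, sig_mov, kmax=8):
    """Estimate a sub-pixel shift between two 1-D signals from the
    weighted slope of their spectral phase difference. Only the first
    `kmax` bins are used so phase wrapping can be ignored for small shifts."""
    n = sig_ref.size
    f_ref = np.fft.rfft(sig_ref - np.mean(sig_ref))
    f_mov = np.fft.rfft(sig_mov - np.mean(sig_mov))
    k = np.arange(1, kmax + 1)
    dphi = np.angle(f_mov[k] * np.conj(f_ref[k]))     # phase difference per bin
    w = np.abs(f_ref[k] * f_mov[k])                   # spectral magnitude weights
    # dphi ~ -2*pi*k*d/n, so a weighted least-squares slope recovers d
    return -n * np.sum(w * k * dphi) / (2 * np.pi * np.sum(w * k * k))

# toy test: a smooth profile shifted by 0.3 samples using a Fourier shift
n = 256
x = np.arange(n)
ref = np.exp(-0.5 * ((x - 100.0) / 12.0) ** 2)
mov = np.fft.irfft(np.fft.rfft(ref) *
                   np.exp(-2j * np.pi * np.fft.rfftfreq(n) * 0.3), n)
print("estimated shift: %.3f" % phase_shift_1d(ref, mov))   # ~0.300
```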

  1. Genre-Specific Semantic Video Indexing

    NARCIS (Netherlands)

    Wu, J.; Worring, M.

    2010-01-01

    In many applications, we find large video collections from different genres where the user is often only interested in one or two specific video genres. So, when users are querying the system with a specific semantic concept, they are likely aiming at a genre-specific instantiation of this concept.

  2. The RUBA Watchdog Video Analysis Tool

    DEFF Research Database (Denmark)

    Bahnsen, Chris Holmberg; Madsen, Tanja Kidholm Osmann; Jensen, Morten Bornø

    We have developed a watchdog video analysis tool called RUBA (Road User Behaviour Analysis) to use for processing of traffic video. This report provides an overview of the functions of RUBA and gives a brief introduction into how analyses can be made in RUBA.

  3. The production of scientific videos: a theoretical approach

    Directory of Open Access Journals (Sweden)

    Carlos Ernesto Gavilondo Rodriguez

    2016-12-01

    Full Text Available The article presents the results of theoretical research on the production of scientific videos and its application to the teaching-learning process in schools in the city of Guayaquil, Ecuador. It is situated within the line of audiovisual production and communication, and the creation of scientific videos, within the Communication major with a concentration in audiovisual and multimedia production at the Salesian Polytechnic University. For the article it was necessary to define the key terms that subsequently guided data collection: audiovisual production, understood as the production of content for audiovisual media; audiovisual communication, recognised as the process in which messages are exchanged through an audible and/or visual system; and scientific video, which is one that uses audiovisual resources to obtain relevant and reliable information. As part of the theoretical results, a methodological proposal for video production for educational purposes is presented. In conclusion, we establish, first, that current social relations constitute a fertile context of possibilities for education to generate meeting points between the everyday world and knowledge. Another indicator validated as part of the investigation is that the teachers surveyed use the potential of audiovisual media and, supported by them, deploy alternatives for their use.

  4. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    Science.gov (United States)

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which could be modeled in a dynamic textures (DT) framework. At first, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both sparse properties and the temporal correlations of consecutive video frames. Moreover, such learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. Especially, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
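
    Following the abstract's description (sparse-code each frame over a dictionary, then learn a linear transition matrix between consecutive codes), here is a simplified NumPy sketch. The top-k coding routine, the ridge-regularised least-squares fit and the toy drifting-blob "texture" are stand-ins; the paper's joint dictionary learning, stability constraints and optimisation are not reproduced.

```python
import numpy as np

def sparse_code(x, D, k=5):
    """Greedy top-k coding: pick the k atoms most correlated with x and
    least-squares fit their coefficients (a stand-in for the sparse
    coding step described above)."""
    support = np.argsort(np.abs(D.T @ x))[-k:]
    coef = np.zeros(D.shape[1])
    coef[support] = np.linalg.lstsq(D[:, support], x, rcond=None)[0]
    return coef

def fit_transition(codes, ridge=1e-3):
    """Ridge-regularised least squares for A such that c[t+1] ~ A c[t]."""
    c0, c1 = codes[:-1], codes[1:]
    k = codes.shape[1]
    return np.linalg.solve(c0.T @ c0 + ridge * np.eye(k), c0.T @ c1).T

# toy dynamic texture: a 1-D Gaussian blob drifting with wrap-around
n_pix, n_frames = 64, 80
x = np.arange(n_pix)
frames = np.stack([
    np.exp(-0.5 * (((x - 0.7 * t + n_pix / 2) % n_pix - n_pix / 2) / 4.0) ** 2)
    for t in range(n_frames)])

# dictionary: a subset of normalised frames, standing in for a learned one
D = frames[::4].T
D /= np.linalg.norm(D, axis=0, keepdims=True)

codes = np.stack([sparse_code(f, D, k=5) for f in frames])
A = fit_transition(codes)

pred = codes[:-1] @ A.T                                   # one-step prediction
err = np.linalg.norm(frames[1:] - pred @ D.T) / np.linalg.norm(frames[1:])
print("relative one-step reconstruction error: %.3f" % err)
```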

  5. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    Science.gov (United States)

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly considered simultaneously. In this paper, we propose a new NR-VQA metric based on the spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
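
    A hedged sketch of the pipeline the abstract outlines: cut the video into spatiotemporal blocks, take a 3-D DCT of each block, pool simple statistics of the coefficients into a per-video feature vector, and regress quality with a support vector regressor. The specific NVS features, pooling and training protocol of the paper are not reproduced; the statistics, block size and toy scores below are assumptions.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVR

def block_3d_dct_features(video, bs=8):
    """Split a grayscale video (T, H, W) into bs^3 blocks, take a 3-D DCT
    of each block and pool simple coefficient statistics; these generic
    statistics stand in for the paper's NVS features."""
    t, h, w = (d - d % bs for d in video.shape)
    v = video[:t, :h, :w].astype(np.float64)
    feats = []
    for z in range(0, t, bs):
        for y in range(0, h, bs):
            for x in range(0, w, bs):
                c = dctn(v[z:z + bs, y:y + bs, x:x + bs], norm='ortho').ravel()
                ac = c[1:]                                 # drop the DC term
                feats.append([np.mean(np.abs(ac)), np.std(ac),
                              np.mean(np.abs(ac) > 1e-3)])
    return np.asarray(feats).mean(axis=0)                  # pooled per-video feature

# toy training set: noisier "videos" get lower pseudo-subjective scores
rng = np.random.default_rng(0)
base = rng.random((16, 32, 32))
videos = [np.clip(base + s * rng.standard_normal(base.shape), 0, 1)
          for s in np.linspace(0.0, 0.5, 12)]
scores = np.linspace(100, 20, 12)

X = np.stack([block_3d_dct_features(v) for v in videos])
model = SVR(kernel='rbf', C=10.0).fit(X, scores)
print(model.predict(X[:3]))
```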

  6. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…

  7. Objective video quality measure for application to tele-echocardiography.

    Science.gov (United States)

    Moore, Peter Thomas; O'Hare, Neil; Walsh, Kevin P; Ward, Neil; Conlon, Niamh

    2008-08-01

    Real-time tele-echocardiography is widely used to remotely diagnose or exclude congenital heart defects. Cost-effective technical implementation is realised using low-bandwidth transmission systems and lossy compression (videoconferencing) schemes. In our study, DICOM video sequences were converted to common multimedia formats, which were then compressed using three lossy compression algorithms. We then applied a digital (multimedia) video quality metric (VQM) to objectively determine a value for the degradation due to compression. Three levels of compression were simulated by varying system bandwidth and compared to a subjective assessment of video clip quality by three paediatric cardiologists, each with more than 5 years of experience.

  8. Fast and predictable video compression in software design and implementation of an H.261 codec

    Science.gov (United States)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamic reduction of the number of processed macroblocks in peak situations. Besides predictability, absolute speed is also a point of interest. The question of whether and how software compression of high-quality video is feasible on today's desktop computers is examined.
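
    The admission-control idea described above (bound the per-frame computing time by reducing the number of processed macroblocks in peak situations) can be sketched as follows, assuming a fixed per-macroblock cost and a ranking of macroblocks by how much they changed; the costs, the SAD ranking and the frame sizes are illustrative assumptions, not measurements from the H.261 codec in the paper.

```python
import numpy as np

def select_macroblocks(prev, curr, time_budget_ms, cost_per_mb_ms, mb=16):
    """Rank 16x16 macroblocks by their change versus the previous frame
    (sum of absolute differences) and admit the most active ones until
    the per-frame time budget is spent; the rest would be coded as skipped."""
    h, w = (d - d % mb for d in curr.shape)
    sads, coords = [], []
    for y in range(0, h, mb):
        for x in range(0, w, mb):
            d = np.abs(curr[y:y + mb, x:x + mb].astype(int) -
                       prev[y:y + mb, x:x + mb].astype(int)).sum()
            sads.append(d)
            coords.append((y, x))
    order = np.argsort(sads)[::-1]                        # most active first
    max_mbs = int(time_budget_ms / cost_per_mb_ms)        # illustrative cost model
    admitted = [coords[i] for i in order[:max_mbs]]
    skipped = [coords[i] for i in order[max_mbs:]]
    return admitted, skipped

prev = np.random.randint(0, 256, (144, 176), dtype=np.uint8)   # QCIF frame
curr = prev.copy()
curr[32:80, 48:112] = np.random.randint(0, 256, (48, 64))      # a changed region
adm, skp = select_macroblocks(prev, curr, time_budget_ms=8.0, cost_per_mb_ms=0.2)
print(len(adm), "macroblocks admitted,", len(skp), "skipped")
```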

  9. The effect of music video exposure on students' perceived clinical applications of popular music in the field of music therapy: a pilot study.

    Science.gov (United States)

    Gooding, Lori F; Mori-Inoue, Satoko

    2011-01-01

    The purpose of this study was to examine the effect of video exposure on music therapy students' perceptions of clinical applications of popular music in the field of music therapy. Fifty-one participants were randomly divided into two groups and exposed to a popular song in either audio-only or music video format. Participants were asked to indicate clinical applications; specifically, participants chose: (a) possible population(s), (b) most appropriate population(s), (c) possible age range(s), (d) most appropriate age ranges, (e) possible goal area(s) and (f) most appropriate goal area. Data for each of these categories were compiled and analyzed, with no significant differences found in the choices made by the audio-only and video groups. Three items, (a) selection of the bereavement population, (b) selection of bereavement as the most appropriate population and (c) selection of the age ranges of pre teen/mature adult, were additionally selected for further analysis due to their relationship to the video content. Analysis results revealed a significant difference between the video and audio-only groups for the selection of these specific items, with the video group's selections more closely aligned to the video content. Results of this pilot study suggest that music video exposure to popular music can impact how students choose to implement popular songs in the field of music therapy.

  10. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. It is an attempt to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  11. In-network adaptation of SHVC video in software-defined networks

    Science.gov (United States)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDN), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high-definition video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide a valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision making engine using algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions. Our proposed VQAM application has been implemented and

  12. A Systematic Approach to Design Low-Power Video Codec Cores

    Directory of Open Access Journals (Sweden)

    Corporaal Henk

    2007-01-01

    Full Text Available The higher resolutions and new functionality of video applications increase their throughput and processing requirements. In contrast, the energy and heat limitations of mobile devices demand low-power video cores. We propose a memory and communication centric design methodology to reach an energy-efficient dedicated implementation. First, memory optimizations are combined with algorithmic tuning. Then, a partitioning exploration introduces parallelism using a cyclo-static dataflow model that also expresses implementation-specific aspects of communication channels. Towards hardware, these channels are implemented as a restricted set of communication primitives. They enable an automated RTL development strategy for rigorous functional verification. The FPGA/ASIC design of an MPEG-4 Simple Profile video codec demonstrates the methodology. The video pipeline exploits the inherent functional parallelism of the codec and contains a tailored memory hierarchy with burst accesses to external memory. 4CIF encoding at 30 fps consumes 71 mW in a 180 nm, 1.62 V UMC technology.

  13. A Systematic Approach to Design Low-Power Video Codec Cores

    Directory of Open Access Journals (Sweden)

    Kristof Denolf

    2007-05-01

    Full Text Available The higher resolutions and new functionality of video applications increase their throughput and processing requirements. In contrast, the energy and heat limitations of mobile devices demand low-power video cores. We propose a memory and communication centric design methodology to reach an energy-efficient dedicated implementation. First, memory optimizations are combined with algorithmic tuning. Then, a partitioning exploration introduces parallelism using a cyclo-static dataflow model that also expresses implementation-specific aspects of communication channels. Towards hardware, these channels are implemented as a restricted set of communication primitives. They enable an automated RTL development strategy for rigorous functional verification. The FPGA/ASIC design of an MPEG-4 Simple Profile video codec demonstrates the methodology. The video pipeline exploits the inherent functional parallelism of the codec and contains a tailored memory hierarchy with burst accesses to external memory. 4CIF encoding at 30 fps consumes 71 mW in a 180 nm, 1.62 V UMC technology.

  14. Capabilities of Raspberry Pi 2 for Big Data and Video Streaming Applications in Data Centres

    NARCIS (Netherlands)

    Schot, Nick J.; Velthuis, Paul J.E.; Postema, Björn Frits; Remke, Anne Katharina Ingrid; Remke, A.K.I.; Haverkort, Boudewijn R.H.M.; Haverkort, B.R.H.M.

    Many new data centres have been built in recent years in order to keep up with the rising demand for server capacity. These data centres require a lot of electrical energy and cooling. Big data and video streaming are two heavily used applications in data centres. This paper experimentally

  15. Holovideo: Real-time 3D range video encoding and decoding on GPU

    Science.gov (United States)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops at a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) with a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
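
    The abstract does not give the Holovideo encoding itself, so the sketch below shows a simplified variant of the general idea of packing a range map into the channels of a standard 2-D image: sine and cosine fringes of the normalised depth carry the precision, and a coarse copy of the depth in the third channel resolves the fringe period at decode time. The fringe frequency, channel layout and error figure are assumptions, not the published codec.

```python
import numpy as np

F = 16  # fringe frequency: number of periods across the full depth range

def encode_depth(z):
    """Pack a normalised depth map z in [0, 1] into a 3-channel 8-bit image:
    sine and cosine fringes carry fine depth, the third channel carries a
    coarse copy used to resolve the fringe period at decode time."""
    r = 0.5 + 0.5 * np.sin(2 * np.pi * F * z)
    g = 0.5 + 0.5 * np.cos(2 * np.pi * F * z)
    return np.clip(np.stack([r, g, z], axis=-1) * 255, 0, 255).astype(np.uint8)

def decode_depth(img):
    """Recover depth: the fringe phase gives the fractional period with
    high precision, the coarse channel selects the integer period."""
    r, g, b = (img[..., i] / 255.0 for i in range(3))
    frac = (np.arctan2(r - 0.5, g - 0.5) / (2 * np.pi)) % 1.0
    k = np.round(F * b - frac)                 # integer period from coarse depth
    return (k + frac) / F

z = np.linspace(0.0, 1.0, 512 * 512).reshape(512, 512)   # toy depth ramp
z_rec = decode_depth(encode_depth(z))
print("max abs depth error: %.2e" % np.max(np.abs(z_rec - z)))
```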

  16. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming "the new black" in academia, if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video... In the video, I appear (along with other researchers) and two Danish film directors, and excerpts from their film. My challenges included how to edit the academic video and organize the collaborative effort. I consider video editing as a semiotic, transformative process of "reassembling" voices... In the discussion, I review academic video in terms of relevance and implications for research practice. The theoretical background is social constructivist, combining social semiotics (Kress, van Leeuwen, McCloud), visual anthropology (Banks, Pink) and dialogic theory (Bakhtin). The Bakhtinian notion of "voices...

  17. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify them ... of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on the power-distortion trade-off. We proposed an approach for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing a decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including...

  18. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    Science.gov (United States)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications that handle a single type of content, such as text, voice or images; bimodal databases, in contrast, make it possible to semantically associate two different types of content, such as audio-video or image-text, among others. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing increases semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a total duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool for the generation of applications for the semantic web.

  19. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    Science.gov (United States)

    Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei

    2015-01-01

    Surveillance video service (SVS) is one of the most important services provided in a smart city. For the utilization of SVS, it is very important to design efficient surveillance video analysis techniques. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on GPUs (graphics processing units) to ensure high efficiency and accuracy. For the determination of key frames, motion is the most salient feature for representing actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time. It is then smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach can extract key frames more accurately and efficiently than several other methods.

  20. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    Directory of Open Access Journals (Sweden)

    Ran Zheng

    Full Text Available Surveillance video service (SVS) is one of the most important services provided in a smart city. For the utilization of SVS, it is very important to design efficient surveillance video analysis techniques. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on GPUs (graphics processing units) to ensure high efficiency and accuracy. For the determination of key frames, motion is the most salient feature for representing actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time. It is then smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach can extract key frames more accurately and efficiently than several other methods.
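
    A minimal CPU-only sketch of this pipeline, assuming OpenCV is available and using the mean absolute frame difference as the motion feature (the paper itself computes the motion feature on the GPU); the input file name is only a placeholder.

    ```python
    import cv2
    import numpy as np

    def extract_key_frames(video_path, smooth_window=9):
        """Select frames at local maxima of the smoothed inter-frame motion curve."""
        cap = cv2.VideoCapture(video_path)
        motion, prev = [], None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                # mean absolute difference as a simple motion measure
                motion.append(float(np.mean(cv2.absdiff(gray, prev))))
            prev = gray
        cap.release()

        m = np.asarray(motion)
        kernel = np.ones(smooth_window) / smooth_window
        smoothed = np.convolve(m, kernel, mode="same")   # noise reduction

        # frames at local maxima of the smoothed motion curve become key frames
        return [i + 1 for i in range(1, len(smoothed) - 1)
                if smoothed[i] > smoothed[i - 1] and smoothed[i] > smoothed[i + 1]]

    print(extract_key_frames("traffic.avi"))  # hypothetical input file
    ```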

  1. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  2. The music video in an environment of media convergence: regimes of meaning and interaction

    Directory of Open Access Journals (Sweden)

    Ana Sílvia Lopes Davi Médola

    2014-01-01

    Full Text Available The article discusses the changes in the relations between communication and the forms of consumption of video formats guided by new interactive content and enabled by the digital technologies of contemporary media. In light of the sociosemiotics of Eric Landowski, the regimes of meaning and interaction in the fruition process are discussed as they appear in the music video The Time/Dirty Bit, by the Black Eyed Peas, and in the corresponding application for mobile devices, BEP 360.

  3. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    Science.gov (United States)

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed similarity measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum of squared differences. Based on clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm. Processing was accelerated to more than 30 frames per second using a graphics processing unit.
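
    For reference, the classical global structural similarity computation that the proposed discriminative measure builds on (a luminance term combined with a contrast/structure correlation term) can be sketched as follows in Python/NumPy; the constants are the commonly used defaults, and the exact weighting of the paper's measure is not reproduced here.

    ```python
    import numpy as np

    def structural_similarity(x, y, data_range=255.0):
        """Global structural similarity between two grayscale images, combining a
        luminance term with a joint contrast/structure (correlation) term."""
        x = x.astype(np.float64)
        y = y.astype(np.float64)
        c1 = (0.01 * data_range) ** 2
        c2 = (0.03 * data_range) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        luminance = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
        contrast_structure = (2 * cov + c2) / (vx + vy + c2)
        return luminance * contrast_structure
    ```

    In a tracking loop, candidate views rendered from the pre-operative volume would be compared against the current endoscopic frame with such a measure, and the camera pose that maximizes it would be kept.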

  4. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method is also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to perform motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
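
    The block-matching step used to speed up motion estimation can be illustrated with a full-search sum-of-absolute-differences matcher in Python/NumPy; block size and search range below are illustrative values, not the settings used in the paper.

    ```python
    import numpy as np

    def block_match(ref, cur, block=16, search=8):
        """For each block of the current frame, find the displacement within
        +/- `search` pixels in the reference frame that minimizes the SAD."""
        h, w = cur.shape
        vectors = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
                best, best_mv = None, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue  # candidate block falls outside the frame
                        ref_blk = ref[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(ref_blk - cur_blk).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
                vectors[by // block, bx // block] = best_mv
        return vectors
    ```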

  5. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide simultaneous, high-spatial-resolution measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurement and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for the structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little

  6. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument provides capture of video images available in CCIR format. Two memory planes, each with a capacity of 512 x 512 x 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparison. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  7. Search in Real-Time Video Games

    OpenAIRE

    Cowling, Peter I.; Buro, Michael; Bida, Michal; Botea, Adi; Bouzy, Bruno; Butz, Martin V.; Hingston, Philip; Muñoz-Avila, Hector; Nau, Dana; Sipper, Moshe

    2013-01-01

    This chapter arises from the discussions of an experienced international group of researchers interested in the potential for creative application of algorithms for searching finite discrete graphs, which have been highly successful in a wide range of application areas, to address a broad range of problems arising in video games. The chapter first summarises the state of the art in search algorithms for games. It then considers the challenges in implementing these algorithms in video games (p...

  8. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design a priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of the decoded video can be improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.

  9. Selection and evaluation of video tape recorders for surveillance applications

    International Nuclear Information System (INIS)

    Martinez, R.L.

    1988-01-01

    Unattended surveillance places unique requirements on video recorders. One such requirement, extended operational reliability, often cannot be determined from the manufacturers' data. Subsequent to market surveys and preliminary testing, the Sony 8mm EVO-210 recorder was selected for use in the Modular Integrated Video System (MIVS), while concurrently undergoing extensive reliability testing. A microprocessor-based controller was developed to life test and evaluate the performance of the video cassette recorders. The controller has the capability to insert a unique binary count in the vertical interval of the recorded video signal for each scene. This feature allows for automatic verification of the recorded data using a MIVS Review Station. Initially, twenty recorders were subjected to the accelerated life test, which involves recording one scene (eight video frames) every 15 seconds. The recorders were operated in the exact manner in which they are utilized in the MIVS. This paper describes the results of the preliminary testing, the accelerated life test and the extensive testing of 130 Sony EVO-210 recorders

  10. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposal cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  11. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Distributed video coding (DVC) is a new paradigm for video compression based on the information-theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion, but it is still inferior to the Inter No Motion and Inter Motion modes.

  12. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...

  13. Discontinuity minimization for omnidirectional video projections

    Science.gov (United States)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to the 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications is proposed. Using a discontinuity entropy minimization function, the projection origin rotation can be determined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  14. Efficient Temporal Action Localization in Videos

    KAUST Repository

    Alwassel, Humam

    2018-01-01

    as an application of the action spotting problem. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently (observing on average 17.3% of the video) but it also accurately finds human activities with 30.8% mAP (0

  15. m-YouTube Mobile UI: Video Selection Based on Social Influence

    Science.gov (United States)

    Marcus, Aaron; Perez, Angel

    The ease-of-use of Web-based video-publishing services provided by applications like YouTube has encouraged a new means of asynchronous communication, in which users can post videos not only to make them public for review and criticism, but also as a way to express moods, feelings, or intentions to an ever-growing network of friends. Following the current trend of porting Web applications onto mobile platforms, the authors sought to explore user-interface design issues of a mobile-device-based YouTube, which they call m-YouTube. They first analyzed the elements of success of the current YouTube Web site and observed its functionality. Then, they looked for unsolved issues that could benefit from information-visualization design for the small screens of mobile phones in order to explore a mobile version of such a product/service. The biggest challenge was to reduce the number of functions and the amount of information to fit onto a mobile phone screen while remaining usable, useful, and appealing within the YouTube context of use and user experience. Borrowing ideas from social research in the area of social influence processes, they made design decisions aimed at helping YouTube users decide what video content to watch and at increasing the chances of YouTube authors being evaluated and observed by peers. The paper proposes a means to visualize large amounts of video relevant to YouTube users by using their friendship network as a relevance indicator to help in the decision-making process.

  16. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    Science.gov (United States)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represents a vast amount of video to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data become a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e., position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.

  17. Assessing the importance of audio/video synchronization for simultaneous translation of video sequences

    OpenAIRE

    Staelens, Nicolas; De Meulenaere, Jonas; Bleumers, Lizzy; Van Wallendael, Glenn; De Cock, Jan; Geeraert, Koen; Vercammen, Nick; Van den Broeck, Wendy; Vermeulen, Brecht; Van de Walle, Rik; Demeester, Piet

    2012-01-01

    Lip synchronization is considered a key parameter during interactive communication. In the case of video conferencing and television broadcasting, the differential delay between audio and video should remain below certain thresholds, as recommended by several standardization bodies. However, further research has also shown that these thresholds can be relaxed, depending on the targeted application and use case. In this article, we investigate the influence of lip sync on the ability to perfor...

  18. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far, all these elements have been taken into consideration independently for the development of image and video quality metrics; we therefore propose an approach that blends all of them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for the user-perceived video quality degradation, and we validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and subjective tests.
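
    The underlying box-counting idea can be sketched as follows for a binary (e.g., edge) image; the colour fractal dimension and lacunarity used in the paper are probabilistic extensions of this grayscale formulation to the RGB space and are not reproduced here.

    ```python
    import numpy as np

    def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
        """Classical box-counting fractal dimension of a binary image.
        Assumes the image contains at least one foreground pixel at every scale."""
        counts = []
        h, w = binary_img.shape
        for s in sizes:
            # count boxes of side s that contain at least one foreground pixel
            n = 0
            for y in range(0, h, s):
                for x in range(0, w, s):
                    if binary_img[y:y + s, x:x + s].any():
                        n += 1
            counts.append(n)
        # slope of log(count) versus log(1/size) estimates the fractal dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope
    ```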

  19. Video as a technology for interpersonal communications: a new perspective

    Science.gov (United States)

    Whittaker, Steve

    1995-03-01

    Some of the most challenging multimedia applications have involved real- time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focussing on novel technologies, we present evaluation data relevant to both the classes of real-time multimedia applications we should develop and their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: in assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally we examine a different class of application 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about interactants.

  20. Real time polarization sensor image processing on an embedded FPGA/multi-core DSP system

    Science.gov (United States)

    Bednara, Marcus; Chuchacz-Kowalczyk, Katarzyna

    2015-05-01

    Most embedded image processing SoCs available on the market are highly optimized for typical consumer applications like video encoding/decoding, motion estimation or several image enhancement processes as used in DSLR or digital video cameras. For non-consumer applications, on the other hand, optimized embedded hardware is rarely available, so often PC based image processing systems are used. We show how a real time capable image processing system for a non-consumer application - namely polarization image data processing - can be efficiently implemented on an FPGA and multi-core DSP based embedded hardware platform.

  1. A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications

    National Research Council Canada - National Science Library

    Richer, Justin; Drury, Jill L

    2006-01-01

    .... This paper segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general...

  2. Comparative assessment of H.265/MPEG-HEVC, VP9, and H.264/MPEG-AVC encoders for low-delay video applications

    Science.gov (United States)

    Grois, Dan; Marpe, Detlev; Nguyen, Tung; Hadar, Ofer

    2014-09-01

    The popularity of low-delay video applications has increased dramatically over the last years due to a rising demand for real-time video content (such as video conferencing or video surveillance), and also due to the increasing availability of relatively inexpensive heterogeneous devices (such as smartphones and tablets). To this end, this work presents a comparative assessment of the two latest video coding standards, H.265/MPEG-HEVC (High-Efficiency Video Coding) and H.264/MPEG-AVC (Advanced Video Coding), and also of the VP9 proprietary video coding scheme. For evaluating H.264/MPEG-AVC, the open-source x264 encoder was selected, which has a multi-pass encoding mode, similarly to VP9. According to experimental results, which were obtained by using similar low-delay configurations for all three examined representative encoders, it was observed that H.265/MPEG-HEVC provides significant average bit-rate savings of 32.5% and 40.8% relative to VP9 and x264 for 1-pass encoding, and average bit-rate savings of 32.6% and 42.2% for 2-pass encoding, respectively. On the other hand, compared to the x264 encoder, typical low-delay encoding times of the VP9 encoder are about 2,000 times higher for 1-pass encoding and about 400 times higher for 2-pass encoding.

  3. Teachers' Reports of Learning and Application to Pedagogy Based on Engagement in Collaborative Peer Video Analysis

    Science.gov (United States)

    Christ, Tanya; Arya, Poonam; Chiu, Ming Ming

    2014-01-01

    Given international use of video-based reflective discussions in teacher education, and the limited knowledge about whether teachers apply learning from these discussions, we explored teachers' learning of new ideas about pedagogy and their self-reported application of this learning. Nine inservice and 48 preservice teachers participated in…

  4. Design considerations for computationally constrained two-way real-time video communication

    Science.gov (United States)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  5. Hawkes process as a model of social interactions: a view on video dynamics

    International Nuclear Information System (INIS)

    Mitchell, Lawrence; Cates, Michael E

    2010-01-01

    We study by computer simulation the 'Hawkes process' that was proposed in a recent paper by Crane and Sornette (2008 Proc. Natl Acad. Sci. USA 105 15649) as a plausible model for the dynamics of YouTube video viewing numbers. We test the claims made there that robust identification is possible for classes of dynamic response following activity bursts. Our simulated time series for the Hawkes process indeed fall into the different categories predicted by Crane and Sornette. However, the Hawkes process gives a much narrower spread of decay exponents than the YouTube data, suggesting limits to the universality of the Hawkes-based analysis.
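
    A univariate Hawkes process with an exponential kernel can be simulated with Ogata's thinning method as sketched below in Python; the parameter values are illustrative only and do not come from the Crane and Sornette study.

    ```python
    import numpy as np

    def simulate_hawkes(mu, alpha, beta, t_max, rng=None):
        """Ogata thinning for a Hawkes process with intensity
        lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
        rng = np.random.default_rng() if rng is None else rng
        events, t = [], 0.0
        while True:
            # upper bound on the intensity until the next event (it only decays)
            lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
            t += rng.exponential(1.0 / lam_bar)          # candidate event time
            if t >= t_max:
                break
            lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
            if rng.uniform() <= lam_t / lam_bar:         # thinning acceptance
                events.append(t)
        return np.array(events)

    # alpha/beta < 1 keeps the process stationary (sub-critical branching)
    times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_max=100.0)
    print(len(times), "simulated viewing events")
    ```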

  6. Hawkes process as a model of social interactions: a view on video dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Lawrence; Cates, Michael E, E-mail: lawrence.mitchell@ed.ac.u [SUPA, School of Physics and Astronomy, University of Edinburgh, JCMB Kings Buildings, Mayfield Road, Edinburgh EH9 3JZ (United Kingdom)

    2010-01-29

    We study by computer simulation the 'Hawkes process' that was proposed in a recent paper by Crane and Sornette (2008 Proc. Natl Acad. Sci. USA 105 15649) as a plausible model for the dynamics of YouTube video viewing numbers. We test the claims made there that robust identification is possible for classes of dynamic response following activity bursts. Our simulated time series for the Hawkes process indeed fall into the different categories predicted by Crane and Sornette. However, the Hawkes process gives a much narrower spread of decay exponents than the YouTube data, suggesting limits to the universality of the Hawkes-based analysis.

  8. HEP visualization and video technology

    International Nuclear Information System (INIS)

    Lebrun, P.; Swoboda, D.

    1994-01-01

    The use of scientific visualization for HEP analysis is briefly reviewed. The applications are highly interactive and very dynamical in nature. At Fermilab, E687, in collaboration with Visual Media Services, has produced a half-hour video tape demonstrating the capability of SGI-EXPLORER applied to a Dalitz analysis of charm decay. This short contribution describes the authors' experience with visualization and video technologies

  9. The MIVS [Modular Integrated Video System] Image Processing System (MIPS) for assisting in the optical surveillance data review process

    International Nuclear Information System (INIS)

    Horton, R.D.

    1990-01-01

    The MIVS (Modular Integrated Video System) Image Processing System (MIPS) is designed to review MIVS surveillance data automatically and identify IAEA-defined objects of safeguards interest. To achieve this, MIPS uses both digital image processing and neural network techniques to detect objects of safeguards interest in an image and assist an inspector in the review of the MIVS video tapes. MIPS must be "trained", i.e., given example images showing the objects that it must recognize, for each different facility. Image processing techniques are used to first identify significantly changed areas of the image. A neural network is then used to determine if the image contains the important object(s). The MIPS algorithms have demonstrated the capability to detect when a spent fuel shipping cask is present in an image after MIPS is properly trained to detect the cask. The algorithms have also demonstrated the ability to reject uninteresting background activities such as people and crane movement. When MIPS detects an important object, the corresponding image is stored to another medium and later replayed for the inspector to review. The MIPS algorithms are being implemented in commercially available hardware: an image processing subsystem and an 80386 personal computer. MIPS will have a high-level, easy-to-use system interface to allow inspectors to train MIPS on MIVS data from different facilities and on various safeguards-significant objects. This paper describes the MIPS algorithms, hardware implementation, and system configuration. 3 refs., 10 figs
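
    The change-detection front end of such a review system can be approximated with a few lines of Python/OpenCV: frames whose changed-area fraction against a reference scene exceeds a threshold are kept for the (separately trained) classifier stage and for inspector review. The thresholds and the classifier hook are illustrative assumptions, not the MIPS parameters.

    ```python
    import cv2
    import numpy as np

    def changed_fraction(reference, frame, thresh=30):
        """Fraction of pixels that differ significantly from the reference scene."""
        diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY))
        return float(np.count_nonzero(diff > thresh)) / diff.size

    def review_filter(reference, frames, min_change=0.05, classifier=None):
        """Keep indices of frames whose changed area exceeds min_change; optionally
        confirm each candidate with a trained classifier (the neural-network stage)."""
        selected = []
        for idx, frame in enumerate(frames):
            if changed_fraction(reference, frame) >= min_change:
                if classifier is None or classifier(frame):
                    selected.append(idx)
        return selected
    ```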

  10. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    Science.gov (United States)

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ the selective encryption method, encrypting only the important and sensitive parts of the video information in order to ensure real-time performance and encryption efficiency. A classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module to encrypt video syntax elements dynamically, controlled by a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
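
    The selective-encryption structure can be sketched as a keystream XOR applied only to the chosen syntax elements. A plain logistic map stands in for the paper's spatiotemporal chaos system and binarization method here, so this illustrates only the overall structure, not the proposed cipher.

    ```python
    def logistic_keystream(length, x0=0.618, r=3.99):
        """Byte keystream from a logistic map (a simple stand-in for the paper's
        spatiotemporal chaos system), scaling each iterate to a byte value."""
        x, out = x0, bytearray()
        for _ in range(length):
            x = r * x * (1.0 - x)
            out.append(int(x * 256) & 0xFF)
        return bytes(out)

    def selective_encrypt(syntax_elements, key_x0=0.618):
        """XOR only the selected sensitive syntax elements (given as byte strings),
        leaving the rest of the bitstream untouched for format compliance."""
        ks = logistic_keystream(sum(len(e) for e in syntax_elements), x0=key_x0)
        pos, out = 0, []
        for e in syntax_elements:
            out.append(bytes(b ^ k for b, k in zip(e, ks[pos:pos + len(e)])))
            pos += len(e)
        return out

    # decryption is the same operation with the same key (XOR stream cipher)
    ```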

  11. Advantages of Live Microscope Video for Laboratory and Teaching Applications

    Science.gov (United States)

    Michels, Kristin K.; Michels, Zachary D.; Hotchkiss, Sara C.

    2016-01-01

    Although spatial reasoning and penetrative thinking skills are essential for many disciplines, these concepts are difficult for students to comprehend. In microscopy, traditional educational materials (i.e., photographs) are static. Conversely, video-based training methods convey dimensionality. We implemented a real-time digital video imaging…

  12. Video Tutorial of Continental Food

    Science.gov (United States)

    Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.

    2018-02-01

    This research is motivated by the belief in the importance of media in the learning process. Media, as an intermediary, serve to focus the attention of learners. The selection of appropriate learning media strongly influences the success of information delivery in cognitive, affective and skill terms. Continental food is a course that studies food originating from Europe and is very complex. To reduce verbalism and provide more realistic learning, tutorial media are needed. Audio-visual tutorial media can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is the development method, with the stages of analyzing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard and producing the video tutorial media. The results show that the making of storyboards should be very thorough and detailed, in accordance with the learning objectives, to reduce errors in video capture and thus save time, cost and effort. In video capture, lighting, shooting angles, and soundproofing contribute greatly to the quality of the tutorial video produced. Shooting should focus on tools, materials, and processing. Video tutorials should be interactive and two-way.

  13. Design of a highly integrated video acquisition module for smart video flight unit development

    Science.gov (United States)

    Lebre, V.; Gasti, W.

    2017-11-01

    CCD and APS devices are widely used in space missions as instrument sensors and/or in avionics units such as star detectors/trackers. Therefore, various and numerous designs of video acquisition chains have been produced. Basically, a classical video acquisition chain consists of two main functional blocks: the Proximity Electronics (PEC), including the detector drivers, and the Analogue Processing Chain (APC) electronics, which embeds the ADC, a master sequencer and the host interface. Nowadays, low-power technologies make it possible to improve the integration, radiometric performance and power budget of video units and to standardize video unit design and development. To this end, ESA has initiated a development activity through a competitive process requesting the expertise of experienced actors in the field of high-resolution electronics for Earth observation and scientific missions. THALES ALENIA SPACE has been granted this activity as prime contractor through an ESA contract called HIVAC, which stands for Highly Integrated Video Acquisition Chain. This paper presents the main objectives of the ongoing HIVAC project and focuses on the functionalities and performances offered by the use of the HIVAC board under development for future optical instruments.

  14. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    Science.gov (United States)

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.
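
    A hierarchical ROI/non-ROI treatment of this kind can be sketched at the level of a per-CTU quantization-parameter map, assigning a lower QP (higher fidelity) to coding tree units that overlap the diagnostic ROI. The offsets below are illustrative assumptions, and the transform-coefficient adjustment used in the paper is not shown.

    ```python
    import numpy as np

    def ctu_qp_map(roi_mask, base_qp=32, roi_offset=-6, non_roi_offset=4, ctu=64):
        """Build a per-CTU QP map from a binary ROI mask: CTUs overlapping the
        diagnostic ROI get a lower QP, background CTUs get a higher QP."""
        h, w = roi_mask.shape
        rows, cols = (h + ctu - 1) // ctu, (w + ctu - 1) // ctu
        qp = np.full((rows, cols), base_qp + non_roi_offset, dtype=int)
        for r in range(rows):
            for c in range(cols):
                block = roi_mask[r * ctu:(r + 1) * ctu, c * ctu:(c + 1) * ctu]
                if block.any():                      # CTU touches the ROI
                    qp[r, c] = base_qp + roi_offset
        return np.clip(qp, 0, 51)                    # valid HEVC QP range
    ```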

  15. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    Directory of Open Access Journals (Sweden)

    Yueying Wu

    Full Text Available High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  16. Network Analysis of an Emergent Massively Collaborative Creation on Video Sharing Website

    Science.gov (United States)

    Hamasaki, Masahiro; Takeda, Hideaki; Nishimura, Takuichi

    The Web technology enables numerous people to collaborate in creation. We designate this as massively collaborative creation via the Web. As an example of massively collaborative creation, we particularly examine video development on Nico Nico Douga, a video sharing website that is popular in Japan. We specifically examine videos on Hatsune Miku, a singing-synthesizer application that has inspired not only song creation but also songwriting, illustration, and video editing. Creators interact to create new content through their social network. In this paper, we analyze the process of developing thousands of videos based on creators' social networks and investigate the relationships between creation activity and social networks. The social network reveals interesting features: creators generate large and sparse social networks including some centralized communities, and the members of such centralized communities share special tags. Different categories of creators have different roles in evolving the network; e.g., songwriters gather more links than other categories, implying that they are triggers of network evolution.

  17. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  18. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on a confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure by reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high-resolution sequences, for which video coding efficiency is crucial in real applications.

  19. Fairness in Paper and Video Resume Screening

    NARCIS (Netherlands)

    A.M.F. Hiemstra (Annemarie)

    2013-01-01

    Recent technological developments have resulted in the introduction of a new type of resume, the video resume, which can be described as a video message in which applicants present themselves to potential employers. Research is struggling to keep pace with the speed

  20. Can Skype be used beyond video calling?

    NARCIS (Netherlands)

    Exarchakos, G.; Menkovski, V.; Liotta, A.

    2011-01-01

    Skype nodes generate a substantial part of real-time bi-directional video traffic nowadays. Employing a range of adaptive mechanisms, the application configures video streaming to meet the requirements of the communication and constraints of the underlying network. While other related works focus on

  1. DML in VIDEO-CONFERENCING APPLICATIONS

    OpenAIRE

    Giske, Mats Andreas

    2012-01-01

    Today's audio in video-conference rooms does not in general meet high-quality audio standards. Most set-ups consist of PC speakers mounted on the wall, with a microphone on the table. With this arrangement, strong room modes are often excited by the speakers. When speaking with someone who also has poor equipment, speech intelligibility can be very poor. One solution presented in this Master thesis is to improve the loudspeaker set-up by mounting a DML (Distributed Mode Loudspeaker) above and parall...

  2. Design of batch audio/video conversion platform based on JavaEE

    Science.gov (United States)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes and other significant features. Faced with massive and diverse data, quickly and efficiently converting it to a unified coding format has brought great difficulties to digital publishing organizations. In view of this demand and the present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+Mybatis development architecture and combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies used in the design of the platform architecture are analyzed, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
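
    The core transcoding step behind such a platform can be illustrated with a small Python sketch that drives FFmpeg through a worker pool. The paper's implementation is Java-based (Spring+SpringMVC+Mybatis), so this is only a language-agnostic illustration of the conversion-server role, and the target codec settings are illustrative choices.

    ```python
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def transcode(src: Path, dst_dir: Path) -> Path:
        """Convert one input file to H.264/AAC in an MP4 container using FFmpeg."""
        dst = dst_dir / (src.stem + ".mp4")
        cmd = ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx264", "-c:a", "aac", str(dst)]
        subprocess.run(cmd, check=True, capture_output=True)
        return dst

    def batch_convert(sources, dst_dir, workers=4):
        """Run several conversions in parallel, mimicking the conversion-server role."""
        dst_dir = Path(dst_dir)
        dst_dir.mkdir(parents=True, exist_ok=True)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda s: transcode(Path(s), dst_dir), sources))
    ```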

  3. Real-time billboard trademark detection and recognition in sports video

    Science.gov (United States)

    Bu, Jiang; Lao, Song-Yan; Bai, Liang

    2013-03-01

    Nowadays, different applications such as automatic video indexing, keyword-based video search and TV commercial applications can be developed by detecting and recognizing billboard trademarks. We propose a hierarchical solution for real-time billboard trademark recognition in various sports videos. Billboard frames are detected in the first level, where a fuzzy decision tree with easily computed features is employed to accelerate the process. In the second level, color and regional SIFT features are combined for the first time to describe the appearance of trademarks, and shared nearest neighbor (SNN) clustering with the χ2 distance is utilized instead of traditional K-means clustering to construct the SIFT vocabulary. Finally, Latent Semantic Analysis (LSA) based SIFT vocabulary matching is performed on the template trademark and the candidate regions in the billboard frame. Preliminary experiments demonstrate the effectiveness of the hierarchical solution, and the real-time constraints are also met by our solution.

  4. CONTEXT-BASED URBAN TERRAIN RECONSTRUCTION FROM UAV-VIDEOS FOR GEOINFORMATION APPLICATIONS

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2012-09-01

    Full Text Available Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore, the needs for covering ad-hoc demand and performing close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms are constantly growing. Using miniaturized unmanned aerial vehicles (MUAVs) represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure for obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure – orientation, dense reconstruction, urban terrain modeling and geo-referencing – are robust, straightforward, and nearly fully automatic. The two last steps – namely, urban terrain modeling from almost-nadir videos and co-registration of models – represent the main contribution of this work and are therefore covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, and instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data set and outline ideas for future work.

  5. Film grain noise modeling in advanced video coding

    Science.gov (United States)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
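
    The extract/model/re-synthesize loop can be sketched as below, with a Gaussian blur standing in for the encoder-side denoiser and a per-channel Gaussian noise model standing in for the parametric grain model described (which also captures power spectral density and cross-channel correlation). The synthetic frame and kernel size are placeholders.

    ```python
    import cv2
    import numpy as np

    def extract_grain(frame, blur_ksize=5):
        """Separate grain as the residual between a frame and a denoised copy."""
        denoised = cv2.GaussianBlur(frame, (blur_ksize, blur_ksize), 0)
        grain = frame.astype(np.int16) - denoised.astype(np.int16)
        return denoised, grain

    def model_grain(grain):
        """Fit a very small parametric model: per-channel standard deviation."""
        return grain.reshape(-1, grain.shape[-1]).std(axis=0)

    def synthesize_grain(shape, sigma, rng=None):
        """Regenerate grain at the decoder from the transmitted parameters."""
        rng = np.random.default_rng() if rng is None else rng
        return rng.normal(0.0, sigma, size=shape)

    # a synthetic frame stands in for a decoded picture
    rng = np.random.default_rng(0)
    frame = np.clip(rng.normal(128, 20, (64, 64, 3)), 0, 255).astype(np.uint8)
    denoised, grain = extract_grain(frame)           # encoder side: code `denoised`
    sigma = model_grain(grain)                       # transmit only `sigma`
    restored = np.clip(denoised + synthesize_grain(frame.shape, sigma), 0, 255).astype(np.uint8)
    ```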

  6. Efficient Foreground Extraction From HEVC Compressed Video for Application to Real-Time Analysis of Surveillance 'Big' Data.

    Science.gov (United States)

    Dey, Bhaskar; Kundu, Malay K

    2015-11-01

    While surveillance video is the biggest source of unstructured Big Data today, the emergence of the high-efficiency video coding (HEVC) standard is poised to have a huge role in lowering the costs associated with transmission and storage. Among the benefits of HEVC over the legacy MPEG-4 Advanced Video Coding (AVC) is a bitrate reduction of 40 percent or more at the same visual quality. Given the bandwidth limitations, video data are compressed essentially by removing the spatial and temporal correlations that exist in the uncompressed form. This causes compressed data, which are already de-correlated, to serve as a vital resource for machine learning with significantly fewer samples for training. In this paper, an efficient approach to foreground extraction/segmentation is proposed using novel spatio-temporal de-correlated block features extracted directly from the HEVC compressed video. Most related techniques, in contrast, work on uncompressed images and consume significant storage and computational resources, not only for the decoding process prior to initialization but also for the feature selection/extraction and background modeling stages following it. The proposed approach has been qualitatively and quantitatively evaluated against several other state-of-the-art methods.

  7. Video transmission on ATM networks. Ph.D. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on bridging network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
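    The dual leaky bucket policer mentioned above can be pictured as two continuous-state buckets, one enforcing the peak rate and one the sustained rate; a cell conforms only if both buckets accept it. The sketch below is a generic software model under that reading; the rates, bucket depths and the drop-without-tagging policy are assumptions, not parameters from the thesis.

    ```python
    class LeakyBucket:
        """Continuous-state leaky bucket: content drains at `rate`, capacity `depth`."""
        def __init__(self, rate, depth):
            self.rate, self.depth = rate, depth
            self.level, self.last_t = 0.0, 0.0

        def drained_level(self, t):
            return max(0.0, self.level - (t - self.last_t) * self.rate)

        def commit(self, t, level):
            self.level, self.last_t = level, t

    class DualLeakyBucket:
        """Police the peak and sustained cell rates jointly."""
        def __init__(self, peak_rate, peak_depth, sustained_rate, burst_depth):
            self.peak = LeakyBucket(peak_rate, peak_depth)
            self.sust = LeakyBucket(sustained_rate, burst_depth)

        def conforms(self, t, size=1.0):
            lp, ls = self.peak.drained_level(t), self.sust.drained_level(t)
            ok = lp + size <= self.peak.depth and ls + size <= self.sust.depth
            # Only a conforming cell adds content; the drain is committed either way.
            self.peak.commit(t, lp + size if ok else lp)
            self.sust.commit(t, ls + size if ok else ls)
            return ok
    ```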

  8. Deriving video content type from HEVC bitstream semantics

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference and no-reference models. Due to the need to have the original video available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively. Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
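    A plausible reading of the weighted-average function described above is sketched below: per-frame spatial activity is approximated by the area-weighted coding-unit split depth, and temporal activity by the area-weighted share of intra-predicted units. The exact weighting and any thresholds used in the paper are not given here; the data structure and names are illustrative.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CodingUnit:
        depth: int    # quadtree split depth (0 = whole CTU, larger = smaller CU)
        intra: bool   # True if intra-predicted, False if inter-predicted
        area: int     # CU area in pixels, used as the weight

    def spatio_temporal_features(cus):
        """Area-weighted average split depth (spatial proxy) and area-weighted
        intra fraction (temporal proxy) for the coding units of one frame."""
        total = sum(cu.area for cu in cus)
        spatial = sum(cu.depth * cu.area for cu in cus) / total
        temporal = sum(cu.area for cu in cus if cu.intra) / total
        return spatial, temporal
    ```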

  9. Application of video imaging for improvement of patient set-up

    International Nuclear Information System (INIS)

    Ploeger, Lennert S.; Frenay, Michel; Betgen, Anja; Bois, Josien A. de; Gilhuijs, Kenneth G.A.; Herk, Marcel van

    2003-01-01

    Background and purpose: For radiotherapy of prostate cancer, the patient is usually positioned in the left-right (LR) direction by aligning a single marker on the skin with the projection of a room laser. The aim of this study is to investigate the feasibility of a room-mounted video camera in combination with previously acquired CT data to improve patient set-up along the LR axis. Material and methods: The camera was mounted in the treatment room at the caudal side of the patient. For 22 patients with prostate cancer, 127 video and portal images were acquired. The set-up error determined by video imaging was found by matching video images with rendered CT images using various techniques. This set-up error was retrospectively compared with the set-up error derived from portal images. It was investigated whether the number of corrections based on portal imaging would decrease if the information obtained from the video images had been used prior to irradiation. Movement of the skin with respect to bone was quantified using an analysis of variance method. Results: The measurement of the set-up error was most accurate for a technique in which the outlines and groins on the left and right side of the patient were delineated and aligned individually to the corresponding features extracted from the rendered CT image. The standard deviations (SD) of the systematic and random components of the set-up errors derived from the portal images in the LR direction were 1.5 and 2.1 mm, respectively. When the set-up of the patients was retrospectively adjusted based on the video images, the SDs of the systematic and random errors decreased to 1.1 and 1.3 mm, respectively. From retrospective analysis, a reduction in the number of set-up corrections (from nine to six) is expected if the set-up had been adjusted using the video images. The SD of the magnitude of motion of the skin of the patient with respect to the bony anatomy was estimated to be 1.1 mm. Conclusion: Video
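    The systematic and random components quoted above are commonly estimated as, respectively, the spread of the per-patient mean errors and the root-mean-square of the per-patient standard deviations. The snippet below implements that standard decomposition as an illustration; it is not claimed to be the exact estimator used in the study.

    ```python
    import numpy as np

    def setup_error_components(errors_by_patient):
        """errors_by_patient: one 1-D array of LR set-up errors (mm) per patient.
        Returns (SD of the systematic component, SD of the random component)."""
        means = np.array([np.mean(e) for e in errors_by_patient])
        sds = np.array([np.std(e, ddof=1) for e in errors_by_patient])
        sigma_systematic = np.std(means, ddof=1)       # spread of patient means
        sigma_random = np.sqrt(np.mean(sds ** 2))      # RMS of per-patient SDs
        return sigma_systematic, sigma_random
    ```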

  10. Quality and noise measurements in mobile phone video capture

    Science.gov (United States)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.

  11. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  12. Exterior field evaluation of new generation video motion detection systems

    International Nuclear Information System (INIS)

    Malone, T.P.

    1988-01-01

    Recent advancements in video motion detection (VMD) system design and technology have resulted in several new commercial VMD systems. Considerable interest in the new VMD systems has been generated because the systems are advertised to work effectively in exterior applications. Previous VMD systems, when used in an exterior environment, tended to have very high nuisance alarm rates due to weather conditions, wildlife activity and lighting variations. The new VMD systems advertise more advanced processing of the incoming video signal which is aimed at rejecting exterior environmental nuisance alarm sources while maintaining a high detection capability. This paper discusses the results of field testing, in an exterior environment, of two new VMD systems

  13. QoS INVESTIGATION ON MOODLE’S VIDEO CONFERENCE

    Directory of Open Access Journals (Sweden)

    LINAWATI LINAWATI

    2011-06-01

    Full Text Available Learning Management Systems (LMS) support e-learning as a form of distance learning. Moodle is one of the open source LMS applications that allow multimedia to be embedded into learning activities in a course, such as a video conference session. The paper investigates the quality of service (QoS) of video conference sessions embedded in Moodle, i.e. end-to-end delay, jitter, throughput, packet loss and PSNR. Three scenarios were implemented in the experiment. The scenarios were applied to both wired and wireless transmission, and to p2p and p2m connections. The investigation results show that the QoS of the video conference session meets the standards issued by ITU-T G.1010 and G.114 for a minimum bandwidth of 128 kbps. Thus the video conferencing application integrated in Moodle can run well with a minimum bandwidth of 128 kbps.
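    Of the QoS quantities listed above, PSNR is the one computed from the video frames themselves rather than from packet traces. A minimal reference computation is sketched below; frame alignment and colour handling are left out, and the function name is illustrative.

    ```python
    import numpy as np

    def psnr(reference, received, max_value=255.0):
        """Peak signal-to-noise ratio (dB) between a sent frame and the frame
        reconstructed at the receiving side of the video conference."""
        ref = reference.astype(np.float64)
        rec = received.astype(np.float64)
        mse = np.mean((ref - rec) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(max_value ** 2 / mse)
    ```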

  14. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  15. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors proposed a high-quality and small-capacity lecture-video-file creating system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment having complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment, and by integrating them with image processing, we can produce course materials with a greatly reduced file capacity: the course materials satisfy the requirements both for the temporal resolution needed to follow the lecturer's point-indicating actions and for the high spatial resolution needed to read the small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.
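    One simple way to integrate the two capture streams described above is to pair every low-resolution video frame with the most recent high-resolution still, so the player can show the lecturer's motion while the readable board content comes from the still camera. The sketch below only performs that time alignment; the timestamps and function names are illustrative assumptions.

    ```python
    import bisect

    def latest_still_for_frame(frame_time, still_times):
        """Index of the most recent high-resolution still captured at or before
        the given video-frame timestamp, or None if no still exists yet.
        `still_times` must be sorted in ascending order (seconds)."""
        i = bisect.bisect_right(still_times, frame_time) - 1
        return i if i >= 0 else None

    # Example: stills every 10 s, a video frame at t = 12.4 s uses still #1.
    assert latest_still_for_frame(12.4, [0.0, 10.0, 20.0, 30.0]) == 1
    ```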

  16. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams, knowingly out of their scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in the adaptive video streams and perform poorly...

  17. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same friendly way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the contents of video data are very hard to extract automatically and need to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology, and we show how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  18. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  19. Real time video processing software for the analysis of endoscopic guided-biopsies

    International Nuclear Information System (INIS)

    Ordoñez, C; Bouchet, A; Pastore, J; Blotta, E

    2011-01-01

    The severity of Barrett's esophagus lies, undoubtedly, in the possibility of its malignization. To make an early diagnosis and avoid possible complications, it is absolutely necessary to collect biopsies for histological analysis. This should be done under endoscopic control to avoid mucus areas that may co-exist within the columnar epithelium, which could lead to a false diagnosis. This paper presents real-time video processing software designed to delineate and enhance areas of interest, facilitating the work of the expert.

  20. The influence of sexual music videos on adolescents' misogynistic beliefs: the role of video content, gender, and affective engagement

    NARCIS (Netherlands)

    van Oosten, J.M.F.; Peter, J.; Valkenburg, P.M.

    2015-01-01

    Research on how sexual music videos affect beliefs related to sexual aggression is rare and has not differentiated between the effects of music videos by male and female artists. Moreover, little is known about the affective processes that underlie the effects of sexual music videos. Using data from

  1. Gas leak detection in infrared video with background modeling

    Science.gov (United States)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, the processing speed of the VIBE algorithm sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and refine the results by combining a connected-domain algorithm and a nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
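    For readers unfamiliar with ViBe-style background modeling, the sketch below shows the classic per-pixel sample model: a pixel is background if enough stored samples are close to its current value, and background pixels occasionally refresh a random sample. This is the baseline technique the paper starts from, not the accelerated variant it proposes; the radius, match count and subsampling factor are the usual illustrative defaults.

    ```python
    import numpy as np

    def vibe_classify(frame, samples, radius=20, min_matches=2):
        """Foreground mask for a grayscale frame given a ViBe-style model
        `samples` of shape (N, H, W): a pixel is background when at least
        `min_matches` of its N stored samples lie within `radius` grey levels."""
        diff = np.abs(samples.astype(np.int16) - frame.astype(np.int16))
        matches = (diff < radius).sum(axis=0)
        return matches < min_matches          # True = foreground (e.g. gas plume)

    def vibe_update(frame, samples, foreground, subsample=16, rng=np.random):
        """Conservative update: each background pixel replaces one of its stored
        samples with probability 1/subsample."""
        update = (~foreground) & (rng.randint(0, subsample, size=frame.shape) == 0)
        idx = rng.randint(0, samples.shape[0], size=frame.shape)
        ys, xs = np.nonzero(update)
        samples[idx[ys, xs], ys, xs] = frame[ys, xs]
        return samples
    ```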

  2. ONLINE LEARNING: CAN VIDEOS ENHANCE LEARNING?

    OpenAIRE

    HAJHASHEMI, Karim; ANDERSON, Neil; JACKSON, Cliff; CALTABIANO, Nerina

    2015-01-01

    Higher education lecturers integrate different media into their courses. Internet-based educational video clips have gained prominence, as this media is perceived to promote deeper thought processes, communication and interaction among users, and make classroom content more diverse. This paper provides a literature overview of the increasing importance of online videos across all modes of instruction. It discusses a quantitative and qualitative research design that was used to assess on-line video pe...

  3. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  4. Women as Video Game Consumers

    OpenAIRE

    Kiviranta, Hanna

    2017-01-01

    The purpose of this Thesis is to study women as video game consumers through the games that they play. This was done by case studies on the content of five video games from genres that statistically are popular amongst women. To introduce the topic and to build the theoretical framework, the key terms and the video game industry are introduced. The reader is acquainted with theories on consumer behaviour, buying processes and factors that influence our consuming habits. These aspects are...

  5. Interactive Video, The Next Step

    Science.gov (United States)

    Strong, L. R.; Wold-Brennon, R.; Cooper, S. K.; Brinkhuis, D.

    2012-12-01

    Video has the ingredients to reach us emotionally - with amazing images, enthusiastic interviews, music, and video game-like animations - and it's emotion that motivates us to learn more about our new interest. However, watching video is usually passive. New web-based technology is expanding and enhancing the video experience, creating opportunities to use video with more direct interaction. This talk will look at an Education and Outreach team's experience producing video-centric curriculum using innovative interactive media tools from TED-Ed and FlixMaster. The Consortium for Ocean Leadership's Deep Earth Academy has partnered with the Center for Dark Energy Biosphere Investigations (C-DEBI) to send educators and a video producer aboard three deep sea research expeditions to the Juan de Fuca plate to install and service sub-seafloor observatories. This collaboration between teachers, students, scientists and media producers has proved a productive confluence, providing new ways of understanding both ground-breaking science and the process of science itself - by experimenting with new ways to use multimedia during ocean-going expeditions and developing curriculum and other projects post-cruise.

  6. Classification of video sequences into chosen generalized use classes of target size and lighting level.

    Science.gov (United States)

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance in influencing the ability to achieve a recognition task. They are: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance to the end-users' opinion. The lighting level of an entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, the user interface being very simple and requiring only minimal user interaction.
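    The lighting-level part of the classification above can be approximated by a global luma statistic. The sketch below averages the luma of every frame and applies two thresholds; the thresholds and class names are illustrative, not those of the cited study, and OpenCV is assumed to be available.

    ```python
    import numpy as np
    import cv2  # OpenCV is assumed to be installed

    def lighting_level(video_path, dark_thr=60, bright_thr=140):
        """Classify the lighting level of a whole sequence from its mean luma."""
        cap = cv2.VideoCapture(video_path)
        lumas = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            lumas.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
        cap.release()
        mean_luma = float(np.mean(lumas))
        if mean_luma < dark_thr:
            return "low light", mean_luma
        if mean_luma > bright_thr:
            return "bright", mean_luma
        return "moderate", mean_luma
    ```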

  7. Playing with Process: Video Game Choice as a Model of Behavior

    Science.gov (United States)

    Waelchli, Paul

    2010-01-01

    Popular culture experience in video games creates avenues to practice information literacy skills and model research in a real-world setting. Video games create a unique popular culture experience where players can invest dozens of hours on one game, create characters to identify with, organize skill sets and plot points, collaborate with people…

  8. Videotrees: Improving video surrogate presentation using hierarchy

    NARCIS (Netherlands)

    Jansen, Michel; Heeren, W.F.L.; van Dijk, Elisabeth M.A.G.

    As the amount of available video content increases, so does the need for better ways of browsing all this material. Because the nature of video makes it hard to process, the need arises for adequate surrogates for video that can readily be skimmed and browsed. In this paper, the effects of the use

  9. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, industrial surveillance and monitoring applications such as plant control or building security must be carried out quickly and precisely by fewer people. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to have such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.

  10. Signal Processing

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Signal processing techniques, extensively used nowadays to maximize the performance of audio and video equipment, have been a key part in the design of hardware and software for high energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979

  11. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    Science.gov (United States)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moments domain is applied. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.

  12. Bandwidth allocation for video under quality of service constraints

    CERN Document Server

    Anjum, Bushra

    2014-01-01

    We present queueing-based algorithms to calculate the bandwidth required for a video stream so that the three main Quality of Service constraints, i.e., end-to-end delay, jitter and packet loss, are ensured. Conversational and streaming video-based applications are becoming a major part of the everyday Internet usage. The quality of these applications (QoS), as experienced by the user, depends on three main metrics of the underlying network, namely, end-to-end delay, jitter and packet loss. These metrics are, in turn, directly related to the capacity of the links that the video traffic trave

  13. Application of discriminative models for interactive query refinement in video retrieval

    Science.gov (United States)

    Srivastava, Amit; Khanwalkar, Saurabh; Kumar, Anoop

    2013-12-01

    The ability to quickly search large volumes of video for specific actions or events can provide a dramatic new capability to intelligence agencies. Example-based queries from video are a form of content-based information retrieval (CBIR) where the objective is to retrieve clips from a video corpus, or stream, using a representative query sample to find "more like this". Often, the accuracy of video retrieval is largely limited by the gap between the available video descriptors and the underlying query concept, and such exemplar queries return many irrelevant results along with the relevant ones. In this paper, we present an Interactive Query Refinement (IQR) system which acts as a powerful tool to leverage human feedback and allow intelligence analysts to iteratively refine search queries for improved precision in the retrieved results. In our approach to IQR, we leverage discriminative models that operate on high dimensional features derived from low-level video descriptors in an iterative framework. Our IQR model solicits relevance feedback on examples selected from the region of uncertainty and updates the discriminating boundary to produce a relevance-ranked results list. We achieved a 358% relative improvement in Mean Average Precision (MAP) over the initial retrieval list at a rank cutoff of 100 over 4 iterations. We compare our discriminative IQR model approach to a naïve IQR and show that our model-based approach yields a 49% relative improvement over the naïve system.
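    The feedback loop described above can be sketched with any off-the-shelf discriminative classifier: fit it on the clips labeled so far, rank the corpus by decision score, and ask the analyst to label the clips closest to the decision boundary. The use of a linear SVM, the feature matrix layout and the function names are assumptions for illustration, not the paper's actual model.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def iqr_iteration(features, labeled_idx, labels, n_feedback=10):
        """One interactive-query-refinement round.
        features: (n_clips, n_dims) array; labeled_idx/labels: analyst feedback
        so far (labels must contain both relevant=1 and irrelevant=0 examples).
        Returns the relevance-ranked clip indices and the clips to review next."""
        clf = LinearSVC(C=1.0).fit(features[labeled_idx], labels)
        scores = clf.decision_function(features)
        ranked = np.argsort(-scores)             # relevance-ranked results list
        uncertain = np.argsort(np.abs(scores))   # region of uncertainty first
        seen = set(labeled_idx)
        to_review = [int(i) for i in uncertain if i not in seen][:n_feedback]
        return ranked, to_review
    ```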

  14. Mobile video with mobile IPv6

    CERN Document Server

    Minoli, Daniel

    2012-01-01

    Increased reliance on mobile devices and streaming of video content are two of the most recent changes that have led those in the video distribution industry to be concerned about the shifting or erosion of traditional advertising revenues. Infrastructure providers also need to position themselves to take advantage of these trends. Mobile Video with Mobile IPv6 provides an overview of the current mobile landscape, then delves specifically into the capabilities and operational details of IPv6. The book also addresses 3G and 4G services, the application of Mobile IPv6 to streaming and other mobil

  15. Human recognition in a video network

    Science.gov (United States)

    Bhanu, Bir

    2009-10-01

    Video networks are an emerging interdisciplinary field with significant and exciting scientific and technological challenges. They hold great promise in solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.

  16. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  17. Secure and Efficient Reactive Video Surveillance for Patient Monitoring

    Directory of Open Access Journals (Sweden)

    An Braeken

    2016-01-01

    Full Text Available Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side.

  18. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  19. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD. Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
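    As a software reference for the kind of order-statistic filter realised bit-serially in the FPLD, the sketch below takes the median over a 3x3 spatial window across a few consecutive frames. The window sizes are illustrative and the code makes no attempt to mirror the bit-serial hardware architecture.

    ```python
    import numpy as np

    def spatiotemporal_median(frames, t, window=3):
        """Median over a 3x3 spatial neighbourhood in `window` consecutive
        frames centred on frame t (t must have window//2 frames on each side).
        `frames` is a list of equally sized 2-D grayscale arrays."""
        half = window // 2
        stack = np.stack(frames[t - half:t + half + 1]).astype(np.float32)
        _, H, W = stack.shape
        padded = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
        out = np.empty((H, W), dtype=np.float32)
        for y in range(H):
            for x in range(W):
                out[y, x] = np.median(padded[:, y:y + 3, x:x + 3])
        return out
    ```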

  20. Video Game Characters. Theory and Analysis

    OpenAIRE

    Felix Schröter; Jan-Noël Thon

    2014-01-01

    This essay develops a method for the analysis of video game characters based on a theoretical understanding of their medium-specific representation and the mental processes involved in their intersubjective construction by video game players. We propose to distinguish, first, between narration, simulation, and communication as three modes of representation particularly salient for contemporary video games and the characters they represent, second, between narrative, ludic, and social experien...

  1. Creating a streaming video collection for your library

    CERN Document Server

    Duncan, Cheryl J

    2014-01-01

    Creating a Streaming Video Collection for Your Library covers the main processes associated with streaming video, from licensing to access and evaluation, and will serve as a key reference and source of best practices for libraries adding streaming video titles to their collections.

  2. Standardized access, display, and retrieval of medical video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, monoscopic endoscopes, etc. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  3. The art of assessing quality for images and video

    International Nuclear Information System (INIS)

    Deriche, M.

    2011-01-01

    The early years of this century have witnessed a tremendous growth in the use of digital multimedia data for different communication applications. Researchers from around the world are spending substantial research efforts in developing techniques for improving the appearance of images/video. However, as we know, preserving high quality is a challenging task. Images are subject to distortions during acquisition, compression, transmission, analysis, and reconstruction. For this reason, the research area focusing on image and video quality assessment has attracted a lot of attention in recent years. In particular, compression applications and other multimedia applications need powerful techniques for evaluating quality objectively without human interference. This tutorial will cover the different faces of image quality assessment. We will motivate the need for robust image quality assessment techniques, then discuss the main algorithms found in the literature with a critical perspective. We will present the different metrics used for full reference, reduced reference and no reference applications. We will then discuss the difference between image and video quality assessment. In all of the above, we will take a critical approach to explain which metric can be used for which application. Finally we will discuss the different approaches to analyze the performance of image/video quality metrics, and end the tutorial with some perspectives on newly introduced metrics and their potential applications.

  4. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    Science.gov (United States)

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur and jerkiness), compared to the existing bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
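    The pooling step, mapping the three impairments into a single MOS-like score, can be sketched as a weighted linear combination. The weights, the linear form and the 1-5 output scale below are illustrative assumptions; the cited model's actual mapping and its calibration against VQEG data are not reproduced.

    ```python
    import numpy as np

    def no_reference_quality(blockiness, blur, jerkiness,
                             weights=(0.4, 0.35, 0.25), scale=5.0):
        """Map three impairments (each normalised to [0, 1], higher = worse)
        to a single MOS-like score on a 1..5 scale."""
        w = np.asarray(weights, dtype=float)
        d = np.asarray([blockiness, blur, jerkiness], dtype=float)
        impairment = float(np.clip(np.dot(w, d) / w.sum(), 0.0, 1.0))
        return 1.0 + (scale - 1.0) * (1.0 - impairment)
    ```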

  5. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    Science.gov (United States)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that can perform on real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port on an Android smartphone. It should be noted that this is the first time that chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance amongst a sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile facilities in the future.

  6. Analyzing Structure and Function of Vascularization in Engineered Bone Tissue by Video-Rate Intravital Microscopy and 3D Image Processing.

    Science.gov (United States)

    Pang, Yonggang; Tsigkou, Olga; Spencer, Joel A; Lin, Charles P; Neville, Craig; Grottkau, Brian

    2015-10-01

    Vascularization is a key challenge in tissue engineering. Three-dimensional structure and microcirculation are two fundamental parameters for evaluating vascularization. Microscopic techniques with cellular level resolution, fast continuous observation, and robust 3D postimage processing are essential for evaluation, but have not been applied previously because of technical difficulties. In this study, we report novel video-rate confocal microscopy and 3D postimage processing techniques to accomplish this goal. In an immune-deficient mouse model, vascularized bone tissue was successfully engineered using human bone marrow mesenchymal stem cells (hMSCs) and human umbilical vein endothelial cells (HUVECs) in a poly (D,L-lactide-co-glycolide) (PLGA) scaffold. Video-rate (30 FPS) intravital confocal microscopy was applied in vitro and in vivo to visualize the vascular structure in the engineered bone and the microcirculation of the blood cells. Postimage processing was applied to perform 3D image reconstruction, by analyzing microvascular networks and calculating blood cell viscosity. The 3D volume reconstructed images show that the hMSCs served as pericytes stabilizing the microvascular network formed by HUVECs. Using orthogonal imaging reconstruction and transparency adjustment, both the vessel structure and blood cells within the vessel lumen were visualized. Network length, network intersections, and intersection densities were successfully computed using our custom-developed software. Viscosity analysis of the blood cells provided functional evaluation of the microcirculation. These results show that by 8 weeks, the blood vessels in peripheral areas function quite similarly to the host vessels. However, the viscosity drops about fourfold where it is only 0.8 mm away from the host. In summary, we developed novel techniques combining intravital microscopy and 3D image processing to analyze the vascularization in engineered bone. These techniques have broad

  7. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption

    Directory of Open Access Journals (Sweden)

    Jeyamala Chandrasekaran

    2015-01-01

    Full Text Available Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text-based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs the two-dimensional discrete Henon map for (i) generation of a dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and the Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.
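    A common way to obtain a key-dependent 8-bit S-box from the Henon map is to iterate the map from a key-derived seed and take the permutation induced by sorting the orbit. The sketch below follows that generic recipe; the seed, burn-in length and map parameters are illustrative, and the cited design's exact construction and key schedule are not reproduced.

    ```python
    import numpy as np

    def henon_sbox(x0=0.1, y0=0.3, a=1.4, b=0.3, size=256, burn_in=1000):
        """Key-dependent S-box from the 2-D Henon map
           x[n+1] = 1 - a*x[n]**2 + y[n],   y[n+1] = b*x[n].
        (x0, y0) plays the role of the key; the S-box is the permutation of
        0..size-1 obtained by argsorting the chaotic orbit."""
        x, y = x0, y0
        for _ in range(burn_in):                  # discard the transient
            x, y = 1.0 - a * x * x + y, b * x
        orbit = np.empty(size)
        for i in range(size):
            x, y = 1.0 - a * x * x + y, b * x
            orbit[i] = x
        return np.argsort(orbit).astype(np.uint8)

    sbox = henon_sbox()
    assert sorted(sbox.tolist()) == list(range(256))   # it is a permutation
    ```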

  8. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
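    The encoder-side selection implied above can be sketched as follows: for each block, the encoder upconverts the base layer with every available format-conversion method, picks the one closest to the original, and signals only the chosen index as enhancement data. The candidate-generation step and the SSE criterion are illustrative assumptions.

    ```python
    import numpy as np

    def choose_conversion(original_block, upconverted_candidates):
        """Pick the format-conversion method whose output best matches the
        original block (sum of squared error). The returned index is what an
        AFC enhancement layer would transmit instead of a coded residual."""
        orig = original_block.astype(np.float64)
        errors = [float(np.sum((orig - c.astype(np.float64)) ** 2))
                  for c in upconverted_candidates]
        best = int(np.argmin(errors))
        return best, errors[best]
    ```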

  9. Mosaicking Techniques for Deep Submergence Vehicle Video Imagery - Applications to Ridge2000 Science

    Science.gov (United States)

    Mayer, L.; Rzhanov, Y.; Fornari, D. J.; Soule, A.; Shank, T. M.; Beaulieu, S. E.; Schouten, H.; Tivey, M.

    2004-12-01

    Severe attenuation of visible light and limited power capabilities of many submersible vehicles require acquisition of imagery from short ranges, rarely exceeding 8-10 meters. Although modern video and photo equipment makes high-resolution video surveying possible, the field of view of each image remains relatively narrow. To compensate for the deficiencies in light and field of view, researchers have been developing techniques for combining images into larger composite images, i.e., mosaicking. A properly constructed, accurate mosaic has a number of well-known advantages in comparison with the original sequence of images, the most notable being improved situational awareness. We have developed software strategies for PC-based computers that permit conversion of video imagery acquired from any underwater vehicle, operated within either absolute (e.g. LBL or USBL) or relative (e.g. Doppler Velocity Log-DVL) navigation networks, to quickly produce a set of geo-referenced photomosaics which can then be directly incorporated into a Geographic Information System (GIS) database. The timescale of processing is rapid enough to permit analysis of the resulting mosaics between submersible dives, thus enhancing the efficiency of deep-sea research. Commercial image processing packages usually handle cases where there is no or little parallax - an unlikely situation for the undersea world, where terrain has pronounced 3D content and imagery is acquired from moving platforms. The approach we have taken is optimized for situations in which there is significant relief and thus parallax in the imagery (e.g. seafloor fault scarps or constructional volcanic escarpments and flow fronts). The basis of all mosaicking techniques is a pair-wise image registration method that finds a transformation relating pixels of two consecutive image frames. We utilize a "rigid affine model" with four degrees of freedom for image registration that allows for camera translation in all directions and
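    A four-degree-of-freedom registration of the kind mentioned above (isotropic scale, rotation, and translation) can be written as a 3x3 homogeneous matrix, and frame-to-mosaic placement follows by composing the pairwise transforms. The sketch below shows only that bookkeeping; estimating the parameters from matched features is not covered, and the function names are illustrative.

    ```python
    import numpy as np

    def similarity_matrix(scale, theta, tx, ty):
        """Homogeneous 3x3 matrix of a 4-DOF (scale, rotation, translation)
        transform relating two consecutive video frames."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[scale * c, -scale * s, tx],
                         [scale * s,  scale * c, ty],
                         [0.0,        0.0,       1.0]])

    def chain_to_mosaic(pairwise):
        """Compose pairwise frame-to-frame transforms into frame-to-mosaic
        transforms, taking the first frame as the mosaic reference."""
        transforms = [np.eye(3)]
        for m in pairwise:
            transforms.append(transforms[-1] @ m)
        return transforms
    ```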

  10. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology can no longer satisfy the desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different video formats, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The Android player, which has all the basic functions of an ordinary player and is able to play normal 2D video, is the basic structure for further development, and RTSP is implemented in this structure for communication. In order to achieve stereoscopic display, pixels are rearranged in the player's decoding part. The decoding part is native code called through the JNI interface so that video frames can be extracted more efficiently. The video formats that we process are left-and-right, top-and-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring and JNI calls. By employing these key technologies, the design has been completed. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere, meeting users' requirements.
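    The pixel-rearrangement step in the decoding path can be illustrated for the left-and-right packed format: the frame is split into the two views and re-interleaved into the pattern an autostereoscopic panel expects. Column interleaving is only one possible panel layout, used here as an assumption; the player's actual rearrangement depends on the target screen.

    ```python
    import numpy as np

    def split_side_by_side(frame):
        """Split a left-and-right packed frame (H x W x 3) into the two views."""
        w = frame.shape[1]
        return frame[:, : w // 2], frame[:, w // 2:]

    def column_interleave(left, right):
        """Alternate columns from the left and right views, a simple pattern
        for naked-eye (lenticular) 3D panels; each view keeps half its columns."""
        out = np.empty_like(left)
        out[:, 0::2] = left[:, 0::2]
        out[:, 1::2] = right[:, 1::2]
        return out
    ```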

  11. Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.

    Science.gov (United States)

    Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua

    2016-04-23

    Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing the expected dependability for some applications, e.g., continuous video processing for security surveillance. The granularity of existing scheduling strategies is too coarse to achieve good performance, and they mainly consider network resources, ignoring computing resources, when scheduling. In this paper, we propose Healthcare4Storm, a framework that derives Storm insights from Storm metrics to gain knowledge of the health status of an application, ultimately arriving at smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level using tuples instead of topologies. A comprehensive evaluation shows that the proposed framework has good performance and can improve the dependability of Storm-based applications.

  12. Student’s Video Production as Formative Assessment

    Directory of Open Access Journals (Sweden)

    Eduardo Gama

    2017-04-01

    Full Text Available Learning assessments are a subject of discussion in both their theoretical and practical approaches. The process of measuring physics learning by high school students, either qualitatively or quantitatively, is one in which it should be possible to identify not only the concepts and contents students failed to achieve but also the reasons for the failure. We propose that student video production offers a very effective formative assessment tool for teachers: as a formative assessment, it produces information that allows an understanding of where and when the learning process succeeded or failed, and the identification, for an individual or a group, of the deficiencies or misunderstandings related to the theme under analysis and its interpretation by students; it also provides a different kind of assessment, related to some other life skills, such as the ability to carry a project through to its conclusion and to work cooperatively. In this paper, we describe the use of videos produced by high school students as an assessment resource. The students were asked to prepare a short video, which was then presented to the whole group and discussed. The videos reveal aspects of students' difficulties that usually do not appear in formal assessments such as tests and questionnaires. After the use of the videos as a component of classroom assessments and the use of the discussions to rethink learning activities in the group, the videos were analysed and classified into various categories. This analysis showed a strong correlation between the technical quality of the video and the content quality of the students' argumentation. It also showed that the students do not base their videos on quick and easy production; they usually choose forms of video production that require careful planning and implementation, and this reflects directly on the overall quality of the video and of the learning process.

  13. Flexible Human Behavior Analysis Framework for Video Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Weilun Lao

    2010-01-01

    Full Text Available We study a flexible framework for semantic analysis of human motion from surveillance video. Successful trajectory estimation and human-body modeling facilitate the semantic analysis of human activities in video sequences. Although human motion is widely investigated, we have extended such research in three aspects. First, by adding a second camera, not only is more reliable behavior analysis possible, but the ongoing scene events can also be mapped onto a 3D setting to facilitate further semantic analysis. The second contribution is the introduction of a 3D reconstruction scheme for scene understanding. Thirdly, we apply a fast scheme to detect different body parts and generate a fitting skeleton model, without using the explicit assumption of an upright body posture. The extension to multiple-view fusion improves the event-based semantic analysis by 15%–30%. Our proposed framework proves its effectiveness as it achieves near real-time performance (13–15 frames/second and 6–8 frames/second for monocular and two-view video sequences, respectively).

  14. VideoSET: Video Summary Evaluation through Text

    OpenAIRE

    Yeung, Serena; Fathi, Alireza; Fei-Fei, Li

    2014-01-01

    In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text ...

  15. Progress in video immersion using Panospheric imaging

    Science.gov (United States)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  16. YouTube Video Genres. Amateur how-to Videos Versus Professional Tutorials

    Directory of Open Access Journals (Sweden)

    Andreea Mogoș

    2015-12-01

    Full Text Available In spite of the fact that there is a vast literature on traditional textual and visual genre classifications, the categorization of web content is still a difficult task, because this medium is fluid, unstable and fast-paced on one hand and, on the other hand, the genre classifications are socially constructed through the tagging process and the interactions (commenting, rating, chatting). This paper focuses on YouTube tutorials and aims to compare video tutorials produced by professionals with amateur video tutorials.

  17. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social-graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  18. The H.264/MPEG4 advanced video coding

    Science.gov (United States)

    Gromek, Artur

    2009-06-01

    H.264/MPEG4-AVC is the newest video coding standard recommended by the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG4-AVC has recently become the leading standard for generic audiovisual services since its deployment for digital television. Nowadays it is commonly used in a wide range of video applications such as mobile services, videoconferencing, IPTV, HDTV, video storage and many more. In this article, the author briefly describes the technology applied in the H.264/MPEG4-AVC video coding standard, approaches to real-time implementation, and directions for future development.

  19. Exploring the role of educational videos in radiation oncology practice

    International Nuclear Information System (INIS)

    Dally, M.J.; Denham, J.W.; Boddy, G.A.

    1994-01-01

    Patient, staff, and medical student education are essential components of modern radiation oncology practice. Greater involvement of patients in the clinical decision-making process, and the need for other health professionals to be better informed about radiation oncology, place further demands on resources, despite ever-increasing logistic constraints. Videos made by individual departments may augment traditional teaching methods and have applications in documenting clinical practice and response. 8 refs., 1 tab.

  20. [Serious video games in pediatrics].

    Science.gov (United States)

    Drummond, D; Tesnière, A; Hadchouel, A

    2018-01-01

    Playing video games has been associated with several negative effects in children. However, serious games, which are video games designed for a primary purpose other than pure entertainment, should not be neglected by pediatricians. In the field of public health, some serious games are a means to decrease drug consumption and improve sexual health behavior in adolescents. In schools, serious games can be used to change students' perception of the disease of one of their classmates, or to train students on basic life support. Serious games are also used with patients: they can distract them from a painful procedure, increase their compliance to treatments, or participate in their rehabilitation. Finally, serious games allow healthcare professionals to train on the management of various medical situations without risk. For every field of application, this review presents the rationale of the use of video games, followed by concrete examples of video games and the results of their scientific evaluation. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  1. Efficient Temporal Action Localization in Videos

    KAUST Repository

    Alwassel, Humam

    2018-04-17

    State-of-the-art temporal action detectors inefficiently search the entire video for specific actions. Despite the encouraging progress these methods achieve, it is crucial to design automated approaches that only explore parts of the video which are the most relevant to the actions being searched. To address this need, we propose the new problem of action spotting in videos, which we define as finding a specific action in a video while observing a small portion of that video. Inspired by the observation that humans are extremely efficient and accurate in spotting and finding action instances in a video, we propose Action Search, a novel Recurrent Neural Network approach that mimics the way humans spot actions. Moreover, to address the absence of data recording the behavior of human annotators, we put forward the Human Searches dataset, which compiles the search sequences employed by human annotators spotting actions in the AVA and THUMOS14 datasets. We consider temporal action localization as an application of the action spotting problem. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently (observing on average 17.3% of the video) but it also accurately finds human activities with 30.8% mAP (0.5 tIoU), outperforming state-of-the-art methods.
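
    The sketch below conveys the action-spotting idea of observing only a few frames: a simple search loop decides where to look next from the scores of nearby observations instead of scanning every frame. It is a toy heuristic standing in for the authors' learned recurrent policy; the feature array, scoring function, and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(video_features, t):
    """Return the feature of the frame at (fractional) position t in [0, 1]."""
    idx = int(t * (len(video_features) - 1))
    return video_features[idx], idx

def action_score(feature):
    """Stand-in classifier: how likely the observed frame belongs to the action."""
    return float(feature.mean())

def spot_action(video_features, max_steps=10, threshold=0.8):
    """Iteratively pick where to look next instead of scanning every frame.

    A learned recurrent policy would propose the next location from its hidden
    state; here a simple heuristic moves toward higher-scoring neighbourhoods.
    """
    t, step = 0.5, 0.25          # start in the middle of the video
    for _ in range(max_steps):
        feat, idx = observe(video_features, t)
        if action_score(feat) >= threshold:
            return idx           # spotted the action after only a few observations
        left, _ = observe(video_features, max(t - step, 0.0))
        right, _ = observe(video_features, min(t + step, 1.0))
        t = max(t - step, 0.0) if action_score(left) > action_score(right) else min(t + step, 1.0)
        step /= 2
    return None

# Toy features: frames 180-200 "contain" the action (higher scores)
features = rng.random((300, 128)) * 0.3
features[180:200] += 0.7
frame_index = spot_action(features)   # an index in the action span if the search lands there, else None
```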

  2. Attitudes of older adults toward shooter video games: An initial study to select an acceptable game for training visual processing.

    Science.gov (United States)

    McKay, Sandra M; Maki, Brian E

    2010-01-01

    A computer-based 'Useful Field of View' (UFOV) training program has been shown to be effective in improving visual processing in older adults. Studies of young adults have shown that playing video games can have similar benefits; however, these studies involved realistic and violent 'first-person shooter' (FPS) games. The willingness of older adults to play such games has not been established. OBJECTIVES: To determine the degree to which older adults would accept playing a realistic, violent FPS game, compared to video games not involving realistic depiction of violence. METHODS: Sixteen older adults (ages 64-77) viewed and rated video-clip demonstrations of the UFOV program and three video-game genres (realistic-FPS, cartoon-FPS, fixed-shooter), and were then given an opportunity to try them out (30 minutes per game) and rate various features. RESULTS: The results supported a hypothesis that the participants would be less willing to play the realistic-FPS game in comparison to the less violent alternatives. After viewing the video-clip demonstrations, 10 of 16 participants indicated they would be unwilling to try out the realistic-FPS game. Of the six who were willing, three did not enjoy the experience and were not interested in playing again. In contrast, all 12 subjects who were willing to try the cartoon-FPS game reported that they enjoyed it and would be willing to play again. A high proportion also tried and enjoyed the UFOV training (15/16) and the fixed-shooter game (12/15). DISCUSSION: A realistic, violent FPS video game is unlikely to be an appropriate choice for older adults. Cartoon-FPS and fixed-shooter games are more viable options. Although most subjects also enjoyed UFOV training, a video-game approach has a number of potential advantages (for instance, 'addictive' properties, low cost, self-administration at home). We therefore conclude that non-violent cartoon-FPS and fixed-shooter video games warrant further investigation as an alternative to the UFOV program.

  3. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors, such as acoustic, seismic, vibration, temperature, and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network, and present lessons learned in the building and daily usage of the network.

  4. Video segmentation and camera motion characterization using compressed data

    Science.gov (United States)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
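
    For the camera-motion task, a pan/tilt/zoom model can be fitted to compressed-domain motion vectors by linear least squares, as in the sketch below. It assumes a simplified model v = [pan, tilt] + zoom * p, with macroblock positions p taken relative to the image centre; the procedure in the paper may differ in its motion model and robustness handling.

```python
import numpy as np

def fit_pan_tilt_zoom(positions, motion_vectors):
    """Least-squares fit of the simple model  v = [pan, tilt] + zoom * p
    to macroblock motion vectors (positions relative to the image centre)."""
    x, y = positions[:, 0], positions[:, 1]
    u, v = motion_vectors[:, 0], motion_vectors[:, 1]
    # Stack both components into one linear system with unknowns [pan, tilt, zoom]
    A = np.block([
        [np.ones_like(x)[:, None], np.zeros_like(x)[:, None], x[:, None]],
        [np.zeros_like(y)[:, None], np.ones_like(y)[:, None], y[:, None]],
    ])
    b = np.concatenate([u, v])
    (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
    return pan, tilt, zoom

# Synthetic motion field: 2% zoom plus a small horizontal pan
pos = np.stack(np.meshgrid(np.arange(-160, 161, 16), np.arange(-120, 121, 16)), -1).reshape(-1, 2).astype(float)
mv = 0.02 * pos + np.array([3.0, 0.0])
print(fit_pan_tilt_zoom(pos, mv))   # approximately (3.0, 0.0, 0.02)
```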

  5. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

    Full Text Available Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in the ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  6. Video game training and the reward system

    Science.gov (United States)

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  7. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  8. Video change detection for fixed wing UAVs

    Science.gov (United States)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for image-based change detection designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system which comprises a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a front-end of a database to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to the real video data acquired by the advanced COTS fixed wing UAV and to synthetic data. For the
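
    The core "before/after" comparison can be sketched as feature-based registration followed by an absolute difference, as below using OpenCV. This is a minimal illustration of the general approach, not the authors' process chain; the file names, ORB/RANSAC choices, and threshold are assumptions, and a real pipeline would add radiometric normalisation and filtering of moving objects.

```python
import cv2
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray, diff_thresh: int = 40) -> np.ndarray:
    """Register the 'after' image onto the 'before' image and return a binary change mask."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(before, None)
    k2, d2 = orb.detectAndCompute(after, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches])   # points in 'after'
    dst = np.float32([k1[m.queryIdx].pt for m in matches])   # corresponding points in 'before'
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = before.shape[:2]
    warped = cv2.warpPerspective(after, H, (w, h))
    diff = cv2.absdiff(before, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask

# Hypothetical file names for the "before" and "after" frames
before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)
mask = change_mask(before, after)
```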

  9. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve the video encoding speed, reducing the execution time of the motion estimation process is essential. Parallel implementation of video encoding systems

  10. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
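
    A minimal Bloom filter over quantized frame descriptors illustrates how a long video segment can be indexed and then probed with an image query, as below. The structure and the "visual word" tokens are assumptions made for illustration; the paper's framework aggregates descriptors differently and tunes the filter parameters.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: insert quantized frame descriptors of a long video
    segment, then test image-query descriptors for (probabilistic) membership."""
    def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(item + i.to_bytes(2, "big")).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Index one segment's (hypothetical) quantized visual words, then probe with a query image
segment_index = BloomFilter()
for visual_word in [b"vw:1042", b"vw:77", b"vw:23981"]:
    segment_index.add(visual_word)

query_words = [b"vw:77", b"vw:555"]
matches = sum(w in segment_index for w in query_words)   # retrieval score for this segment
```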

  11. The impact of video technology on learning: A cooking skills experiment.

    Science.gov (United States)

    Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira

    2017-07-01

    This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Use of a Video Module to Improve Faculty Understanding of the Pharmacists’ Patient Care Process

    Directory of Open Access Journals (Sweden)

    Crystal Deas

    2018-05-01

    Full Text Available Objective: To evaluate change in faculty's knowledge and perceptions after an online video module on the Pharmacists' Patient Care Process (PPCP). Innovation: An educational video module on the PPCP was developed and disseminated to full-time faculty members at Samford University, McWhorter School of Pharmacy. Voluntary and anonymous pre- and post-test assessments were evaluated and analyzed. Critical Analysis: Thirty faculty completed the pre-assessment, and 31 completed the post-assessment (73% and 75% response rates, respectively). A significant improvement in faculty perceptions was indicated by an increase in agreement with the majority (80%) of questions on attitudes toward the PPCP on the post-test. Faculty's knowledge of the introduction and assessment of the PPCP within the school's curriculum was significantly increased after viewing the video module. After viewing the module, more faculty were also able to correctly identify the majority of the PPCP components and their corresponding practice activities. Next Steps: A short video module was effective at improving faculty knowledge and perceptions of the PPCP. Development of a similar faculty development module is feasible for implementation in other schools of pharmacy. Conflict of Interest: We declare no conflicts of interest or financial interests that the authors or members of their immediate families have in any product or service discussed in the manuscript, including grants (pending or received), employment, gifts, stock holdings or options, honoraria, consultancies, expert testimony, patents and royalties. Treatment of Human Subjects: IRB exemption granted. Type: Note

  13. Improving Students' Ability in Writing Hortatory Exposition Texts by Using Process-Genre Based Approach with YouTube Videos as the Media

    Directory of Open Access Journals (Sweden)

    fifin naili rizkiyah

    2017-06-01

    Full Text Available Abstract: This research is aimed at finding out how the Process-Genre Based Approach strategy with YouTube videos as the media is employed to improve the students' ability in writing hortatory exposition texts. This study uses a collaborative classroom action research design following the procedures of planning, implementing, observing, and reflecting. The procedures for carrying out the strategy are: (1) relating several issues/cases to the students' background knowledge and introducing the generic structures and linguistic features of hortatory exposition text as the BKoF stage, (2) analyzing the generic structure and the language features used in the text and getting a model of how to write a hortatory exposition text by using the YouTube video as the MoT stage, (3) writing a hortatory exposition text collaboratively in a small group and in pairs through process writing as the JCoT stage, and (4) writing a hortatory exposition text individually as the ICoT stage. The result shows that the use of the Process-Genre Based Approach and YouTube videos can improve the students' ability in writing hortatory exposition texts. The percentage of students achieving a score above the minimum passing grade (70) improved from only 15.8% (3 out of 19 students) in the preliminary study to 100% (22 students) in Cycle 1. Besides, the score of each aspect (content, organization, vocabulary, grammar, and mechanics) also improved. Key Words: writing ability, hortatory exposition text, process-genre based approach, youtube video

  14. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to playback, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study of indoor surveillance.

  15. Digital video for the desktop

    CERN Document Server

    Pender, Ken

    1999-01-01

    Practical introduction to creating and editing high quality video on the desktop. Using examples from a variety of video applications, benefit from a professional's experience, step-by-step, through a series of workshops demonstrating a wide variety of techniques. These include producing short films, multimedia and internet presentations, animated graphics and special effects.The opportunities for the independent videomaker have never been greater - make sure you bring your understanding fully up to date with this invaluable guide.No prior knowledge of the technology is assumed, with explanati

  16. Enhance Video Film using Retnix method

    Science.gov (United States)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as criteria within this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 Lux); these different environments approximate the conditions of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, it is applied to the full video clip to obtain the enhanced film; second, it is applied to every individual image to obtain the enhanced images, which are then compiled into the enhanced film. This paper shows that the enhancement technique gives good-quality video film based on a statistical criterion, and it is recommended for use in different applications.
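
    The per-frame mean and standard deviation used as criteria can be computed as in the sketch below, with a generic contrast stretch standing in for the Retnix/Retinex enhancement step. The frame size, the 80-frame clip length, and the stretch percentiles are assumptions.

```python
import numpy as np

def frame_statistics(frames):
    """Per-frame mean and standard deviation, used as simple quality criteria
    to compare a clip before and after enhancement."""
    means = np.array([f.mean() for f in frames])
    stds = np.array([f.std() for f in frames])
    return means, stds

def stretch_contrast(frame, low_pct=2, high_pct=98):
    """A generic global contrast stretch standing in for the enhancement step."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    out = np.clip((frame.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return out.astype(np.uint8)

# A clip of 80 dummy low-light grayscale frames, as in the paper's setup
rng = np.random.default_rng(1)
clip = [rng.integers(40, 120, size=(480, 640), dtype=np.uint8) for _ in range(80)]
enhanced = [stretch_contrast(f) for f in clip]
print(frame_statistics(clip)[0].mean(), frame_statistics(enhanced)[0].mean())
```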

  17. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, which is retrieval based on semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.

  18. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  19. Selecting salient frames for spatiotemporal video modeling and segmentation.

    Science.gov (United States)

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
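
    A rough sketch of the frame-saliency idea: fit a GMM on 6-D spatiotemporal features, treat frames that the current model explains poorly as the most informative, and refit on those. The saliency proxy below (negative average log-likelihood) is a simplification of the paper's modified expectation-maximization procedure; the feature dimensions, component count, and sampling scheme are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Dummy 6-D spatiotemporal features (x, y, t and three colour channels),
# a small random sample per frame to keep the sketch fast.
num_frames, samples_per_frame = 60, 200
features = [rng.random((samples_per_frame, 6)) for _ in range(num_frames)]

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(np.vstack(features[::10]))            # rough initial fit on every 10th frame

# A simple proxy for frame saliency: frames poorly explained by the current
# model (low average log-likelihood) bring the most new information.
saliency = np.array([-gmm.score_samples(f).mean() for f in features])
salient_frames = np.argsort(saliency)[-10:]   # 10 most salient frames for refitting
gmm.fit(np.vstack([features[i] for i in salient_frames]))
```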

  20. Serious Video Games for Health: How Behavioral Science Guided the Development of a Serious Video Game.

    Science.gov (United States)

    Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Thompson, Victoria; Jago, Russell; Griffith, Melissa Juliano

    2010-08-01

    Serious video games for health are designed to entertain players while attempting to modify some aspect of their health behavior. Behavior is a complex process influenced by multiple factors, often making it difficult to change. Behavioral science provides insight into factors that influence specific actions that can be used to guide key game design decisions. This article reports how behavioral science guided the design of a serious video game to prevent Type 2 diabetes and obesity among youth, two health problems increasing in prevalence. It demonstrates how video game designers and behavioral scientists can combine their unique talents to create a highly focused serious video game that entertains while promoting behavior change.

  1. Digital video applications in radiologic education: theory, technique, and applications.

    Science.gov (United States)

    Hennessey, J G; Fishman, E K; Ney, D R

    1994-05-01

    Computer-assisted instruction (CAI) has great potential in medical education. The recent explosion of multimedia platforms provides an environment for the seamless integration of text, images, and sound into a single program. This article discusses the role of digital video in the current educational environment as well as its future potential. An in-depth review of the technical decisions involved in this new technology is also presented.

  2. Application of the Coastal and Marine Ecological Classification Standard to ROV Video Data for Enhanced Analysis of Deep-Sea Habitats in the Gulf of Mexico

    Science.gov (United States)

    Ruby, C.; Skarke, A. D.; Mesick, S.

    2016-02-01

    The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists otherwise unfamiliar with the specific content of existing video data to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for application of CMECS to deep-sea video and environmental data are presented.

  3. Precision Security: Integrating Video Surveillance with Surrounding Environment Changes

    Directory of Open Access Journals (Sweden)

    Wenfeng Wang

    2018-01-01

    Full Text Available Video surveillance plays a vital role in maintaining social security although, until now, large uncertainty still exists in danger understanding and recognition, which can be partly attributed to intractable environment changes in the backgrounds. This article presents a brain-inspired computation of the attention value of surrounding environment changes (EC) with a process-based cognition model, by introducing a ratio value λ of EC implications within the considered periods. Theoretical models for computing the warning level of EC implications for the universal video recognition efficiency (quantified as the time cost of implication-ratio variations from λk to λk+1, k = 1, 2, …) are further established. Embedding the proposed models into online algorithms is suggested as a future research priority towards precision security for critical applications; furthermore, schemes for a practical implementation of such integration are also preliminarily discussed.

  4. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades, the extreme evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which accounts for the bulk of data traffic in general. Because video accounts for so much of the data on the World Wide Web, reducing the bandwidth consumed by video would ease the burden on the Internet and let users access video data more easily. For this purpose, many video codecs have been developed, such as HEVC/H.265 and VP9, although comparing such codecs raises the question of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression for video applications, e.g. ad-hoc video conferencing/streaming or surveillance. The paper also describes a benchmark of the HEVC and VP9 compression techniques based on subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental idea of dividing a video file into several segments for compression and putting them back together, to improve the efficiency of video compression on the web as well as in offline mode.

  5. Video in Non-Formal Education: A Bibliographical Study.

    Science.gov (United States)

    Lewis, Peter M.

    Intended to inform United Nations member states about the application of electronic recording and replaying devices in the nonformal education domain, this bibliographic study surveys the literature on video. Since the study is meant to be of particular use to decision makers in developing countries, video projects in North America and Western…

  6. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance require lightweight video encoding with high coding efficiency and error resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics at the decoder. … To improve side information and noise modeling and also learn from previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate for the weaknesses of block-based SI generation and also utilizes clustering of DCT blocks to capture … cross-band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors …

  7. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to the well-known approaches for increasing coding efficiency. It has been shown that foveated coding, where compression quality varies across the image according to the region of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.
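
    The audio-video correlation at the heart of such a detector can be sketched as below: normalize the audio energy envelope and each region's motion-energy trace, correlate them at lag zero, and take the best-correlated region as the audiovisual FoA. The region definitions and toy signals are assumptions; the paper's algorithm and its integration into the H.265/HEVC encoder are more involved.

```python
import numpy as np

def audiovisual_attention(motion_energy, audio_energy):
    """Rank image regions by the correlation between their motion energy over time
    and the audio energy envelope.

    motion_energy: array (num_regions, num_frames) of per-region motion magnitudes
    audio_energy:  array (num_frames,) of frame-aligned audio energy
    """
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    scores = []
    for region in motion_energy:
        r = (region - region.mean()) / (region.std() + 1e-9)
        scores.append(float(np.mean(r * a)))      # normalized cross-correlation at lag 0
    return np.argsort(scores)[::-1], np.array(scores)

# Toy example: region 2 pulses in sync with the audio envelope and should rank first
t = np.arange(250)
audio = np.abs(np.sin(t / 10.0))
regions = np.vstack([np.random.default_rng(i).random(250) for i in range(4)])
regions[2] = audio + 0.1 * np.random.default_rng(9).random(250)
ranking, scores = audiovisual_attention(regions, audio)
```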

  8. MAC-Layer Active Dropping for Real-Time Video Streaming in 4G Access Networks

    KAUST Repository

    She, James

    2010-12-01

    This paper introduces a MAC-layer active dropping scheme to achieve effective resource utilization, which can satisfy the application-layer delay for real-time video streaming in time division multiple access based 4G broadband wireless access networks. When a video frame is not likely to be reconstructed within the application-layer delay bound at a receiver for the minimum decoding requirement, the MAC-layer protocol data units of such video frame will be proactively dropped before the transmission. An analytical model is developed to evaluate how confident a video frame can be delivered within its application-layer delay bound by jointly considering the effects of time-varying wireless channel, minimum decoding requirement of each video frame, data retransmission, and playback buffer. Extensive simulations with video traces are conducted to prove the effectiveness of the proposed scheme. When compared to conventional cross-layer schemes using prioritized-transmission/retransmission, the proposed scheme is practically implementable for more effective resource utilization, avoiding delay propagation, and achieving better video qualities under certain conditions.
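
    The dropping decision itself can be sketched as a simple deadline test: given the MAC queue backlog and the current throughput estimate, a PDU is dropped if it cannot reach the receiver before the application-layer delay bound expires. The parameter names and numbers below are illustrative, not the paper's analytical model.

```python
def should_drop(pdu_bits, queue_bits, throughput_bps, elapsed_ms, delay_bound_ms,
                playback_buffer_ms=0):
    """Drop a video-frame PDU at the MAC layer if, given the current backlog and
    estimated channel throughput, it cannot arrive before the delay bound expires."""
    remaining_ms = delay_bound_ms + playback_buffer_ms - elapsed_ms
    transmit_ms = 1000.0 * (queue_bits + pdu_bits) / max(throughput_bps, 1.0)
    return transmit_ms > remaining_ms

# A 12 kbit PDU behind a 300 kbit backlog on a 2 Mbit/s link, 80 ms already spent
drop = should_drop(pdu_bits=12_000, queue_bits=300_000, throughput_bps=2_000_000,
                   elapsed_ms=80, delay_bound_ms=150)   # -> True: proactively drop
```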

   9. Interactive video in immersive virtual reality

    OpenAIRE

    Gordo Ara, Juan

    2016-01-01

    Currently, developers are creating new virtual reality applications related to the field of video games or computer-generated graphics environments. This is largely due to the arrival on the consumer market of new technologies for experiencing these virtual reality environments. This has led to the wide adoption of 360º videos, which can be viewed directly on smartphones. In addition, cheap adapters allow the phone to be converted into a virtual reality display. In this project we investigated me...

  10. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
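
    The continuous frame-difference transform and the cut detector can be illustrated as below, using raw pixel differences in place of the MPEG macroblock statistics analysed in the paper; the threshold rule (mean plus k standard deviations) is an assumption made for the sketch.

```python
import numpy as np

def frame_difference_curve(frames):
    """Collapse the video into a 1D curve of successive frame differences
    (here on pixel data; the paper works on compressed-domain macroblock features)."""
    return np.array([np.mean(np.abs(frames[i + 1].astype(np.int16) -
                                    frames[i].astype(np.int16)))
                     for i in range(len(frames) - 1)])

def detect_cuts(curve, k=4.0):
    """Flag indices where the difference exceeds mean + k * std of the curve."""
    thresh = curve.mean() + k * curve.std()
    return np.where(curve > thresh)[0] + 1   # +1: the cut starts at the following frame

# Synthetic clip: 100 dark frames, then an abrupt cut to bright frames
rng = np.random.default_rng(0)
frames = [rng.integers(0, 30, (120, 160), dtype=np.uint8) for _ in range(100)]
frames += [rng.integers(200, 255, (120, 160), dtype=np.uint8) for _ in range(100)]
cuts = detect_cuts(frame_difference_curve(frames))   # expect a single cut near index 100
```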

  11. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    OpenAIRE

    Tominaga Shoji; Plataniotis Konstantinos N.; Trémeau Alain

    2008-01-01

    The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the mos...

  12. A Review on Video-Based Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Shian-Ru Ke

    2013-06-01

    Full Text Available This review article surveys extensively the current progresses made toward video-based human activity recognition. Three aspects for human activity recognition are addressed including core technology, human activity recognition systems, and applications from low-level to high-level representation. In the core technology, three critical processing stages are thoroughly discussed mainly: human object segmentation, feature extraction and representation, activity detection and classification algorithms. In the human activity recognition systems, three main types are mentioned, including single person activity recognition, multiple people interaction and crowd behavior, and abnormal activity recognition. Finally the domains of applications are discussed in detail, specifically, on surveillance environments, entertainment environments and healthcare systems. Our survey, which aims to provide a comprehensive state-of-the-art review of the field, also addresses several challenges associated with these systems and applications. Moreover, in this survey, various applications are discussed in great detail, specifically, a survey on the applications in healthcare monitoring systems.

  13. Realistic generation of natural phenomena based on video synthesis

    Science.gov (United States)

    Wang, Changbo; Quan, Hongyan; Li, Chenhui; Xiao, Zhao; Chen, Xiao; Li, Peng; Shen, Liuwei

    2009-10-01

    Research on the generation of natural phenomena has many applications in movie special effects, battlefield simulation, virtual reality, etc. Based on a video synthesis technique, a new approach is proposed for the synthesis of natural phenomena, including flowing water and fire flames. From the fire and flow videos, seamless video of arbitrary length is generated. Then, the interaction between wind and fire flame is achieved through the skeleton of the flame. Later, the flow is also synthesized by extending the video textures using an edge resampling method. Finally, the synthesized natural phenomena can be integrated into a virtual scene.

  14. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    Full Text Available With the development of wireless networks and the improvement of mobile device capabilities, video streaming is more and more widespread in such environments. Under the conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve perceptual visual quality. First, the attention model is derived from analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device and a correlational statistical model, attractive regions of video scenes are derived. The information-object (IOB) weighted rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and the object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
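
    The bit-allocation step can be illustrated with a simple attention-weighted split of the frame budget, as below; the minimum-share floor and the weights are assumptions, and the paper's IOB-weighted rate-distortion model is more elaborate.

```python
import numpy as np

def allocate_bits(attention_weights, total_bits, min_share=0.05):
    """Split the frame bit budget across regions in proportion to their attention
    weights, while guaranteeing every region a minimum share."""
    w = np.asarray(attention_weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    floor = min_share * total_bits
    remaining = total_bits - n * floor
    return floor + remaining * w

# Four regions; the attended object gets most of a 120 kbit frame budget
bits = allocate_bits([0.6, 0.2, 0.1, 0.1], total_bits=120_000)
```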

  15. Implications of the law on video recording in clinical practice

    OpenAIRE

    Henken, Kirsten R.; Jansen, Frank-Willem; Klein, Jan; Stassen, Laurents; Dankelman, Jenny; Dobbelsteen, John

    2012-01-01

    Background: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health ...

  16. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    K.R. Henken (Kirsten R.); F-W. Jansen (Frank-Willem); J. Klein (Jan); L.P. Stassen (Laurents); J. Dankelman (Jenny); J.J. van den Dobbelsteen (John)

    2012-01-01

    Background: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A

  17. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    Henken, K.R.; Jansen, F.W.; Klein, J.; Stassen, L.P.S.; Dankelman, J.; Van den Dobbelsteen, J.J.

    2012-01-01

    Background Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear

  18. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  19. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
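
    A toy version of a DC-based frame signature and clip distance is sketched below; it averages DC (brightness) values over a coarse grid and compares aligned frame signatures, omitting the motion component of the 'DC+M' signature. The grid size and the frame alignment are assumptions.

```python
import numpy as np

def dc_signature(dc_image, grid=(4, 4)):
    """Signature of a representative frame: average DC (brightness) per grid cell."""
    h, w = dc_image.shape
    gh, gw = grid
    cells = dc_image[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).ravel()

def clip_distance(sig_seq_a, sig_seq_b):
    """Distance between two clips = mean distance between aligned frame signatures."""
    n = min(len(sig_seq_a), len(sig_seq_b))
    return float(np.mean([np.linalg.norm(a - b)
                          for a, b in zip(sig_seq_a[:n], sig_seq_b[:n])]))

# Dummy DC images (e.g. 45x80 for a 720x1280 frame with 16x16 macroblocks)
rng = np.random.default_rng(0)
query = [dc_signature(rng.random((45, 80))) for _ in range(8)]      # 8 representative frames
candidate = [dc_signature(rng.random((45, 80))) for _ in range(8)]
score = clip_distance(query, candidate)   # lower score = more similar clips
```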

  20. Popular video for rural development in Peru.

    Science.gov (United States)

    Calvelo Rios, J M

    1989-01-01

    Peru developed its first use of video for training and education in rural areas over a decade ago. On completion of the project in 1986, over 400,000 peasants had attended video courses lasting from 5 to 20 days. The courses covered rural health, family planning, reforestation, agriculture, animal husbandry, housing, nutrition, and water sanitation. There were 125 course packages made and 1,260 video programs of 10-18 minutes in length, plus 780 additional video programs created on human resource development, socioeconomic diagnostics and culture. A total of 160 specialists were trained to produce audiovisual materials and run the programs, and 70 trainers from other countries were also trained. The results showed that many participants applied the training in practice. To promote rural development, two things are needed: capital and physical inputs, such as equipment, fertilizers, pesticides, etc. The video project provided peasants with an additional input that helped them manage the financial and physical inputs more efficiently. Video was used because many farmers are illiterate or speak a language different from the official one. Printed guides that contained many illustrations and few words served as memory aids, and group discussions reinforced practical learning. By seeing, hearing, and doing, the training was effective. Forty-six percent of participants were women, which made fertility and family planning subjects easier to communicate. The production of teaching modules included field investigations, academic research, field recording, tape editing, and experimental application in the field. An agreement with the peasants was made before a course began to help ensure full participation and also to make sure resources were available to use the knowledge gained. Courses were limited to 30 participants, and the cost per participant was $34 per course.

  1. A simplified 2D to 3D video conversion technology——taking virtual campus video production as an example

    Directory of Open Access Journals (Sweden)

    ZHUANG Huiyang

    2012-10-01

    This paper describes a simplified 2D to 3D Video Conversion Technology, taking virtual campus 3D video production as an example. First, it clarifies the meaning of the 2D to 3D Video Conversion Technology and points out the disadvantages of traditional methods. Second, it forms an innovative and convenient method. A flow diagram and the software and hardware configurations are presented. Finally, detailed descriptions of the conversion steps and precautions are given in turn for the three processes, namely preparing materials, modeling objects and baking landscapes, and recording the screen and converting videos.

  2. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  3. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Aran Oya

    2007-01-01

    We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language, which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people, that involve SL and CS video synthesis.

  4. Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment

    Science.gov (United States)

    Cusicanqui, Johnny; Kerle, Norman; Nex, Francesco

    2018-06-01

    Remote sensing has evolved into the most efficient approach to assess post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited due to tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usability for video and photos, shown by a difference of only 2 cm between the accuracies of the video- and photo-based 3-D point clouds. The low video resolution was compensated for by a small ground sampling distance. Low quality and reduced applicability resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of

  5. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  6. Digital video technologies and their network requirements

    Energy Technology Data Exchange (ETDEWEB)

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements are the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  7. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
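
    The kind of on-sensor ROI evaluation described above (minimum, maximum, and mean-versus-level tests that can steer the readout) can be sketched as follows; the ROI coordinates, trigger levels, and return format are invented for the illustration and do not reflect EDICAM's actual firmware logic.

```python
import numpy as np

# Hypothetical ROI list: (name, row slice, column slice, mean level that triggers an event).
ROIS = [
    ("divertor_strip", slice(100, 164), slice(0, 1280), 1200.0),
    ("edge_patch", slice(400, 432), slice(600, 664), 800.0),
]

def evaluate_rois(frame):
    """Run simple min/max/mean tests on each ROI of a raw sensor frame.

    Returns (roi_name, stats, triggered) tuples that a camera controller
    could use to change the readout process or raise an output signal.
    """
    results = []
    for name, rows, cols, level in ROIS:
        roi = frame[rows, cols].astype(np.float64)
        stats = {"min": roi.min(), "max": roi.max(), "mean": roi.mean()}
        results.append((name, stats, stats["mean"] > level))
    return results
```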

  8. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, apart from a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and a boundary resampling algorithm is finally applied to blend the images. Simulation results demonstrate the efficiency of our method.
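
    A hedged sketch of that pipeline using OpenCV is shown below. Because SURF lives in the non-free contrib module, ORB features stand in for it here, and a crude averaging blend stands in for the boundary resampling step; parameter values are illustrative only.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, max_features=2000):
    """Estimate a homography mapping img_b into img_a's frame and blend the overlap."""
    orb = cv2.ORB_create(max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:500]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_a.shape[:2]
    warped = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas = warped.copy()
    canvas[:h, :w] = img_a
    overlap = (warped[:h, :w] > 0) & (img_a > 0)
    # Crude average blend in the overlap region, in place of boundary resampling.
    canvas[:h, :w][overlap] = warped[:h, :w][overlap] // 2 + img_a[overlap] // 2
    return canvas
```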

  9. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  10. Error and Congestion Resilient Video Streaming over Broadband Wireless

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2015-04-01

    In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) codec data-partitioned videos. A packetization strategy is an effective tool to control error rates and, in the paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme for doing this is applied to real-time streaming across a broadband wireless link. The advantages of rateless code rate adaptivity are then demonstrated in the paper. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent packet scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
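
    To make the idea of packet-size dependent scheduling concrete, here is a toy sketch in which partition-A packets (and, among equal priorities, smaller packets) are sent first and enqueueing fails when the buffer would overflow. The partition priorities, buffer size, and policy details are assumptions for the illustration, not the scheduler evaluated in the paper.

```python
import heapq

# Hypothetical priorities: partition A (headers/motion vectors) before B and C.
PARTITION_PRIORITY = {"A": 0, "B": 1, "C": 2}

class PacketScheduler:
    """Toy packet-size-dependent scheduler favouring small, high-priority packets."""

    def __init__(self, buffer_bytes=64_000):
        self.buffer_bytes = buffer_bytes
        self.queued = 0
        self.heap = []
        self.seq = 0

    def enqueue(self, partition, payload):
        """Queue a packet; return False on would-be buffer overflow."""
        if self.queued + len(payload) > self.buffer_bytes:
            return False                      # caller may drop, delay, or re-encode
        key = (PARTITION_PRIORITY[partition], len(payload), self.seq)
        heapq.heappush(self.heap, (key, payload))
        self.queued += len(payload)
        self.seq += 1
        return True

    def dequeue(self):
        """Return the next packet to transmit, or None if the queue is empty."""
        if not self.heap:
            return None
        _, payload = heapq.heappop(self.heap)
        self.queued -= len(payload)
        return payload
```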

  11. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera...

  12. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in the present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will be subsequently implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that take account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible options of routes. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of the project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  13. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
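
    A much-simplified sketch of blending spatial and temporal deinterlacing is given below: missing lines are estimated by vertical line averaging (spatial) and by repeating the previous frame (temporal), and a per-pixel weight derived from their disagreement favours the spatial estimate where motion is likely. The spectral-residue weighting and control grid interpolation of the proposed method are not reproduced, and the threshold constant is an assumption.

```python
import numpy as np

def deinterlace_field(prev_frame, curr_field_lines, top_field=True):
    """Fill the missing lines of an interlaced frame.

    curr_field_lines: full-height array where only every other line is valid.
    prev_frame:       previous progressive (already deinterlaced) frame.
    """
    out = curr_field_lines.astype(np.float64).copy()
    start = 1 if top_field else 0              # rows missing in this field
    for r in range(start, out.shape[0], 2):
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r + 1 < out.shape[0] else out[r - 1]
        spatial = 0.5 * (above + below)                 # line averaging
        temporal = prev_frame[r].astype(np.float64)     # frame repeat
        motion = np.abs(spatial - temporal)
        w = np.clip(motion / 32.0, 0.0, 1.0)            # more motion -> trust spatial
        out[r] = w * spatial + (1.0 - w) * temporal
    return np.clip(out, 0, 255).astype(np.uint8)
```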

  14. Video Browsing on Handheld Devices

    Science.gov (United States)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development enable users now to playback video on their handheld devices in a reasonable quality. However, given the form factor restrictions of such a mobile device, screen size still remains a natural limit and - as the term "handheld" implies - always will be a critical resource. This is not only true for video but any data that is processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, if you look at the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g. hundreds of entries in your music collection). Automatically adapting the zoom level to, for example, the width of table cells when double tapping on the screen enables reasonable browsing of web pages that have originally been designed for large, desktop PC sized screens. A multi touch interface allows you to easily zoom in and out of large text documents and images using two fingers. In the next section, we will illustrate that advanced techniques to browse large video files have been developed in the past years, as well. However, if you look at state-of-the-art video players on mobile devices, normally just simple, VCR like controls are supported (at least at the time of this writing) that only allow users to just start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple skipping of chapters via two single buttons for backward and forward navigation and a small and thus not very sensitive timeline slider.

  15. Nuclear reactions video (knowledge base on low energy nuclear physics)

    International Nuclear Information System (INIS)

    Zagrebaev, V.; Kozhin, A.

    1999-01-01

    The NRV (nuclear reactions video) is an open and continuously extended global system for the management and graphical representation of nuclear data and for video-graphic computer simulation of low energy nuclear dynamics. It consists of a complete and regularly updated nuclear database and well-known theoretical models of low energy nuclear reactions, which together form the 'low energy nuclear knowledge base'. The NRV addresses two main problems: 1) fast, visual retrieval and processing of experimental data on nuclear structure and nuclear reactions; and 2) giving even inexperienced users the possibility to analyze experimental data within reliable, commonly used models of nuclear dynamics. The system is based on the following principles: network and code compatibility with the main existing nuclear databases; maximal simplicity of handling (extended menus, a friendly graphical interface, hypertext descriptions of the models, and so on); and maximal visualization of input data, the dynamics of the studied processes and the final results by means of three-dimensional images, plots, tables, formulas and three-dimensional animation. All the codes are implemented as native Windows applications and run under Windows 95/NT.

  16. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
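
    The block-level step can be sketched as follows: if block PSNR is modelled as a concave quadratic in the bit-depth b, PSNR(b) ≈ αb² + βb + γ with α < 0, then setting the derivative to zero gives b* = −β/(2α), which is clipped to the feasible range and to the block's bit budget. The coefficient handling and budget rule below are illustrative, not the values or allocation fitted in the paper.

```python
import numpy as np

def optimal_bit_depth(alpha, beta, n_samples, bit_budget, b_min=2, b_max=12):
    """Pick the quantization bit-depth for one block.

    PSNR(b) ~ alpha*b^2 + beta*b + gamma with alpha < 0 (gamma does not affect
    the argmax), so the unconstrained optimum is b* = -beta / (2*alpha); the
    block's budget further caps it at bit_budget // n_samples bits per CS sample.
    """
    b_star = -beta / (2.0 * alpha)
    b_budget_cap = bit_budget // n_samples
    return int(np.clip(round(b_star), b_min, min(b_max, b_budget_cap)))
```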

  17. Machinima and Video-Based Soft-Skills Training for Frontline Healthcare Workers.

    Science.gov (United States)

    Conkey, Curtis A; Bowers, Clint; Cannon-Bowers, Janis; Sanchez, Alicia

    2013-02-01

    Multimedia training methods have traditionally relied heavily on video-based technologies, and significant research has shown these to be very effective training tools. However, production of video is time and resource intensive. Machinima is based on videogame technology: it allows game engines to be manipulated into unique scenarios for entertainment or for training and practice applications, and converts these scenarios into video vignettes that tell a story. These vignettes can be interconnected with branching points in much the same way that educational videos are interconnected as vignettes between decision points. This study addressed the effectiveness of machinima-based soft-skills education using avatar actors versus traditional video teaching using human actors in the training of frontline healthcare workers. This research also investigated the difference in presence reactions when using avatar actor-produced video vignettes as compared with human actor-produced video vignettes. Results indicated that the difference in training and/or practice effectiveness is statistically insignificant for presence, interactivity, quality, and the skill of assertiveness. The skill of active listening presented a mixed result, indicating the need for careful attention to detail in situations where body language and facial expressions are critical to communication. This study demonstrates that a significant opportunity exists for the exploitation of avatar actors in video-based instruction.

  18. Interactive design of patient-oriented video-games for rehabilitation: concept and application.

    Science.gov (United States)

    Lupinacci, Giorgia; Gatti, Gianluca; Melegari, Corrado; Fontana, Saverio

    2018-04-01

    Serious video-games are innovative tools used to train the motor skills of subjects affected by neurological disorders. They are often developed to train a specific type of patients and the rules of the game are standardly defined. A system that allows the therapist to design highly patient-oriented video-games, without specific informatics skills, is proposed. The system consists of one personal computer, two screens, a Kinect™ sensor and a specific software developed here for the design of the video-games. It was tested with the collaboration of three therapists and six patients, and two questionnaires were filled in by each patient to evaluate the appreciation of the rehabilitative sessions. The therapists learned easily how to use the system, and no serious difficulties were encountered by the patients. The questionnaires showed an overall good satisfaction by the patients and highlighted the key-role of the therapist in involving the patients during the rehabilitative session. It was found that the proposed system is effective for developing patient-oriented video-games for rehabilitation. The two main advantages are that the therapist is allowed to (i) develop personalized video-games without informatics skills and (ii) adapt the game settings to patients affected by different pathologies. Implications for rehabilitation Virtual reality and serious video games offer the opportunity to transform the traditional therapy into a more pleasant experience, allowing patients to train their motor and cognitive skills. Both the therapists and the patients should be involved in the development of rehabilitative solutions to be highly patient-oriented. A system for the design of rehabilitative games by the therapist is described and the feedback of three therapists and six patients is reported.

  19. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  20. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  1. Automatic processing of CERN video, audio and photo archives

    International Nuclear Information System (INIS)

    Kwiatek, M

    2008-01-01

    The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-time coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services

  2. Automatic processing of CERN video, audio and photo archives

    Energy Technology Data Exchange (ETDEWEB)

    Kwiatek, M [CERN, Geneva (Switzerland)], E-mail: Michal.Kwiatek@cem.ch

    2008-07-15

    The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-time coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services.
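
    The encode-on-change workflow described in the two records above can be illustrated with a small polling loop; the directory layout, encoding profiles, and ffmpeg options below are invented for the sketch and do not reflect the actual CERN DFS/CDS infrastructure or its Windows Media streaming setup.

```python
import hashlib
import pathlib
import subprocess

MASTERS = pathlib.Path("masters")            # hypothetical master-file area
OUTPUT = pathlib.Path("web")                 # hypothetical web-format area
PROFILES = {"web480": ["-vf", "scale=-2:480", "-b:v", "800k"],
            "web1080": ["-vf", "scale=-2:1080", "-b:v", "4M"]}

def digest(path):
    """Content hash used to detect changed masters between passes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def encode_changed(seen):
    """Re-encode any master whose content changed since the last pass."""
    for master in MASTERS.glob("*.mov"):     # extension is an assumption
        h = digest(master)
        if seen.get(master.name) == h:
            continue
        for profile, opts in PROFILES.items():
            target = OUTPUT / f"{master.stem}.{profile}.mp4"
            subprocess.run(["ffmpeg", "-y", "-i", str(master), *opts, str(target)],
                           check=True)
        seen[master.name] = h
    return seen
```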

  3. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    This paper presents effective quality-of-service renegotiating schemes for streaming video. The conventional network supporting quality of service generally allows a negotiation at a call setup. However, it is not efficient for the video application since the compressed video traffic is statistically nonstationary. Thus, we consider the network supporting quality-of-service renegotiations during the data transmission and study effective quality-of-service renegotiating schemes for streaming video. The token bucket model, whose parameters are token filling rate and token bucket size, is adopted for the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of compressed video traffic. In this paper, two renegotiating approaches, that is, fixed renegotiating interval case and variable renegotiating interval case, are examined. Finally, the experimental results are provided to show the performance of the proposed schemes.
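
    The token bucket model referred to above is straightforward to state in code. The sketch below is the generic policer (tokens accrue at the filling rate up to the bucket size); the renegotiation logic that re-estimates these two parameters from the traffic statistics is not reproduced here.

```python
import time

class TokenBucket:
    """Classic token bucket: tokens accrue at `rate` per second up to `size`."""

    def __init__(self, rate, size):
        self.rate = float(rate)      # token filling rate (e.g. bytes/s)
        self.size = float(size)      # token bucket size (burst allowance)
        self.tokens = self.size
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        """True if the packet may be sent now without violating the contract."""
        now = time.monotonic()
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False
```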

  4. Searching for Concurrent Design Patterns in Video Games

    Science.gov (United States)

    Best, Micah J.; Fedorova, Alexandra; Dickie, Ryan; Tagliasacchi, Andrea; Couture-Beil, Alex; Mustard, Craig; Mottishaw, Shane; Brown, Aron; Huang, Zhi Feng; Xu, Xiaoyuan; Ghazali, Nasser; Brownsword, Andrew

    The transition to multicore architectures has dramatically underscored the necessity for parallelism in software. In particular, while new gaming consoles are by and large multicore, most existing video game engines are essentially sequential and thus cannot easily take advantage of this hardware. In this paper we describe techniques derived from our experience parallelizing an open-source video game Cube 2. We analyze the structure and unique requirements of this complex application domain, drawing conclusions about parallelization tools and techniques applicable therein. Our experience and analysis convinced us that while existing tools and techniques can be used to solve parts of this problem, none of them constitutes a comprehensive solution. As a result we were inspired to design a new parallel programming environment (PPE) targeted specifically at video game engines and other complex soft real-time systems. The initial implementation of this PPE, Cascade, and its performance analysis are also presented.

  5. A video-polarimeter and its applications in physics and astronometric observations

    Science.gov (United States)

    Dollfus, Audouin; Fauconnier, Thierry; Dreux, Michel; Boumier, Patrick; Pouchol, Thierry

    1989-01-01

    A video-polarimeter system is described which can image a field in nonpolarized, circularly polarized, or linearly polarized light. Images are formed using a Peltier-effect cooled CCD detector array and a quick look video system, and are stored in a 6-Mo random access memory. The system is demonstrated with a two-dimensional measurement of a plexiglass rod, an open-air inspection of a car park, and a telescopic observation of the moon.

  6. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.

  7. Continuous Video Modeling to Prompt Completion of Multi-Component Tasks by Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Purrazzella, Kaitlin; Purrazzella, Kimberly

    2014-01-01

    This investigation examined the ability of four adults with moderate intellectual disability to complete multi-component tasks using continuous video modeling. Continuous video modeling, which is a newly researched application of video modeling, presents video in a "looping" format which automatically repeats playing of the video while…

  8. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
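
    A minimal sketch of the motion-detection and background-subtraction front end mentioned above, using OpenCV's MOG2 model; the stall polygon, the foreground-ratio occupancy rule, and all thresholds are placeholders rather than the system's actual vehicle detector and violation logic.

```python
import cv2
import numpy as np

# Hypothetical stall polygons in image coordinates: {stall_id: Nx2 int32 array}.
STALLS = {"stall_1": np.array([[100, 300], [220, 300], [220, 420], [100, 420]],
                              dtype=np.int32)}

def occupancy_stream(video_path, fg_ratio=0.35):
    """Yield {stall_id: occupied?} for each frame of an on-street camera feed."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.medianBlur(mask, 5)                               # suppress speckle
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        status = {}
        for stall, poly in STALLS.items():
            stall_mask = np.zeros(mask.shape, np.uint8)
            cv2.fillPoly(stall_mask, [poly], 255)
            inside = cv2.bitwise_and(mask, stall_mask)
            status[stall] = inside.sum() / stall_mask.sum() >= fg_ratio
        yield status
    cap.release()
```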

  9. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms.

    Science.gov (United States)

    Maggioni, Matteo; Boracchi, Giacomo; Foi, Alessandro; Egiazarian, Karen

    2012-09-01

    We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.
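
    A heavily reduced sketch of the grouping-and-collaborative-filtering paradigm is shown below: one 4-D group of mutually similar spatiotemporal volumes is decorrelated by a separable DCT, hard-thresholded, and inverse transformed. The motion-compensated block tracking, similarity grouping, Wiener stage, and aggregation of the actual method are omitted, and the threshold rule is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_filter(group, sigma, lam=2.7):
    """Filter one 4-D group of mutually similar spatiotemporal volumes.

    group: array of shape (n_volumes, t, h, w), i.e. similar volumes stacked
           along the fourth (nonlocal) dimension as described above.
    A separable orthonormal DCT decorrelates the group, hard thresholding
    enforces sparsity, and the inverse transform returns the volume estimates.
    """
    spectrum = dctn(group, norm="ortho")
    spectrum[np.abs(spectrum) < lam * sigma] = 0.0      # hard shrinkage
    return idctn(spectrum, norm="ortho")
```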

  10. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other movi...

  11. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...
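
    A hedged sketch of the colour-marker idea from the two records above: markers are segmented by an HSV threshold and reported by their centroids, which a tracker could then associate across frames. The HSV bands, minimum blob area, and OpenCV 4.x contour API are assumptions of the example; the papers' handling of occlusion and very fast movement is not reproduced.

```python
import cv2
import numpy as np

# Hypothetical HSV range for a saturated red marker (two hue bands near 0/180).
HSV_BANDS = [(np.array([0, 120, 80]), np.array([8, 255, 255])),
             (np.array([172, 120, 80]), np.array([180, 255, 255]))]

def marker_centroids(frame_bgr, min_area=60):
    """Return (x, y) centroids of colour markers visible in one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = np.zeros(hsv.shape[:2], np.uint8)
    for lo, hi in HSV_BANDS:
        mask |= cv2.inRange(hsv, lo, hi)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                      # ignore small noise blobs
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```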

  12. Computer-Aided Video Differential Planimetry

    Science.gov (United States)

    Tobin, Michael; Djoleto, Ben D.

    1984-08-01

    THE VIDEO DIFFERENTIAL PLANIMETER (VDP)1 is a remote sensing instrument that can measure minute changes in the area of any object seen by an optical scanning system. The composite video waveforms obtained by scanning the object against a contrasting background are amplified and shaped to yield a sequence of constant amplitude pulses whose polarity distinguishes the studied area from its background and whose varying widths reflect the dynamics of the viewed object. These pulses are passed through a relatively long time-constant capacitor-resistor circuit and are then fed into an integrator. The net integration voltage resulting from the most recent sequence of object-background time pulses is recorded and the integrator is returned to zero at the end of each video frame. If the object's area remains constant throughout the following frame, the integrator's summation will also remain constant. However, if the object's area varies, the positive and negative time pulses entering the integrator will change, and the integrator's summation will vary proportionately. The addition of a computer interface and a video recorder enhances the versatility and the resolving power of the VDP by permitting the repeated study and analysis of selected portions of the recorded data, thereby uncovering the major sources of the object's dynamics. Among the medical and biological procedures for which COMPUTER-AIDED VIDEO DIFFERENTIAL PLANIMETRY is suitable are Ophthalmoscopy, Endoscopy, Microscopy, Plethysmography, etc. A recent research study in Ophthalmoscopy2 will be cited to suggest a useful application of Video Differential Planimetry.

  13. Training basic laparoscopic skills using a custom-made video game

    OpenAIRE

    Goris, Jetse; Jalink, Maarten B.; ten Cate Hoedemaker, Henk O.

    2014-01-01

    Video games are accepted and used for a wide variety of applications. In the medical world, research on the positive effects of playing games on basic laparoscopic skills is rapidly increasing. Although these benefits have been proven several times, no institution actually uses video games for surgical training. This Short Communication describes some of the theoretical backgrounds, development and underlying educational foundations of a specifically designed video game and custom-made hardwa...

  14. GRAPHICS PROCESSING UNITS: MORE THAN THE PATHWAY TO REALISTIC VIDEO-GAMES

    Directory of Open Access Journals (Sweden)

    CARLOS TRUJILLO

    2011-01-01

    The large video game market has driven rapid progress in hardware and software aimed at achieving ever more realistic gaming environments. Among these developments are graphics processing units (GPUs), whose purpose is to free the central processing unit (CPU) from the elaborate computations that give video games their "life". To achieve this, GPUs are equipped with multiple processing cores operating in parallel, which allows them to be used for tasks far more diverse than video game development. This article presents a brief description of the features of the compute unified device architecture (CUDA™), a parallel computing architecture for GPUs. An application of this architecture to the numerical reconstruction of holograms is presented, for which a speed-up of 11X is reported with respect to the performance achieved on a CPU.

  15. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    Science.gov (United States)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can

  16. Video segmentation for post-production

    Science.gov (United States)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
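
    The block-level analysis described above can be approximated in the pixel domain as follows: each macroblock's DC value is stood in for by its mean colour, and a cut is flagged when most blocks change abruptly between consecutive frames. Reading DC coefficients directly from the Motion JPEG stream, the block cross-correlation test, and the fade/jump-cut handling are not reproduced; thresholds are illustrative.

```python
import numpy as np

def block_means(frame, block=16):
    """Per-macroblock mean colour, a pixel-domain stand-in for the DC coefficient."""
    h, w = frame.shape[:2]
    f = frame[:h - h % block, :w - w % block].astype(np.float64)
    return f.reshape(h // block, block, w // block, block, -1).mean(axis=(1, 3, 4))

def detect_cuts(frames, block_thresh=20.0, frac_thresh=0.6):
    """Return frame indices where a hard cut is likely."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        means = block_means(frame)
        if prev is not None:
            changed = np.abs(means - prev) > block_thresh
            if changed.mean() > frac_thresh:   # most blocks changed at once
                cuts.append(i)
        prev = means
    return cuts
```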

  17. Gradual cut detection using low-level vision for digital video

    Science.gov (United States)

    Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae

    1996-09-01

    Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate the shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to many practical applications, such as video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure content changes between video frames. However, they could not detect special effects such as dissolves, wipes, fade-ins, fade-outs, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results from applying it to commercial video are then presented and evaluated.

  18. Zoneminder as ‘Software as a Service’ and Load Balancing of Video Surveillance Requests

    DEFF Research Database (Denmark)

    Deshmukh, Aaradhana A.; Mihovska, Albena D.; Prasad, Ramjee

    2012-01-01

    Cloud computing is evolving as a key computing platform for sharing resources that include infrastructures, software, applications, and business processes. Virtualization is a core technology for enabling cloud resource sharing. Software as a Service (SaaS) on the cloud platform provides software application vendors a Web-based delivery model to serve large numbers of clients with a multi-tenancy based infrastructure and application sharing architecture, so as to benefit greatly from the economy of scale. The emergence of the Software-as-a-Service (SaaS) business model has attracted great attention from both researchers and practitioners. SaaS vendors deliver on-demand information processing services to users, and thus offer computing utility rather than the standalone software itself. This paper proposes a deployment of an open source video surveillance application named Zoneminder

  19. Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.

    Science.gov (United States)

    Tambo, Asongu L; Bhanu, Bir

    2016-05-01

    The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions-of-interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real-time applications in pollen tube microscopy.
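
    Seam Carving as used above can be illustrated with the standard dynamic-programming formulation: given an energy map (for instance the gradient magnitude of the fused image), the minimum-energy vertical seam is found by accumulating costs row by row and backtracking. How the energy is built from the fused fluorescence/brightfield data and how seams yield the cell boundary follow the paper and are not reproduced here.

```python
import numpy as np

def minimum_vertical_seam(energy):
    """Return one column index per row tracing the minimum-energy vertical seam."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]    # parent one column to the left
        right = np.r_[cost[r - 1, 1:], np.inf]    # parent one column to the right
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):                # backtrack through the cost map
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```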

  20. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and, with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  1. Helping activate children through the use of video games

    OpenAIRE

    Lomax, Jørn Vollan

    2015-01-01

    The video games industry is now one of the biggest entertainment industries in the world. Forbes magazine estimates that the video game industry will sell games for 70 billion dollars by the end of 2015, and the biggest growth is in the mobile market. While most of the video game industry is creating games strictly for entertainment purposes, there is a growing demand for games that can be used for other applications. This paper will look into making games that help chi...

  2. GIF Video Sentiment Detection Using Semantic Sequence

    Directory of Open Access Journals (Sweden)

    Dazhen Lin

    2017-01-01

    With the development of social media, an increasing number of people use short videos in social media applications to express their opinions and sentiments. However, sentiment detection of short videos is a very challenging task because of the semantic gap problem and sequence based sentiment understanding problem. In this context, we propose a SentiPair Sequence based GIF video sentiment detection approach with two contributions. First, we propose a Synset Forest method to extract sentiment related semantic concepts from WordNet to build a robust SentiPair label set. This approach considers the semantic gap between label words and selects a robust label subset which is related to sentiment. Secondly, we propose a SentiPair Sequence based GIF video sentiment detection approach that learns the semantic sequence to understand the sentiment from GIF videos. Our experiment results on GSO-2016 (GIF Sentiment Ontology) data show that our approach not only outperforms four state-of-the-art classification methods but also shows better performance than the state-of-the-art middle level sentiment ontology features, Adjective Noun Pairs (ANPs).

  3. [Flexible ENT endoscopy--video technic].

    Science.gov (United States)

    Rasinger, G A; Horak, F

    1985-02-01

    This study discusses solutions to the problem of documenting moving processes in the field of otolaryngology. A flexible bronchoscope and the video equipment connected to it are presented as a specific solution to the problem, illustrated with ample observations. A technical comparison is used as the basis for a discussion of the pros and cons of the video and film techniques. A successful arrangement of examination facilities illustrates the future of flexible-endoscope techniques in the field of otolaryngology.

  4. Training basic laparoscopic skills using a custom-made video game

    NARCIS (Netherlands)

    Goris, Jetse; Jalink, Maarten B; Ten Cate Hoedemaker, Henk O

    Video games are accepted and used for a wide variety of applications. In the medical world, research on the positive effects of playing games on basic laparoscopic skills is rapidly increasing. Although these benefits have been proven several times, no institution actually uses video games for

  5. Experimental evidence for suspense as determinant of video game enjoyment

    NARCIS (Netherlands)

    Klimmt, C.; Rizzo, A.; Vorderer, P.A.; Koch, J.; Fischer, T.

    2009-01-01

    Based on theoretical assumptions from film psychology and their application to video games, the hypothesis is tested that suspense is a major factor in video game enjoyment. A first-person shooter game was experimentally manipulated to create either a low level or a high level of suspense.

  6. The newest digital signal processing

    International Nuclear Information System (INIS)

    Lee, Chae Uk

    2002-08-01

    This book deals with the newest digital signal processing. It contains an introduction to the concept, structure and purpose of digital signal processing; signals and systems, including continuous signals, discrete signals and discrete systems; input/output descriptions such as the impulse response, convolution, system interconnection and frequency characteristics; the z-transform, covering its definition, region of convergence, applications and its relationship with the Laplace transform; the discrete Fourier transform and the fast Fourier transform, including the IDFT algorithm and FFT applications; the foundations of digital filters, covering their notation, representation, types, frequency characteristics and design procedure; the design of FIR and IIR digital filters; adaptive signal processing; and audio and video signal processing with applications of digital signal processing.

  7. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    Directory of Open Access Journals (Sweden)

    Riad I. Hammoud

    2014-10-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  8. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    Science.gov (United States)

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.
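
    Neither record above publishes source code; the fragment below is only a schematic illustration of the kind of association VIVA performs, pairing time-stamped analyst call-outs with the track whose time span contains the message and whose position is nearest. The data structures, distance threshold, and coordinates are invented for the example.

        from dataclasses import dataclass

        @dataclass
        class Track:
            track_id: int
            t_start: float   # track life span, seconds into the video
            t_end: float
            x: float         # rough track position (e.g., geolocation)
            y: float

        @dataclass
        class ChatMessage:
            t: float         # time stamp of the analyst call-out
            text: str
            x: float         # position referred to in the call-out
            y: float

        def associate(chats, tracks, max_dist=50.0):
            """Label each chat with the temporally overlapping, spatially nearest track."""
            labels = {}
            for c in chats:
                candidates = [tr for tr in tracks if tr.t_start <= c.t <= tr.t_end]
                if not candidates:
                    continue
                best = min(candidates,
                           key=lambda tr: (tr.x - c.x) ** 2 + (tr.y - c.y) ** 2)
                if ((best.x - c.x) ** 2 + (best.y - c.y) ** 2) ** 0.5 <= max_dist:
                    labels[c.text] = best.track_id
            return labels

        tracks = [Track(1, 0.0, 30.0, 10.0, 12.0), Track(2, 5.0, 40.0, 200.0, 180.0)]
        chats = [ChatMessage(12.0, "white pickup turning left", 15.0, 10.0)]
        print(associate(chats, tracks))   # {'white pickup turning left': 1}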

  9. Enhanced visual short-term memory in action video game players.

    Science.gov (United States)

    Blacker, Kara J; Curby, Kim M

    2013-08-01

    Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.

  10. Action video game training for cognitive enhancement

    OpenAIRE

    Green, C. Shawn; Bavelier, Daphné

    2015-01-01

    Here we review the literature examining the perceptual, attentional, and cognitive benefits of playing one sub-type of video games known as ‘action video games,’ as well as the mechanistic underpinnings of these behavioral effects. We then outline evidence indicating the potential usefulness of these commercial off-the-shelf games for practical, real-world applications such as rehabilitation or the training of job-related skills. Finally, we discuss potential core characteristics of action vi...

  11. Video interviewing as a learning resource

    DEFF Research Database (Denmark)

    Hedemann, Lars; Søndergaard, Helle Alsted

    2011-01-01

    The present investigation was carried out as a pilot study, with the aim of obtaining exploratory insights into the field of learning and, more specifically, into how video technology can be used as a means to enhance the outcome of the learning process. The motivation behind the study has its basis in the management education literature, and thereby in the discussion of how to organize teaching in order to equip students with improved skills in reflective realization. Following the notion that experience is the basis for knowledge, the study set out to explore how students at higher education programmes, i.e. at MSc and MBA level, can benefit from utilizing video-recorded interviews in their process of learning and reflection. On the basis of the study, it is suggested that video interviewing makes up an interesting alternative to other learning approaches such as simulation...

  12. Surgical gesture classification from video and kinematic data.

    Science.gov (United States)

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
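
    As a rough illustration of the bag-of-features variant described in this record (not the authors' implementation), the sketch below clusters local descriptors into a visual vocabulary with k-means and classifies each clip from its word histogram with an SVM. The descriptors here are random stand-ins for real spatio-temporal features, and the class and cluster counts are arbitrary.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def fake_clip_descriptors(label, n=80, dim=32):
            # Stand-in for spatio-temporal descriptors extracted from one video clip;
            # each gesture class gets a slightly shifted distribution.
            return rng.normal(loc=label, scale=1.0, size=(n, dim))

        # Two gesture classes (e.g., "grab needle" = 0, "pass needle" = 1), 10 clips each
        clips = [(fake_clip_descriptors(lbl), lbl) for lbl in (0, 1) for _ in range(10)]

        # 1) Learn a visual vocabulary from all descriptors
        all_desc = np.vstack([d for d, _ in clips])
        vocab = KMeans(n_clusters=20, n_init=10, random_state=0).fit(all_desc)

        # 2) Represent each clip as a normalized histogram of visual words
        def bof_histogram(desc):
            words = vocab.predict(desc)
            hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
            return hist / hist.sum()

        X = np.array([bof_histogram(d) for d, _ in clips])
        y = np.array([lbl for _, lbl in clips])

        # 3) Train an SVM on the histograms (training accuracy only, for brevity)
        clf = SVC(kernel="rbf").fit(X, y)
        print("training accuracy:", clf.score(X, y))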

  13. Open-source telemedicine platform for wireless medical video communication.

    Science.gov (United States)

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.

  14. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Science.gov (United States)

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082

  15. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Directory of Open Access Journals (Sweden)

    A. Panayides

    2013-01-01

    Full Text Available An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
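
    PSNR, the objective metric used in the three records above, is straightforward to compute once the original and received frames have been temporally aligned (the role of the VFD step). A minimal per-frame computation for 8-bit frames might look like the following; the frames themselves and their alignment are assumed to come from elsewhere.

        import numpy as np

        def psnr(original, received, max_val=255.0):
            """Peak signal-to-noise ratio between two 8-bit frames of equal size."""
            original = original.astype(np.float64)
            received = received.astype(np.float64)
            mse = np.mean((original - received) ** 2)
            if mse == 0:
                return float("inf")   # identical frames
            return 10.0 * np.log10((max_val ** 2) / mse)

        # Toy example: a QCIF-sized luma plane and a noisy copy of it
        rng = np.random.default_rng(1)
        frame = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)
        noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
        print("PSNR = %.2f dB" % psnr(frame, noisy))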

  16. Cross-Layer Measurement on an IEEE 802.11g Wireless Network Supporting MPEG-2 Video Streaming Applications in the Presence of Interference

    Directory of Open Access Journals (Sweden)

    Alessandro Sona

    2010-01-01

    Full Text Available This paper deals with the performance of wireless local area networks supporting video streaming applications, based on the MPEG-2 video codec, in the presence of interference. IEEE 802.11g standard wireless networks that do not support QoS in accordance with the IEEE 802.11e standard are, in particular, accounted for, and Bluetooth signals, additive white Gaussian noise, and competing data traffic are considered as sources of interference. The goal is twofold: on one side, experimentally assessing and correlating the values that some performance metrics assume simultaneously at different layers of an IEEE 802.11g WLAN delivering video streaming in the presence of in-channel interference; on the other side, deducing helpful and practical hints for designers and technicians, in order to efficiently assess and enhance the performance of an IEEE 802.11g WLAN supporting video streaming under suitable setup conditions and in the presence of interference. To this purpose, an experimental analysis is planned following a cross-layer measurement approach, and a proper testbed within a semi-anechoic chamber is used. Valuable results are obtained in terms of signal-to-interference ratio, packet loss ratio, jitter, video quality, and interference data rate; helpful hints for designers and technicians are finally gained.
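
    The packet-level metrics named above (packet loss ratio and jitter) can be computed from RTP-style sequence numbers and transit times. The sketch below uses the interarrival-jitter estimator of RFC 3550 on invented packet records; it only illustrates the metrics, not the measurement testbed used in the paper.

        def packet_loss_ratio(received_seq, sent_count):
            """Fraction of sent packets that never arrived."""
            return 1.0 - len(set(received_seq)) / float(sent_count)

        def rfc3550_jitter(transit_times):
            """Interarrival jitter (RFC 3550): smoothed mean deviation of transit times [s]."""
            j = 0.0
            for prev, cur in zip(transit_times, transit_times[1:]):
                d = abs(cur - prev)
                j += (d - j) / 16.0
            return j

        # Toy data: 10 packets sent, packet 4 lost, per-packet transit times in seconds
        received_seq = [0, 1, 2, 3, 5, 6, 7, 8, 9]
        transit = [0.020, 0.022, 0.019, 0.025, 0.021, 0.030, 0.024, 0.022, 0.023]
        print("PLR    = %.1f %%" % (100 * packet_loss_ratio(received_seq, 10)))
        print("jitter = %.4f s" % rfc3550_jitter(transit))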

  17. Star Wars in psychotherapy: video games in the office.

    Science.gov (United States)

    Ceranoglu, Tolga Atilla

    2010-01-01

    Video games are used in medical practice during psycho-education in chronic disease management, physical therapy, rehabilitation following traumatic brain injury, and as an adjunct in pain management during medical procedures or cancer chemotherapy. In psychiatric practice, video games aid in social skills training of children with developmental delays and in cognitive behavioral therapy (CBT). This most popular children's toy may prove a useful tool in dynamic psychotherapy of youth. The author provides a framework for using video games in psychotherapy by considering the characteristics of video games and describes the ways their use has facilitated various stages of therapeutic process. Just as other play techniques build a relationship and encourage sharing of emotional themes, sitting together in front of a console and screen facilitates a relationship and allows a safe path for the patient's conflict to emerge. During video game play, the therapist may observe thought processes, impulsivity, temperament, decision-making, and sharing, among other aspects of a child's clinical presentation. Several features inherent to video games require a thoughtful approach as resistance and transference in therapy may be elaborated differently in comparison to more traditional toys. Familiarity with the video game content and its dynamics benefits child mental health clinicians in their efforts to help children and their families.

  18. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home.

    Science.gov (United States)

    Gualotuña, Tatiana; Macías, Elsa; Suárez, Álvaro; C, Efraín R Fonseca; Rivadeneira, Andrés

    2018-03-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate, in real time, the video to the guards, who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hinder control and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The hardware deployed is of very low economic cost; the software frameworks are free. In the experimental tests we have shown that it is possible to communicate intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost during the disruptions in a way transparent to the user, we supported vertical handover processes, and we could save energy of the smartphone's battery. Most important, however, was the high satisfaction of the people who have used the system.

  19. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home

    Science.gov (United States)

    Gualotuña, Tatiana; Fonseca C., Efraín R.; Rivadeneira, Andrés

    2018-01-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate, in real time, the video to the guards, who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hinder control and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The hardware deployed is of very low economic cost; the software frameworks are free. In the experimental tests we have shown that it is possible to communicate intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost during the disruptions in a way transparent to the user, we supported vertical handover processes, and we could save energy of the smartphone's battery. Most important, however, was the high satisfaction of the people who have used the system. PMID:29494551

  20. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home

    Directory of Open Access Journals (Sweden)

    Tatiana Gualotuña

    2018-03-01

    Full Text Available Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate, in real time, the video to the guards, who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hinder control and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The hardware deployed is of very low economic cost; the software frameworks are free. In the experimental tests we have shown that it is possible to communicate intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost during the disruptions in a way transparent to the user, we supported vertical handover processes, and we could save energy of the smartphone's battery. Most important, however, was the high satisfaction of the people who have used the system.

  1. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents.

    Science.gov (United States)

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J M; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. We assigned 270 gaming (i.e. ≥ 2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multi-level-intention to treat-regression analyses were conducted. The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp (β = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥ 1 hour/week during the whole intervention period. The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month-period. Even some effects in the unexpected direction were found, with the control group showing lower BMI-SDS and skin folds than the intervention group

  2. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents.

    Directory of Open Access Journals (Sweden)

    Monique Simons

    Full Text Available The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. We assigned 270 gaming (i.e. ≥ 2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multi-level-intention to treat-regression analyses were conducted. The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp (β) = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥ 1 hour/week during the whole intervention period. The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month-period. Even some effects in the unexpected direction were found, with the control group showing lower BMI-SDS and skin folds than the intervention

  3. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents

    Science.gov (United States)

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J. M.; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    Objective The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. Methods We assigned 270 gaming (i.e. ≥2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multi-level-intention to treat-regression analyses were conducted. Results The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp (β = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥1 hour/week during the whole intervention period. Conclusions The active video game intervention did not result in lower values on anthropometrics in a group of ‘excessive’ non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month-period. Even some effects in the unexpected direction were found, with the control group showing lower BMI

  4. The presentation of seizures and epilepsy in YouTube videos.

    Science.gov (United States)

    Wong, Victoria S S; Stevenson, Matthew; Selwa, Linda

    2013-04-01

    We evaluated videos on the social media website, YouTube, containing references to seizures and epilepsy. Of 100 videos, 28% contained an ictal event, and 25% featured a person with epilepsy recounting his or her personal experience. Videos most commonly fell into categories of Personal Experience/Anecdotal (44%) and Informative/Educational (38%). Fifty-one percent of videos were judged as accurate, and 9% were inaccurate; accuracy was not an applicable attribute in the remainder of the videos. Eighty-five percent of videos were sympathetic towards those with seizures or epilepsy, 9% were neutral, and only 6% were derogatory. Ninety-eight percent of videos were thought to be easily understood by a layperson. The user-generated content on YouTube appears to be more sympathetic and accurate compared to other forms of mass media. We are optimistic that with a shifting ratio towards sympathetic content about epilepsy, the amount of stigma towards epilepsy and seizures will continue to lessen. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, digital still cameras now have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  6. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  7. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement and to discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture analysis and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained using more specialized equipment: mainframe computers and high-precision mechanical scanning digitizers. The procedures are intended as a preliminary stage for automatic defect detection
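
    The enhancement steps named in this record (histogram modification and edge enhancement) map directly onto standard OpenCV operations. The snippet below is a generic illustration rather than the original implementation; 'radiograph.png' is a placeholder file name, and Otsu thresholding stands in for the interactive segmentation.

        import cv2

        # Load a radiographic image as 8-bit grayscale ('radiograph.png' is a placeholder)
        img = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)

        # Histogram modification: global equalization spreads the gray-level distribution
        equalized = cv2.equalizeHist(img)

        # Edge enhancement: unsharp masking (add a scaled high-pass component back in)
        blurred = cv2.GaussianBlur(equalized, (5, 5), 0)
        sharpened = cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)

        # Crude segmentation of potential discontinuities by automatic thresholding
        _, mask = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        cv2.imwrite("radiograph_enhanced.png", sharpened)
        cv2.imwrite("radiograph_mask.png", mask)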

  8. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.
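
    A much simplified version of the text-extraction step can be sketched with OpenCV and Tesseract: sample frames at a fixed interval, binarize them to help the OCR, and build a word-to-timestamp index. This is an illustration only; 'lecture.mp4' and the 30-second sampling interval are assumptions, and the real ICS pipeline applies additional image transformations before OCR.

        import cv2
        import pytesseract
        from collections import defaultdict

        VIDEO = "lecture.mp4"          # placeholder path
        SAMPLE_EVERY_S = 30            # sample one frame every 30 seconds (assumed)

        index = defaultdict(list)      # word -> list of timestamps (seconds)
        cap = cv2.VideoCapture(VIDEO)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        step = int(fps * SAMPLE_EVERY_S)

        frame_no = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_no % step == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                # Binarization usually helps OCR on slide text
                _, binary = cv2.threshold(gray, 0, 255,
                                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
                text = pytesseract.image_to_string(binary)
                for word in set(text.lower().split()):
                    index[word].append(frame_no / fps)
            frame_no += 1
        cap.release()

        # Look up where a term appears in the lecture
        print(index.get("fourier", []))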

  9. On the relative importance of audio and video in the presence of packet losses

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Myakotnykh, Eugene

    2010-01-01

    In streaming applications, unequal protection of audio and video tracks may be necessary to maintain the optimal perceived overall quality. For this purpose, the application should be aware of the relative importance of audio and video in an audiovisual sequence. In this paper, we propose a subjective test arrangement for finding the optimal tradeoff between subjective audio and video qualities in situations when it is not possible to have perfect quality for both modalities concurrently. Our results show that content has a significant impact on the preferred compromise between audio and video quality, but also that the currently used classification criteria for content are not sufficient to predict the users' preference...

  10. Automated UAV-based video exploitation using service oriented architecture framework

    Science.gov (United States)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  11. Automated Generation of Geo-Referenced Mosaics From Video Data Collected by Deep-Submergence Vehicles: Preliminary Results

    Science.gov (United States)

    Rzhanov, Y.; Beaulieu, S.; Soule, S. A.; Shank, T.; Fornari, D.; Mayer, L. A.

    2005-12-01

    Many advances in understanding geologic, tectonic, biologic, and sedimentologic processes in the deep ocean are facilitated by direct observation of the seafloor. However, making such observations is both difficult and expensive. Optical systems (e.g., video, still camera, or direct observation) will always be constrained by the severe attenuation of light in the deep ocean, limiting the field of view to distances that are typically less than 10 meters. Acoustic systems can 'see' much larger areas, but at the cost of spatial resolution. Ultimately, scientists want to study and observe deep-sea processes in the same way we do land-based phenomena so that the spatial distribution and juxtaposition of processes and features can be resolved. We have begun development of algorithms that will, in near real-time, generate mosaics from video collected by deep-submergence vehicles. Mosaics consist of >>10 video frames and can cover 100's of square-meters. This work builds on a publicly available still and video mosaicking software package developed by Rzhanov and Mayer. Here we present the results of initial tests of data collection methodologies (e.g., transects across the seafloor and panoramas across features of interest), algorithm application, and GIS integration conducted during a recent cruise to the Eastern Galapagos Spreading Center (0 deg N, 86 deg W). We have developed a GIS database for the region that will act as a means to access and display mosaics within a geospatially-referenced framework. We have constructed numerous mosaics using both video and still imagery and assessed the quality of the mosaics (including registration errors) under different lighting conditions and with different navigation procedures. We have begun to develop algorithms for efficient and timely mosaicking of collected video as well as integration with navigation data for georeferencing the mosaics. Initial results indicate that operators must be properly versed in the control of the
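
    A bare-bones version of frame-to-frame mosaicking (not the Rzhanov and Mayer package itself) can be written with OpenCV: detect ORB features in two overlapping frames, estimate a homography with RANSAC, and warp one frame into the other's coordinate system. The file names are placeholders.

        import cv2
        import numpy as np

        # Two overlapping seafloor video frames (placeholder file names)
        img1 = cv2.imread("frame_a.png")
        img2 = cv2.imread("frame_b.png")

        # 1) Detect and describe local features
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        # 2) Match descriptors and keep the best matches
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

        # 3) Estimate the homography mapping frame B onto frame A (RANSAC rejects outliers)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

        # 4) Warp frame B into frame A's coordinates and paste frame A on top
        h, w = img1.shape[:2]
        mosaic = cv2.warpPerspective(img2, H, (w * 2, h * 2))
        mosaic[:h, :w] = img1
        cv2.imwrite("mosaic.png", mosaic)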

  12. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),
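
    Of the generic detection-and-tracking algorithms listed, background subtraction is the simplest to demonstrate. The snippet below uses OpenCV's MOG2 background model to box moving objects in a video file; it is a generic illustration, not the book's evaluation software, and 'traffic.mp4' plus the area threshold are placeholders.

        import cv2

        cap = cv2.VideoCapture("traffic.mp4")          # placeholder input video
        bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)                      # foreground mask for this frame
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) > 500:            # ignore tiny blobs
                    x, y, w, h = cv2.boundingRect(c)
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("moving objects", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        cv2.destroyAllWindows()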

  13. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  14. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
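
    The outcome measures named in this record (event timing, frequency, and duration) are simple to derive once coded behaviors carry start and end times. The toy aggregation below is independent of the Simple Video Coder's own file format; the behavior labels and times are invented.

        from collections import defaultdict

        # Coded events: (behavior label, start time in s, end time in s)
        events = [
            ("grooming", 2.0, 5.5),
            ("rearing", 6.0, 7.2),
            ("grooming", 10.0, 14.0),
        ]

        summary = defaultdict(lambda: {"count": 0, "total_duration": 0.0, "onsets": []})
        for label, start, end in events:
            summary[label]["count"] += 1
            summary[label]["total_duration"] += end - start
            summary[label]["onsets"].append(start)

        for label, stats in summary.items():
            print(label, stats)
        # grooming: count 2, total duration 7.5 s; rearing: count 1, total duration 1.2 s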

  15. Modification and Validation of an Automotive Data Processing Unit, Compessed Video System, and Communications Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing airplanes), for the commercial sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA'S test track. In addition, the demonstration also consisted of a briefing, containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  16. Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning

    Science.gov (United States)

    Kenworthy-U'Ren, Amy; Erickson, Anthony

    2009-01-01

    In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…

  17. Video game addiction, ADHD symptomatology, and video game reinforcement.

    Science.gov (United States)

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps < .05). The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.

  18. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. This method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. This method shares a calculation process between the synthesis and segmentation steps; the matching costs calculated through the synthesis step are adaptively fused with other cues depending on the reliability in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  19. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a technique in video coding to decrease the power consumption at a standard video decoder. Coupled with a small dedicated video internal memory cache on a decoder, the technique can substantially decrease the amount of data traffic to the external memory at the decoder. A decrease in data traffic to the external memory at decoder will result in multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders for power savings with the best quality-power cost trade off. Our simulation results show that with a relatively small amount of dedicated video internal memory cache, the technique may decrease the traffic between CPU and external memory over 50%.

  20. Automatic generation of pictorial transcripts of video programs

    Science.gov (United States)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
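
    The content-based sampling described here can be approximated with a simple scene-change detector: emit a key frame whenever the gray-level histogram of the current frame differs sufficiently from that of the last key frame. This is a sketch of the idea only, not the authors' system; 'program.mp4' and the threshold value are assumptions.

        import cv2

        cap = cv2.VideoCapture("program.mp4")      # placeholder input
        THRESHOLD = 0.4                            # histogram-distance threshold (assumed)

        last_hist = None
        key_frames = []                            # (frame index, image) pairs
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.normalize(cv2.calcHist([gray], [0], None, [64], [0, 256]), None)
            if last_hist is None or cv2.compareHist(last_hist, hist,
                                                    cv2.HISTCMP_BHATTACHARYYA) > THRESHOLD:
                key_frames.append((idx, frame))    # treat as a new scene: keep this frame
                last_hist = hist
            idx += 1
        cap.release()
        print("selected %d key frames" % len(key_frames))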

  1. MovieRemix: Having Fun Playing with Videos

    Directory of Open Access Journals (Sweden)

    Nicola Dusi

    2011-01-01

    scenario. Known as remix or video remix, the produced video may have new and different meanings with respect to the source material. Unfortunately, when managing audiovisual objects, the technological aspect can be a burden for many creative users. Motivated by the large success of the gaming market, we propose a novel game and an architecture to make the remix process a pleasant and stimulating gaming experience. MovieRemix allows people to act like a movie director, but instead of dealing with cast and cameras, the player has to create a remixed video starting from a given screenplay and from video shots retrieved from the provided catalog. MovieRemix is not a simple video editing tool nor is it a simple game: it is a challenging environment that stimulates creativity. To tempt players into the game, they can access different levels of screenplay (original, outline, derived) and can also challenge other players. Computational and storage issues are kept at the server side, whereas the client device just needs to have the capability of playing streaming videos.

  2. Logistical and safeguards aspects related to the introduction and implementation of video surveillance equipment by EURATOM

    International Nuclear Information System (INIS)

    Chare, P.J.; Wagner, H.G.; Otto, P.; Schenkel, R.

    1987-01-01

    With the growing availability of reliable video equipment for surveillance applications in safeguards and the disappearance of the Super 8 mm cameras, there will be a period of transition from film camera to video surveillance, a process which started two years ago. This gradual transition, as the film cameras come to the end of their useful lives, will afford the safeguards authorities the opportunity to examine in detail the logistical and procedural changes necessary. This paper examines, on the basis of existing video equipment in use or under development, the differences and problems to be encountered in the approach to surveillance. These problems include tamper resistance of signal transmission, on site and headquarters review, preventative maintenance, reliability, repair, and overall performance evaluation. In addition the advantages and flexibility offered by the introduction of video, such as on site review and increased memory capacity, are also discussed. The paper also considers the overall costs and manpower required by EURATOM for the implementation of the video systems as compared to the existing twin Minolta film camera system to ensure the most efficient use of manpower and equipment

  3. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time-consuming. Starting from the method presented by Susanne Bødker in 1996, three possible areas of expansion to Bødker's method for analyzing video data were found. Firstly, a technological expansion, due to contemporary developments in sophisticated analysis software since the mid-1990s. Secondly, a conceptual expansion, where the applicability of using Activity Theory outside of the context of human–computer interaction is assessed. Lastly, a temporal expansion, by facilitating an organized method for tracking the development of activities over time within the coding and analysis of video data. To expand on the above areas, a prototype coding

  4. An automatic analyzer for sports video databases using visual cues and real-world modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.; Lao, Weilun

    2006-01-01

    With the advent of hard-disk video recording, video databases gradually emerge for consumer applications. The large capacity of disks requires the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key

  5. Playing a first-person shooter video game induces neuroplastic change.

    Science.gov (United States)

    Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian

    2012-06-01

    Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.

  6. Virtually transparent epidermal imagery (VTEI): on new approaches to in vivo wireless high-definition video and image processing.

    Science.gov (United States)

    Anderson, Adam L; Lin, Bingxiong; Sun, Yu

    2013-12-01

    This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple, micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing-both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and in zero-latency applications. In such situations the typical optimized metrics in communication schemes, such as power and data rate, are far less important than latency and hardware footprint that absolutely preclude their use if not satisfied. This work proposes the use of a novel Frequency-Modulated Voltage-Division Multiplexing (FM-VDM) scheme where sensor data is kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from respective cameras, that also compensates for irregular surfaces in real-time, into a single cohesive view of the surgical area. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures.

  7. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  8. Serious Video Games for Health: How Behavioral Science Guided the Development of a Serious Video Game

    OpenAIRE

    Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Thompson, Victoria; Jago, Russell; Griffith, Melissa Juliano

    2010-01-01

    Serious video games for health are designed to entertain players while attempting to modify some aspect of their health behavior. Behavior is a complex process influenced by multiple factors, often making it difficult to change. Behavioral science provides insight into factors that influence specific actions that can be used to guide key game design decisions. This article reports how behavioral science guided the design of a serious video game to prevent Type 2 diabetes and obesity among you...

  9. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video imagery is of great significance for military and medical applications, but nighttime video is often of such poor quality that neither target nor background can be recognized. We therefore enhance nighttime video by fusing infrared and visible video images. Based on the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse the heterologous nighttime images. A transfer matrix is derived from the improved SIFT algorithm and used to rapidly register the heterologous images, while the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, the transfer matrix registers every frame and the αβ-weighted method then fuses every frame, which meets the timing requirements of video. The fused video retains both the clear target information of the infrared video and the detail and color information of the visible video, and it plays back fluently.
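
    The abstract describes two steps: a transfer matrix estimated from SIFT matches registers the infrared frame to the visible frame, and an αβ-weighted blend fuses the registered frames. The sketch below follows that outline with plain OpenCV SIFT and fixed blending weights; it is a simplified stand-in, not the authors' improved SIFT or their αβ rule.

```python
# Sketch of SIFT-based registration (the "transfer matrix" is a homography)
# followed by a fixed alpha/beta weighted fusion of an infrared frame and a
# visible-light frame. Weights and thresholds are illustrative only.
import cv2
import numpy as np

def register_and_fuse(ir_frame, vis_frame, alpha=0.6, beta=0.4):
    ir_gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY) if ir_frame.ndim == 3 else ir_frame
    vis_gray = cv2.cvtColor(vis_frame, cv2.COLOR_BGR2GRAY) if vis_frame.ndim == 3 else vis_frame

    sift = cv2.SIFT_create()
    kp_ir, des_ir = sift.detectAndCompute(ir_gray, None)
    kp_vis, des_vis = sift.detectAndCompute(vis_gray, None)

    # Lowe's ratio test keeps only distinctive matches.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_ir, des_vis, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    src = np.float32([kp_ir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_vis[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)       # the "transfer matrix"

    h, w = vis_frame.shape[:2]
    ir_reg = cv2.warpPerspective(ir_frame, H, (w, h))
    if ir_reg.ndim == 2 and vis_frame.ndim == 3:
        ir_reg = cv2.cvtColor(ir_reg, cv2.COLOR_GRAY2BGR)

    return cv2.addWeighted(ir_reg, alpha, vis_frame, beta, 0)  # alpha*IR + beta*visible
```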

  10. Automated music selection of video ads

    Directory of Open Access Journals (Sweden)

    Wiesener Oliver

    2017-07-01

    Full Text Available The importance of video ads on social media platforms can be measured by views. For instance, Samsung's commercial ad for one of its new smartphones reached more than 46 million viewers on YouTube. A video ad addresses both the visual and the auditive sense of users. The visual sense, however, is often occupied in the sense that users focus on screens other than the one showing the video ad, the so-called second-screen syndrome. Therefore, the importance of the audio channel seems to grow. To win back the visual attention of users who are distracted by other visual impulses, it appears reasonable to adapt the music to the target group, and additionally to adapt the music to the content of the video. The overall success of a video ad could thus be increased by increasing the users' attention. Humans typically make the decision about the music of a video ad. If there is a correlation between music, products and target groups, a digitization of the music selection process seems possible. Since the digitization progress in the music sector has mainly focused on music composing, this article strives to make a first step towards the digitization of music selection.

  11. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shao, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.
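
    The view-generation step ("clusters the crawled video frames into a number of views") can be pictured with a simple k-means over per-frame color histograms, as in the toy sketch below; the feature choice and cluster count are assumptions made for illustration and are not the paper's method.

```python
# Toy version of the "view generation" idea: cluster frames by a coarse color
# histogram using k-means. Feature choice and k are assumptions for illustration.
import cv2
import numpy as np

def cluster_frames_into_views(frames, k=5, bins=8):
    feats = []
    for f in frames:                               # frames: list of BGR uint8 images
        hist = cv2.calcHist([f], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256]).flatten()
        feats.append(hist / (hist.sum() + 1e-9))   # normalized color histogram
    feats = np.float32(feats)

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-4)
    _, labels, _ = cv2.kmeans(feats, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return labels.ravel()                          # labels[i] = view index of frame i
```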

  12. Retail video analytics: an overview and survey

    Science.gov (United States)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  13. Spatial-Aided Low-Delay Wyner-Ziv Video Coding

    Directory of Open Access Journals (Sweden)

    Bo Wu

    2009-01-01

    Full Text Available In distributed video coding, the side information (SI) quality plays an important role in Wyner-Ziv (WZ) frame coding. Usually, SI is generated at the decoder by motion-compensated interpolation (MCI) from the past and future key frames under the assumption that the motion trajectory between adjacent frames is translational with constant velocity. However, this assumption is not always true and thus the coding efficiency of WZ coding is often unsatisfactory for video with high and/or irregular motion. This situation becomes more serious in low-delay applications, since only motion-compensated extrapolation (MCE) can be applied to yield SI. In this paper, a spatial-aided Wyner-Ziv video coding (SA-WZVC) scheme for low-delay applications is proposed. In SA-WZVC, each WZ frame is coded at the encoder as in the existing common Wyner-Ziv video coding scheme, and in addition the auxiliary information is coded with low-complexity DPCM. At the decoder, for WZ frame decoding, the auxiliary information is decoded first and SI is then generated with the help of this auxiliary information by spatial-aided motion-compensated extrapolation (SA-MCE). Theoretical analysis shows that when a good tradeoff between auxiliary information coding and WZ frame coding is achieved, SA-WZVC achieves better rate-distortion performance than conventional MCE-based WZVC without auxiliary information. Experimental results also demonstrate that SA-WZVC can efficiently improve the coding performance of WZVC in low-delay applications.
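
    Motion-compensated extrapolation generates side information for a WZ frame by extending the motion observed between the two most recent decoded frames forward in time. The block-matching sketch below is a generic textbook version of MCE, not the paper's spatial-aided variant; block size and search range are arbitrary assumptions.

```python
# Generic block-based motion-compensated extrapolation (MCE): the motion measured
# from frame t-2 to frame t-1 is extended forward to predict frame t.
import numpy as np

def mce_predict(prev2, prev1, block=16, search=8):
    """prev2, prev1: grayscale frames at t-2 and t-1; returns a prediction of frame t."""
    h, w = prev1.shape
    pred = prev1.astype(np.float32).copy()          # fallback: repeat the last frame
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur = prev1[by:by+block, bx:bx+block].astype(np.float32)
            best, best_dy, best_dx = None, 0, 0
            for dy in range(-search, search + 1):   # exhaustive SAD block matching
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev2[y:y+block, x:x+block].astype(np.float32)
                    sad = np.abs(cur - cand).sum()
                    if best is None or sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            # The block moved by (-dy, -dx) from t-2 to t-1; extrapolate that to t.
            ty, tx = by - best_dy, bx - best_dx
            if 0 <= ty and 0 <= tx and ty + block <= h and tx + block <= w:
                pred[ty:ty+block, tx:tx+block] = cur
    return pred.astype(prev1.dtype)
```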

  14. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart rate (HR) from the skin areas. Being non-contact and sensor-free, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the MATLAB® Computer Vision Toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, and the number of video frames used for data processing.
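
    As a minimal illustration of the VPPG principle (not of the two specific algorithms compared in this study), the sketch below averages the green channel over a fixed skin region in each frame and reads the heart rate off the dominant spectral peak; the ROI, frame rate handling, and band limits are assumptions.

```python
# Minimal VPPG sketch: the mean green-channel intensity over a fixed skin ROI forms
# a pulse signal whose dominant frequency in the 0.7-4 Hz band gives the heart rate.
import numpy as np

def heart_rate_bpm(frames, fps, roi=(100, 200, 100, 200)):
    """frames: iterable of HxWx3 RGB arrays; fps: frame rate; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    signal = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])  # green channel mean
    signal = signal - signal.mean()                                  # remove the DC offset

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 4.0)      # roughly 42-240 beats per minute
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```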

  15. Examining Feedback in an Instructional Video Game Using Process Data and Error Analysis. CRESST Report 817

    Science.gov (United States)

    Buschang, Rebecca E.; Kerr, Deirdre S.; Chung, Gregory K. W. K.

    2012-01-01

    Appropriately designed technology-based learning environments such as video games can be used to give immediate and individualized feedback to students. However, little is known about the design and use of feedback in instructional video games. This study investigated how feedback used in a mathematics video game about fractions impacted student…

  16. Training basic laparoscopic skills using a custom-made video game.

    Science.gov (United States)

    Goris, Jetse; Jalink, Maarten B; Ten Cate Hoedemaker, Henk O

    2014-09-01

    Video games are accepted and used for a wide variety of applications. In the medical world, research on the positive effects of playing games on basic laparoscopic skills is rapidly increasing. Although these benefits have been proven several times, no institution actually uses video games for surgical training. This Short Communication describes some of the theoretical backgrounds, development and underlying educational foundations of a specifically designed video game and custom-made hardware that takes advantage of the positive effects of games on basic laparoscopic skills.

  17. Creating Math Videos: Comparing Platforms and Software

    Science.gov (United States)

    Abbasian, Reza O.; Sieben, John T.

    2016-01-01

    In this paper we present a short tutorial on creating mini-videos using two platforms--PCs and tablets such as iPads--and software packages that work with these devices. Specifically, we describe the step-by-step process of creating and editing videos using a Wacom Intuos pen-tablet plus Camtasia software on a PC platform and using the software…

  18. Design of a fitness tutorial video as a learning medium [Perancangan video panduan fitnes sebagai media pembelajaran]

    Directory of Open Access Journals (Sweden)

    Rizkysari Meimaharani

    2013-06-01

    Full Text Available ABSTRACT A beginner-level fitness exercise tutorial video was designed as a learning and promotional medium for the Life Gym, intended to provide guidelines for good movement during fitness training sessions for beginners; the video is distributed free of charge to newly registered members. Editing the video tutorial requires adequate software and hardware for smooth production, and the result also depends on the editor's general and directing knowledge, editing skill, creativity, and the capabilities of the hardware, software and computer technology used. The advantage of the video guide is that it allows members to understand correct movement and avoid unwanted injury. The video project presents not only movement guidance but also instructions on diet and proper nutrition, so that training targets can be reached more easily. The availability of video editing technology offers an agency a convenient way to educate the public through instructional video and serves as a medium for promoting a service or an agency related to the theme of the video.

  19. [Violent video games and aggression: long-term impact and selection effects].

    Science.gov (United States)

    Staude-Müller, Frithjof

    2011-01-01

    This study applied social-cognitive models of aggression in order to examine relations between video game use and aggressive tendencies and biases in social information processing. To this end, 499 secondary school students (aged 12-16) completed a survey on two occasions one year apart. Hierarchical regression analysis probed media effects and selection effects and included relevant contextual variables (parental monitoring of media consumption, impulsivity, and victimization). Results revealed that it was not the consumption of violent video games but rather an uncontrolled pattern of video game use that was associated with increasing aggressive tendencies. This increase was partly mediated by a hostile attribution bias in social information processing. The influence of aggressive tendencies on later video game consumption was also examined (selection path). Adolescents with aggressive traits intensified their video game behavior only in terms of their uncontrolled video game use. This was found even after controlling for sensation seeking and parental media control.

  20. Subjective video quality comparison of HDTV monitors

    Science.gov (United States)

    Seo, G.; Lim, C.; Lee, S.; Lee, C.

    2009-01-01

    HDTV broadcasting services have become widely available. Furthermore, HDTV is an important part of upcoming IPTV services, where quality monitoring becomes an issue. Consequently, there have been great efforts to develop video quality measurement methods for HDTV. On the other hand, most HDTV programs will be watched on digital TV monitors, which include LCD and PDP TV monitors. In general, LCD and PDP TV monitors have different color characteristics and response times. Furthermore, most commercial TV monitors include post-processing to improve video quality. In this paper, we compare the subjective video quality of some commercial HDTV monitors to investigate the impact of monitor type on perceptual video quality. We used the ACR method as the subjective testing method. Experimental results show that the correlation coefficients among the HDTV monitors are reasonably high. However, for some video sequences and impairments, differences in subjective scores were observed.

  1. Real-time CT-video registration for continuous endoscopic guidance

    Science.gov (United States)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods were either restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at more than 15 frames per second with minimal user intervention.
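
    The key idea behind continuous guidance is that each incoming frame's registration is warm-started from the previous frame's solution, so only a small local optimization remains per frame. The schematic loop below illustrates this; render_endoluminal_view is a hypothetical placeholder for a CT-based rendering engine, the correlation cost is a generic stand-in for the paper's similarity measure, and a derivative-free optimizer replaces the authors' gradient-based method.

```python
# Schematic of continuous CT-video registration: each frame's optimization is
# warm-started from the previous frame's viewpoint estimate.
# `render_endoluminal_view(pose)` is a HYPOTHETICAL placeholder that would render
# a CT-based endoluminal view for a given camera pose; it is not a real library call.
import numpy as np
from scipy.optimize import minimize

def neg_correlation(rendered, frame):
    """Negative normalized cross-correlation between a rendering and a video frame."""
    a = rendered.ravel() - rendered.mean()
    b = frame.ravel() - frame.mean()
    return -float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track(video_frames, initial_pose, render_endoluminal_view):
    pose = np.asarray(initial_pose, dtype=float)    # e.g. (x, y, z, roll, pitch, yaw)
    poses = []
    for frame in video_frames:
        cost = lambda p: neg_correlation(render_endoluminal_view(p), frame)
        # Warm start from the previous frame's result keeps the search local and fast.
        res = minimize(cost, pose, method="Powell",
                       options={"maxiter": 50, "xtol": 1e-3})
        pose = res.x
        poses.append(pose.copy())
    return poses
```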

  2. Internet and video technology in psychotherapy supervision and training.

    Science.gov (United States)

    Wolf, Abraham W

    2011-06-01

    The seven articles in this special section on the use of Internet and video technology represent the latest growth on one branch of the increasingly prolific and differentiated work in the technology of psychotherapy. In addition to the work presented here on video and the Internet applications to supervision and training, information technology is changing the field of psychotherapy through computer assisted therapies and virtual reality interventions.

  3. The Important Elements of a Science Video

    Science.gov (United States)

    Harned, D. A.; Moorman, M.; McMahon, G.

    2012-12-01

    New technologies have revolutionized the use of video as a means of communication. Films have become easier to create and to distribute. Video is omnipresent in our culture and supplements or even replaces writing in many applications. How can scientists and educators best use video to communicate scientific results? Video podcasts are being used in addition to journal, print, and online publications to communicate the relevance of scientific findings of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) program to general audiences such as resource managers, educational groups, public officials, and the general public. In an effort to improve the production of science videos, a survey was developed to provide insight into effective science communication with video. Viewers of USGS podcast videos were surveyed using Likert response scaling to identify the important elements of science videos. The surveys included 120 scientists and educators attending the 2010 and 2011 Fall Meetings of the American Geophysical Union and the 2012 meeting of the National Monitoring Council. The median age of the respondents was 44 years, with an education level of a Bachelor's degree or higher. Respondents reported that their primary sources for watching science videos were YouTube and science websites. Video length was the single most important element associated with reaching the greatest number of viewers. The surveys indicated a median length of 5 minutes as appropriate for a web video, with 5-7 minutes spanning the 25th-75th percentiles. An illustration of the effect of length: a 5-minute and a 20-minute version of a USGS film on the effect of urbanization on water quality were made available on the same website. The short film has been downloaded 3 times more frequently than the longer version. The survey showed that the most important elements to include in a science film are style elements including strong visuals, an engaging story, and a simple message, and

  4. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, the H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  5. International video project on natural analogues

    International Nuclear Information System (INIS)

    Guentensperger, Marcel

    1993-01-01

    A natural analogue can be defined as a natural process which has occurred in the past and is studied in order to test predictions about the future evolution of similar processes. In recent years, natural analogues have been used increasingly to test the mathematical models required for repository performance assessment. Analogues are, however, also of considerable use in public relations as they allow many of the principles involved in demonstrating repository safety to be illustrated in a clear manner using natural systems with which man is familiar. The international Natural Analogue Working Group (NAWG), organised under the auspices of the CEC, has recognised that such PR applications are of considerable importance and should be supported from a technical level. At the NAWG meeting in Pitlochry, Scotland (June 1990), it was recommended that the possibilities for making a video film on this topic be investigated and Nagra was requested to take the lead role in setting up such a project

  6. How players manage moral concerns to make video game violence enjoyable

    NARCIS (Netherlands)

    Klimmt, Christoph; Schmid, Hannah; Nosper, Andreas; Hartmann, Tilo; Vorderer, Peter

    Research on video game violence has focused on the impact of aggression, but has so far neglected the processes and mechanisms underlying the enjoyment of video game violence. The present contribution examines a specific process in this context, namely players' strategies to cope with moral concern

  7. A Multi-Frame Post-Processing Approach to Improved Decoding of H.264/AVC Video

    DEFF Research Database (Denmark)

    Huang, Xin; Li, Huiying; Forchhammer, Søren

    2007-01-01

    Video compression techniques may yield visually annoying artifacts for limited bitrate coding. In order to improve video quality, a multi-frame based motion-compensated filtering algorithm is reported, based on combining multiple pictures to form a single super-resolution picture and decimation…, and annoying ringing artifacts are effectively suppressed.

  8. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students.

    Science.gov (United States)

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    There is limited evidence about the distinct effect of 3D oral health education videos compared with conventional 2D projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of 3D oral health educational videos among first-year dental students. Eighty first-year dental students were enrolled and divided into two groups (test and control). The test group was shown 3D animations and the control group regular 2D video projections pertaining to periodontal anatomy, etiology, presenting conditions, preventive measures and treatment of periodontal problems. The effect of the 3D animation was evaluated using a questionnaire consisting of 10 multiple-choice questions given to all participants at baseline, immediately after and 1 month after the intervention. Clinical parameters such as Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at the 1-month follow-up. A significant difference in the post-intervention knowledge scores was found between the groups as assessed by an unpaired t-test. 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The results also demonstrate that 3D animation provides better visual comprehension for students and greater health-care outcomes.

  9. Quality of Experience management for video streams : the case of Skype

    NARCIS (Netherlands)

    Liotta, A.; Druda, L.; Exarchakos, G.; Menkovski, V.; Khalil, I.

    2012-01-01

    With the widespread adoption of mobile Internet, the process of streaming video has become varied and complex. A diversity of factors affect the way we perceive quality in video streaming (also known as 'quality of experience', or QoE), involving far more than the individual video and network

  10. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one… cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable… in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual…
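
    Near-lossless JPEG-LS bounds each pixel's reconstruction error by a parameter NEAR by quantizing prediction residuals with step 2*NEAR+1. The sketch below shows that mechanism with the standard MED predictor only; it omits JPEG-LS context modeling and entropy coding, and does not implement the visual or trellis quantization studied in the paper.

```python
# Sketch of JPEG-LS-style near-lossless coding of an image: MED prediction plus
# residual quantization with step 2*NEAR+1, which bounds the per-pixel error by NEAR
# (border handling here is simplified; context modeling and entropy coding are omitted).
import numpy as np

def med_predict(a, b, c):
    """Median edge detector predictor from left (a), above (b), and above-left (c)."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def near_lossless_reconstruct(image, near=2):
    img = image.astype(int)
    rec = np.zeros_like(img)
    step = 2 * near + 1
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = rec[y, x - 1] if x > 0 else 0
            b = rec[y - 1, x] if y > 0 else 0
            c = rec[y - 1, x - 1] if (x > 0 and y > 0) else 0
            pred = med_predict(a, b, c)
            err = img[y, x] - pred
            q = int(np.sign(err)) * ((abs(err) + near) // step)   # quantized residual
            rec[y, x] = np.clip(pred + q * step, 0, 255)          # decoder-side value
    return rec.astype(np.uint8)
```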

  11. Informational value and bias of videos related to orthodontics screened on a video-sharing Web site.

    Science.gov (United States)

    Knösel, Michael; Jung, Klaus

    2011-05-01

    To assess the informational value, intention, source, and bias of videos related to orthodontics screened by the video-sharing Internet platform YouTube. YouTube (www.youtube.com) was scanned in July 2010 for orthodontics-related videos using an adequately defined search term. Each of the first 30 search results of the scan was categorized with the system-generated sorts "by relevance" and "most viewed" (total: 60). These were rated independently by three assessors, who completed a questionnaire for each video. The data were analyzed statistically using Friedman's test for dependent samples, Kendall's tau, and Fleiss's kappa. The YouTube scan produced 5140 results. There was a wide variety of information about orthodontics available on YouTube, and the highest proportion of videos was found to originate from orthodontic patients. These videos were also the most viewed ones. The informational content of most of the videos was generally judged to be low, with a rather poor to inadequate representation of the orthodontic profession, although a moderately pro-orthodontics stance prevailed. It was noticeable that the majority of contributions of orthodontists to YouTube constituted advertising. This tendency was not viewed positively by the majority of YouTube users, as was evident in the divergence in the proportions when sorting by "relevance" and "most viewed." In the light of the very large number of people using the Internet as their primary source of information, orthodontists should recognize the importance of YouTube and similar social media Web sites in the opinion-forming process, especially in the case of adolescents.

  12. Video processing of remote sensor data applied to uranium exploration in Wyoming

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Crockell, F.

    1979-01-01

    LANDSAT satellite imagery and aerial photography can be used to map areas of altered sandstone associated with roll-front uranium deposits. Image data must be enhanced so that alteration spectral contrasts can be seen, and video image processing is a fast, low-cost, and efficient tool. For LANDSAT data, the 7/4 ratio produces the best enhancement of altered sandstone. The 6/4 ratio is most effective for color infrared aerial photography. Geochemical and mineralogical associations occur in unaltered, altered, and ore roll-front zones. Samples from Pumpkin Buttes show that iron is the primary coloring agent that makes alteration visually detectable. Eh and pH changes associated with passage of a roll front cause oxidation of magnetite and pyrite to hematite, goethite, and limonite in the host sandstone, thereby producing the alteration. Statistical analyses show that the detectability of geochemical and color zonation in host sands is weakened by soil-forming processes. Alteration can only be mapped in areas of thin soil cover and moderate to sparse vegetative cover.
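
    A band-ratio enhancement such as the 7/4 ratio described here amounts to a per-pixel division of two co-registered bands followed by a contrast stretch. A minimal version is sketched below; the stretch percentiles and 8-bit output are assumptions for display purposes.

```python
# Minimal band-ratio enhancement: per-pixel ratio of two co-registered bands
# (e.g. LANDSAT band 7 over band 4), followed by a percentile contrast stretch.
import numpy as np

def band_ratio_image(band_num, band_den, low_pct=2, high_pct=98):
    num = band_num.astype(np.float64)
    den = band_den.astype(np.float64)
    ratio = np.divide(num, den, out=np.zeros_like(num), where=den > 0)

    lo, hi = np.percentile(ratio[den > 0], [low_pct, high_pct])
    stretched = np.clip((ratio - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)   # 8-bit image for display
```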

  13. Lending Video Game Consoles in an Academic Library

    Science.gov (United States)

    Buller, Ryan

    2017-01-01

    This paper will outline the process and discussions undertaken at the University of Denver's University Libraries to implement a lending service providing video game consoles. Faculty and staff at the University Libraries decided to pursue the new lending service, though not a traditional library offering, to support the needs of a video game…

  14. Multi-Task Video Captioning with Video and Entailment Generation

    OpenAIRE

    Pasunuru, Ramakanth; Bansal, Mohit

    2017-01-01

    Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware vid...

  15. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  16. Human features detection in video surveillance

    OpenAIRE

    Barbosa, Patrícia Margarida Silva de Castro Neves

    2016-01-01

    Master's dissertation in Industrial Electronics and Computers Engineering. Human activity recognition algorithms have been studied actively for decades using sequences of 2D and 3D images from video surveillance. These new surveillance solutions and the areas of image processing and analysis have been receiving special attention and interest from the scientific community. Thus, it has become possible to witness the appearance of new video compression techniques, the tr...

  17. Video repairing under variable illumination using cyclic motions.

    Science.gov (United States)

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  18. An Analysis of Video Navigation Behavior for Web Leisure

    Directory of Open Access Journals (Sweden)

    Ying-Han Chang

    2012-12-01

    Full Text Available People nowadays place much emphasis on leisure activities, and web video has gradually become one of the main sources of popular leisure. This article introduces the related concepts of leisure and navigation behavior as well as some recent research topics. Moreover, using YouTube as an experimental setting, the authors invited experienced web video users and conducted an empirical study of how they navigate web videos for leisure purposes. The study used questionnaires, navigation logs, diaries, and interviews to collect data. Major results show that the subjects watched a variety of video content on the web, from both traditional media and user-generated video; these videos met their leisure needs with respect to both broad and personal interests; during the navigation process, each subject focused strongly on video leisure and was willing to explore unknown videos; however, within a limited amount of time for leisure, the balance between leisure and rest becomes an issue for achieving real relaxation, which is worth further attention. [Article content in Chinese]

  19. Coding visual features extracted from video sequences.

    Science.gov (United States)

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
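
    The intra/inter mode decision for feature coding can be pictured with binary descriptors: each descriptor is either sent whole (intra) or as the difference from its best match in the previous frame (inter), whichever is cheaper. The ORB-based toy sketch below illustrates only that idea under a crude bit-cost model; the paper targets SIFT/SURF features and uses rate-distortion optimized mode decisions.

```python
# Toy intra/inter coding of binary (ORB) descriptors: each descriptor is coded either
# on its own (intra) or as the XOR difference from its nearest descriptor in the
# previous frame (inter), whichever needs fewer bits under a crude cost model.
import cv2
import numpy as np

INTRA_BITS = 256   # a raw ORB descriptor is 32 bytes

def inter_cost_bits(curr_desc, prev_descs):
    """Return (index of best reference, rough inter-mode bit cost)."""
    diff = np.bitwise_xor(prev_descs, curr_desc)        # broadcast over all references
    dists = np.unpackbits(diff, axis=1).sum(axis=1)     # Hamming distances
    best = int(dists.argmin())
    return best, 16 + int(dists[best]) * 8              # 16-bit ref index + 8 bits per flipped bit

def code_frame(frame, prev_descs, orb):
    """Choose intra or inter mode per descriptor and tally the total bit budget."""
    _, descs = orb.detectAndCompute(frame, None)
    if descs is None:
        return None, 0, []
    total_bits, modes = 0, []
    for d in descs:
        if prev_descs is None or len(prev_descs) == 0:
            total_bits += INTRA_BITS
            modes.append("intra")
            continue
        _, inter_bits = inter_cost_bits(d, prev_descs)
        if inter_bits < INTRA_BITS:
            total_bits += inter_bits
            modes.append("inter")
        else:
            total_bits += INTRA_BITS
            modes.append("intra")
    return descs, total_bits, modes

# Usage sketch (hypothetical frame source):
# orb = cv2.ORB_create(nfeatures=500)
# prev = None
# for frame in frames:                    # frames: iterable of grayscale images
#     descs, bits, modes = code_frame(frame, prev, orb)
#     prev = descs
```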

  20. Effect of video decoder errors on video interpretability

    Science.gov (United States)

    Young, Darrell L.

    2014-06-01

    Advances in video compression technology can result in greater sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.
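
    A template-style blockiness check can be as simple as comparing pixel differences that straddle 8-pixel block boundaries with differences inside blocks; a frame whose boundary differences are much larger than its interior differences is flagged as likely corrupted. The sketch below implements that heuristic; the block size and threshold are illustrative assumptions, not values from the paper.

```python
# Simple blockiness check: compare horizontal pixel differences that straddle
# 8-pixel block boundaries with those inside blocks. A large ratio suggests
# visible blocking or decoder corruption.
import numpy as np

def blockiness_ratio(gray, block=8):
    diff = np.abs(np.diff(gray.astype(np.float32), axis=1))   # horizontal gradients
    cols = np.arange(diff.shape[1])
    at_boundary = (cols % block) == (block - 1)                # edges between blocks
    boundary_mean = diff[:, at_boundary].mean()
    interior_mean = diff[:, ~at_boundary].mean()
    return boundary_mean / (interior_mean + 1e-6)

def frame_is_corrupted(gray, threshold=1.5):
    """Flag a grayscale frame whose block-boundary gradients dominate (threshold is illustrative)."""
    return blockiness_ratio(gray) > threshold
```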